ali / CharIP-Electron

Commit 25325063 authored Dec 01, 2023 by ali

    feat: 视频数字人 (video digital human)

parent 9d261b2d

Showing 9 changed files with 539 additions and 12 deletions (+539 −12)
...public/2023-11-2/6fa9a127-2ce5-43ea-a543-475bf9354eda.png   +0 −0
...public/2023-11-2/93ffb6a7-ae93-4918-944e-877016ba266b.png   +0 −0
src/renderer/router/index.ts                                   +9 −1
src/renderer/screens/ShowVideo.vue                             +323 −0
src/renderer/screens/VideoScreen.vue                           +126 −7
src/renderer/screens/index.ts                                  +2 −1
src/renderer/store/index.ts                                    +3 −1
src/renderer/store/photo.ts                                    +2 −2
src/renderer/store/video.ts                                    +74 −0
src/renderer/public/2023-11-2/6fa9a127-2ce5-43ea-a543-475bf9354eda.png
0 → 100644 (binary image, 453 KB)

src/renderer/public/2023-11-2/93ffb6a7-ae93-4918-944e-877016ba266b.png
0 → 100644 (binary image)
src/renderer/router/index.ts

-import { PhotoScreen, ErrorScreen, VideoScreen, ShowPhoto } from '@/renderer/screens'
+import { PhotoScreen, ErrorScreen, VideoScreen, ShowPhoto, ShowVideo } from '@/renderer/screens'
 import { createRouter, createWebHashHistory } from 'vue-router'

 export default createRouter({
 ...
@@ -26,6 +26,14 @@ export default createRouter({
       isHeader: false
     }
   },
+  {
+    path: '/show-video',
+    component: ShowVideo,
+    meta: {
+      titleKey: '展示视频数字人',
+      isHeader: false
+    }
+  },
   {
     path: '/error',
     component: ErrorScreen,
 ...
src/renderer/screens/ShowVideo.vue
0 → 100644

<!-- eslint-disable no-unused-vars -->
<!-- eslint-disable camelcase -->
<script setup lang="ts">
import { onMounted, ref } from 'vue'
import { useRoute, useRouter } from 'vue-router'
import type { ServerMessagePartialResult, ServerMessageResult, Model } from '@/renderer/plugins/asr/index'
import { audioAiTTS, localTTS } from '../plugins/tts'
import useStore from '@/renderer/store'

const router = useRouter()
const route = useRoute()
const { settings, video: useVideo } = useStore()

const sampleRate = 48000
const recordVolume = ref(0)

const url = route.query.url as string
const role = useVideo.list.find((i) => i.url === url)

const microphoneState = ref<'waitInput' | 'input' | 'loading' | 'disabled'>('waitInput')
const videoElement = ref<HTMLVideoElement | null>(null)

onMounted(() => {
  // init();
})

async function init() {
  const videoEle = videoElement.value
}

router.beforeEach((g) => {
  if (!g.query.url) return router.push('/error')
})
async function initVosk({
  result,
  partialResult
}: {
  result?: (text: string) => void
  partialResult?: (text: string) => void
}) {
  const channel = new MessageChannel()
  const model = await settings.downLoadVoskModel()
  const recognizer = new model.KaldiRecognizer(sampleRate)
  model.registerPort(channel.port1)
  recognizer.setWords(true)
  recognizer.on('result', (message) => {
    result && result((message as ServerMessageResult).result.text)
  })
  recognizer.on('partialresult', (message) => {
    partialResult && partialResult((message as ServerMessagePartialResult).result.partial)
  })
  return { recognizer, channel }
}
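initVosk bridges the recognizer's events to two optional callbacks: 'result' messages carry final text, 'partialresult' messages carry in-progress text. The dispatch logic can be sketched as a standalone function (message shapes and names here are illustrative, not the actual vosk-browser types):

```typescript
type ResultMsg = { event: 'result'; result: { text: string } }
type PartialMsg = { event: 'partialresult'; result: { partial: string } }

// Hypothetical dispatcher mirroring the two recognizer.on handlers in initVosk:
// final text goes to `result`, in-progress text to `partialResult`; missing
// handlers are silently skipped, as in the original `result && result(...)` guard.
function dispatchAsr(
  message: ResultMsg | PartialMsg,
  handlers: { result?: (text: string) => void; partialResult?: (text: string) => void }
): void {
  if (message.event === 'result') {
    handlers.result && handlers.result(message.result.text)
  } else {
    handlers.partialResult && handlers.partialResult(message.result.partial)
  }
}
```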
function analyzeMicrophoneVolume(stream: MediaStream, callback: (volume: number) => void) {
  const audioContext = new AudioContext()
  const analyser = audioContext.createAnalyser()
  const microphone = audioContext.createMediaStreamSource(stream)
  const recordEventNode = audioContext.createScriptProcessor(2048, 1, 1)
  const audioprocess = () => {
    const array = new Uint8Array(analyser.frequencyBinCount)
    analyser.getByteFrequencyData(array)
    let values = 0
    const length = array.length
    for (let i = 0; i < length; i++) {
      values += array[i]
    }
    const average = values / length
    callback(Math.round(average))
  }
  analyser.smoothingTimeConstant = 0.8
  analyser.fftSize = 1024
  microphone.connect(analyser)
  analyser.connect(recordEventNode)
  recordEventNode.connect(audioContext.destination)
  // recordEventNode.addEventListener('audioprocess', audioprocess);
  recordEventNode.onaudioprocess = audioprocess
  inputContext.audioContext2 = audioContext
  inputContext.scriptProcessorNode = recordEventNode
}
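The averaging step inside `audioprocess` reduces the analyser's byte frequency data to one rounded level. Isolated as a pure helper (a sketch for illustration, not part of the commit):

```typescript
// Hypothetical helper mirroring the averaging loop in analyzeMicrophoneVolume:
// reduce a byte-frequency array (values 0-255) to a single rounded average.
function averageVolume(bins: Uint8Array): number {
  if (bins.length === 0) return 0
  let sum = 0
  for (let i = 0; i < bins.length; i++) {
    sum += bins[i]
  }
  return Math.round(sum / bins.length)
}
```

Keeping the reduction pure makes it testable without an `AudioContext`, which cannot be constructed outside a browser.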
const inputContext: {
  mediaStream?: MediaStream
  audioContext?: AudioContext
  audioContext2?: AudioContext
  scriptProcessorNode?: ScriptProcessorNode
  model?: Model
  ws?: WebSocket
} = {}
async function startAudioInput() {
  if (microphoneState.value === 'loading') return
  if (microphoneState.value === 'input') {
    endAudioInput()
    return
  }

  microphoneState.value = 'loading'
  const { recognizer, channel } = await initVosk({
    result: onAsr,
    partialResult: (text) => {
      // console.log('----------------> partialResult:', text)
    }
  })

  const mediaStream = await navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      echoCancellation: true,
      noiseSuppression: true,
      channelCount: 1,
      sampleRate
    }
  })

  const audioContext = new AudioContext()
  await audioContext.audioWorklet.addModule(new URL('/vosk/recognizer-processor.js', import.meta.url))
  const recognizerProcessor = new AudioWorkletNode(audioContext, 'recognizer-processor', {
    channelCount: 1,
    numberOfInputs: 1,
    numberOfOutputs: 1
  })
  recognizerProcessor.port.postMessage({ action: 'init', recognizerId: recognizer.id }, [channel.port2])
  recognizerProcessor.connect(audioContext.destination)

  const source = audioContext.createMediaStreamSource(mediaStream)
  source.connect(recognizerProcessor)

  await analyzeMicrophoneVolume(mediaStream, (val) => {
    recordVolume.value = val
  })

  microphoneState.value = 'input'
  inputContext.mediaStream = mediaStream
  inputContext.audioContext = audioContext
}
function endAudioInput() {
  microphoneState.value = 'waitInput'
  inputContext.mediaStream?.getTracks().forEach((track) => track.stop())
  inputContext.audioContext?.close()
  inputContext.audioContext2?.close()
  inputContext.scriptProcessorNode && (inputContext.scriptProcessorNode.onaudioprocess = null)
  inputContext.model?.terminate()
  // inputContext.ws?.close()
}
async function onAsr(question: string) {
  endAudioInput()
  console.log('---------------->', question)

  const videoEle = videoElement.value as HTMLVideoElement
  if (!role || !videoEle) return

  question = question.replace(/\s/g, '')
  for (let i = 0; i < role.qa.length; i++) {
    const { q, url } = role.qa[i]
    console.log(question + ' : ' + q)
    if (q.includes(question)) {
      videoEle.src = url
      videoEle.load()
      videoEle.play()
    }
  }
}
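The matching rule in `onAsr` — strip all whitespace from the recognized text, then accept any preset question that contains it — can be sketched as a pure lookup (the function and type names here are illustrative, not from the commit):

```typescript
interface QaEntry {
  url: string
  q: string
  a: string
}

// Hypothetical sketch of the lookup in onAsr: normalize the ASR text by
// stripping whitespace, then return the clip URL of the first preset
// question that contains it (onAsr itself assigns every match in turn).
function matchQaVideo(question: string, qa: QaEntry[]): string | undefined {
  const normalized = question.replace(/\s/g, '')
  return qa.find(({ q }) => q.includes(normalized))?.url
}
```

Note the containment test is one-directional: a short recognized fragment like "是谁" would match several questions, which is why the normalized substring check is paired with hand-written preset questions.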
function initSocket(): Promise<WebSocket> {
  const ws = new WebSocket(settings.llmUrl)
  return new Promise((resolve, reject) => {
    ws.onopen = () => resolve(ws)
    ws.onerror = reject
  })
}
let isTTSRunning = false
async function runTTSTask(tasks: string[]) {
  if (isTTSRunning) return
  isTTSRunning = true

  try {
    while (tasks.length) {
      const task = tasks.shift()
      if (!task) break

      console.time(task + ' TTS: ')
      const res = await localTTS({ url: settings.ttsHost, text: task })
      console.log('----------------> TTS:', res)
      console.timeEnd(task + ' TTS: ')
    }
  } catch (error) {
    console.error(error)
  }

  isTTSRunning = false
}
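runTTSTask drains its queue with a single-flight guard: a second call while a run is active returns immediately, and items pushed onto the same array mid-run are picked up by the loop already draining it. That pattern, extracted generically (a sketch with illustrative names, not part of the commit):

```typescript
// Hypothetical single-flight queue drainer: at most one run at a time;
// items pushed to the shared array while draining are handled by the
// active run rather than starting a second one.
function makeDrainer<T>(worker: (item: T) => Promise<void>) {
  let running = false
  return async function drain(queue: T[]): Promise<void> {
    if (running) return
    running = true
    try {
      while (queue.length) {
        const item = queue.shift()
        if (item === undefined) break
        await worker(item)
      }
    } finally {
      running = false
    }
  }
}
```

Unlike the original, the guard is reset in `finally`, so a worker that throws cannot leave the flag stuck at `true`.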
// eslint-disable-next-line no-unused-vars
async function xfTTS(text: string) {
  const tone = settings.source.find(({ sourceId }) => settings.selectSource === sourceId)
  if (!tone) return

  const res = await audioAiTTS({
    host: settings.ttsHost,
    text,
    speed: 3,
    speaker: tone.sourceId,
    provider: tone.provider
  })
  console.log('----------------> tts:', res)
}
</script>
<template>
  <div
    style="width: 100%; height: 100%"
    class="d-flex justify-center align-center"
    :style="{ background: '#000' }"
  >
    <video id="videoElement" ref="videoElement" :src="url" class="video-ele"></video>
  </div>
  <div class="voice">
    <v-btn
      icon=""
      color="#fff"
      variant="elevated"
      size="x-large"
      :disabled="microphoneState === 'loading' || microphoneState === 'disabled'"
      @pointerdown="startAudioInput"
    >
      <v-icon v-if="microphoneState === 'waitInput'" icon="mdi-microphone"></v-icon>
      <v-icon v-if="microphoneState === 'loading'" icon="mdi-microphone-settings"></v-icon>
      <v-icon v-if="microphoneState === 'disabled'" icon="mdi-microphone-off"></v-icon>
      <template v-if="microphoneState === 'input'">
        <img width="30" height="30" src="/images/microphone-input.svg" alt="" srcset="" />
        <div class="progress">
          <span
            class="volume"
            :style="{
              'clip-path': `polygon(0 ${100 - recordVolume}%, 100% ${
                100 - recordVolume
              }%, 100% 100%, 0 100%)`
            }"
          >
          </span>
        </div>
      </template>
    </v-btn>
  </div>
  <div class="q-list">
    <v-chip
      v-for="(item, index) in role?.qa"
      :key="index"
      class="mb-2 chip"
      color="white"
      variant="outlined"
      @click="onAsr(item.q)"
    >
      <v-icon start icon="mdi-help-circle-outline"></v-icon>
      {{ item.q }}
    </v-chip>
  </div>
</template>
<style scoped>
.voice {
  display: flex;
  justify-content: center;
  position: fixed;
  left: 0;
  right: 0;
  top: 70%;
  margin: auto;
}
.progress {
  position: absolute;
  top: 21px;
  left: 28px;
  width: 8px;
  height: 16px;
  overflow: hidden;
  border-radius: 36%;
}
.progress .volume {
  display: block;
  width: 100%;
  height: 100%;
  background: #2fb84f;
  border-radius: 36%;
}
.video-ele {
  position: absolute;
}
.q-list {
  position: fixed;
  bottom: 0;
  display: flex;
  justify-content: space-between;
  flex-wrap: wrap;
}
.chip {
  cursor: pointer;
}
</style>
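The volume meter fills bottom-up by clipping the green `.volume` span with a polygon whose top edge sits at `100 - recordVolume` percent. The template expression can be reproduced as a pure function (a sketch for illustration; the clamping is an addition not present in the template):

```typescript
// Hypothetical helper reproducing the template's clip-path expression:
// a 0-100 level maps to a polygon whose top edge is at (100 - volume)%,
// so higher volume exposes more of the green fill. Input is clamped to
// [0, 100] as a defensive extra; the template interpolates the raw value.
function volumeClipPath(volume: number): string {
  const top = 100 - Math.min(100, Math.max(0, volume))
  return `polygon(0 ${top}%, 100% ${top}%, 100% 100%, 0 100%)`
}
```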
src/renderer/screens/VideoScreen.vue

<script setup lang="ts">
import { onMounted } from 'vue'
import useStore from '@/renderer/store'
import { storeToRefs } from 'pinia'

const { video: useVideo, settings } = useStore()
const video = storeToRefs(useVideo)

onMounted((): void => {})

async function handleOpen(event: Event, url: string) {
  const img = event.target as HTMLVideoElement
  await window.mainApi.send(
    'openWindow',
    `${location.origin + location.pathname}#show-video?url=${url}`,
    {
      width: img.videoWidth / 2,
      height: img.videoHeight / 2,
      fullscreen: settings.isFullscreen === 'yes'
    }
  )
}

function handleEnter(e: Event) {
  const target = e.target as HTMLVideoElement
  target.play()
}

function handleLeave(e: Event) {
  const target = e.target as HTMLVideoElement
  target.pause()
}

// const validateURL = (url: string) => {
//   const regex = /^(https?|ftp):\/\/([\w/\-?=%.]+\.[\w/\-?=%.]+)$/
//   return regex.test(url)
// }

// const urlValue = ref('')
// const videoLoading = ref(false)
// async function appendVideo(url: string) {
//   urlValue.value = url
//   if (!validateURL(url)) return '请输入正确的 url!如(url(https://xxx.png)'
//   try {
//     videoLoading.value = true
//     const video = document.createElement('video');
//     video.src = url
//     await new Promise((resolve, reject) => {
//       video.onload = resolve
//       video.onerror = reject
//     })
//     videoLoading.value = false
//   } catch (error) {
//     videoLoading.value = false
//     return '视频加载失败!'
//   }
//   video.list.value.push({ url, loading: false })
//   urlValue.value = ''
//   return true
// }

// function removePhoto(index: number) {
//   video.list.value.splice(index, 1)
// }
</script>
<template>
  <v-container>
    <v-row no-gutters align="center" class="text-center">
      <v-col cols="12">
        <v-icon icon="mdi-emoticon-cool-outline" size="250" color="#009f57" />
      </v-col>
      <v-col cols="12" class="my-4">
        {{ $t('desc.second-desc') }}
      </v-col>
    </v-row>
    <!--
    <v-container class="d-flex mt-6 pb-0">
      <v-text-field
        label="自定义视频 url(https://xxx.webm)"
        :model-value="urlValue"
        :loading="videoLoading"
        :rules="[(v) => appendVideo(v)]"
        validate-on="blur lazy"
      ></v-text-field>
    </v-container>
    -->
    <v-container class="d-flex flex-wrap">
      <v-sheet
        v-for="item in video.list.value"
        :key="item.url"
        v-ripple
        :elevation="3"
        width="200"
        class="video-wrap d-flex spacing-playground pa-6 mr-4 mt-4"
        rounded
      >
        <video
          class="video-item"
          loop
          :src="item.url"
          muted
          @click="handleOpen($event, item.url)"
          @pointerenter="handleEnter"
          @pointerleave="handleLeave"
        ></video>
        <!--
        <v-btn
          density="compact"
          elevation="1"
          icon="mdi-close"
          class="mt-n7"
          @click="removePhoto(index)"
        ></v-btn>
        -->
      </v-sheet>
    </v-container>
  </v-container>
</template>
<style scoped>
.video-item {
  width: 100%;
  object-fit: cover;
}
.video-wrap {
  position: relative;
}
.video-wrap:hover .video-overlay {
  opacity: 1;
}
.video-overlay {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background: rgba(0, 0, 0, 0.4);
  display: flex;
  justify-content: center;
  align-items: center;
  transition: 0.4s;
  opacity: 0;
}
.overlay-hover {
  opacity: 1 !important;
}
</style>
\ No newline at end of file
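`handleOpen` builds the child-window URL by appending a hash route to the renderer's current origin and pathname, so the new window loads the same bundle and the hash router lands on `/show-video`. A sketch of that construction (the `origin`/`pathname` values in the test are illustrative):

```typescript
// Hypothetical sketch of the deep link built in handleOpen: same page,
// hash-routed to /show-video, with the clip URL passed as a query parameter.
function showVideoLink(origin: string, pathname: string, url: string): string {
  return `${origin + pathname}#show-video?url=${url}`
}
```

Note the commit interpolates the raw clip URL without `encodeURIComponent`; that works for the bundled `/libai/*.mp4` paths but would break for URLs containing `&` or `#`.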
src/renderer/screens/index.ts

...
@@ -2,5 +2,6 @@ import ErrorScreen from '@/renderer/screens/ErrorScreen.vue'
 import PhotoScreen from '@/renderer/screens/PhotoScreen.vue'
 import VideoScreen from '@/renderer/screens/VideoScreen.vue'
 import ShowPhoto from '@/renderer/screens/ShowPhoto.vue'
+import ShowVideo from '@/renderer/screens/ShowVideo.vue'

-export { ErrorScreen, PhotoScreen, VideoScreen, ShowPhoto }
+export { ErrorScreen, PhotoScreen, VideoScreen, ShowPhoto, ShowVideo }
src/renderer/store/index.ts

 import useSettings from './settings'
 import usePhoto from './photo'
+import useVideo from './video'

 export default function useStore() {
   return {
     settings: useSettings(),
-    photo: usePhoto()
+    photo: usePhoto(),
+    video: useVideo()
   }
 }
src/renderer/store/photo.ts

...
@@ -10,10 +10,10 @@ const usePhotoStore = defineStore('photo', {
   ({
     list: [
       {
-        url: 'https://resources.laihua.com/2023-11-2/93ffb6a7-ae93-4918-944e-877016ba266b.png'
+        url: '/2023-11-2/93ffb6a7-ae93-4918-944e-877016ba266b.png'
       },
       {
-        url: 'https://resources.laihua.com/2023-6-19/6fa9a127-2ce5-43ea-a543-475bf9354eda.png'
+        url: '/2023-11-2/6fa9a127-2ce5-43ea-a543-475bf9354eda.png'
       }
     ]
   }) as IPhoto,
 ...
src/renderer/store/video.ts
0 → 100644

import { defineStore } from 'pinia'

type IVideo = {
  list: {
    url: string
    name: string
    qa: { url: string; q: string; a: string }[]
  }[]
}

const useVideoStore = defineStore('video', {
  persist: true,
  state: () =>
    ({
      list: [
        {
          url: '/libai/wait.mp4',
          name: '李白',
          qa: [
            {
              url: '/libai/1.mp4',
              q: '李白是谁?',
              a: '李白是中国唐代著名的诗人,被誉为“诗仙”。他的诗作以豪放、想象力丰富而著称。'
            },
            {
              url: '/libai/2.mp4',
              q: '李白生活在哪个时期?',
              a: '李白生活在唐朝,大约在公元701年到762年之间。'
            },
            {
              url: '/libai/3.mp4',
              q: '李白的诗有什么特点?',
              a: '李白的诗以其浪漫主义风格、对自然景观的细腻描绘和对自由无拘无束的追求而闻名。'
            },
            {
              url: '/libai/4.mp4',
              q: '李白最著名的作品是哪些?',
              a: '李白最著名的作品包括《将进酒》、《庐山谣》、《静夜思》等。'
            },
            {
              url: '/libai/5.mp4',
              q: '李白的诗歌反映了哪些主题?',
              a: '李白的诗歌主题多样,包括对自然的赞美、对友情和饮酒的颂扬,以及对道教思想的探索。'
            },
            {
              url: '/libai/6.mp4',
              q: '李白的作品在中国文学中有什么影响?',
              a: '李白的作品对中国文学产生了深远的影响,他的诗歌被视为中国古典诗歌的高峰,影响了后世无数诗人。'
            },
            {
              url: '/libai/7.mp4',
              q: '李白的诗歌风格与其他唐代诗人有何不同?',
              a: '与其他唐代诗人相比,李白的诗歌更加注重个人情感的表达,风格更为豪放不羁。'
            },
            {
              url: '/libai/8.mp4',
              q: '李白在历史上有哪些著名的轶事?',
              a: '李白有许多著名轶事,例如他在月光下划船、醉酒作诗等,这些故事体现了他自由奔放的生活态度。'
            },
            {
              url: '/libai/9.mp4',
              q: '李白的诗歌对现代文化有什么影响?',
              a: '李白的诗歌对现代文化仍有深远影响,不仅在中国,也在世界各地,他的作品被翻译成多种语言,被广泛阅读和研究。'
            },
            {
              url: '/libai/10.mp4',
              q: '如何评价李白在中国文学史上的地位?',
              a: '李白在中国文学史上占据着极其重要的地位,他的作品不仅丰富了诗歌的艺术表现形式,也反映了唐代社会的精神风貌。'
            }
          ]
        }
      ]
    }) as IVideo,
  getters: {},
  actions: {}
})

export default useVideoStore
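Each role in the store pairs an idle clip (`url`) with a `qa` list of preset questions and pre-rendered answer clips. ShowVideo.vue selects the role by matching the window's `?url=` query against the idle-clip URL; that lookup, sketched over the same shape (names here are illustrative, not part of the commit):

```typescript
type QA = { url: string; q: string; a: string }
type Role = { url: string; name: string; qa: QA[] }

// Hypothetical helper mirroring how ShowVideo.vue resolves its role:
// match the window's ?url= query against each role's idle-clip URL.
function findRole(list: Role[], url: string): Role | undefined {
  return list.find((i) => i.url === url)
}
```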