Commit c071d170 authored by yang.jie

add

demo/*.wasm
demo/*.wasm.*
demo/mp4encoder.js
demo_transcode_mp3/build
.DS_Store
.vscode
*.wasm
wasm
# media-wasm
## Introduction
https://developer.mozilla.org/zh-CN/docs/WebAssembly/Concepts
## WebAssembly API
https://emscripten.org/docs/api_reference/index.html
## Tutorial
https://cntofu.com/book/150/zh/ch1-quick-guide/readme.md
# Build environment
https://hub.docker.com/r/emscripten/emsdk/tags
Pull the remote image:
```bash
docker pull emscripten/emsdk:2.0.24
```
# Start the container
Map your own project directory into the container:
```bash
docker run -d -it --name mediawasm -v d:/.../media-wasm:/code emscripten/emsdk:2.0.24 /bin/bash
```
# Build
```bash
./build-demo.sh
```
This generates `mp4encoder.js` and `mp4encoder.wasm` in the project directory. The JS file is glue code that makes the C/C++ interface callable from browser JavaScript; the actual C/C++ logic lives in the wasm file.
To view the result, start a static web server in the project directory on the host (outside the container).
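For reference, the demo page loads the generated module roughly like this (the factory name `createMP4Encoder` comes from `EXPORT_NAME` in `build-demo.sh`):
```js
// load the glue code first (e.g. <script src="./mp4encoder.js"></script>),
// then create the module instance; exported C functions are attached with a "_" prefix
createMP4Encoder().then(function (m) {
  var MP4Encoder = m;
  var FS = MP4Encoder.FS; // Emscripten virtual filesystem
  console.log('mp4encoder module ready', MP4Encoder);
});
```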
# Debugging
Overview:
https://developer.chrome.com/blog/wasm-debugging-2020/
Download and install the extension:
https://goo.gle/wasm-debugging-extension
1. Open DevTools, Settings (F1) -> Experiments -> enable WebAssembly Debugging
![](./pics/debug1.png)
2. Because the project is built inside Docker, map the Docker build paths to the local disk
![](./pics/debug2.png)
![](./pics/debug3.png)
3. Open DevTools -> Sources -> find the source files under file:// in the left panel -> set breakpoints -> add variables to the Watch panel on the right
![](./pics/debug4.png)
# Requirements
The demo only shows how to turn images into a video.
Next, an audio track needs to be added to the video, and the audio needs to be trimmed and merged to match the video duration.
# Passing parameters
## Simple parameters
```js
// convert a JS string into a UTF-8 byte array in wasm memory
var getCStringPtr = function (jstr) {
var lengthBytes = lengthBytesUTF8(jstr) + 1;
var p = MP4Encoder._malloc(lengthBytes);
stringToUTF8(jstr, p, lengthBytes);
return p;
}
// allocateUTF8 does the same thing
// value types can be passed directly; strings must be converted to a byte array first
var strPtr = getCStringPtr("/tmp/demo2.mp4");
var ret = MP4Encoder._createH264(strPtr, 1920, 1080, 25);
```
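Memory obtained from `_malloc` is not freed automatically; a minimal sketch of releasing the string buffer once the call has returned (the demo itself omits this step):
```js
var strPtr = getCStringPtr("/tmp/demo2.mp4");
var ret = MP4Encoder._createH264(strPtr, 1920, 1080, 25);
// the path has been consumed by the C side, so the temporary buffer can be released
MP4Encoder._free(strPtr);
```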
## Copying large buffers
```js
// JS-side array memory
let fileBuffer = new Uint8Array(imagedata.data.buffer);
// wasm-side array memory
let bufferPtr = MP4Encoder._malloc(fileBuffer.length);
//js -> wasm
MP4Encoder.HEAP8.set(fileBuffer, bufferPtr);
var ret = MP4Encoder._addFrame(bufferPtr);
```
`_malloc` and `_free` are runtime helpers that the module exports by default. To check whether other methods are available, print the MP4Encoder module in the console and inspect what is attached to it.
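For example, in the browser console (assuming the resolved module instance is stored in `MP4Encoder`, as in the demo page):
```js
// everything attached to the module: _malloc, _free, the KEEPALIVE C exports
// (_createH264, _addFrame, _close, ...), FS, HEAP8, and so on
console.log(Object.keys(MP4Encoder));
// exported C functions carry a leading underscore
console.log(typeof MP4Encoder._createH264); // "function" if the export exists
```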
# File I/O
```html
<input type="file" value="Select file" onchange="inputJsFile(event)"></input>
```
```js
var inputJsFile = function (event) {
let file = event.target.files[0];
file.arrayBuffer().then(t=>{
console.log(t)
// create a directory
FS.mkdir('/working');
// write binary data into the wasm virtual filesystem
FS.writeFile('/working/input.txt', new Uint8Array(t), { flags:'w+' });
// verify the write succeeded
console.log(FS.stat('/working/input.txt'))
// read the data back from wasm into JS
var buff = FS.readFile('/working/input.txt', { encoding: 'binary' });
console.log(buff)
var pStr = getCStringPtr("/working/input.txt");
var ret = MP4Encoder._openTestFile(pStr);
});
}
```
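Reading data back out of the virtual filesystem works the same way in the other direction; the demo page downloads the finished MP4 roughly like this (`saveAs` comes from the bundled FileSaver.js):
```js
var download = function () {
  // read the encoded file out of the wasm virtual filesystem as a byte array
  var buff = FS.readFile('/tmp/demo2.mp4', { encoding: 'binary' });
  // hand it to the browser as a regular file download
  saveAs(new Blob([buff]), 'demo2.mp4');
};
```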
# Further reading
https://zhuanlan.zhihu.com/p/337739927
https://zhuanlan.zhihu.com/p/260031610
#!/bin/bash
#set -eo pipefail
WORKPATH=$(cd $(dirname $0); pwd)
DEMO_PATH=$WORKPATH/demo
echo "WORKPATH"=$WORKPATH
rm -rf ${WORKPATH}/demo/mp4encoder.js ${WORKPATH}/demo/mp4encoder.wasm
FFMPEG_ST=yes
EMSDK=/emsdk
THIRD_DIR=${WORKPATH}/lib/third/build
DEBUG=""
DEBUG="-g -fno-inline -gseparate-dwarf=/code/demo/temp.debug.wasm -s SEPARATE_DWARF_URL=http://localhost:5000/temp.debug.wasm"
#--closure minifies the glue code, but can cause duplicate variable definitions; set it to 1 for production releases
OPTIM_FLAGS="-O1 $DEBUG --closure 0"
if [[ "$FFMPEG_ST" != "yes" ]]; then
EXTRA_FLAGS=(
-pthread
-s USE_PTHREADS=1 # enable pthreads support
-s PROXY_TO_PTHREAD=1 # detach main() from browser/UI main thread
-o ${DEMO_PATH}/mp4encoder.js
)
else
EXTRA_FLAGS=(
-o ${DEMO_PATH}/mp4encoder.js
)
fi
FLAGS=(
-I$WORKPATH/lib/ffmpeg-emcc/include -L$WORKPATH/lib/ffmpeg-emcc/lib -I$THIRD_DIR/include -L$THIRD_DIR/lib
-Wno-deprecated-declarations -Wno-pointer-sign -Wno-implicit-int-float-conversion -Wno-switch -Wno-parentheses -Qunused-arguments
-lavdevice -lavfilter -lavformat -lavcodec -lswresample -lswscale -lavutil -lpostproc
-lm -lharfbuzz -lfribidi -lass -lx264 -lx265 -lvpx -lwavpack -lmp3lame -lfdk-aac -lvorbis -lvorbisenc -lvorbisfile -logg -ltheora -ltheoraenc -ltheoradec -lz -lfreetype -lopus -lwebp
$DEMO_PATH/encode_v.c
-s FORCE_FILESYSTEM=1
-s WASM=1
-s USE_SDL=2 # use SDL2
-s INVOKE_RUN=0 # not to run the main() in the beginning
-s EXIT_RUNTIME=1 # exit runtime after execution
-s MODULARIZE=1 # lazy loading; use the modularized version to be more flexible
-s EXPORT_NAME="createMP4Encoder" # assign export name for browser
-s EXPORTED_FUNCTIONS="[_main,_malloc,_free]" # export main plus malloc/free
-s EXPORTED_RUNTIME_METHODS="[FS, cwrap, ccall, setValue, writeAsciiToMemory]" # export preamble funcs
-s INITIAL_MEMORY=134217728 # 134217728 bytes = 128 MB
# -s ALLOW_MEMORY_GROWTH=1
# --pre-js $WORKPATH/pre.js
# --post-js $WORKPATH/post.js
$OPTIM_FLAGS
${EXTRA_FLAGS[@]}
)
echo "FFMPEG_EM_FLAGS=${FLAGS[@]}"
emcc "${FLAGS[@]}"
#ifndef EM_PORT_API
#if defined(__EMSCRIPTEN__)
#include <emscripten.h>
#if defined(__cplusplus)
#define EM_PORT_API(rettype) extern "C" rettype EMSCRIPTEN_KEEPALIVE
#else
#define EM_PORT_API(rettype) rettype EMSCRIPTEN_KEEPALIVE
#endif
#else
#if defined(__cplusplus)
#define EM_PORT_API(rettype) extern "C" rettype
#else
#define EM_PORT_API(rettype) rettype
#endif
#endif
#endif
#include "api.h"
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define FAILED 0
#define SUCCESS 1
EM_PORT_API(void)
close();
FILE* pOutFile;
AVFormatContext* pFmtCtx;
AVOutputFormat* fmt;
AVStream* video_st;
AVCodecContext* c = NULL;
struct SwsContext* swsCtx = NULL;
AVFrame* frame;
AVFrame* rgbaFrame;
AVPacket* pkt;
const AVCodec* codec;
int frameIdx = 1;
int width = 0;
int height = 0;
int framerate = 25;
EM_PORT_API(int)
openTestFile(const char* inpath) {
FILE* pFile;
char buff[255] = {0};
pFile = fopen(inpath, "r");
if (pFile != NULL)
{
fgets(buff, 255, pFile);
printf("%d read input file: %s\n",buff[0], buff);
fclose(pFile);
} else {
printf("Failed to open input file! \n");
}
return 0;
}
EM_PORT_API(int)
createH264(const char* outpath, int wid, int heig, int fps)
{
int ret;
frameIdx = 0;
width = wid;
height = heig;
framerate = fps;
av_register_all();
//init Format
avformat_alloc_output_context2(&pFmtCtx, NULL, NULL, outpath);
fmt = pFmtCtx->oformat;
if (avio_open(&pFmtCtx->pb, outpath, AVIO_FLAG_READ_WRITE) < 0)
{
printf("Failed to open output file! \n");
return -1;
}
// c = avcodec_alloc_context3(codec);
// if (!c)
// {
// fprintf(stderr, "Could not allocate video codec context\n");
// return FAILED;
// }
video_st = avformat_new_stream(pFmtCtx, 0);
c = video_st->codec;
c->codec_id = fmt->video_codec;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->bit_rate = 40000000;
c->width = width;
c->height = height;
c->time_base = (AVRational){ 1, fps };
c->framerate = (AVRational){ fps, 1 };
c->qmin = 10;
c->qmax = 51;
/* if frame->pict_type is AV_PICTURE_TYPE_I, gop_size is ignored*/
c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV420P; // libx264 does not support RGBA input
// if (pFmtCtx->oformat->flags & AVFMT_GLOBALHEADER)
// c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
// Set Option
AVDictionary* param = 0;
if (c->codec_id == AV_CODEC_ID_H264)
{
av_dict_set(&param, "preset", "slow", 0);
av_dict_set(&param, "tune", "zerolatency", 0);
}
av_dump_format(pFmtCtx, 0, outpath, 1);
codec = avcodec_find_encoder(c->codec_id);
if (!codec)
{
fprintf(stderr, "Codec '%d' not found\n", c->codec_id);
return FAILED;
}
ret = avcodec_open2(c, codec, &param);
if (ret < 0)
{
fprintf(stderr, "Could not open codec: %s %d %d \n", av_err2str(ret), c->time_base.den, c->time_base.num);
return FAILED;
}
ret = avcodec_parameters_from_context(video_st->codecpar, c);
if (ret < 0)
{
fprintf(stderr, "Failed to copy the stream parameters. "
"Error code: %s\n",
av_err2str(ret));
return FAILED;
}
pkt = av_packet_alloc();
if (!pkt)
return FAILED;
frame = av_frame_alloc();
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
ret = av_frame_get_buffer(frame, 0);
if (ret < 0)
{
fprintf(stderr, "Could not allocate the video frame data\n");
return FAILED;
}
swsCtx = sws_getCachedContext(swsCtx, width, height, AV_PIX_FMT_RGBA,
width, height, AV_PIX_FMT_YUV420P,
SWS_BICUBIC,
NULL, NULL, NULL);
avformat_write_header(pFmtCtx, NULL);
return SUCCESS;
}
static int encode_write(AVFrame* frame)
{
int ret = 0;
AVPacket enc_pkt = *pkt;
av_init_packet(&enc_pkt);
enc_pkt.data = NULL;
enc_pkt.size = 0;
if ((ret = avcodec_send_frame(c, frame)) < 0)
{
fprintf(stderr, "Error during encoding. Error code: %s\n", av_err2str(ret));
goto end;
}
while (1)
{
ret = avcodec_receive_packet(c, &enc_pkt);
if (ret)
break;
enc_pkt.stream_index = video_st->index;
ret = av_interleaved_write_frame(pFmtCtx, &enc_pkt);
if (ret < 0)
{
fprintf(stderr, "Error during writing data to output file. "
"Error code: %s\n",
av_err2str(ret));
return -1;
}
}
end:
if (ret == AVERROR_EOF)
return 0;
ret = ((ret == AVERROR(EAGAIN)) ? 0 : -1);
return ret;
}
EM_PORT_API(int)
addFrame(uint8_t* buff)
{
fflush(stdout);
if (av_frame_make_writable(frame) < 0)
{
printf("av_frame_make_writable(frame) < 0 \n");
return -1;
}
rgbaFrame = av_frame_alloc();
rgbaFrame->format = AV_PIX_FMT_RGBA;
rgbaFrame->height = height;
rgbaFrame->width = width;
avpicture_fill((AVPicture*)rgbaFrame, buff, AV_PIX_FMT_RGBA, width, height);
// the converted YUV data is stored in frame
int outSliceH = sws_scale(swsCtx, (const uint8_t* const*)rgbaFrame->data, rgbaFrame->linesize, 0, height,
frame->data, frame->linesize);
//printf("sws_scale %d %d %d %d\n",rgbaFrame->height, frame->height, height, outSliceH);
if (outSliceH <= 0)
{
printf("outSliceH <= 0 \n");
return -1;
}
frame->pts = frameIdx * (video_st->time_base.den) / ((video_st->time_base.num) * framerate);
//Encode
int ret = encode_write(frame);
return ++frameIdx;
}
EM_PORT_API(void)
close()
{
/* flush the encoder */
int ret = encode_write(NULL);
if (ret < 0)
printf("Flushing encoder failed\n");
av_write_trailer(pFmtCtx);
if (video_st)
{
avcodec_close(video_st->codec);
av_frame_free(&frame);
av_frame_free(&rgbaFrame);
av_packet_free(&pkt);
avcodec_free_context(&c);
sws_freeContext(swsCtx);
}
AVIOContext* s = pFmtCtx->pb;
if (s)
{
avio_flush(s);
s->opaque = NULL;
av_freep(&s->buffer);
av_opt_free(s);
avio_context_free(&s);
}
avformat_free_context(pFmtCtx);
}
int main(int argc, char const* argv[])
{
return 0;
}
/*
* FileSaver.js
* A saveAs() FileSaver implementation.
*
* By Eli Grey, http://eligrey.com
*
* License : https://github.com/eligrey/FileSaver.js/blob/master/LICENSE.md (MIT)
* source : http://purl.eligrey.com/github/FileSaver.js
*/
// The one and only way of getting global scope in all environments
// https://stackoverflow.com/q/3277182/1008999
var _global = typeof window === 'object' && window.window === window
? window : typeof self === 'object' && self.self === self
? self : typeof global === 'object' && global.global === global
? global
: this
function bom (blob, opts) {
if (typeof opts === 'undefined') opts = { autoBom: false }
else if (typeof opts !== 'object') {
console.warn('Deprecated: Expected third argument to be a object')
opts = { autoBom: !opts }
}
// prepend BOM for UTF-8 XML and text/* types (including HTML)
// note: your browser will automatically convert UTF-16 U+FEFF to EF BB BF
if (opts.autoBom && /^\s*(?:text\/\S*|application\/xml|\S*\/\S*\+xml)\s*;.*charset\s*=\s*utf-8/i.test(blob.type)) {
return new Blob([String.fromCharCode(0xFEFF), blob], { type: blob.type })
}
return blob
}
function download (url, name, opts) {
var xhr = new XMLHttpRequest()
xhr.open('GET', url)
xhr.responseType = 'blob'
xhr.onload = function () {
saveAs(xhr.response, name, opts)
}
xhr.onerror = function () {
console.error('could not download file')
}
xhr.send()
}
function corsEnabled (url) {
var xhr = new XMLHttpRequest()
// use sync to avoid popup blocker
xhr.open('HEAD', url, false)
try {
xhr.send()
} catch (e) {}
return xhr.status >= 200 && xhr.status <= 299
}
// `a.click()` doesn't work for all browsers (#465)
function click (node) {
try {
node.dispatchEvent(new MouseEvent('click'))
} catch (e) {
var evt = document.createEvent('MouseEvents')
evt.initMouseEvent('click', true, true, window, 0, 0, 0, 80,
20, false, false, false, false, 0, null)
node.dispatchEvent(evt)
}
}
// Detect WebView inside a native macOS app by ruling out all browsers
// We just need to check for 'Safari' because all other browsers (besides Firefox) include that too
// https://www.whatismybrowser.com/guides/the-latest-user-agent/macos
var isMacOSWebView = /Macintosh/.test(navigator.userAgent) && /AppleWebKit/.test(navigator.userAgent) && !/Safari/.test(navigator.userAgent)
var saveAs = _global.saveAs || (
// probably in some web worker
(typeof window !== 'object' || window !== _global)
? function saveAs () { /* noop */ }
// Use download attribute first if possible (#193 Lumia mobile) unless this is a macOS WebView
: ('download' in HTMLAnchorElement.prototype && !isMacOSWebView)
? function saveAs (blob, name, opts) {
var URL = _global.URL || _global.webkitURL
var a = document.createElement('a')
name = name || blob.name || 'download'
a.download = name
a.rel = 'noopener' // tabnabbing
// TODO: detect chrome extensions & packaged apps
// a.target = '_blank'
if (typeof blob === 'string') {
// Support regular links
a.href = blob
if (a.origin !== location.origin) {
corsEnabled(a.href)
? download(blob, name, opts)
: click(a, a.target = '_blank')
} else {
click(a)
}
} else {
// Support blobs
a.href = URL.createObjectURL(blob)
setTimeout(function () { URL.revokeObjectURL(a.href) }, 4E4) // 40s
setTimeout(function () { click(a) }, 0)
}
}
// Use msSaveOrOpenBlob as a second approach
: 'msSaveOrOpenBlob' in navigator
? function saveAs (blob, name, opts) {
name = name || blob.name || 'download'
if (typeof blob === 'string') {
if (corsEnabled(blob)) {
download(blob, name, opts)
} else {
var a = document.createElement('a')
a.href = blob
a.target = '_blank'
setTimeout(function () { click(a) })
}
} else {
navigator.msSaveOrOpenBlob(bom(blob, opts), name)
}
}
// Fallback to using FileReader and a popup
: function saveAs (blob, name, opts, popup) {
// Open a popup immediately do go around popup blocker
// Mostly only available on user interaction and the fileReader is async so...
popup = popup || open('', '_blank')
if (popup) {
popup.document.title =
popup.document.body.innerText = 'downloading...'
}
if (typeof blob === 'string') return download(blob, name, opts)
var force = blob.type === 'application/octet-stream'
var isSafari = /constructor/i.test(_global.HTMLElement) || _global.safari
var isChromeIOS = /CriOS\/[\d]+/.test(navigator.userAgent)
if ((isChromeIOS || (force && isSafari) || isMacOSWebView) && typeof FileReader !== 'undefined') {
// Safari doesn't allow downloading of blob URLs
var reader = new FileReader()
reader.onloadend = function () {
var url = reader.result
url = isChromeIOS ? url : url.replace(/^data:[^;]*;/, 'data:attachment/file;')
if (popup) popup.location.href = url
else location = url
popup = null // reverse-tabnabbing #460
}
reader.readAsDataURL(blob)
} else {
var URL = _global.URL || _global.webkitURL
var url = URL.createObjectURL(blob)
if (popup) popup.location = url
else location.href = url
popup = null // reverse-tabnabbing #460
setTimeout(function () { URL.revokeObjectURL(url) }, 4E4) // 40s
}
}
)
_global.saveAs = saveAs.saveAs = saveAs
if (typeof module !== 'undefined') {
module.exports = saveAs;
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>demo2</title>
</head>
<script src="./html2canvas.min.js"></script>
<script src="./fileSaver.js"></script>
<body>
<div id="root2" style="width: 1920px;height: 1080px;">
<div id="root"
style="width: 1920px;height: 1080px;background-color: sandybrown; position: absolute; left: 0; top: 0;">
<div id="move" style="width: 100px; height: 100px; background-color: red; position: absolute; left: 0; top: 0;">
</div>
</div>
</div>
<div>
<!-- <button onclick="ready()" style="font-size: large;">ready => </button> -->
<button onclick="go()" style="font-size: large;">go => </button>
<progress id="go-pgrs" value="0" max="1200"></progress>
<button onclick="download()" style="font-size: large;">download !</button>
<input type="file" value="Select file" onchange="inputJsFile(event)"></input>
<!-- <button onclick="clip()" style="font-size: large;">clip</button> -->
<div>
<span>FrameIndex: </span>
<span id="frameindex">0</span>
<span>;TotalFrame: 800</span>
<span>;width: 1920</span>
<span>;height: 1080</span>
<span>;FPS: 25</span>
<span>;CODEC: H264</span>
</div>
</div>
</body>
<script>
var MP4Encoder = {};
</script>
<script src="./mp4encoder.js"></script>
<script>
var MP4Encoder = {}
var FS = {}
var lengthBytesUTF8 = null;
var stringToUTF8 = null;
createMP4Encoder().then(m=>{
console.log(m)
MP4Encoder = m;
FS = MP4Encoder.FS
lengthBytesUTF8 = MP4Encoder.lengthBytesUTF8
stringToUTF8 = MP4Encoder.stringToUTF8
})
// const wasmInstanceFromFile = await WebAssembly.instantiateStreaming(await fetch('add.wasm'));
// let sum = wasmInstanceFromFile.instance.exports.add(1,2);
var inputJsFile = function (event) {
let file = event.target.files[0];
file.arrayBuffer().then(t=>{
console.log(t)
FS.mkdir('/working');
FS.writeFile('/working/input.txt', new Uint8Array(t), { flags:'w+' });
console.log(FS.stat('/working/input.txt'))
var buff = FS.readFile('/working/input.txt', { encoding: 'binary' });
console.log(buff)
var pStr = getCStringPtr("/working/input.txt");
var ret = MP4Encoder._openTestFile(pStr);
});
}
var getCStringPtr = function (jstr) {
var lengthBytes = lengthBytesUTF8(jstr) + 1;
var p = MP4Encoder._malloc(lengthBytes);
stringToUTF8(jstr, p, lengthBytes);
return p;
}
//file-saver
let root = document.getElementById("root");
let root2 = document.getElementById("root2");
let move = document.getElementById("move");
let idx = document.getElementById("frameindex");
let pgrs = document.getElementById("go-pgrs");
let d = 0;
let stop = false;
let isrunning = false;
var ready = function () {
console.log(MP4Encoder);
stop = false;
var pStr = getCStringPtr("/tmp/demo2.mp4");
var ret = MP4Encoder._createH264(pStr, 1920, 1080, 25);
console.log("ready =>", ret);
}
let step = async () => {
move.style.left = d + "px";
move.style.top = (d > 1080 - 100 ? (1080 - 100) * 2 - d : d) + "px";
d += 2;
idx.innerText = d;
pgrs.value = d;
let canvas = await html2canvas(root, {
x: 0,
y: 0,
width: 1920,
height: 1080,
scale: 1
});
let ctx = canvas.getContext("2d");
let imagedata = ctx.getImageData(0, 0, 1920, 1080); //rgba
let fileBuffer = new Uint8Array(imagedata.data.buffer);
let bufferPtr = MP4Encoder._malloc(fileBuffer.length);
MP4Encoder.HEAP8.set(fileBuffer, bufferPtr);
var ret = MP4Encoder._addFrame(bufferPtr);
MP4Encoder._free(bufferPtr);
};
var go = async () => {
if (isrunning) return;
isrunning = true;
ready();
for (let i = 0; i < 600; i++) {
let ret = await step();
if (ret < 0 || stop) return;
}
stop = true;
MP4Encoder._close();
};
var download = () => {
var buff = FS.readFile('/tmp/demo2.mp4', { encoding: 'binary' });
saveAs(new Blob([buff]), `demo2.mp4`);
}
var clip = async () => {
let canvas = await html2canvas(root2, {
x: 0,
y: 0,
width: 1080,
height: 720,
scale: 1
});
//var offscreen = new OffscreenCanvas(1080, 720);
let ctx = canvas.getContext("2d");
let imagedata = ctx.getImageData(0, 0, canvas.width, canvas.height); //rgba
root2.removeChild(root);
root2.appendChild(canvas);
let fileBuffer = new Uint8Array(imagedata.data.buffer);
console.log("imagedata.data.buffer", imagedata.data.buffer, fileBuffer);
};
</script>
</html>
abc,123、你好。
cmake_minimum_required(VERSION 3.10.0)
project(transcode_mp3)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_EXECUTABLE_SUFFIX ".html") # build output as .html
# pre.js and post.js paths
# set(PRE_PATH "/code")
# set(POST_PATH "/code")
include_directories("../lib/ffmpeg-emcc/include")
link_directories("../lib/ffmpeg-emcc/lib")
include_directories("../lib/third/build/include")
link_directories("../lib/third/build/lib")
include_directories(.)
aux_source_directory(. DIR)
add_executable(${PROJECT_NAME} ${DIR})
add_subdirectory(audio)
add_subdirectory(ffmbase)
# set_target_properties overwrites LINK_FLAGS on every call, so pass all emscripten options in one property value
set_target_properties(transcode_mp3 PROPERTIES LINK_FLAGS
    "-s FORCE_FILESYSTEM=1 -s WASM=1 -s USE_SDL=2 -s INVOKE_RUN=0 -s EXIT_RUNTIME=1 -s MODULARIZE=1 -s EXPORT_NAME=${PROJECT_NAME} -s EXPORTED_FUNCTIONS=[_main,_malloc,_free] -s EXPORTED_RUNTIME_METHODS=[FS,cwrap,ccall,setValue,writeAsciiToMemory] -s INITIAL_MEMORY=134217728")
# set_target_properties(transcode_mp3 PROPERTIES LINK_FLAGS "-s TOTAL_MEMORY=134217728")
# set_target_properties(transcode_mp3 PROPERTIES LINK_FLAGS "-s ALLOW_MEMORY_GROWTH=1")
# set_target_properties(transcode_mp3 PROPERTIES LINK_FLAGS "--pre-js ${PRE_PATH}/pre.js")
# set_target_properties(transcode_mp3 PROPERTIES LINK_FLAGS "--post-js ${POST_PATH}/post.js")
target_link_libraries(${PROJECT_NAME} PRIVATE audio)
target_link_libraries(${PROJECT_NAME} PRIVATE ffmbase)
target_link_libraries(${PROJECT_NAME} PRIVATE avdevice avfilter avformat avcodec avutil swresample swscale)
target_link_libraries(${PROJECT_NAME} PRIVATE postproc m harfbuzz fribidi ass x264 x265 vpx wavpack mp3lame fdk-aac vorbis vorbisenc vorbisfile ogg theora theoraenc theoradec z freetype opus webp)
target_link_libraries(${PROJECT_NAME} PRIVATE -lpthread)
#ifndef CFFPMEG_H
#define CFFPMEG_H
#ifdef __cplusplus
extern "C" {
#endif
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libavdevice/avdevice.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavfilter/avfilter.h"
#include "libavfilter/buffersink.h"
#include "libavfilter/buffersrc.h"
#include "libavutil/time.h"
#include "libavutil/imgutils.h"
#include "libavutil/opt.h"
#include "libavutil/pixdesc.h"
#include "libavutil/audio_fifo.h"
#include "libavutil/samplefmt.h"
#include "libavutil/avstring.h"
#ifdef __cplusplus
}
inline void initRegister(){
static bool isLoad = false;
if (!isLoad)
{
avcodec_register_all();
av_register_all();
avfilter_register_all();
avdevice_register_all();
isLoad = true;
}
}
#ifndef EM_PORT_API
#if defined(__EMSCRIPTEN__)
#include <emscripten.h>
#if defined(__cplusplus)
#define EM_PORT_API(rettype) extern "C" rettype EMSCRIPTEN_KEEPALIVE
#else
#define EM_PORT_API(rettype) rettype EMSCRIPTEN_KEEPALIVE
#endif
#else
#if defined(__cplusplus)
#define EM_PORT_API(rettype) extern "C" rettype
#else
#define EM_PORT_API(rettype) rettype
#endif
#endif
#endif
#endif
#endif
cmake_minimum_required(VERSION 3.10.0)
# cmake_policy(SET CMP0079 NEW)
Set(MODULE_NAME audio)
include_directories(.)
aux_source_directory(. SRCFILE)
add_library(
${MODULE_NAME}
# SHARED
STATIC
${SRCFILE}
)
#include "FFAudioDecoder.h"
#include <string>
#include <thread>
namespace FFM {
void show_dshow_device_option(const char* cameraName, int index) {
std::string name = "video=" + std::string(cameraName);
AVInputFormat *iformat = av_find_input_format("dshow");
AVFormatContext *pFormatCtx = avformat_alloc_context();
AVDictionary* options = nullptr;
av_dict_set(&options, "list_devices", "true", 0);
avformat_open_input(&pFormatCtx, "video=dummy", iformat, &options);
avformat_close_input(&pFormatCtx);
avformat_free_context(pFormatCtx);
AVFormatContext *_pFormatCtx = avformat_alloc_context();
// AVDictionary: key = rtbufsize, value = 18432000
AVDictionary *format_opts = nullptr;
//av_dict_set_int(&format_opts, "video_device_number", 0, 0);
int ret = avformat_open_input(&_pFormatCtx, name.c_str(), iformat, &format_opts);
if (ret < 0)
{
printf("Open camera: %s failed\n", cameraName);
avformat_close_input(&_pFormatCtx);
avformat_free_context(_pFormatCtx);
return;
}
avformat_close_input(&_pFormatCtx);
avformat_free_context(_pFormatCtx);
}
FFAudioDecoder::FFAudioDecoder(std::string &fileName) :
FFMBase(fileName)
{
FFMBase::m_fileName = fileName;
initRegister();
if (!openFile(fileName)) {
m_stream_index = findStream(m_pFormatCtx);
openCodec(m_pFormatCtx, m_stream_index);
}
}
FFAudioDecoder::~FFAudioDecoder()
{
if (m_pFrame) {
av_frame_free(&m_pFrame);
}
printf("~FFAudioDecoder()\n");
}
void FFAudioDecoder::startDecoder()
{
std::thread t([this]() {
decodeAudio();
});
t.join();
}
int FFAudioDecoder::decode(int *gotframe)
{
int ret = -1;
*gotframe = 0;
if (m_pPkt) {
ret = avcodec_send_packet(m_pCodecCtx, m_pPkt);
if (ret < 0 && ret != AVERROR_EOF)
return ret;
}
ret = avcodec_receive_frame(m_pCodecCtx, m_pFrame);
if (ret < 0 && ret != AVERROR(EAGAIN)) {
return ret;
}
if (ret >= 0) {
*gotframe = 1;
if (m_pPkt->pts == AV_NOPTS_VALUE)
{
printf("AV_NOPTS_VALUE\n");
if (m_pPkt->pts == AV_NOPTS_VALUE) {
//Write PTS
AVRational time_base1 = m_pCodecCtx->time_base;
//Duration between 2 frames (us)
int64_t calc_duration = (double)AV_TIME_BASE / av_q2d(m_pFormatCtx->streams[m_stream_index]->r_frame_rate);
printf("calc_duration: %ld\n", calc_duration);
//Parameters
m_pPkt->pts = (double)(m_frameIndex*calc_duration) / (double)(av_q2d(time_base1)*AV_TIME_BASE);
m_pPkt->dts = m_pPkt->pts;
m_pPkt->duration = (double)calc_duration / (double)(av_q2d(time_base1)*AV_TIME_BASE);
m_frameIndex++;
}
}
}
return ret;
}
int FFAudioDecoder::decodeAudio()
{
m_pFrame = av_frame_alloc();
m_pPkt = (AVPacket *)av_malloc(sizeof(AVPacket));
av_init_packet(m_pPkt);
m_pPkt->data = nullptr;
m_pPkt->size = 0;
uint64_t start_time = av_gettime();
int ret = -1;
int count = 0;
for (;;)
{
int gotframe = 0;
++count;
ret = av_read_frame(m_pFormatCtx, m_pPkt);
if (ret < 0) {
av_packet_unref(m_pPkt);
break;
}
else{
if (m_pPkt->stream_index == m_stream_index) {
ret = decode(&gotframe);
if (gotframe){
if (m_audioEncoder) {
m_audioEncoder->encodeAudio(m_pCodecCtx, m_pFrame, start_time);
//printf("linesize: %d, nb_samples: %d \n", m_pFrame->linesize, m_pFrame->nb_samples);
}
}
if (ret == AVERROR_EOF) {
break;
}
//ret = avcodec_send_packet(m_pCodecCtx, m_pPkt);
//if (ret < 0 && ret != AVERROR_EOF) {
// printf("decoding error.\n");
// continue;
//}
//
//printf("-->>> m_pkt.size: %d\n", m_pPkt->size);
//while (avcodec_receive_frame(m_pCodecCtx, m_pFrame) >= 0)
//{
// //printf("decoding.\n");
// if (m_audioEncoder) {
// m_audioEncoder->encodeAudio(m_pCodecCtx, m_pFrame, start_time);
// //printf("linesize: %d, nb_samples: %d \n", m_pFrame->linesize, m_pFrame->nb_samples);
// }
// /*if (FrameCallBack) {
// ret = FrameCallBack(data, m_swr->getBufferSize());
// printf("bufsize:%5d\t pts:%lld\t packet size:%d\n", m_swr->getBufferSize(), m_pPkt->pts, m_pPkt->size);
// if (ret < 0) {
// return ret;
// }
// }*/
//
//}
}
}
av_packet_unref(m_pPkt);
}
printf("------------------count=%d\n", count);
/*if (m_audioEncoder) {
m_audioEncoder->closeEncoder();
}*/
return 0;
}
}
#ifndef FFAUDIODECODER_H
#define FFAUDIODECODER_H
#include <functional>
#include <memory>
#include "ffmbase/FFMBase.h"
#include "FFAudioEncoder.h"
namespace FFM {
void show_dshow_device_option(const char* cameraName, int index = 0);
using getDecoderFrame = int(uint8_t **, int);
class FFAudioDecoder final : public FFMBase
{
public:
FFAudioDecoder(std::string &fileName);
virtual ~FFAudioDecoder();
public:
void setFrameCallBack(getDecoderFrame cb) {
FrameCallBack = cb;
}
void setFFAudioEncoder(FFAudioEncoder *encoder) {
m_audioEncoder = encoder;
}
AVStream *getInSream() {
return m_pFormatCtx->streams[m_stream_index];
}
virtual int getStreamIndex()
{
return m_stream_index;
}
void startDecoder();
private:
int decode(int *gotframe);
int decodeAudio();
private:
AVPacket *m_pPkt;
AVFrame *m_pFrame;
int m_stream_index = -1;
std::function<getDecoderFrame> FrameCallBack = nullptr;
FFAudioEncoder *m_audioEncoder = nullptr;
int m_frameIndex = 0;
};
}
#endif
#ifndef FFAUDOENCODER_H
#define FFAUDOENCODER_H
#include "Cffmpeg.h"
#include <string>
/*
.ogg DE y
.opus E n
.aac DE y
.mp3 DE y
.flac DE y
.ape D n
.wav DE y
.m4a D y
.oga E y
.mid n
.webm DE n
.weba n
.amr DE n
.au DE y
.wma D y
.aiff DE y
*/
namespace FFM
{
const int MAX_FRAME_SIZE = 19200;
//ret < 0, break;
class FFAudioSwr
{
public:
FFAudioSwr(AVStream *inSt, AVStream *outSt);
~FFAudioSwr();
struct SwrContext *getSwrContext(AVFrame *inFrame) {
return m_pSwrCtx;
}
int SwrAudioFrame(AVFrame *inframe, AVFrame *outframe, uint64_t starttime);
int addFrame(AVFrame *frame);
AVFrame *getFrame();
void unrefFrame();
uint8_t **getOutBuffer() {
return m_out_buffer;
}
int getBufferSize() {
return m_buffer_size;
}
AVAudioFifo* getFifo() {
return m_fifo;
}
void setLastTime(uint64_t time) {
m_lastTime = time;
}
int64_t getLastTime() {
return m_lastTime;
}
int64_t getCurTime() {
return m_curTime;
}
AVRational getTimeBase() {
return m_timebase;
}
void close();
private:
int init_parameters(AVFrame *frame);
int init_swr(AVFrame *frame);
int addFrameToFifo(AVFrame *frame);
int convertFrameByFifo();
private:
AVCodecContext *m_pInCodecCtx = nullptr;
AVCodecContext *m_pOutCodecCtx = nullptr;
struct SwrContext *m_pSwrCtx = nullptr;
AVFrame *m_pFrame = nullptr;
AVFrame *m_pConvertFrame = nullptr;
int m_channel_layout = AV_CH_LAYOUT_STEREO;
int m_nb_samples = 0;
AVSampleFormat m_sample_fmt = AV_SAMPLE_FMT_FLTP;
int m_in_sample_rate = 44100;
int m_out_sample_rate = 0;
int m_samples_count = 0;
int m_buffer_size = 0;
uint8_t **m_out_buffer = nullptr;
AVAudioFifo *m_fifo = nullptr;
private:
int64_t m_lastTime = 0;
int64_t m_curTime = 0;
AVRational m_timebase;
bool m_isInit = false;
};
class FFAudioFilter
{
public:
FFAudioFilter(AVStream *inSt, AVStream *outSt);
~FFAudioFilter();
public:
void setStartTimeOnce(uint64_t time){
static bool isSet = false;
if (!isSet) {
m_starttime = time;
isSet = true;
}
}
int addFrame(AVFrame *frame);
int getFrame(AVFrame** filter_frame);
void unrefPktAndFrame();
private:
int init();
private:
AVStream *m_inStream = nullptr;
AVStream *m_outStream = nullptr;
private:
AVFilterContext *m_buffsinkCtx = nullptr;
AVFilterContext *m_buffSrcCtx = nullptr;
AVFilterContext *m_lastFilter = nullptr;
AVFilterGraph *m_filterGraph = nullptr;
std::string m_filter_spec = "anull";
AVPacket *m_enPkt = nullptr;
AVFrame *m_filterFrame = nullptr;
AVFilterInOut *m_out = nullptr;
AVFilterInOut *m_in = nullptr;
private:
uint64_t m_starttime = 0;
};
class FFAudioEncoder
{
public:
FFAudioEncoder(std::string &fileName, AVStream *inSt);
~FFAudioEncoder();
public:
void closeEncoder();
int encodeAudio(AVCodecContext *inCtx, AVFrame *frame, uint64_t starttime);
void printDuration(AVStream *fmtCtx);
AVStream *getOutStream() {
return m_pStream;
}
void setSwrconvert(FFAudioSwr *swr) {
m_swr = swr;
}
void setFilter(FFAudioFilter *filter) {
m_filter = filter;
}
private:
int openFile(std::string &fileName);
int openCodec();
private:
AVStream *m_pInStream = nullptr;
AVFormatContext *m_pFormatCtx = nullptr;
AVOutputFormat* m_pOutFmt = nullptr;
AVCodecContext *m_pCodecCtx = nullptr;
AVCodec *m_pCodec = nullptr;
AVStream *m_pStream = nullptr;
uint8_t *m_frameBuf = nullptr;
int m_bufSize = 0;
std::string m_fileName = "";
FFAudioSwr *m_swr = nullptr;
FFAudioFilter *m_filter = nullptr;
private:
AVRational m_mux_timebase;
int64_t m_lastTime = -1;
int m_frameIndex = 0;
};
}
#endif
cmake.exe -G "Visual Studio 15 2017" -A x64 "-DCMAKE_TOOLCHAIN_FILE=D:\Vcpkg\vcpkg\scripts\buildsystems\vcpkg.cmake" ..
cmake_minimum_required(VERSION 3.10.0)
# cmake_policy(SET CMP0079 NEW)
Set(MODULE_NAME ffmbase)
include_directories(.)
aux_source_directory(. SRCFILE)
add_library(
${MODULE_NAME}
# SHARED
STATIC
${SRCFILE}
)
#include "FFMBase.h"
namespace FFM
{
FFMBase::FFMBase(std::string &fileName)
{
}
FFMBase::~FFMBase()
{
if (m_pCodecCtx) {
avcodec_close(m_pCodecCtx);
}
if (m_pFormatCtx) {
avformat_close_input(&m_pFormatCtx);
}
printf("~FFMBase()\n");
}
int FFMBase::openFile(std::string &fileName)
{
AVDictionary* options = nullptr;
AVInputFormat *iformat = nullptr;
if (strstr(fileName.c_str(), std::string("video=").c_str()) || strstr(fileName.c_str(), std::string("audio=").c_str()))
{
iformat = av_find_input_format("dshow");
//av_dict_set_int(&options, "audio_device_number", 0, 0);
//av_dict_set_int(&options, "audio_buffer_size", 20, 0);
}
int ret = -1;
m_pFormatCtx = avformat_alloc_context();
if ((ret = avformat_open_input(&m_pFormatCtx, fileName.c_str(), iformat, &options)) != 0) {
printf("Open input %s stream failed.\n", fileName.c_str());
return ret;
}
else {
printf("Open input %s stream.\n", fileName.c_str());
}
return 0;
}
int FFMBase::findStream(AVFormatContext *fmtCtx)
{
// Retrieve stream information
if (avformat_find_stream_info(fmtCtx, nullptr) < 0) {
printf("Couldn't find stream information.\n");
return -1;
}
int stream = -1;
for (int i = 0; i < fmtCtx->nb_streams; i++)
{
if (fmtCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
|| fmtCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
stream = i;
break;
}
}
return stream;
}
int FFMBase::openCodec(AVFormatContext *fmtCtx, int stream)
{
//m_pCodecCtx = fmtCtx->streams[stream]->codec;
int ret = -1;
// find a decoder for the input stream
m_pCodec = avcodec_find_decoder(fmtCtx->streams[stream]->codec->codec_id);
if (m_pCodec == nullptr) {
printf("Codec not found.\n");
return ret;
}
// allocate the decoder context for the input stream
m_pCodecCtx = avcodec_alloc_context3(m_pCodec);
if (!m_pCodecCtx) {
printf("avcodec_alloc_context3 failed.\n");
return ret;
}
/*int suffix_size = m_fileName.find_last_of('.');
std::string suffix = m_fileName.substr(suffix_size, m_fileName.size());*/
//if (suffix == ".wav") {
//}
//else if (suffix == ".aac") {
//}
m_pCodecCtx->pkt_timebase = fmtCtx->streams[stream]->time_base; // avoids: Could not update timestamps for skipped samples.
m_pCodecCtx->bit_rate = 128000;
avcodec_parameters_to_context(m_pCodecCtx, fmtCtx->streams[stream]->codecpar);
fmtCtx->streams[stream]->codec = m_pCodecCtx;
fmtCtx->streams[stream]->duration = m_pFormatCtx->duration;
printf("**************************************************************\n");
printf("Audio decode name: %s\n", m_pCodecCtx->codec->long_name);
printf("decode channels: %d, sample_rate: %d, sample_fmt: %d, bit_rate: %d\n", m_pCodecCtx->channels, \
m_pCodecCtx->sample_rate, m_pCodecCtx->sample_fmt, m_pCodecCtx->bit_rate);
printf("**************************************************************\n");
// Open codec
if (avcodec_open2(m_pCodecCtx, m_pCodec, nullptr) < 0) {
printf("Could not open codec.\n");
return ret;
}
av_dump_format(fmtCtx, 0, m_fileName.c_str(), 0);
//avcodec_parameters_to_context(m_pCodecCtx, fmtCtx->streams[stream]->codecpar);
return 0;
}
}
#ifndef FFMBASE_H
#define FFMBASE_H
#include "Cffmpeg.h"
#include <string>
#include <functional>
#include <memory>
namespace FFM
{
template <typename T>
using smart_ptr = std::unique_ptr<T, std::function<void(void *)>>;
class FFMBase
{
public:
FFMBase(std::string &fileName);
virtual ~FFMBase();
virtual AVFormatContext *getAVFormatContext(){
return m_pFormatCtx;
}
virtual AVCodecContext *getAVCodecContext() {
return m_pCodecCtx;
}
virtual AVCodec *getAVCodec() {
return m_pCodec;
}
virtual int getStreamIndex() = 0;
protected:
virtual int openFile(std::string &fileName);
virtual int findStream(AVFormatContext *fmtCtx);
virtual int openCodec(AVFormatContext *fmtCtx, int stream);
protected:
AVFormatContext *m_pFormatCtx;
AVCodecContext *m_pCodecCtx;
AVCodec *m_pCodec = nullptr;
std::string m_fileName = "";
};
}
#endif // !FFMBASE_H
#include <iostream>
#include <string>
#include <map>
#include "Cffmpeg.h"
#include "audio/FFAudioDecoder.h"
#include "audio/FFAudioEncoder.h"
using namespace FFM;
void ffmpeg_init()
{
initRegister();
}
void test_device()
{
//avcodec_find_encoder_by_name();
const char *dev = "@device_pnp_\\\\?\\usb#vid_1bcf&pid_2283&mi_00#6&137b13e&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\\global";
const char *name = "Full HD webcam";
FFM::show_dshow_device_option(name);
}
int audio_frame_callBack(uint8_t **data, int size)
{
printf("[0]-----%d-----\n", data[0]);
printf("[1]-----%d-----\n", data[1]);
return 0;
}
EM_PORT_API(int) tanscode_mp3(std::string &in_file, std::string &out_file)
{
FFAudioDecoder decoder(in_file);
FFAudioEncoder encoder(out_file, decoder.getInSream());
//FFAudioSwr swr(decoder.getInSream(), encoder.getOutStream());
//encoder.setSwrconvert(&swr);
FFAudioFilter filter(decoder.getInSream(), encoder.getOutStream());
encoder.setFilter(&filter);
decoder.setFFAudioEncoder(&encoder);
decoder.setFrameCallBack(audio_frame_callBack);
decoder.startDecoder();
return 0;
}
int main(int argc, char *argv[])
{
if (argc < 3) {
printf("missing input and output file parameters\n");
return -1;
}
// oga/ogg conversion test
std::string name = argv[1];
std::string name1 = argv[2];
std::string dev = "video=Full HD webcam";
return 0;
}
/**
* Replace Module['quit'] to avoid process.exit();
*
* @ref: https://github.com/Kagami/ffmpeg.js/blob/v4.2.9003/build/pre.js#L48
*/
Module['quit'] = function(status) {
if (Module["onExit"]) Module["onExit"](status);
throw new ExitStatus(status);
}
Module['exit'] = exit;
Module["lengthBytesUTF8"] = lengthBytesUTF8;
Module["stringToUTF8"] = stringToUTF8;
/**
* Disable all console output, might need to enable it
* for debugging
*/
out = err = function() {}
/*
* AC-3 parser prototypes
* Copyright (c) 2003 Fabrice Bellard
* Copyright (c) 2003 Michael Niedermayer
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_AC3_PARSER_H
#define AVCODEC_AC3_PARSER_H
#include <stddef.h>
#include <stdint.h>
/**
* Extract the bitstream ID and the frame size from AC-3 data.
*/
int av_ac3_parse_header(const uint8_t *buf, size_t size,
uint8_t *bitstream_id, uint16_t *frame_size);
#endif /* AVCODEC_AC3_PARSER_H */
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_ADTS_PARSER_H
#define AVCODEC_ADTS_PARSER_H
#include <stddef.h>
#include <stdint.h>
#define AV_AAC_ADTS_HEADER_SIZE 7
/**
* Extract the number of samples and frames from AAC data.
* @param[in] buf pointer to AAC data buffer
* @param[out] samples Pointer to where number of samples is written
* @param[out] frames Pointer to where number of frames is written
* @return Returns 0 on success, error code on failure.
*/
int av_adts_header_parse(const uint8_t *buf, uint32_t *samples,
uint8_t *frames);
#endif /* AVCODEC_ADTS_PARSER_H */
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_AVDCT_H
#define AVCODEC_AVDCT_H
#include "libavutil/opt.h"
/**
* AVDCT context.
* @note function pointers can be NULL if the specific features have been
* disabled at build time.
*/
typedef struct AVDCT {
const AVClass *av_class;
void (*idct)(int16_t *block /* align 16 */);
/**
* IDCT input permutation.
* Several optimized IDCTs need a permutated input (relative to the
* normal order of the reference IDCT).
* This permutation must be performed before the idct_put/add.
* Note, normally this can be merged with the zigzag/alternate scan<br>
* An example to avoid confusion:
* - (->decode coeffs -> zigzag reorder -> dequant -> reference IDCT -> ...)
* - (x -> reference DCT -> reference IDCT -> x)
* - (x -> reference DCT -> simple_mmx_perm = idct_permutation
* -> simple_idct_mmx -> x)
* - (-> decode coeffs -> zigzag reorder -> simple_mmx_perm -> dequant
* -> simple_idct_mmx -> ...)
*/
uint8_t idct_permutation[64];
void (*fdct)(int16_t *block /* align 16 */);
/**
* DCT algorithm.
* must use AVOptions to set this field.
*/
int dct_algo;
/**
* IDCT algorithm.
* must use AVOptions to set this field.
*/
int idct_algo;
void (*get_pixels)(int16_t *block /* align 16 */,
const uint8_t *pixels /* align 8 */,
ptrdiff_t line_size);
int bits_per_sample;
void (*get_pixels_unaligned)(int16_t *block /* align 16 */,
const uint8_t *pixels,
ptrdiff_t line_size);
} AVDCT;
/**
* Allocates a AVDCT context.
* This needs to be initialized with avcodec_dct_init() after optionally
* configuring it with AVOptions.
*
* To free it use av_free()
*/
AVDCT *avcodec_dct_alloc(void);
int avcodec_dct_init(AVDCT *);
const AVClass *avcodec_dct_get_class(void);
#endif /* AVCODEC_AVDCT_H */
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_AVFFT_H
#define AVCODEC_AVFFT_H
/**
* @file
* @ingroup lavc_fft
* FFT functions
*/
/**
* @defgroup lavc_fft FFT functions
* @ingroup lavc_misc
*
* @{
*/
typedef float FFTSample;
typedef struct FFTComplex {
FFTSample re, im;
} FFTComplex;
typedef struct FFTContext FFTContext;
/**
* Set up a complex FFT.
* @param nbits log2 of the length of the input array
* @param inverse if 0 perform the forward transform, if 1 perform the inverse
*/
FFTContext *av_fft_init(int nbits, int inverse);
/**
* Do the permutation needed BEFORE calling ff_fft_calc().
*/
void av_fft_permute(FFTContext *s, FFTComplex *z);
/**
* Do a complex FFT with the parameters defined in av_fft_init(). The
* input data must be permuted before. No 1.0/sqrt(n) normalization is done.
*/
void av_fft_calc(FFTContext *s, FFTComplex *z);
void av_fft_end(FFTContext *s);
FFTContext *av_mdct_init(int nbits, int inverse, double scale);
void av_imdct_calc(FFTContext *s, FFTSample *output, const FFTSample *input);
void av_imdct_half(FFTContext *s, FFTSample *output, const FFTSample *input);
void av_mdct_calc(FFTContext *s, FFTSample *output, const FFTSample *input);
void av_mdct_end(FFTContext *s);
/* Real Discrete Fourier Transform */
enum RDFTransformType {
DFT_R2C,
IDFT_C2R,
IDFT_R2C,
DFT_C2R,
};
typedef struct RDFTContext RDFTContext;
/**
* Set up a real FFT.
* @param nbits log2 of the length of the input array
* @param trans the type of transform
*/
RDFTContext *av_rdft_init(int nbits, enum RDFTransformType trans);
void av_rdft_calc(RDFTContext *s, FFTSample *data);
void av_rdft_end(RDFTContext *s);
/* Discrete Cosine Transform */
typedef struct DCTContext DCTContext;
enum DCTTransformType {
DCT_II = 0,
DCT_III,
DCT_I,
DST_I,
};
/**
* Set up DCT.
*
* @param nbits size of the input array:
* (1 << nbits) for DCT-II, DCT-III and DST-I
* (1 << nbits) + 1 for DCT-I
* @param type the type of transform
*
* @note the first element of the input of DST-I is ignored
*/
DCTContext *av_dct_init(int nbits, enum DCTTransformType type);
void av_dct_calc(DCTContext *s, FFTSample *data);
void av_dct_end (DCTContext *s);
/**
* @}
*/
#endif /* AVCODEC_AVFFT_H */
/*
* Codec descriptors public API
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_CODEC_DESC_H
#define AVCODEC_CODEC_DESC_H
#include "libavutil/avutil.h"
#include "codec_id.h"
/**
* @addtogroup lavc_core
* @{
*/
/**
* This struct describes the properties of a single codec described by an
* AVCodecID.
* @see avcodec_descriptor_get()
*/
typedef struct AVCodecDescriptor {
enum AVCodecID id;
enum AVMediaType type;
/**
* Name of the codec described by this descriptor. It is non-empty and
* unique for each codec descriptor. It should contain alphanumeric
* characters and '_' only.
*/
const char *name;
/**
* A more descriptive name for this codec. May be NULL.
*/
const char *long_name;
/**
* Codec properties, a combination of AV_CODEC_PROP_* flags.
*/
int props;
/**
* MIME type(s) associated with the codec.
* May be NULL; if not, a NULL-terminated array of MIME types.
* The first item is always non-NULL and is the preferred MIME type.
*/
const char *const *mime_types;
/**
* If non-NULL, an array of profiles recognized for this codec.
* Terminated with FF_PROFILE_UNKNOWN.
*/
const struct AVProfile *profiles;
} AVCodecDescriptor;
/**
* Codec uses only intra compression.
* Video and audio codecs only.
*/
#define AV_CODEC_PROP_INTRA_ONLY (1 << 0)
/**
* Codec supports lossy compression. Audio and video codecs only.
* @note a codec may support both lossy and lossless
* compression modes
*/
#define AV_CODEC_PROP_LOSSY (1 << 1)
/**
* Codec supports lossless compression. Audio and video codecs only.
*/
#define AV_CODEC_PROP_LOSSLESS (1 << 2)
/**
* Codec supports frame reordering. That is, the coded order (the order in which
* the encoded packets are output by the encoders / stored / input to the
* decoders) may be different from the presentation order of the corresponding
* frames.
*
* For codecs that do not have this property set, PTS and DTS should always be
* equal.
*/
#define AV_CODEC_PROP_REORDER (1 << 3)
/**
 * Subtitle codec is bitmap based.
* Decoded AVSubtitle data can be read from the AVSubtitleRect->pict field.
*/
#define AV_CODEC_PROP_BITMAP_SUB (1 << 16)
/**
* Subtitle codec is text based.
* Decoded AVSubtitle data can be read from the AVSubtitleRect->ass field.
*/
#define AV_CODEC_PROP_TEXT_SUB (1 << 17)
/**
* @return descriptor for given codec ID or NULL if no descriptor exists.
*/
const AVCodecDescriptor *avcodec_descriptor_get(enum AVCodecID id);
/**
* Iterate over all codec descriptors known to libavcodec.
*
* @param prev previous descriptor. NULL to get the first descriptor.
*
* @return next descriptor or NULL after the last descriptor
*/
const AVCodecDescriptor *avcodec_descriptor_next(const AVCodecDescriptor *prev);
/**
* @return codec descriptor with the given name or NULL if no such descriptor
* exists.
*/
const AVCodecDescriptor *avcodec_descriptor_get_by_name(const char *name);
/**
* @}
*/
#endif // AVCODEC_CODEC_DESC_H
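
As a quick illustration of the lookup/iteration functions above (a sketch, not project code), the descriptor table can be walked like this:

```c
#include <stdio.h>
#include <libavcodec/codec_desc.h>

/* Sketch: print every codec descriptor known to this libavcodec build. */
static void list_codec_descriptors(void)
{
    const AVCodecDescriptor *desc = NULL;          /* NULL starts the iteration */
    while ((desc = avcodec_descriptor_next(desc))) {
        printf("%-24s %s\n", desc->name,
               desc->long_name ? desc->long_name : "");
    }
}
```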
/*
* Codec parameters public API
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_CODEC_PAR_H
#define AVCODEC_CODEC_PAR_H
#include <stdint.h>
#include "libavutil/avutil.h"
#include "libavutil/rational.h"
#include "libavutil/pixfmt.h"
#include "codec_id.h"
/**
 * @addtogroup lavc_core
 * @{
 */
enum AVFieldOrder {
AV_FIELD_UNKNOWN,
AV_FIELD_PROGRESSIVE,
    AV_FIELD_TT, //< Top coded first, top displayed first
AV_FIELD_BB, //< Bottom coded first, bottom displayed first
AV_FIELD_TB, //< Top coded first, bottom displayed first
AV_FIELD_BT, //< Bottom coded first, top displayed first
};
/**
* This struct describes the properties of an encoded stream.
*
* sizeof(AVCodecParameters) is not a part of the public ABI, this struct must
* be allocated with avcodec_parameters_alloc() and freed with
* avcodec_parameters_free().
*/
typedef struct AVCodecParameters {
/**
* General type of the encoded data.
*/
enum AVMediaType codec_type;
/**
* Specific type of the encoded data (the codec used).
*/
enum AVCodecID codec_id;
/**
* Additional information about the codec (corresponds to the AVI FOURCC).
*/
uint32_t codec_tag;
/**
* Extra binary data needed for initializing the decoder, codec-dependent.
*
* Must be allocated with av_malloc() and will be freed by
* avcodec_parameters_free(). The allocated size of extradata must be at
* least extradata_size + AV_INPUT_BUFFER_PADDING_SIZE, with the padding
* bytes zeroed.
*/
uint8_t *extradata;
/**
* Size of the extradata content in bytes.
*/
int extradata_size;
/**
* - video: the pixel format, the value corresponds to enum AVPixelFormat.
* - audio: the sample format, the value corresponds to enum AVSampleFormat.
*/
int format;
/**
* The average bitrate of the encoded data (in bits per second).
*/
int64_t bit_rate;
/**
 * The number of bits per sample in the codewords.
 *
 * This is basically the bitrate per sample. A number of formats require it to
 * be set in order to decode at all. It is the number of bits for one sample
 * in the actual coded bitstream.
 *
 * This could be, for example, 4 for ADPCM.
 * For PCM formats this matches bits_per_raw_sample.
 * Can be 0.
*/
int bits_per_coded_sample;
/**
* This is the number of valid bits in each output sample. If the
* sample format has more bits, the least significant bits are additional
* padding bits, which are always 0. Use right shifts to reduce the sample
* to its actual size. For example, audio formats with 24 bit samples will
* have bits_per_raw_sample set to 24, and format set to AV_SAMPLE_FMT_S32.
 * To get the original sample use "(int32_t)sample >> 8".
 *
 * For ADPCM this might be 12 or 16 or similar.
 * Can be 0.
*/
int bits_per_raw_sample;
/**
* Codec-specific bitstream restrictions that the stream conforms to.
*/
int profile;
int level;
/**
* Video only. The dimensions of the video frame in pixels.
*/
int width;
int height;
/**
* Video only. The aspect ratio (width / height) which a single pixel
* should have when displayed.
*
* When the aspect ratio is unknown / undefined, the numerator should be
* set to 0 (the denominator may have any value).
*/
AVRational sample_aspect_ratio;
/**
* Video only. The order of the fields in interlaced video.
*/
enum AVFieldOrder field_order;
/**
* Video only. Additional colorspace characteristics.
*/
enum AVColorRange color_range;
enum AVColorPrimaries color_primaries;
enum AVColorTransferCharacteristic color_trc;
enum AVColorSpace color_space;
enum AVChromaLocation chroma_location;
/**
* Video only. Number of delayed frames.
*/
int video_delay;
/**
* Audio only. The channel layout bitmask. May be 0 if the channel layout is
* unknown or unspecified, otherwise the number of bits set must be equal to
* the channels field.
*/
uint64_t channel_layout;
/**
* Audio only. The number of audio channels.
*/
int channels;
/**
* Audio only. The number of audio samples per second.
*/
int sample_rate;
/**
* Audio only. The number of bytes per coded audio frame, required by some
* formats.
*
* Corresponds to nBlockAlign in WAVEFORMATEX.
*/
int block_align;
/**
* Audio only. Audio frame size, if known. Required by some formats to be static.
*/
int frame_size;
/**
* Audio only. The amount of padding (in samples) inserted by the encoder at
* the beginning of the audio. I.e. this number of leading decoded samples
* must be discarded by the caller to get the original audio without leading
* padding.
*/
int initial_padding;
/**
* Audio only. The amount of padding (in samples) appended by the encoder to
* the end of the audio. I.e. this number of decoded samples must be
* discarded by the caller from the end of the stream to get the original
* audio without any trailing padding.
*/
int trailing_padding;
/**
* Audio only. Number of samples to skip after a discontinuity.
*/
int seek_preroll;
} AVCodecParameters;
/**
* Allocate a new AVCodecParameters and set its fields to default values
* (unknown/invalid/0). The returned struct must be freed with
* avcodec_parameters_free().
*/
AVCodecParameters *avcodec_parameters_alloc(void);
/**
* Free an AVCodecParameters instance and everything associated with it and
* write NULL to the supplied pointer.
*/
void avcodec_parameters_free(AVCodecParameters **par);
/**
* Copy the contents of src to dst. Any allocated fields in dst are freed and
* replaced with newly allocated duplicates of the corresponding fields in src.
*
* @return >= 0 on success, a negative AVERROR code on failure.
*/
int avcodec_parameters_copy(AVCodecParameters *dst, const AVCodecParameters *src);
/**
* @}
*/
#endif // AVCODEC_CODEC_PAR_H
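
A minimal sketch of the documented lifecycle (allocate, fill, copy, free); the field values below are arbitrary placeholders, not settings used by this project:

```c
#include <libavutil/error.h>
#include <libavcodec/codec_par.h>

/* Sketch: allocate two parameter sets, copy one into the other, free both. */
static int codec_par_copy_demo(void)
{
    AVCodecParameters *src = avcodec_parameters_alloc();
    AVCodecParameters *dst = avcodec_parameters_alloc();
    int ret = (src && dst) ? 0 : AVERROR(ENOMEM);

    if (ret >= 0) {
        src->codec_type = AVMEDIA_TYPE_VIDEO;      /* placeholder values */
        src->codec_id   = AV_CODEC_ID_H264;
        src->width      = 1920;
        src->height     = 1080;
        ret = avcodec_parameters_copy(dst, src);   /* replaces dst's allocated fields */
    }

    avcodec_parameters_free(&src);                 /* also writes NULL to the pointer */
    avcodec_parameters_free(&dst);
    return ret;
}
```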
/*
* JNI public API functions
*
* Copyright (c) 2015-2016 Matthieu Bouron <matthieu.bouron stupeflix.com>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_JNI_H
#define AVCODEC_JNI_H
/*
* Manually set a Java virtual machine which will be used to retrieve the JNI
 * environment. Once a Java VM is set it cannot be changed afterwards; you may
 * call av_jni_set_java_vm multiple times with the same Java VM pointer, but it
 * will error out if you try to set a different Java VM.
*
* @param vm Java virtual machine
* @param log_ctx context used for logging, can be NULL
* @return 0 on success, < 0 otherwise
*/
int av_jni_set_java_vm(void *vm, void *log_ctx);
/*
* Get the Java virtual machine which has been set with av_jni_set_java_vm.
*
 * @param log_ctx context used for logging, can be NULL
* @return a pointer to the Java virtual machine
*/
void *av_jni_get_java_vm(void *log_ctx);
#endif /* AVCODEC_JNI_H */
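
This API is mainly relevant to Android/mediacodec builds rather than the wasm build here. A minimal set-once sketch, where the vm pointer is a placeholder supplied by the host application:

```c
#include <libavcodec/jni.h>

/* Sketch: register the process-wide JavaVM once, early at startup.
 * "vm" is a placeholder; in practice it comes from JNI_OnLoad / GetJavaVM(). */
static int register_java_vm(void *vm)
{
    /* NULL log context; calling again with the SAME vm is allowed,
     * while a different vm makes the call return an error. */
    return av_jni_set_java_vm(vm, NULL);
}
```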
/* Generated by ffmpeg configure */
#ifndef AVUTIL_AVCONFIG_H
#define AVUTIL_AVCONFIG_H
#define AV_HAVE_BIGENDIAN 0
#define AV_HAVE_FAST_UNALIGNED 1
#endif /* AVUTIL_AVCONFIG_H */
/* Automatically generated by version.sh, do not manually edit! */
#ifndef AVUTIL_FFVERSION_H
#define AVUTIL_FFVERSION_H
#define FFMPEG_VERSION "v0.10.0-27-g8f39fb6c8a"
#endif /* AVUTIL_FFVERSION_H */