A simple FFmpeg-based video player for Android: implementing the player with three threads (final part)

2024-05-11 06:32

This article describes a simple FFmpeg-based video player for Android, implemented with three threads. Hopefully it serves as a useful reference for developers facing the same problems.

I've spent more than a week on this player, moving from two threads to three. The relationships between the threads are complicated and took a long time to get right, and my C++ is honestly limited, so a lot of things were unfamiliar. I finally have something presentable, with no serious bugs that I know of, although the code is still messy and the audio/video synchronization approach isn't great. I'll optimize it later; for now it works.

One thread reads AVPackets and stores them in arrays, while two other threads decode and play them. This avoids the problem from the previous post, where two threads both opened the same file, so a network resource no longer has to be read twice. My understanding of FFmpeg is still shallow and there's a lot I don't understand, so this follows my own reading of it.
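The three-thread layout described here can be sketched with plain `std::thread` and two toy queues (ints stand in for AVPackets; all names below are illustrative, not from the player):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

// Toy stand-ins for the real player: ints play the role of AVPackets,
// and "decoding" just collects them into a vector.
struct PacketQueues {
    std::mutex m;
    std::condition_variable cv;
    std::queue<int> video, audio;
    bool done = false;
};

// Demuxer thread: reads "packets" and routes each to the matching queue.
void demux(PacketQueues &q, int n) {
    for (int i = 0; i < n; i++) {
        std::lock_guard<std::mutex> lk(q.m);
        if (i % 2 == 0) q.video.push(i); else q.audio.push(i);
        q.cv.notify_all();
    }
    std::lock_guard<std::mutex> lk(q.m);
    q.done = true;
    q.cv.notify_all();
}

// Consumer thread (video or audio): pops from its own queue only.
void consume(PacketQueues &q, std::queue<int> PacketQueues::*which,
             std::vector<int> &out) {
    for (;;) {
        std::unique_lock<std::mutex> lk(q.m);
        q.cv.wait(lk, [&] { return !(q.*which).empty() || q.done; });
        if ((q.*which).empty()) return; // demuxing finished and queue drained
        out.push_back((q.*which).front());
        (q.*which).pop();
    }
}

// Run the three threads; returns how many packets each consumer saw.
std::pair<int, int> run_pipeline(int n) {
    PacketQueues q;
    std::vector<int> v, a;
    std::thread d(demux, std::ref(q), n);
    std::thread vt(consume, std::ref(q), &PacketQueues::video, std::ref(v));
    std::thread at(consume, std::ref(q), &PacketQueues::audio, std::ref(a));
    d.join(); vt.join(); at.join();
    return {(int) v.size(), (int) a.size()};
}
```

The real player uses pthreads and condition variables directly, but the shape is the same: one producer, two consumers, one shared lock per queue.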

Since the amount of data can't be known in advance, a dynamic array is needed; I use std::vector to store the packets:

std::vector<AVPacket *> video_packets;
std::vector<AVPacket *> audio_packets;

A maximum queue size is also needed, or memory usage will simply blow up:

int max_count = 200;
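As a rough sketch of the capacity rules used later in the native code (the constants mirror `max_count` and the half-capacity check in `isDecoder()`, but these helpers themselves are hypothetical):

```cpp
#include <cstddef>

// Queue capacity, matching max_count = 200 in the player.
const std::size_t kMaxCount = 200;

// The demuxer pauses once BOTH queues exceed the limit.
bool should_pause(std::size_t video_size, std::size_t audio_size) {
    return video_size > kMaxCount && audio_size > kMaxCount;
}

// Mirror of the player's isDecoder(): a sleeping demuxer is signaled only
// while both queues still hold more than half the limit and the demuxer
// has actually flagged itself as waiting.
bool should_wake_demuxer(std::size_t video_size, std::size_t audio_size,
                         bool demuxer_waiting) {
    return video_size > kMaxCount / 2 && audio_size > kMaxCount / 2 &&
           demuxer_waiting;
}
```

The consumers call the wake check after every packet they pop, so the demuxer is topped up long before either queue runs dry.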

The thread states also need to be recorded:

typedef enum {
    VIDEO_READY_STOP,
    VIDEO_STOP,
    VIDEO_READY_MOVE,
    VIDEO_MOVE,
    VIDEO_MOVE_OVER,
    VIDEO_READY,
    VIDEO_PLAY,
    VIDEO_OVER
} VIDEO_STATE;

During development I found a serious bug related to the SurfaceView lifecycle: when the app goes to the background, the SurfaceView releases its Surface, so EGL can no longer be used and the Surface has to be reloaded. The SurfaceView lifecycle therefore has to be tracked so the player can react, with a lock added to guard against concurrent access:

bool create_egl = false;
pthread_mutex_t play_mutex;
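One way to picture the renderer's per-frame decision, assuming a flag like `create_egl` above (this helper is illustrative, not part of the player):

```cpp
// Hypothetical decision table for the render loop: the renderer may only
// (re)create EGL while the surface exists, and must not touch EGL at all
// once surfaceDestroyed() has run and released the Surface.
enum class EglAction { Recreate, Render, Skip };

EglAction egl_action(bool surface_available, bool have_egl) {
    if (!surface_available) return EglAction::Skip; // surface gone: do nothing
    if (!have_egl) return EglAction::Recreate;      // surface back: rebuild EGL first
    return EglAction::Render;                       // normal frame
}
```

In the player this decision is made inside the video thread under `play_mutex`, while `created()`/`destroyed()` flip the flag from the UI thread.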

I also tried a simple frame-dropping scheme:

if (!yuvFrame->key_frame) {
    int m = (int) s / -300;
    if (m > throw_max) {
        throw_max = m;
    }
    if (throw_index < throw_max) {
        throw_index++;
        av_frame_free(&yuvFrame);
        av_packet_unref(pkt);
        continue;
    }
}
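The arithmetic in the snippet can be pulled out into a small helper to make the drop count explicit (hypothetical function, same `m = (int) s / -300` rule, where `s` is how far video runs behind audio in milliseconds, negative when behind):

```cpp
// How many consecutive non-key frames may be dropped for a given lag:
// one frame per full 300 ms of lag; below 300 ms, dropping would produce
// visible stutter, so nothing is dropped.
int frames_to_drop(long s_ms) {
    if (s_ms >= -300) return 0;   // lag too small to bother
    return (int) (s_ms / -300);   // C++ integer division truncates toward zero
}
```

Key frames are never dropped, because every later frame in the GOP decodes against them.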

The hard part is the various logical relationships between the threads. The basic idea: when stopping, stop the audio thread first, then the video thread; when starting, start the video thread first, then the audio thread. Video decoding is the more expensive step, so video gets priority.
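That start ordering can be sketched with a flag and a condition variable (a minimal illustration, not the player's actual state machine, which uses the VIDEO_STATE enum for the same purpose):

```cpp
#include <condition_variable>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Force "video first, then audio" on startup: the audio thread blocks on a
// condition variable until the video thread has flagged itself as started.
// Stopping would use the symmetric rule (audio first, then video).
std::vector<std::string> start_order() {
    std::mutex m;
    std::condition_variable cv;
    bool video_started = false;
    std::vector<std::string> order;

    std::thread audio([&] {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return video_started; }); // audio waits for video
        order.push_back("audio");
    });
    std::thread video([&] {
        std::lock_guard<std::mutex> lk(m);
        order.push_back("video");
        video_started = true;
        cv.notify_one();
    });
    video.join();
    audio.join();
    return order;
}
```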

Java code: VideoSurfaceView

public class VideoSurfaceView extends SurfaceView implements SurfaceHolder.Callback {

    /**
     * Video path
     */
    String videoPath = "/storage/emulated/0/baiduNetdisk/season09.mp4";

    private SurfaceHolder mHolder;

    public VideoSurfaceView(Context context) {
        super(context);
        init();
    }

    public VideoSurfaceView(Context context, AttributeSet attributeSet) {
        super(context, attributeSet);
        init();
    }

    private void init() {
        mHolder = getHolder();
        mHolder.addCallback(this);

        Thread thread = new Thread() {
            @Override
            public void run() {
                super.run();
                decoder(videoPath);
            }
        };
        thread.start();

        Thread audioThread = new Thread() {
            @Override
            public void run() {
                super.run();
                audioPlay();
            }
        };

        Thread videoThread = new Thread() {
            @Override
            public void run() {
                super.run();
                videoPlay();
            }
        };
        audioThread.start();
        videoThread.start();
    }

    public void surfaceCreated(SurfaceHolder holder) {
        created();
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        destroyed();
    }

    public void surfaceChanged(SurfaceHolder holder, int format, final int w, final int h) {
    }

    public Surface getSurface() {
        return mHolder.getSurface();
    }

    public AudioTrack createAudio(int sampleRateInHz, int nb_channels) {
        int channelConfig;
        if (nb_channels == 1) {
            channelConfig = AudioFormat.CHANNEL_OUT_MONO;
        } else {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        }
        int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
        int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
                channelConfig, audioFormat);

        AudioTrack audio = new AudioTrack(AudioManager.STREAM_MUSIC, // stream type
                sampleRateInHz, // sample rate, e.g. 44100 for 44.1 kHz
                channelConfig, // CHANNEL_OUT_STEREO for stereo, CHANNEL_OUT_MONO for mono
                audioFormat, // 8-bit or 16-bit samples; 16-bit here, which most audio uses
                minBufferSize, AudioTrack.MODE_STREAM // streaming mode; MODE_STATIC is the alternative
        );
        // audio.play(); // start the audio device, after which data can actually be played
        return audio;
    }

    static {
        System.loadLibrary("native-lib");
    }

    public native void decoder(String path);

    public native void play();

    public native void stop();

    public native void videoPlay();

    public native void audioPlay();

    public native void move(long time);

    public native void created();

    public native void destroyed();

    public native void close();
}

extern "C" {
#include "libavformat/avformat.h"
#include "libavfilter/avfiltergraph.h"
#include "libavfilter/buffersink.h"
#include "libswresample/swresample.h"
#include "libavcodec/avcodec.h"
}

// Headers used below; the project-local EGLUtils/OpenGLUtils helper classes
// are assumed to be declared in headers included elsewhere in the project.
#include <jni.h>
#include <pthread.h>
#include <sys/time.h>
#include <android/native_window_jni.h>
#include <vector>
#include <mutex>

#define MAX_AUDIO_FRME_SIZE 48000 * 4

// current system time in milliseconds
long getCurrentTime() {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

// absolute time for pthread_cond_timedwait, timeout_ms from now
timespec waitTime(long timeout_ms) {
    struct timespec abstime;
    struct timeval now;
    gettimeofday(&now, NULL);
    long nsec = now.tv_usec * 1000 + (timeout_ms % 1000) * 1000000;
    abstime.tv_sec = now.tv_sec + nsec / 1000000000 + timeout_ms / 1000;
    abstime.tv_nsec = nsec % 1000000000;
    return abstime;
}

double play_time; // playback time in seconds

long audio_time = 0; // audio clock in ms; -1 means the audio thread has stopped,
                     // -2 means video has caught up to roughly where the stopped audio is
long start_time = 0; // wall-clock time at which audio_time was recorded

bool isClose = false; // ends the loops

std::vector<AVPacket *> video_packets; // video packet queue
std::vector<AVPacket *> audio_packets; // audio packet queue

AVStream *video_stream = NULL;
AVStream *audio_stream = NULL;

AVCodecContext *video_codec_ctx;
AVCodecContext *audio_codec_ctx;

// video lock
pthread_mutex_t video_mutex;
pthread_cond_t video_cond;
// audio lock
pthread_mutex_t audio_mutex;
pthread_cond_t audio_cond;

// demuxer thread is waiting
bool decoder_wait;

// demuxer lock
pthread_mutex_t decoder_mutex;
pthread_cond_t decoder_cond;

// seek target time
double move_time = 0;

// demuxing finished
bool decoder_over = false;

// maximum queue size
int max_count = 200;

// thread states
typedef enum {
    VIDEO_READY_STOP, // about to stop
    VIDEO_STOP,       // stopped
    VIDEO_READY_MOVE, // about to seek
    VIDEO_MOVE,       // seeking
    VIDEO_MOVE_OVER,  // seek finished
    VIDEO_READY,      // ready to play
    VIDEO_PLAY,       // playing
    VIDEO_OVER        // playback finished
} VIDEO_STATE;

VIDEO_STATE video_state; // video thread state
VIDEO_STATE audio_state; // audio thread state


// whether the surface is available (false while the view is hidden)
bool create_egl = false;
// render lock
pthread_mutex_t play_mutex;


// align the audio data with the video data:
// drop leading audio packets until the first video pts falls between
// the first two audio pts values
void alineAudio2VideoPst() {
    if (audio_packets.size() >= 3 && video_packets.size() >= 1) {
        AVPacket *video_packet = video_packets[0];
        AVPacket *audio_packet_1 = audio_packets[0];
        AVPacket *audio_packet_2 = audio_packets[1];
        double video_time = video_packet->pts * av_q2d(video_stream->time_base);
        double audio_time_1 = audio_packet_1->pts * av_q2d(audio_stream->time_base);
        double audio_time_2 = audio_packet_2->pts * av_q2d(audio_stream->time_base);
        if (video_time >= audio_time_1 && video_time < audio_time_2) {
            // already aligned
        } else {
            audio_packets.erase(audio_packets.begin());
            av_packet_unref(audio_packet_1);
            alineAudio2VideoPst();
        }
    }
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_decoder(JNIEnv *env, jobject instance, jstring path_) {
    const char *path = env->GetStringUTFChars(path_, 0);


    pthread_mutex_init(&decoder_mutex, NULL);
    pthread_cond_init(&decoder_cond, NULL);


    pthread_mutex_lock(&decoder_mutex);
    decoder_wait = true;
    pthread_cond_wait(&decoder_cond, &decoder_mutex);
    decoder_wait = false;
    pthread_mutex_unlock(&decoder_mutex);


    av_register_all();
    avformat_network_init();
    AVFormatContext *fmt_ctx = avformat_alloc_context();
    if (avformat_open_input(&fmt_ctx, path, NULL, NULL) < 0) {
        return;
    }
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        return;
    }
    int video_stream_index = -1;
    int audio_stream_index = -1;
    for (int i = 0; i < fmt_ctx->nb_streams; i++) {
        if (fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            video_stream = fmt_ctx->streams[i];
            video_stream_index = i;
        } else if (fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            audio_stream = fmt_ctx->streams[i];
            audio_stream_index = i;
        }
        if (video_stream_index != -1 && audio_stream_index != -1) {
            break;
        }
    }
    if (video_stream_index == -1) {
        return;
    }
    if (audio_stream_index == -1) {
        return;
    }
    video_codec_ctx = avcodec_alloc_context3(NULL);
    avcodec_parameters_to_context(video_codec_ctx, video_stream->codecpar);
    AVCodec *video_codec = avcodec_find_decoder(video_codec_ctx->codec_id);
    if (avcodec_open2(video_codec_ctx, video_codec, NULL) < 0) {
        return;
    }
    audio_codec_ctx = avcodec_alloc_context3(NULL);
    avcodec_parameters_to_context(audio_codec_ctx, audio_stream->codecpar);
    AVCodec *audio_codec = avcodec_find_decoder(audio_codec_ctx->codec_id);
    if (avcodec_open2(audio_codec_ctx, audio_codec, NULL) < 0) {
        return;
    }
    while (1) {
        if (isClose) {
            break;
        }
        // has the player entered the seek state?
        // state order:
        // VIDEO_READY_MOVE -> audio_state = VIDEO_MOVE -> video_state == VIDEO_MOVE -> VIDEO_MOVE_OVER
        if (video_state == VIDEO_MOVE) {
            // clear both queues
            while (video_packets.size() != 0) {
                AVPacket *pkt = video_packets[0];
                video_packets.erase(video_packets.begin());
                av_packet_unref(pkt);
            }
            while (audio_packets.size() != 0) {
                AVPacket *pkt = audio_packets[0];
                audio_packets.erase(audio_packets.begin());
                av_packet_unref(pkt);
            }
            std::vector<AVPacket *>().swap(video_packets);
            std::vector<AVPacket *>().swap(audio_packets);

            // compute the seek target in stream time base units
            int64_t k = (int64_t) (move_time / av_q2d(video_stream->time_base));
            // seek
            av_seek_frame(fmt_ctx, video_stream_index,
                          k,
                          AVSEEK_FLAG_BACKWARD);
            avcodec_flush_buffers(video_codec_ctx);
            avcodec_flush_buffers(audio_codec_ctx);
            // update state
            video_state = VIDEO_MOVE_OVER;
            audio_state = VIDEO_MOVE_OVER;
        }
        // note: av_packet_alloc() would be the safer FFmpeg way to create packets
        AVPacket *pkt = (AVPacket *) malloc(sizeof(AVPacket));
        // no more data to read
        if (av_read_frame(fmt_ctx, pkt) < 0) {
            av_packet_unref(pkt);
            // did a seek just finish?
            if (video_state == VIDEO_MOVE_OVER && audio_state == VIDEO_MOVE_OVER) {
                // align the queues
                alineAudio2VideoPst();

                // wake the video thread and switch it to playing
                pthread_mutex_lock(&video_mutex);
                video_state = VIDEO_PLAY;
                pthread_cond_signal(&video_cond);
                pthread_mutex_unlock(&video_mutex);

                // wake the audio thread and switch it to playing
                pthread_mutex_lock(&audio_mutex);
                audio_state = VIDEO_PLAY;
                pthread_cond_signal(&audio_cond);
                pthread_mutex_unlock(&audio_mutex);
            }
            // only wait if no seek has been requested in the meantime
            pthread_mutex_lock(&decoder_mutex);
            if (video_state != VIDEO_MOVE && !isClose) {
                decoder_over = true;
                pthread_cond_wait(&decoder_cond, &decoder_mutex);
                decoder_over = false;
            }
            pthread_mutex_unlock(&decoder_mutex);
            continue;
        }
        if (pkt->stream_index == audio_stream_index) {
            pthread_mutex_lock(&audio_mutex);
            audio_packets.push_back(pkt);
            pthread_mutex_unlock(&audio_mutex);
        } else if (pkt->stream_index == video_stream_index) {
            pthread_mutex_lock(&video_mutex);
            video_packets.push_back(pkt);
            pthread_mutex_unlock(&video_mutex);
        }
        // are both queues above the limit?
        if (video_packets.size() > max_count && audio_packets.size() > max_count) {
            // did a seek just finish?
            if (video_state == VIDEO_MOVE_OVER && audio_state == VIDEO_MOVE_OVER) {
                // align the queues
                alineAudio2VideoPst();

                // wake the video thread and switch it to playing
                pthread_mutex_lock(&video_mutex);
                video_state = VIDEO_PLAY;
                pthread_cond_signal(&video_cond);
                pthread_mutex_unlock(&video_mutex);

                // wake the audio thread and switch it to playing
                pthread_mutex_lock(&audio_mutex);
                audio_state = VIDEO_PLAY;
                pthread_cond_signal(&audio_cond);
                pthread_mutex_unlock(&audio_mutex);
            } else {
                // wake any thread still in the ready state
                if (audio_state == VIDEO_READY) {
                    pthread_mutex_lock(&audio_mutex);
                    pthread_cond_signal(&audio_cond);
                    pthread_mutex_unlock(&audio_mutex);
                }
                if (video_state == VIDEO_READY) {
                    pthread_mutex_lock(&video_mutex);
                    pthread_cond_signal(&video_cond);
                    pthread_mutex_unlock(&video_mutex);
                }
            }
            // only wait if no seek has been requested in the meantime
            pthread_mutex_lock(&decoder_mutex);
            if (video_state != VIDEO_MOVE && !isClose) {
                decoder_wait = true;
                pthread_cond_wait(&decoder_cond, &decoder_mutex);
                decoder_wait = false;
            }
            pthread_mutex_unlock(&decoder_mutex);
        }
    }
    // clean up
    avformat_close_input(&fmt_ctx);
    while (video_packets.size() != 0) {
        AVPacket *pkt = video_packets[0];
        video_packets.erase(video_packets.begin());
        av_packet_unref(pkt);
    }
    while (audio_packets.size() != 0) {
        AVPacket *pkt = audio_packets[0];
        audio_packets.erase(audio_packets.begin());
        av_packet_unref(pkt);
    }
    std::vector<AVPacket *>().swap(video_packets);
    std::vector<AVPacket *>().swap(audio_packets);

    pthread_mutex_destroy(&decoder_mutex);
    pthread_cond_destroy(&decoder_cond);

    env->ReleaseStringUTFChars(path_, path);
}
// should the demuxer thread be woken to refill the queues?
bool isDecoder() {
    return video_packets.size() > max_count / 2 && audio_packets.size() > max_count / 2 && decoder_wait;
}

EGLUtils *eglUtils = NULL;

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_videoPlay(JNIEnv *env, jobject instance) {


    pthread_mutex_init(&video_mutex, NULL);
    pthread_cond_init(&video_cond, NULL);

    pthread_mutex_init(&play_mutex, NULL);


    pthread_mutex_lock(&video_mutex);
    video_state = VIDEO_READY;
    pthread_cond_wait(&video_cond, &video_mutex);
    video_state = VIDEO_PLAY;
    pthread_mutex_unlock(&video_mutex);


    OpenGLUtils *openGLUtils = new OpenGLUtils();

    jclass player_class = env->GetObjectClass(instance);
    jmethodID get_surface_mid = env->GetMethodID(player_class, "getSurface",
                                                 "()Landroid/view/Surface;");


    AVRational timeBase = video_stream->time_base;

    int throw_index = 0;

    int throw_max = 1;
    int ret;
    while (1) {
        // entered the ready-to-stop state with audio and video aligned?
        pthread_mutex_lock(&video_mutex);
        if (video_state == VIDEO_READY_STOP && audio_time == -2) {
            video_state = VIDEO_STOP;
            pthread_cond_wait(&video_cond, &video_mutex);
        }
        pthread_mutex_unlock(&video_mutex);
        // has the audio thread entered the seek state?
        if (audio_state == VIDEO_MOVE) {
            pthread_mutex_lock(&decoder_mutex);
            video_state = VIDEO_MOVE;
            // wake the demuxer thread if it is waiting
            if (decoder_wait || decoder_over) {
                pthread_cond_signal(&decoder_cond);
            }
            pthread_mutex_unlock(&decoder_mutex);
            // wait for the seek to finish
            pthread_mutex_lock(&video_mutex);
            if (video_state != VIDEO_PLAY) {
                pthread_cond_wait(&video_cond, &video_mutex);
            }
            pthread_mutex_unlock(&video_mutex);
        }
        if (isClose) {
            break;
        }
        AVPacket *pkt = NULL;

        if (video_packets.size() != 0) {
            pthread_mutex_lock(&video_mutex);
            pkt = video_packets[0];
            video_packets.erase(video_packets.begin());
            pthread_mutex_unlock(&video_mutex);
        } else {
            // no more packets: playback is over, wait
            pthread_mutex_lock(&video_mutex);
            if (video_state == VIDEO_PLAY) {
                video_state = VIDEO_OVER;
                pthread_cond_wait(&video_cond, &video_mutex);
            }
            pthread_mutex_unlock(&video_mutex);
        }
        if (pkt == NULL) {
            continue;
        }
        ret = avcodec_send_packet(video_codec_ctx, pkt);
        if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) {
            av_packet_unref(pkt);
            continue;
        }
        AVFrame *yuvFrame = av_frame_alloc();
        ret = avcodec_receive_frame(video_codec_ctx, yuvFrame);
        if (ret < 0 && ret != AVERROR_EOF) {
            av_frame_free(&yuvFrame);
            av_packet_unref(pkt);
            continue;
        }
        if (yuvFrame->pts < 0) {
            av_packet_unref(pkt);
            av_frame_free(&yuvFrame);
            continue;
        }
        // initialize OpenGL if the surface was (re)created
        pthread_mutex_lock(&play_mutex);
        if (create_egl) {
            if (eglUtils == NULL) {
                openGLUtils->release();
                eglUtils = new EGLUtils();
                jobject surface = env->CallObjectMethod(instance, get_surface_mid);
                ANativeWindow *nativeWindow = ANativeWindow_fromSurface(env, surface);
                eglUtils->initEGL(nativeWindow);
                openGLUtils->surfaceCreated();
                openGLUtils->surfaceChanged(eglUtils->getWidth(), eglUtils->getHeight());
                openGLUtils->initTexture(video_codec_ctx->width, video_codec_ctx->height);
            }
        }
        pthread_mutex_unlock(&play_mutex);

        double nowTime = yuvFrame->pts * av_q2d(timeBase);
        long a = audio_time;

        if (a != -1 && a != -2) { // the audio thread is in the playing state
            long t = (long) (nowTime * 1000);

            // wait until this frame is due; if video is behind audio, don't wait
            long time = getCurrentTime() - start_time;
            long s = t - time - a;
            if (s > 0) {
                struct timespec abstime = waitTime(s);
                pthread_mutex_lock(&video_mutex);
                pthread_cond_timedwait(&video_cond, &video_mutex, &abstime);
                pthread_mutex_unlock(&video_mutex);
            } else if (s < -300) {
                // drop frames when audio is ahead of video, but never drop key frames;
                // only drop when the gap exceeds 300 ms, otherwise stutter is visible
                if (!yuvFrame->key_frame) {
                    int m = (int) s / -300;
                    if (m > throw_max) {
                        throw_max = m;
                    }
                    if (throw_index < throw_max) {
                        throw_index++;
                        av_frame_free(&yuvFrame);
                        av_packet_unref(pkt);
                        continue;
                    }
                }
            }
            throw_max = 1;
            throw_index = 0;
        } else if (a == -1) {
            // the audio thread is waiting; align video to its last position
            if (nowTime >= play_time) {
                audio_time = -2;
            }
            av_frame_free(&yuvFrame);
            av_packet_unref(pkt);
            continue;
        }
        // OpenGL rendering
        pthread_mutex_lock(&play_mutex);
        if (eglUtils != NULL) {
            openGLUtils->updateTexture(yuvFrame->width, yuvFrame->height, yuvFrame->data[0],
                                       yuvFrame->data[1], yuvFrame->data[2]);
            openGLUtils->surfaceDraw();
            eglUtils->drawEGL();
        }
        pthread_mutex_unlock(&play_mutex);
        av_frame_free(&yuvFrame);
        av_packet_unref(pkt);

        // wake the demuxer thread if the queues need refilling
        pthread_mutex_lock(&decoder_mutex);
        if (isDecoder()) {
            pthread_cond_signal(&decoder_cond);
        }
        pthread_mutex_unlock(&decoder_mutex);
    }
    // clean up
    pthread_mutex_destroy(&play_mutex);
    pthread_cond_destroy(&video_cond);
    pthread_mutex_destroy(&video_mutex);
    avcodec_close(video_codec_ctx);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_audioPlay(JNIEnv *env, jobject instance) {

    pthread_mutex_init(&audio_mutex, NULL);
    pthread_cond_init(&audio_cond, NULL);

    pthread_mutex_lock(&audio_mutex);
    audio_state = VIDEO_READY;
    pthread_cond_wait(&audio_cond, &audio_mutex);
    audio_state = VIDEO_PLAY;
    pthread_mutex_unlock(&audio_mutex);


    SwrContext *swr_ctx = swr_alloc();

    enum AVSampleFormat in_sample_fmt = audio_codec_ctx->sample_fmt;

    enum AVSampleFormat out_sample_fmt = AV_SAMPLE_FMT_S16;

    int in_sample_rate = audio_codec_ctx->sample_rate;

    int out_sample_rate = in_sample_rate;

    uint64_t in_ch_layout = audio_codec_ctx->channel_layout;

    uint64_t out_ch_layout = AV_CH_LAYOUT_STEREO;


    swr_alloc_set_opts(swr_ctx,
                       out_ch_layout, out_sample_fmt, out_sample_rate,
                       in_ch_layout, in_sample_fmt, in_sample_rate,
                       0, NULL);
    swr_init(swr_ctx);

    int out_channel_nb = av_get_channel_layout_nb_channels(out_ch_layout);

    jclass player_class = env->GetObjectClass(instance);
    jmethodID create_audio_track_mid = env->GetMethodID(player_class, "createAudio",
                                                        "(II)Landroid/media/AudioTrack;");
    jobject audio_track = env->CallObjectMethod(instance, create_audio_track_mid,
                                                out_sample_rate, out_channel_nb);


    jclass audio_track_class = env->GetObjectClass(audio_track);
    jmethodID audio_track_play_mid = env->GetMethodID(audio_track_class, "play", "()V");
    jmethodID audio_track_stop_mid = env->GetMethodID(audio_track_class, "stop", "()V");
    env->CallVoidMethod(audio_track, audio_track_play_mid);

    jmethodID audio_track_write_mid = env->GetMethodID(audio_track_class, "write",
                                                       "([BII)I");

    AVRational timeBase = audio_stream->time_base;
    uint8_t *out_buffer = (uint8_t *) av_malloc(MAX_AUDIO_FRME_SIZE);


    int ret;
    while (1) {
        // stop or seek requested: record the state and wait
        pthread_mutex_lock(&audio_mutex);
        if (audio_state == VIDEO_READY_STOP) {
            audio_state = VIDEO_STOP;
            audio_time = -1;
            pthread_cond_wait(&audio_cond, &audio_mutex);
        } else if (audio_state == VIDEO_READY_MOVE) {
            audio_state = VIDEO_MOVE;
            audio_time = -1;
            pthread_cond_wait(&audio_cond, &audio_mutex);
        }
        pthread_mutex_unlock(&audio_mutex);

        if (isClose) {
            break;
        }
        AVPacket *pkt = NULL;

        if (audio_packets.size() != 0) {
            pthread_mutex_lock(&audio_mutex);
            pkt = audio_packets[0];
            audio_packets.erase(audio_packets.begin());
            pthread_mutex_unlock(&audio_mutex);
        } else {
            // no more packets: playback is over, wait
            pthread_mutex_lock(&audio_mutex);
            if (audio_state == VIDEO_PLAY) {
                audio_state = VIDEO_OVER;
                pthread_cond_wait(&audio_cond, &audio_mutex);
            }
            pthread_mutex_unlock(&audio_mutex);
        }
        if (pkt == NULL) {
            continue;
        }
        ret = avcodec_send_packet(audio_codec_ctx, pkt);
        if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) {
            av_packet_unref(pkt);
            continue;
        }
        AVFrame *frame = av_frame_alloc();

        ret = avcodec_receive_frame(audio_codec_ctx, frame);
        if (ret < 0 && ret != AVERROR_EOF) {
            av_packet_unref(pkt);
            av_frame_free(&frame);
            continue;
        }
        if (frame->pts < 0) {
            av_packet_unref(pkt);
            av_frame_free(&frame);
            continue;
        }
        // update the audio clock
        double nowTime = frame->pts * av_q2d(timeBase);
        long t = (long) (nowTime * 1000);
        play_time = nowTime;
        start_time = getCurrentTime();
        audio_time = t;

        swr_convert(swr_ctx, &out_buffer, MAX_AUDIO_FRME_SIZE,
                    (const uint8_t **) frame->data,
                    frame->nb_samples);
        int out_buffer_size = av_samples_get_buffer_size(NULL, out_channel_nb,
                                                         frame->nb_samples, out_sample_fmt,
                                                         1);

        jbyteArray audio_sample_array = env->NewByteArray(out_buffer_size);
        jbyte *sample_bytep = env->GetByteArrayElements(audio_sample_array, NULL);

        memcpy(sample_bytep, out_buffer, (size_t) out_buffer_size);
        env->ReleaseByteArrayElements(audio_sample_array, sample_bytep, 0);

        env->CallIntMethod(audio_track, audio_track_write_mid,
                           audio_sample_array, 0, out_buffer_size);

        env->DeleteLocalRef(audio_sample_array);

        av_frame_free(&frame);
        av_packet_unref(pkt);

        // wake the demuxer thread if the queues need refilling
        pthread_mutex_lock(&decoder_mutex);
        if (isDecoder()) {
            pthread_cond_signal(&decoder_cond);
        }
        pthread_mutex_unlock(&decoder_mutex);
    }
    env->CallVoidMethod(audio_track, audio_track_stop_mid);
    av_free(out_buffer);
    swr_free(&swr_ctx);
    avcodec_close(audio_codec_ctx);

    pthread_mutex_destroy(&audio_mutex);
    pthread_cond_destroy(&audio_cond);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_play(JNIEnv *env, jobject instance) {


    pthread_mutex_lock(&audio_mutex);
    if (audio_state == VIDEO_STOP) { // stopped: resume playing directly
        audio_state = VIDEO_PLAY;
        pthread_cond_signal(&audio_cond);
    } else if (audio_state == VIDEO_OVER) { // finished: seek back and restart playback
        audio_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&audio_cond);
    }
    pthread_mutex_unlock(&audio_mutex);

    pthread_mutex_lock(&video_mutex);
    if (video_state == VIDEO_STOP) { // stopped: resume playing directly
        video_state = VIDEO_PLAY;
        pthread_cond_signal(&video_cond);
    } else if (video_state == VIDEO_OVER) { // finished: seek back and restart playback
        move_time = 0;
        video_state = VIDEO_MOVE_OVER;
        pthread_cond_signal(&video_cond);
    }
    pthread_mutex_unlock(&video_mutex);

    // wake the demuxer thread
    pthread_mutex_lock(&decoder_mutex);
    if (decoder_wait) {
        pthread_cond_signal(&decoder_cond);
    }
    pthread_mutex_unlock(&decoder_mutex);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_move(JNIEnv *env, jobject instance, jlong time) {
    move_time = play_time + time;
    if (move_time < 0) {
        move_time = 0;
    }
    pthread_mutex_lock(&audio_mutex);
    if (audio_state == VIDEO_STOP) { // stopped: wake the thread so it performs the seek
        audio_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&audio_cond);
    } else if (audio_state == VIDEO_PLAY) { // playing: seek directly
        audio_state = VIDEO_READY_MOVE;
    } else if (audio_state == VIDEO_OVER) { // finished: wake the thread so it performs the seek
        audio_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&audio_cond);
    }
    pthread_mutex_unlock(&audio_mutex);

    pthread_mutex_lock(&video_mutex);
    if (video_state == VIDEO_STOP) { // stopped: wake the thread so it performs the seek
        video_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&video_cond);
    } else if (video_state == VIDEO_PLAY) { // playing: seek directly
        video_state = VIDEO_READY_MOVE;
    } else if (video_state == VIDEO_OVER) { // finished: wake the thread so it performs the seek
        video_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&video_cond);
    }
    pthread_mutex_unlock(&video_mutex);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_stop(JNIEnv *env, jobject instance) {
    // if playing, switch to the ready-to-stop state
    pthread_mutex_lock(&audio_mutex);
    if (audio_state == VIDEO_PLAY) {
        audio_state = VIDEO_READY_STOP;
    }
    pthread_mutex_unlock(&audio_mutex);
    // if playing, switch to the ready-to-stop state
    pthread_mutex_lock(&video_mutex);
    if (video_state == VIDEO_PLAY) {
        video_state = VIDEO_READY_STOP;
    }
    pthread_mutex_unlock(&video_mutex);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_created(JNIEnv *env, jobject instance) {
    // SurfaceView lifecycle: called from SurfaceHolder.Callback.surfaceCreated
    pthread_mutex_lock(&play_mutex);
    create_egl = true;
    pthread_mutex_unlock(&play_mutex);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_destroyed(JNIEnv *env, jobject instance) {
    // SurfaceView lifecycle: called from SurfaceHolder.Callback.surfaceDestroyed
    // release the EGL environment
    pthread_mutex_lock(&play_mutex);
    if (eglUtils != NULL) {
        delete eglUtils;
        eglUtils = NULL;
    }
    create_egl = false;
    pthread_mutex_unlock(&play_mutex);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_close(JNIEnv *env, jobject instance) {
    // end playback: set the flag, then wake every thread so they can exit
    isClose = true;


    pthread_mutex_lock(&video_mutex);
    video_state = VIDEO_PLAY;
    pthread_cond_signal(&video_cond);
    pthread_mutex_unlock(&video_mutex);


    pthread_mutex_lock(&audio_mutex);
    audio_state = VIDEO_PLAY;
    pthread_cond_signal(&audio_cond);
    pthread_mutex_unlock(&audio_mutex);

    pthread_mutex_lock(&decoder_mutex);
    pthread_cond_signal(&decoder_cond);
    pthread_mutex_unlock(&decoder_mutex);
}

That's it for the player. It's just about usable, and I plan to try it out in a project at work.















Original source: http://www.chinasem.cn/article/978765
