author     Guo, Yejun <yejun.guo@intel.com>  2019-10-31 16:33:02 +0800
committer  Pedro Arthur <bygrandao@gmail.com>  2019-11-07 15:46:00 -0300
commit     4d980a8ceba9d0b7cc9f34a5f5cd769a4d7c2773 (patch)
tree       e315cb921282dafcc79dd458bbff6be65ff81e8e /libavfilter/Makefile
parent     fc7b6d55741a926896958939a97b2958df1c1bb4 (diff)
download   ffmpeg-4d980a8ceba9d0b7cc9f34a5f5cd769a4d7c2773.tar.gz
avfilter/vf_dnn_processing: add a generic filter for image processing with dnn networks
This filter accepts all the dnn networks which do image processing.
Currently, frames with formats rgb24 and bgr24 are supported. Other
formats such as gray and YUV will be supported next. The dnn network
can accept data in float32 or uint8 format, and the dnn network can
change the frame size.

The following python script halves the value of the first channel of
each pixel. It demonstrates how to set up and execute a dnn model with
python+tensorflow. It also generates the .pb file which will be used
by ffmpeg.

import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('in.bmp')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1,1,3,3).astype(np.float32)
filter = tf.Variable(filter_data)
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
sess=tf.Session()
sess.run(tf.global_variables_initializer())
output = sess.run(y, feed_dict={x: in_data})
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)
output = output * 255.0
output = output.astype(np.uint8)
imageio.imsave("out.bmp", np.squeeze(output))

To do the same thing with ffmpeg:
- generate halve_first_channel.pb with the above script
- generate halve_first_channel.model with tools/python/convert.py
- try with the following commands:
  ./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
  ./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
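Note that the filter's input= and output= options must match the tensor
names baked into the graph ('dnn_in' and 'dnn_out' in the script above).
The following lines are only an illustrative sketch, not part of the
patch; they assume the same TensorFlow 1.x environment used above and
simply list the node names of a generated .pb so those option values
can be confirmed.

import tensorflow as tf

# Load the frozen graph written by the script above and print the name
# and op of every node, so the values for the filter's input=/output=
# options can be checked (expected here: 'dnn_in' and 'dnn_out').
graph_def = tf.GraphDef()
with open('halve_first_channel.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)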
Diffstat (limited to 'libavfilter/Makefile')
-rw-r--r--  libavfilter/Makefile  1
1 file changed, 1 insertion, 0 deletions
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 2080eed559..3eff398860 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -223,6 +223,7 @@ OBJS-$(CONFIG_DILATION_FILTER) += vf_neighbor.o
OBJS-$(CONFIG_DILATION_OPENCL_FILTER) += vf_neighbor_opencl.o opencl.o \
opencl/neighbor.o
OBJS-$(CONFIG_DISPLACE_FILTER) += vf_displace.o framesync.o
+OBJS-$(CONFIG_DNN_PROCESSING_FILTER) += vf_dnn_processing.o
OBJS-$(CONFIG_DOUBLEWEAVE_FILTER) += vf_weave.o
OBJS-$(CONFIG_DRAWBOX_FILTER) += vf_drawbox.o
OBJS-$(CONFIG_DRAWGRAPH_FILTER) += f_drawgraph.o