What amazing and fun things can you do with the Python programming language? (a Zhihu answer)

A while back I took the neural style transfer algorithm (arxiv.org/abs/6) and applied it to a photo of myself. The result is below. Yikes, it scared me half to death, but for fun things you have to lead the charge yourself! With the start of term getting closer, and to give the incoming students a good impression of the campus (well, to beautify the campus in the freshmen's minds; I really did not mean to deceive you), I made the series below: "East China University of Technology as painted by Van Gogh". Never heard of the school? Indeed, it is just an ordinary second-tier college, but that is beside the point. In each set, the left image is Van Gogh's The Starry Night used as the style template, the middle one is the photo to be transformed, and the right one is the result: our campus "lake" (a pond), the cherry-blossom square (personally I think it is the most romantic spot on campus), the library, the willows by the "pond", the east gate, the surveying building, and the earth-sciences building. For easier viewing, full-size generated images were attached. Don't be fooled by it being a mere seven pictures: they kept the computer running for a very long time, and it froze twice along the way! OK, the advertising is over; now for the goodies: building a style-transfer setup locally with Keras.

1. Installing the dependencies
# Install keras, h5py, numpy, scipy and tensorflow from the command line
pip3 install keras
pip3 install h5py
pip3 install numpy
pip3 install scipy
pip3 install tensorflow
# If pip3 fails, just use pip instead
If the command-line installs above fail, it is usually because the connection to PyPI times out; switching to a domestic PyPI mirror makes the pip downloads fast. I also wrote a small source-switching tool for this; see GitHub for details.
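For example, you can point a single install at a mirror with -i, or (with newer versions of pip) make it the default; the Tsinghua mirror below is just one commonly used domestic source:
pip3 install keras -i https://pypi.tuna.tsinghua.edu.cn/simple
# or set it permanently:
pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple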
2. Setting up the runtime environment. Download the VGG16 model and put it under ~/.keras/models/ (note: everyone's home directory is different, mine is yhf; if the .keras folder does not exist, just create it).
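If the manual download is a hassle, a minimal alternative (my own sketch, not part of the original post) is to let Keras fetch the weights itself: the first time you build the model with weights='imagenet', Keras downloads the file into ~/.keras/models/ automatically, provided the machine can reach the download server.
from keras.applications import vgg16
# Triggers a one-time download of the no-top VGG16 ImageNet weights
# into ~/.keras/models/ if they are not already there.
model = vgg16.VGG16(weights='imagenet', include_top=False)
print('VGG16 loaded with', len(model.layers), 'layers')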
3. Writing the code
from __future__ import print_function
from keras.preprocessing.image import load_img, img_to_array
from scipy.misc import imsave
import numpy as np
from scipy.optimize import fmin_l_bfgs_b
import time
import argparse
from keras.applications import vgg16
from keras import backend as K
parser = argparse.ArgumentParser(description='Neural style transfer with Keras.')
parser.add_argument('base_image_path', metavar='base', type=str,
                    help='Path to the image to transform.')
parser.add_argument('style_reference_image_path', metavar='ref', type=str,
                    help='Path to the style reference image.')
parser.add_argument('result_prefix', metavar='res_prefix', type=str,
                    help='Prefix for the saved results.')
parser.add_argument('--iter', type=int, default=10, required=False,
                    help='Number of iterations to run.')
parser.add_argument('--content_weight', type=float, default=0.025, required=False,
                    help='Content weight.')
parser.add_argument('--style_weight', type=float, default=1.0, required=False,
                    help='Style weight.')
parser.add_argument('--tv_weight', type=float, default=1.0, required=False,
                    help='Total Variation weight.')
args = parser.parse_args()
base_image_path = args.base_image_path
style_reference_image_path = args.style_reference_image_path
result_prefix = args.result_prefix
iterations = args.iter
# these are the weights of the different loss components
total_variation_weight = args.tv_weight
style_weight = args.style_weight
content_weight = args.content_weight
# dimensions of the generated picture.
width, height = load_img(base_image_path).size
img_nrows = 400
img_ncols = int(width * img_nrows / height)
# util function to open, resize and format pictures into appropriate tensors
def preprocess_image(image_path):
    img = load_img(image_path, target_size=(img_nrows, img_ncols))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg16.preprocess_input(img)
    return img
# util function to convert a tensor into a valid image
def deprocess_image(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((3, img_nrows, img_ncols))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((img_nrows, img_ncols, 3))
    # Remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # 'BGR' -> 'RGB'
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x
# get tensor representations of our images
base_image = K.variable(preprocess_image(base_image_path))
style_reference_image = K.variable(preprocess_image(style_reference_image_path))
# this will contain our generated image
if K.image_data_format() == 'channels_first':
    combination_image = K.placeholder((1, 3, img_nrows, img_ncols))
else:
    combination_image = K.placeholder((1, img_nrows, img_ncols, 3))
# combine the 3 images into a single Keras tensor
input_tensor = K.concatenate([base_image,
                              style_reference_image,
                              combination_image], axis=0)
# build the VGG16 network with our 3 images as input
# the model will be loaded with pre-trained ImageNet weights
model = vgg16.VGG16(input_tensor=input_tensor,
                    weights='imagenet', include_top=False)
print('Model loaded.')
# get the symbolic outputs of each "key" layer (we gave them unique names).
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])
# compute the neural style loss
# first we need to define 4 util functions
# the gram matrix of an image tensor (feature-wise outer product)
def gram_matrix(x):
    assert K.ndim(x) == 3
    if K.image_data_format() == 'channels_first':
        features = K.batch_flatten(x)
    else:
        features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    gram = K.dot(features, K.transpose(features))
    return gram
# the "style loss" is designed to maintain
# the style of the reference image in the generated image.
# It is based on the gram matrices (which capture style) of
# feature maps from the style reference image
# and from the generated image
def style_loss(style, combination):
    assert K.ndim(style) == 3
    assert K.ndim(combination) == 3
    S = gram_matrix(style)
    C = gram_matrix(combination)
    channels = 3
    size = img_nrows * img_ncols
    return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))
# an auxiliary loss function
# designed to maintain the "content" of the
# base image in the generated image
def content_loss(base, combination):
    return K.sum(K.square(combination - base))
# the 3rd loss function, total variation loss,
# designed to keep the generated image locally coherent
def total_variation_loss(x):
    assert K.ndim(x) == 4
    if K.image_data_format() == 'channels_first':
        a = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, 1:, :img_ncols - 1])
        b = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, :img_nrows - 1, 1:])
    else:
        a = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, 1:, :img_ncols - 1, :])
        b = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, :img_nrows - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))
# combine these loss functions into a single scalar
loss = K.variable(0.)
layer_features = outputs_dict['block4_conv2']
base_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(base_image_features,
                                      combination_features)
feature_layers = ['block1_conv1', 'block2_conv1',
                  'block3_conv1', 'block4_conv1',
                  'block5_conv1']
for layer_name in feature_layers:
    layer_features = outputs_dict[layer_name]
    style_reference_features = layer_features[1, :, :, :]
    combination_features = layer_features[2, :, :, :]
    sl = style_loss(style_reference_features, combination_features)
    loss += (style_weight / len(feature_layers)) * sl
loss += total_variation_weight * total_variation_loss(combination_image)
# get the gradients of the generated image wrt the loss
grads = K.gradients(loss, combination_image)
outputs = [loss]
if isinstance(grads, (list, tuple)):
    outputs += grads
else:
    outputs.append(grads)
f_outputs = K.function([combination_image], outputs)
def eval_loss_and_grads(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((1, 3, img_nrows, img_ncols))
    else:
        x = x.reshape((1, img_nrows, img_ncols, 3))
    outs = f_outputs([x])
    loss_value = outs[0]
    if len(outs[1:]) == 1:
        grad_values = outs[1].flatten().astype('float64')
    else:
        grad_values = np.array(outs[1:]).flatten().astype('float64')
    return loss_value, grad_values
# this Evaluator class makes it possible
# to compute loss and gradients in one pass
# while retrieving them via two separate functions,
# "loss" and "grads". This is done because scipy.optimize
# requires separate functions for loss and gradients,
# but computing them separately would be inefficient.
class Evaluator(object):
    def __init__(self):
        self.loss_value = None
        self.grad_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values
evaluator = Evaluator()
# run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the neural style loss
if K.image_data_format() == 'channels_first':
    x = np.random.uniform(0, 255, (1, 3, img_nrows, img_ncols)) - 128.
else:
    x = np.random.uniform(0, 255, (1, img_nrows, img_ncols, 3)) - 128.
for i in range(iterations):
    print('Start of iteration', i)
    start_time = time.time()
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads, maxfun=20)
    print('Current loss value:', min_val)
    # save current generated image
    img = deprocess_image(x.copy())
    fname = result_prefix + '_at_iteration_%d.png' % i
    imsave(fname, img)
    end_time = time.time()
    print('Image saved as', fname)
    print('Iteration %d completed in %ds' % (i, end_time - start_time))
Copy the code above and save it as neural_style_transfer.py (any name works).
4. Running it
Create a new empty folder, put neural_style_transfer.py into it, and then put the style template image and the image to be transformed into the same folder. The invocation takes the form:
python neural_style_transfer.py <path to the image to transform> <path to the style template image> <prefix for the saved output images (note: no .jpg or other extension)>
python neural_style_transfer.py './me.jpg' './starry_night.jpg' './me_t'
Note: the quotes around the image paths should be removed; to set the number of iterations yourself, append --iter n at the end. Screenshots of the iteration results and a comparison across the iteration process followed here. Style transfer can also be done with other libraries: there are implementations based on the Python deep-learning libraries DeepPy, TensorFlow, and Caffe.
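For example (a sketch with placeholder file names), to run 20 iterations instead of the default 10:
python neural_style_transfer.py me.jpg starry_night.jpg me_t --iter 20
This writes me_t_at_iteration_0.png through me_t_at_iteration_19.png into the current folder, one image per iteration.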
My phone burns mobile data every night after midnight, and China Mobile tells me the traffic goes to mmsns.qpic.cn and wx.qlogo.cn. What are these two things? (Baidu Zhidao)
It can use more than 50 MB in a single night.
The wx.qlogo.cn traffic is your friends' avatars; some people's pictures are quite large, so they consume data.
WeChat and Qzone.
But nobody talks to me on WeChat in the middle of the night, and I don't have Qzone on my phone.
Those two really are WeChat and Qzone; turn off the programs that auto-start in the background.
How could WeChat burn 40+ MB in a single night? It seems really strange; is China Mobile fooling me? I sleep with WeChat on every night, and normally it only uses a few hundred KB, so how does it suddenly hit 40-50 MB on one or two days?
Have you installed 360 or a similar app? Its traffic monitor shows which app is consuming the most data.
If I keep clicking someone's avatar, will they know? (Baidu Zhidao)
They will not know.
If you don't want them to know, choose anonymous visiting.