Initializing a network in Keras by layer name
from keras.layers import (Input, Conv2D, MaxPooling2D, GlobalMaxPooling2D,
                          GlobalAveragePooling2D, BatchNormalization,
                          Concatenate, Dense, Dropout)
from keras.models import Model
from keras.optimizers import Adam


def get_model(input_shape1=[75, 75, 3], input_shape2=[1], weights=None):
    bn_model = 0
    trainable = True
    # kernel_regularizer = regularizers.l2(1e-4)
    kernel_regularizer = None
    activation = 'relu'

    img_input = Input(shape=input_shape1)
    angle_input = Input(shape=input_shape2)

    # Block 1
    x = Conv2D(64, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block1_conv1')(img_input)
    x = Conv2D(64, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    # Block 2
    x = Conv2D(128, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    # Block 3
    x = Conv2D(256, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Block 4
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

    # Block 5
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)

    branch_1 = GlobalMaxPooling2D()(x)
    # branch_1 = BatchNormalization(momentum=bn_model)(branch_1)
    branch_2 = GlobalAveragePooling2D()(x)
    # branch_2 = BatchNormalization(momentum=bn_model)(branch_2)
    branch_3 = BatchNormalization(momentum=bn_model)(angle_input)

    x = Concatenate()([branch_1, branch_2, branch_3])
    x = Dense(1024, activation=activation, kernel_regularizer=kernel_regularizer)(x)
    # x = Dropout(0.5)(x)
    x = Dense(1024, activation=activation, kernel_regularizer=kernel_regularizer)(x)
    x = Dropout(0.6)(x)
    output = Dense(1, activation='sigmoid')(x)

    model = Model([img_input, angle_input], output)
    optimizer = Adam(lr=1e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-8, decay=0.0)
    model.compile(loss='binary_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])

    if weights is not None:
        # set by_name=True so weights are matched to layers by name
        model.load_weights(weights, by_name=True)
        # layer_weights = h5py.File(weights, 'r')
        # for idx in range(len(model.layers)):
        #     model.set_weights()
    print('have prepared the model.')
    return model
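Because every convolutional layer above reuses VGG16's layer names ('block1_conv1' through 'block5_conv3'), load_weights(weights, by_name=True) copies the pretrained weights into exactly those layers and leaves the newly added layers (the BatchNormalization branch and the Dense head) at their random initialization. A minimal usage sketch, assuming a local copy of the standard Keras VGG16 no-top weight file (the path is an assumption):

# Sketch: only the layers whose names match entries in the HDF5 file
# are initialized from it; all other layers keep their random weights.
model = get_model(weights='vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')
model.summary()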
Supplementary knowledge: the keras.layers.Dense() method
keras.layers.Dense() is the basic fully-connected layer in Keras. It performs the operation output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function, kernel is the weight matrix created by the layer, and bias is the bias vector created by the layer. If the layer's input has rank greater than 2, it is flattened before the initial dot product with kernel.
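As a concrete illustration of that formula (a minimal NumPy sketch with made-up numbers, equivalent to Dense(2, activation='relu') applied to a length-3 input):

import numpy as np

x = np.array([1.0, 2.0, 3.0])        # input, shape (3,)
kernel = np.array([[0.5, -1.0],
                   [0.0,  0.5],
                   [1.0,  0.0]])     # weight matrix, shape (3, 2)
bias = np.array([0.1, -0.2])         # bias vector, shape (2,)

# output = activation(dot(input, kernel) + bias), here with relu
output = np.maximum(0.0, x.dot(kernel) + bias)
print(output)                        # [3.6 0. ]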
The Keras source code is as follows:
class Dense(Layer):
    """Just your regular densely-connected NN layer.

    `Dense` implements the operation:
    `output = activation(dot(input, kernel) + bias)`
    where `activation` is the element-wise activation function
    passed as the `activation` argument, `kernel` is a weights matrix
    created by the layer, and `bias` is a bias vector created by the layer
    (only applicable if `use_bias` is `True`).

    Note: if the input to the layer has a rank greater than 2, then
    it is flattened prior to the initial dot product with `kernel`.

    # Example

    ```python
        # as first layer in a sequential model:
        model = Sequential()
        model.add(Dense(32, input_shape=(16,)))
        # now the model will take as input arrays of shape (*, 16)
        # and output arrays of shape (*, 32)

        # after the first layer, you don't need to specify
        # the size of the input anymore:
        model.add(Dense(32))
    ```

    # Arguments
        units: Positive integer, dimensionality of the output space.
        activation: Activation function to use
            (see [activations](../activations.md)).
            If you don't specify anything, no activation is applied
            (ie. "linear" activation: `a(x) = x`).
        use_bias: Boolean, whether the layer uses a bias vector.
        kernel_initializer: Initializer for the `kernel` weights matrix
            (see [initializers](../initializers.md)).
        bias_initializer: Initializer for the bias vector
            (see [initializers](../initializers.md)).
        kernel_regularizer: Regularizer function applied to
            the `kernel` weights matrix
            (see [regularizer](../regularizers.md)).
        bias_regularizer: Regularizer function applied to the bias vector
            (see [regularizer](../regularizers.md)).
        activity_regularizer: Regularizer function applied to
            the output of the layer (its "activation").
            (see [regularizer](../regularizers.md)).
        kernel_constraint: Constraint function applied to
            the `kernel` weights matrix
            (see [constraints](../constraints.md)).
        bias_constraint: Constraint function applied to the bias vector
            (see [constraints](../constraints.md)).

    # Input shape
        nD tensor with shape: `(batch_size, ..., input_dim)`.
        The most common situation would be
        a 2D input with shape `(batch_size, input_dim)`.

    # Output shape
        nD tensor with shape: `(batch_size, ..., units)`.
        For instance, for a 2D input with shape `(batch_size, input_dim)`,
        the output would have shape `(batch_size, units)`.
    """

    @interfaces.legacy_dense_support
    def __init__(self, units,
                 activation=None,
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):
        if 'input_shape' not in kwargs and 'input_dim' in kwargs:
            kwargs['input_shape'] = (kwargs.pop('input_dim'),)
        super(Dense, self).__init__(**kwargs)
        self.units = units
        self.activation = activations.get(activation)
        self.use_bias = use_bias
        self.kernel_initializer = initializers.get(kernel_initializer)
        self.bias_initializer = initializers.get(bias_initializer)
        self.kernel_regularizer = regularizers.get(kernel_regularizer)
        self.bias_regularizer = regularizers.get(bias_regularizer)
        self.activity_regularizer = regularizers.get(activity_regularizer)
        self.kernel_constraint = constraints.get(kernel_constraint)
        self.bias_constraint = constraints.get(bias_constraint)
        self.input_spec = InputSpec(min_ndim=2)
        self.supports_masking = True

    def build(self, input_shape):
        assert len(input_shape) >= 2
        input_dim = input_shape[-1]

        self.kernel = self.add_weight(shape=(input_dim, self.units),
                                      initializer=self.kernel_initializer,
                                      name='kernel',
                                      regularizer=self.kernel_regularizer,
                                      constraint=self.kernel_constraint)
        if self.use_bias:
            self.bias = self.add_weight(shape=(self.units,),
                                        initializer=self.bias_initializer,
                                        name='bias',
                                        regularizer=self.bias_regularizer,
                                        constraint=self.bias_constraint)
        else:
            self.bias = None
        self.input_spec = InputSpec(min_ndim=2, axes={-1: input_dim})
        self.built = True

    def call(self, inputs):
        output = K.dot(inputs, self.kernel)
        if self.use_bias:
            output = K.bias_add(output, self.bias)
        if self.activation is not None:
            output = self.activation(output)
        return output

    def compute_output_shape(self, input_shape):
        assert input_shape and len(input_shape) >= 2
        assert input_shape[-1]
        output_shape = list(input_shape)
        output_shape[-1] = self.units
        return tuple(output_shape)

    def get_config(self):
        config = {
            'units': self.units,
            'activation': activations.serialize(self.activation),
            'use_bias': self.use_bias,
            'kernel_initializer': initializers.serialize(self.kernel_initializer),
            'bias_initializer': initializers.serialize(self.bias_initializer),
            'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
            'bias_regularizer': regularizers.serialize(self.bias_regularizer),
            'activity_regularizer': regularizers.serialize(self.activity_regularizer),
            'kernel_constraint': constraints.serialize(self.kernel_constraint),
            'bias_constraint': constraints.serialize(self.bias_constraint)
        }
        base_config = super(Dense, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
The parameters are as follows:
units: positive integer, the dimensionality of the output space.
activation: the activation function to use. If nothing is specified, no activation is applied (i.e., "linear" activation: a(x) = x).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: initializer for the kernel weight matrix.
bias_initializer: initializer for the bias vector.
kernel_regularizer: regularizer function applied to the kernel weight matrix.
bias_regularizer: regularizer function applied to the bias vector.
activity_regularizer: regularizer function applied to the output of the layer (its "activation").
kernel_constraint: constraint function applied to the kernel weight matrix.
bias_constraint: constraint function applied to the bias vector.
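A short sketch tying these parameters together (the layer sizes, l2 factor, and layer names are arbitrary choices for illustration):

from keras import regularizers
from keras.layers import Dense, Input
from keras.models import Model

inputs = Input(shape=(16,))
# 32 output units, relu activation, l2 penalty on the kernel, explicit name;
# a named layer can later be matched by load_weights(..., by_name=True)
x = Dense(32, activation='relu',
          kernel_regularizer=regularizers.l2(1e-4),
          name='fc1')(inputs)
outputs = Dense(1, activation='sigmoid', use_bias=True, name='fc_out')(x)
model = Model(inputs, outputs)
model.summary()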
This concludes the article on initializing a network in Keras by layer name. I hope it serves as a useful reference.