SquareFace Blog



Parameters of tf.layers.dense()

Posted on 2019-10-02

units is the output size (the last dimension of the output)

My understanding of units used to be slightly off: I took it to mean the number of neurons in this layer, which is also the meaning given in the official TF documentation. In actual use it behaves as the size of the output, although the two readings are consistent, since each unit contributes one output value.

While reviewing TensorFlow today, I realized that the add_layer() function I wrote earlier is essentially tf.layers.dense().

add_layer()

This is really just a function whose job is outputs = activation(inputs * kernel + bias). Put plainly, it performs the matrix computation of a fully connected layer.

kernel is what is usually called the weights, stored as a matrix;
bias is the bias term, stored as a vector.

def add_layer(input_data, input_size, output_size, activation_function=None):
    """
    input_data: the input tensor
    input_size: size of the input (number of input features)
    output_size: size of the output (number of output features)
    activation_function: activation function; None means linear output
    """
    # Random weight matrix with input_size rows and output_size columns
    Weights = tf.Variable(tf.random_normal([input_size, output_size]))
    # One row of output_size values, all initialized to 0.1
    biases = tf.Variable(tf.zeros([1, output_size]) + 0.1)
    # input_data * Weights + biases
    Wx_plus_b = tf.matmul(input_data, Weights) + biases
    if activation_function is None:
        output = Wx_plus_b
    else:
        output = activation_function(Wx_plus_b)
    return output
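
As a point of reference, a minimal usage sketch (assuming TensorFlow 1.x imported as tf; the shapes are made up for illustration). Note that input_size must match the second dimension of input_data:

x = tf.placeholder(tf.float32, shape=[None, 3])    # a batch of 3-feature rows
hidden = add_layer(x, input_size=3, output_size=10,
                   activation_function=tf.nn.relu)  # output shape: [None, 10]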

tf.layers.dense()

These are the parameters that tf.layers.dense() takes. TF drops the input_size argument because it equals the corresponding dimension of input_data.shape.

inputs --> input_data
units --> output_size

def dense(
    inputs, units,
    activation=None,
    use_bias=True,
    kernel_initializer=None,
    bias_initializer=init_ops.zeros_initializer(),
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    trainable=True,
    name=None,
    reuse=None):
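
With that mapping, the built-in call below should construct the same kind of layer as the hand-rolled add_layer() above (a minimal TF 1.x sketch; it reuses the placeholder x from the earlier example, and the unit count 10 is only illustrative):

out = tf.layers.dense(inputs=x, units=10, activation=tf.nn.relu)
# comparable in effect to: add_layer(x, input_size=3, output_size=10, activation_function=tf.nn.relu)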

Paper selection

Posted on 2019-09-30

GuardCell

The rapid anion channels in the guard cell membrane are calcium-, time-, and voltage-dependent, and the anion that mainly passes through them is Cl-. After a guard cell senses an external stimulus, the rise in cytosolic Ca2+ concentration can activate the rapid anion channels and trigger Cl- efflux.

Search keywords

“Anion channel” wheat “patch clamp”

I still need to read and understand the papers that Xiaofei found.

GCAC: guard cell anion channel

Current data under voltage stimulation

Search Shabala's ResearchGate page with the keywords "patch clamp" "wheat".

Usage of tf.matmul()

Posted on 2019-09-28
@tf_export("linalg.matmul", "matmul")
@dispatch.add_dispatch_support
def matmul(a,
           b,
           transpose_a=False,
           transpose_b=False,
           adjoint_a=False,
           adjoint_b=False,
           a_is_sparse=False,
           b_is_sparse=False,
           name=None):
"""Multiplies matrix `a` by matrix `b`, producing `a` * `b`.

The inputs must, following any transpositions, be tensors of rank >= 2
where the inner 2 dimensions specify valid matrix multiplication arguments,
and any further outer dimensions match.

Both matrices must be of the same type. The supported types are:
`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`.

Either matrix can be transposed or adjointed (conjugated and transposed) on
the fly by setting one of the corresponding flag to `True`. These are `False`
by default.

If one or both of the matrices contain a lot of zeros, a more efficient
multiplication algorithm can be used by setting the corresponding
`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.
This optimization is only available for plain matrices (rank-2 tensors) with
datatypes `bfloat16` or `float32`.

For example:

```python
# 2-D tensor `a`
# [[1, 2, 3],
# [4, 5, 6]]
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])

# 2-D tensor `b`
# [[ 7, 8],
# [ 9, 10],
# [11, 12]]
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])

# `a` * `b`
# [[ 58, 64],
# [139, 154]]
c = tf.matmul(a, b)


# 3-D tensor `a`
# [[[ 1, 2, 3],
# [ 4, 5, 6]],
# [[ 7, 8, 9],
# [10, 11, 12]]]
a = tf.constant(np.arange(1, 13, dtype=np.int32),
shape=[2, 2, 3])

# 3-D tensor `b`
# [[[13, 14],
# [15, 16],
# [17, 18]],
# [[19, 20],
# [21, 22],
# [23, 24]]]
b = tf.constant(np.arange(13, 25, dtype=np.int32),
shape=[2, 3, 2])

# `a` * `b`
# [[[ 94, 100],
# [229, 244]],
# [[508, 532],
# [697, 730]]]
c = tf.matmul(a, b)

# Since python >= 3.5 the @ operator is supported (see PEP 465).
# In TensorFlow, it simply calls the `tf.matmul()` function, so the
# following lines are equivalent:
d = a @ b @ [[10.], [11.]]
d = tf.matmul(tf.matmul(a, b), [[10.], [11.]])
```

Args:
  a: Tensor of type float16, float32, float64, int32, complex64, complex128
    and rank > 1.
  b: Tensor with same type and rank as a.
  transpose_a: If True, a is transposed before multiplication.
  transpose_b: If True, b is transposed before multiplication.
  adjoint_a: If True, a is conjugated and transposed before
    multiplication.
  adjoint_b: If True, b is conjugated and transposed before
    multiplication.
  a_is_sparse: If True, a is treated as a sparse matrix.
  b_is_sparse: If True, b is treated as a sparse matrix.
  name: Name for the operation (optional).

Returns:
  A Tensor of the same type as a and b where each inner-most matrix is
  the product of the corresponding matrices in a and b, e.g. if all
  transpose or adjoint attributes are False:

  `output`[..., i, j] = sum_k (`a`[..., i, k] * `b`[..., k, j]),
  for all indices i, j.

Note: This is matrix product, not element-wise product.

Raises:
  ValueError: If transpose_a and adjoint_a, or transpose_b and adjoint_b
    are both set to True.
"""
with ops.name_scope(name, "MatMul", [a, b]) as name:
  if transpose_a and adjoint_a:
    raise ValueError("Only one of transpose_a and adjoint_a can be True.")
  if transpose_b and adjoint_b:
    raise ValueError("Only one of transpose_b and adjoint_b can be True.")

  if context.executing_eagerly():
    if not isinstance(a, (ops.EagerTensor, _resource_variable_type)):
      a = ops.convert_to_tensor(a, name="a")
    if not isinstance(b, (ops.EagerTensor, _resource_variable_type)):
      b = ops.convert_to_tensor(b, name="b")
  else:
    a = ops.convert_to_tensor(a, name="a")
    b = ops.convert_to_tensor(b, name="b")

  # TODO(apassos) remove _shape_tuple here when it is not needed.
  a_shape = a._shape_tuple()  # pylint: disable=protected-access
  b_shape = b._shape_tuple()  # pylint: disable=protected-access

  if fwd_compat.forward_compatible(2019, 4, 25):
    output_may_have_non_empty_batch_shape = (
        (a_shape is None or len(a_shape) > 2) or
        (b_shape is None or len(b_shape) > 2))
    batch_mat_mul_fn = gen_math_ops.batch_mat_mul_v2
  else:
    output_may_have_non_empty_batch_shape = (
        (a_shape is None or len(a_shape) > 2) and
        (b_shape is None or len(b_shape) > 2))
    batch_mat_mul_fn = gen_math_ops.batch_mat_mul

  if (not a_is_sparse and
      not b_is_sparse) and output_may_have_non_empty_batch_shape:
    # BatchMatmul does not support transpose, so we conjugate the matrix and
    # use adjoint instead. Conj() is a noop for real matrices.
    if transpose_a:
      a = conj(a)
      adjoint_a = True
    if transpose_b:
      b = conj(b)
      adjoint_b = True
    return batch_mat_mul_fn(a, b, adj_x=adjoint_a, adj_y=adjoint_b, name=name)

  # Neither matmul nor sparse_matmul support adjoint, so we conjugate
  # the matrix and use transpose instead. Conj() is a noop for real
  # matrices.
  if adjoint_a:
    a = conj(a)
    transpose_a = True
  if adjoint_b:
    b = conj(b)
    transpose_b = True

  use_sparse_matmul = False
  if a_is_sparse or b_is_sparse:
    sparse_matmul_types = [dtypes.bfloat16, dtypes.float32]
    use_sparse_matmul = (
        a.dtype in sparse_matmul_types and b.dtype in sparse_matmul_types)
  if ((a.dtype == dtypes.bfloat16 or b.dtype == dtypes.bfloat16) and
      a.dtype != b.dtype):
    # matmul currently doesn't handle mixed-precision inputs.
    use_sparse_matmul = True
  if use_sparse_matmul:
    ret = sparse_matmul(
        a,
        b,
        transpose_a=transpose_a,
        transpose_b=transpose_b,
        a_is_sparse=a_is_sparse,
        b_is_sparse=b_is_sparse,
        name=name)
    # sparse_matmul always returns float32, even with
    # bfloat16 inputs. This prevents us from configuring bfloat16 training.
    # casting to bfloat16 also matches non-sparse matmul behavior better.
    if a.dtype == dtypes.bfloat16 and b.dtype == dtypes.bfloat16:
      ret = cast(ret, dtypes.bfloat16)
    return ret
  else:
    return gen_math_ops.mat_mul(
        a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
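
To complement the docstring examples above, a short illustrative sketch (not from the original post) of the transpose_b flag:

import tensorflow as tf

a = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])     # shape (2, 3)
b = tf.constant([[7., 8., 9.],
                 [10., 11., 12.]])  # shape (2, 3)
# b is transposed on the fly, so the product has shape (2, 2)
c = tf.matmul(a, b, transpose_b=True)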

Fitting gating probabilities

Posted on 2019-09-28
import numpy as np
import math
import matplotlib.pyplot as plt
import pandas as pd
from scipy.optimize import curve_fit

path = '/Users/squareface/PycharmProjects/testVenv/'


num = 2
# Read the data
data = pd.read_csv(path+'data'+str(num)+'.csv')
# x values (first column is the time axis)
x = np.array(data.iloc[:,0])

G = 36
# Define the model equations
def func1(x, m, tm, h, th):
    return G*m*m*(1-np.exp(-x/tm))**2*(h-(h-1)*np.exp(-x/th))
def func2(x, m, tm):
    return G*m*m*(1-np.exp(-x/tm))**2
# Results
result = []

# Fit the data at each membrane potential (one column per potential)
for i in range(1, len(data.columns)):
    y = np.array(data.iloc[:,i])
    # Nonlinear least-squares fit
    if i > 2:
        popt, pcov = curve_fit(func1, x, y, maxfev=9999999, bounds=(0,60), method='trf')
    else:
        popt, pcov = curve_fit(func2, x, y, maxfev=9999999, bounds=(0,60), method='trf')
    # popt contains the fitted coefficients
    if i > 2:
        a = popt[0]
        b = popt[1]
        c = popt[2]
        d = popt[3]
        yvals = func1(x, a, b, c, d)
    else:
        a = popt[0]
        b = popt[1]
        c = np.nan
        d = np.nan
        yvals = func2(x, a, b)
    # Plot
    plt.cla()
    plot1 = plt.plot(x, y, 's', label='original values')     # raw data points
    plot2 = plt.plot(x, yvals, 'r', label='polyfit values')  # fitted curve
    plt.xlabel('t')                # x-axis label
    plt.ylabel('gCIR')             # y-axis label
    plt.legend(loc='best')         # legend
    plt.title('V='+list(data.columns)[i]+'mV')  # title
    plt.savefig(path+str(num)+'_'+str(list(data.columns)[i])+'mV.png')  # save the figure
    result.append([list(data.columns)[i], b, d, G*a*a, a, c])
# Save the results
result = pd.DataFrame(result, columns=['V','tm','th','GCIRm2','m','h'])
result.to_csv(path+str(num)+'_'+'result.csv', index=False)
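
To see what curve_fit is doing here, a small self-contained sketch with synthetic data (the parameters are made up, not the real patch-clamp recordings) that recovers m and tm using the same func2 model:

import numpy as np
from scipy.optimize import curve_fit

G = 36
def func2(x, m, tm):
    return G*m*m*(1 - np.exp(-x/tm))**2

t = np.linspace(0.1, 50, 200)                                  # time axis
y = func2(t, 0.8, 12.0) + np.random.normal(0, 0.2, t.shape)    # noisy synthetic trace

popt, pcov = curve_fit(func2, t, y, bounds=(0, 60), method='trf')
print(popt)  # should come back close to [0.8, 12.0]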

Simulating the HH equations

Posted on 2019-09-28

Simulating constant-current stimulation

import numpy as np
import math
import matplotlib.pyplot as plt
plt.figure(figsize=(10,10))
result = []

DT = 0.001
t = np.linspace(0,50,int(50/DT)+1) # simulation time; use 1200 for the light-stimulation case
Cm = 1 # membrane capacitance
I = 90 # stimulus current ????? must not be too large
v = -112 # resting potential
GK = 24 # GK=3.5;
GCl = 36 # GCl=16;
GCl_slow = 1
GKin = 20
GH = 0.28
n2 = 0
m2 = 0
nn2 = 0
s2 = 0
h2 = 1
k2 = 0
v2 = v

T0 = 25
Tinitial = 18
Tend = 4
Iin0 = 0.005 # Iin0=0.0193;
Iinmax = 3 # Iinmax=2.5;
K1 = 1
n_t1 = 3
Iexmax = 1
Km = 0.5
n_t2 = 2
Q = 10
P0 = 0.005
Kp = 0.5
Ca0 = 0.1 # cytosolic Ca2+ concentration, uM
Ca2 = Ca0
KQ = math.log(Q)/10
T1 = Tinitial
Rate = -0.8 # degrees Celsius per second
tc = 45 # seconds

for i in range(1,len(t)):
    #---------------------------------------------------------------------
    Ca1 = Ca2
    T2 = T1+Rate*DT*math.exp(-t[i]/tc)
    temp = (-(Rate*math.exp(-t[i]/tc)-abs(Rate*math.exp(-t[i]/tc)))/2)**n_t1
    dCadt = Iin0+Iinmax*temp/(K1**n_t1+temp) - Iexmax*math.exp(KQ*(T1-T0))*(Ca1**n_t2)/(Km**n2+Ca1**n_t2)
    Ca2 = Ca1+DT*dCadt
    dIexmax = DT*P0*np.sign(Ca1-Ca0)*Ca1/(Kp+Ca1)
    Iexmax = Iexmax+dIexmax
    Itemperature = -0.6823*dCadt*450 # units not converted; Itemperature=-0.0154*dCadt/0.00005;
    T1 = T2
    #---------------------------------------------------------------------
    n1 = n2
    m1 = m2
    h1 = h2
    s1 = s2
    k1 = k2
    nn1 = nn2
    v1 = v2
    V = v1
    Ib = 2*(V+127.5) # Ib=2*(V+130.5);
    am = 10.55*(V+60)/(1-math.exp(-(V+60)/7.029))
    bm = 44.32*math.exp(-V/98.2)
    ah = 38.35*math.exp(-V/41.39)
    bh = 7.249/(1+0.6061*math.exp(-V/26.58))
    an = (0.01812*V+2.598)/(1+0.5954*math.exp(-V/10.8))
    bn = 1.56*math.exp(-V/23.4)
    AS = 0.02985*math.exp(V/144.6) # slow Cl
    bs = 0.03542*math.exp(-V/91.67)
    ak = 0.01414*math.exp(-0.03175*V)/(1+math.exp(0.2434*V)) # inward K
    bk = 974.1*math.exp(0.04129*V)/(1+math.exp(0.7851*V))
    ann = 0.01*(V+60)/(1-math.exp(-(V+60)/2.8))
    bnn = 0.05*math.exp(-V/80.4)
    if i==1:
        n1 = an/(an+bn)
        m1 = 0
        h1 = 1
        s1 = AS/(AS+bs)
        k1 = ak/(ak+bk)
        nn1 = ann/(ann+bnn)

    N = n1
    M = m1
    H = h1
    S = s1
    NN = nn1
    gK = GK*N*N
    gCl = GCl*M*M*H
    gCl_slow = GCl_slow*S
    # gKN=GK*NN*NN;% repolarization conductance
    gKN = 40*NN*NN # the maximum conductance 40 can be adjusted; a larger value still gives a normal waveform under stronger stimuli
    gH = GH/(1+math.exp((65.53+V)/112.2))
    IH = gH*(V+231.8)

    Ik = gK*(V+53)
    ICl = gCl*(V-13.6) # ICl=gCl*(V+23.6);
    ICl_slow = gCl_slow*(V-37.8)
    IKN = gKN*(V+115) # IKN=gKN*(V+118);
    K = k1 # inward K current
    gKin = GKin*K
    Ikin = gKin*(V+75)
    kdot = (ak*(1-K)-bk*K)*DT
    k2 = k1+kdot

    ndot = (an*(1-N)-bn*N)*DT
    nndot = (ann*(1-NN)-bnn*NN)*DT
    # IT=Ik+ICl+ICl_slow+Ikin+Ib+IH;
    IT = Ik+ICl+ICl_slow+IKN+Ikin+Ib+IH


    # if t[i]<800 % light stimulation
    #     Ilight=71*(1-math.exp(-t[i]/82))^3;
    # else
    #     Ilight=71*math.exp(-(t[i]-800)/56);
    #     # if rem(t[i],20)==0
    #     #     v1
    #     # end
    # end
    # vdot=((Ilight-IT)/Cm)*DT;


    if t[i]<100: # temperature / constant-current electrical stimulation
        vdot = ((I-IT)/Cm)*DT
        # vdot=(-(Itemperature+IT)/Cm)*DT;
    else:
        vdot = (-IT/Cm)*DT


    v2 = v1+vdot
    mdot = (am*(1-M)-bm*M)*DT
    n2 = n1+ndot
    m2 = m1+mdot
    sdot = (AS*(1-S)-bs*S)*DT
    s2 = s1+sdot
    nn2 = nn1+nndot
    if V<-20:
        h2 = 1
    else:
        hdot = (ah*(1-H)-bh*H)*DT
        h2 = h1+hdot

    '''
    if t[i]%0.01==0:
        result.append([t[i],v1])
    '''

    result.append([t[i],v1])
result = np.array(result)
plt.plot(result[:,0],result[:,1],c='b')
#plt.plot(result[:,0],result[:,2],'r-') # simulated Ca2+ concentration over time
plt.xlabel('Time (s)')
plt.ylabel('Membrane potential (mV)')
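
The loop above is forward Euler applied to the membrane equation and to each HH-style gating variable. Stripped of the model details, the update it repeats looks like the sketch below (the helper name is illustrative, not from the script):

# Generic forward-Euler step for a gating variable x with voltage-dependent
# rate constants alpha and beta: dx/dt = alpha*(1 - x) - beta*x
def euler_gate_step(x, alpha, beta, dt):
    return x + (alpha*(1.0 - x) - beta*x)*dt

# e.g. the n gate in the script: n2 = n1 + (an*(1 - n1) - bn*n1)*DT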