1. About with

with is Python's context-manager statement. The idea is simple: when a fixed setup step is needed on entry and a fixed cleanup step is needed on exit, you can attach both operations to a with statement. File writing, which requires opening and then closing the file, is a typical example.

The following is an example of file writing using with.
with open(filename, 'w') as sh:
    sh.write("#!/bin/bash\n")
    sh.write("#$ -N " + 'IC' + altas + str(patientNumber) + altas + '\n')
    sh.write("#$ -o " + pathSh + altas + 'log.log\n')
    sh.write("#$ -e " + pathSh + altas + 'err.log\n')
    sh.write('source ~/.bashrc\n')
    sh.write('. "/home/kjsun/anaconda3/etc/profile.d/conda.sh"\n')
    sh.write('conda activate python27\n')
    sh.write('echo "to python"\n')
    sh.write('echo "finish"\n')
    # no sh.close() needed: the with statement closes the file automatically
The expression after with runs first, and its result is bound to the variable named by as (here sh). When the code block exits, the close operation is performed on sh automatically.
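The open/close pairing above works for any resource: an object becomes usable with with by defining __enter__ and __exit__. A minimal sketch (the ManagedFile class and demo.sh filename are illustrative, not from the original text):

```python
class ManagedFile:
    """Minimal context manager: opens a file on entry, closes it on exit."""

    def __init__(self, name, mode):
        self.name = name
        self.mode = mode

    def __enter__(self):
        # The value returned here is what the 'as' variable is bound to.
        self.file = open(self.name, self.mode)
        return self.file

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs when the block exits, even if an exception was raised inside.
        self.file.close()


with ManagedFile("demo.sh", "w") as sh:
    sh.write("#!/bin/bash\n")
    sh.write('echo "finish"\n')
# At this point the file has already been closed by __exit__.
```

The standard library's contextlib.contextmanager decorator offers a shorter generator-based way to write the same thing.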

2. About with torch.no_grad():

When using PyTorch, not every operation needs to build a computation graph (the record of the computation used for gradient backpropagation and related operations). By default, computations on tensors that require gradients do build the graph. When the graph is not needed, such as during evaluation, you can wrap the code in with torch.no_grad(): to force graph construction to be skipped for everything inside the block.

The following shows the same evaluation loop with and without it.
(1) Using with torch.no_grad():

with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
print(outputs)
Output:
Accuracy of the network on the 10000 test images: 55 %
tensor([[-2.9141, -3.8210,  2.1426,  3.0883,  2.6363,  2.6878,  2.8766,  0.3396, -4.7505, -3.8502],
        [-1.4012, -4.5747,  1.8557,  3.8178,  1.1430,  3.9522, -0.4563,  1.2740, -3.7763, -3.3633],
        [ 1.3090,  0.1812,  0.4852,  0.1315,  0.5297, -0.3215, -2.0045,  1.0426, -3.2699, -0.5084],
        [-0.5357, -1.9851, -0.2835, -0.3110,  2.6453,  0.7452, -1.4148,  5.6919, -6.3235, -1.6220]])
Note that outputs carries no grad_fn attribute here.
(2) Without with torch.no_grad():

The same loop, without the wrapper:
for data in testloader:
    images, labels = data
    outputs = net(images)
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
print(outputs)
Output:
Accuracy of the network on the 10000 test images: 55 %
tensor([[-2.9141, -3.8210,  2.1426,  3.0883,  2.6363,  2.6878,  2.8766,  0.3396, -4.7505, -3.8502],
        [-1.4012, -4.5747,  1.8557,  3.8178,  1.1430,  3.9522, -0.4563,  1.2740, -3.7763, -3.3633],
        [ 1.3090,  0.1812,  0.4852,  0.1315,  0.5297, -0.3215, -2.0045,  1.0426, -3.2699, -0.5084],
        [-0.5357, -1.9851, -0.2835, -0.3110,  2.6453,  0.7452, -1.4148,  5.6919, -6.3235, -1.6220]],
       grad_fn=<AddmmBackward>)
This time outputs carries the attribute grad_fn=<AddmmBackward>, which indicates that the result belongs to a computation graph and can take part in gradient backpropagation and similar operations. The numerical results of the two runs are, however, identical.
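The grad_fn difference described above can be checked directly. A minimal sketch using standalone tensors rather than the network from the text:

```python
import torch

x = torch.ones(3, requires_grad=True)

# Inside no_grad, no computation graph is built.
with torch.no_grad():
    y = x * 2
assert y.grad_fn is None       # no graph: y cannot be backpropagated through
assert not y.requires_grad

# Outside no_grad, the graph is recorded as usual.
z = x * 2
assert z.grad_fn is not None   # a backward node such as MulBackward0

# The numerical results are identical either way.
assert torch.equal(y, z)
```

Skipping the graph also saves memory during evaluation, since intermediate activations do not have to be kept around for a backward pass.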
