TensorFlow Tutorial (Sherry Moore, Google Brain)
Chapters
0:0 Introduction
1:10 Thank you
2:0 What is TensorFlow
5:50 How TensorFlow works
7:45 Frontend libraries
8:50 Portability
9:50 How we use it
10:35 Smart Reply
11:25 Games
11:55 Models
12:20 High-level libraries
13:50 Linear Regression
14:20 Mystery Creation
15:35 Jupyter Notebook
19:55 Variable Objects
22:20 Training Graph
25:20 Results
26:35 Other optimizers
28:35 Learn something new
29:50 What is MNIST
31:20 What is important when building a network
35:30 Train graphs
36:25 Placeholders
37:45 Saver
38:45 Reduce Loss
42:20 LS
43:10 Checkpoint
43:45 Return
46:45 Exercises
48:0 Training Evaluations
49:45 TensorFlow is for Machine Learning
50:5 Questions
50:55 Peachy
54:10 Tori
55:5 Load your own data
57:50 Writing TensorFlow in Python
00:00:00.000 |
So I'm going to take a picture so I remember how many of you 00:00:15.640 |
So today I'll be giving a tutorial on TensorFlow. 00:00:18.800 |
First I'll talk a little bit about what TensorFlow is 00:00:28.440 |
going to work with you together to build a couple models 00:00:33.440 |
to solve the most classic machine learning problems, 00:00:37.960 |
so-called get your feet wet for those of you from New Zealand. 00:00:45.240 |
So hopefully at the end, you'll be going home 00:00:51.400 |
all the wonderful things that you have watched today, 00:01:02.800 |
So before I go any further, has everybody installed TensorFlow? 00:01:17.040 |
but if you have Wolf G, TF tutorial is perfectly fine. 00:01:31.200 |
And also, I have my wonderful product boss or product manager 00:01:35.920 |
So if you guys have any request for TensorFlow, 00:02:06.400 |
And ever since then, we have become the most, most popular 00:02:22.200 |
know how hard it is to get one of those acknowledgments. 00:02:43.960 |
of its really flexible data flow infrastructure, 00:02:47.960 |
it makes it really suitable for pretty much any application 00:02:52.840 |
Basically, if your model can run asynchronously and fire 00:02:58.600 |
when data is ready, you can probably use TensorFlow. 00:03:04.040 |
Originally, we worked alongside other researchers. 00:03:09.240 |
When I joined the team, I sat right next to Alex, 00:03:17.400 |
As we developed TensorFlow, they would tell us, no, 00:03:21.840 |
Yes, when you do this, it makes our lives a lot easier. 00:03:24.880 |
And this is why we believe that we have developed 00:03:27.280 |
an infrastructure that will work really well for researchers. 00:03:32.800 |
have in mind that we would like to take from research 00:03:38.720 |
We don't want you to write all the code that's typically 00:03:43.680 |
We want you to write code that you can literally cut and paste, 00:03:46.080 |
save in a file, and productize immediately. 00:03:49.760 |
So TensorFlow is really designed with that in mind. 00:03:55.440 |
So we are halfway into your deep learning school. 00:03:59.080 |
So can anybody tell me, if you want to build a neural net, 00:04:32.240 |
So all these neurons, what do they operate on? 00:04:45.120 |
And they do something, such as convolution, matrix 00:04:48.800 |
multiplication, max pooling, average pooling, dropout, 00:05:00.520 |
Tensor is nothing more than a multidimensional array. 00:05:03.960 |
For those of you who are familiar with NumPy arrays, 00:05:23.640 |
And all these neurons are connected to each other 00:05:28.280 |
So as data become available, they would fire-- 00:05:46.560 |
so I don't know how many of you can actually see this animation. 00:05:53.360 |
So this is to really visualize how TensorFlow works. 00:05:57.200 |
All these nodes, the oval ones are computation. 00:06:03.840 |
So all these nodes, they would generate output, 00:06:08.320 |
And as soon as all the inputs for a particular node 00:06:11.360 |
are available, it would do its thing, produce output. 00:06:20.240 |
which are held in tensors, will flow through your network. 00:06:26.860 |
So everybody's like, wow, this sounds like magic. 00:06:34.520 |
So who said-- is it Sir Arthur Clarke who said, 00:06:53.440 |
I know I want to get through this as quickly as possible 00:06:59.040 |
so we can actually do the lab that you're all dying to do. 00:07:15.000 |
Because being modular allows you to innovate, to upgrade, 00:07:19.080 |
to improve, to modify, to do whatever you want with any 00:07:23.000 |
piece, as long as you keep the APIs consistent. 00:07:29.280 |
I think that's one of the wonderful things that's 00:07:32.080 |
Pretty much any infrastructure at Google is really modular. 00:07:37.040 |
All you need to maintain is the API stability. 00:07:45.440 |
I think you guys must have seen some examples of how 00:07:53.720 |
And if C++ and Python are not your favorite languages, 00:08:00.200 |
So you construct your graph in your favorite language. 00:08:06.840 |
call it the core TensorFlow execution system. 00:08:11.320 |
And that's what you all will be running today on your laptop 00:08:14.560 |
when you open your Python notebook or Jupyter notebook. 00:08:23.400 |
on where you are going to run this application, 00:08:27.640 |
it will send the kernel to the corresponding device. 00:08:53.760 |
today you'll be running TensorFlow on your laptop. 00:08:59.040 |
Everybody can run it on your iPhone, your Android phone. 00:09:04.700 |
I would love to see people putting it on Raspberry Pi, 00:09:13.920 |
because somebody just stole my bike, and my security camera 00:09:17.340 |
captured all this grainy stuff that I cannot make out. 00:09:20.040 |
Wouldn't it be nice if you could do machine learning on this thing 00:09:24.680 |
and it just starts taking high-resolution pictures when 00:09:28.360 |
things are moving, rather than constantly capturing 00:09:31.480 |
all those grainy images, which are totally useless? 00:09:42.600 |
So we talked about what TensorFlow is, how it works. 00:09:57.560 |
They can recognize 1,000 categories of images out of the box. 00:10:03.480 |
You have to retrain it if you want it to recognize, say, 00:10:08.720 |
But it's not difficult. And I have links for you to-- 00:10:11.960 |
actually, if you want to train on your own images, 00:10:16.600 |
Wouldn't it be fun if you go to your 40-year reunion 00:10:27.040 |
And we also use it to do Google Voice Search. 00:10:38.480 |
Yeah, yeah, this is awesome, especially for those of you 00:10:42.240 |
who are doing what you're not supposed to do-- 00:10:44.840 |
texting while driving, you saw an email coming in, 00:10:47.720 |
and you can just say, oh, yes, I'll be there. 00:10:50.740 |
So based on the statistics that we collected in February, 00:11:03.480 |
maybe Zach can collect some stats for me later. 00:11:16.880 |
We're like, that's probably not the right answer. 00:11:27.720 |
There are all kinds of games that are being developed. 00:11:43.960 |
I think many of you have done this deep dream. 00:11:46.320 |
If we have time in the end of the lab, we can try this. 00:11:50.800 |
So if we are super fast, we can all try to make some art. 00:11:55.280 |
And all those, what I just talked about, of course, 00:11:57.960 |
Google being this wonderful, generous company, 00:12:02.680 |
So we have actually published all our models. 00:12:07.480 |
find all these Inception, captioning, and language models, 00:12:17.120 |
which I think Quoc will be talking about tomorrow. 00:12:41.320 |
We have many libraries that are developed on top 00:12:47.660 |
If whatever is out there does not fit your needs perfectly, 00:12:58.040 |
But these are basically all the models and libraries 00:13:23.460 |
All right, so OK, before you bring up your Python notebook, 00:13:30.540 |
So as I mentioned, there are two classic machine learning problems. 00:13:39.440 |
So we are going to do two simple labs to cover those. 00:13:43.580 |
I do have a lot of small exercises you can play with. 00:13:46.140 |
I encourage you to play with them to get a lot more comfortable. 00:13:52.940 |
So I'm sure it has been covered, yeah, in today's lectures. 00:13:55.820 |
Somebody must have covered linear regression. 00:14:28.620 |
So I think it still kind of makes sense, right? 00:14:44.260 |
I think my friends are still doing these on Facebook, saying, oh, 00:14:50.340 |
And then they would be like, yeah, I solved it. 00:14:54.860 |
I will unfriend you guys if you click on another one of those. 00:15:08.660 |
And then I will tell you that this is the formula. 00:15:12.460 |
But I'm not going to give you the weights, w and b. 00:15:15.580 |
All of you have learned by now that w stands for weight and b for bias. 00:15:20.580 |
So the idea is that if you are given enough samples, 00:15:27.940 |
you should be able to make a pretty good guess at what w and b are. 00:15:35.420 |
So now you can bring up your Jupyter Notebook 00:16:01.380 |
And just to make sure that you're all paying attention, 00:16:07.980 |
I asked Sammy if I was supposed to bring Shrek, and he said no. 00:16:18.140 |
Whoever can answer will get some mystery present. 00:16:27.620 |
there are, I would say, four things that you will need. 00:16:36.580 |
You're going to be building an inference graph. 00:16:39.220 |
I think in other lectures, it's also called a forward graph, 00:16:42.980 |
to the point that it produces logits, the logistic outputs. 00:16:47.780 |
And then you're going to have training operations, which 00:16:52.540 |
is where you would define a loss, an optimizer. 00:17:04.260 |
Yeah, and then you will basically run the graph. 00:17:10.500 |
You always have to define your loss and your optimizer. 00:17:15.060 |
And the training is basically to minimize your loss. 00:17:37.060 |
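For concreteness, here is a minimal sketch of those pieces for the upcoming linear regression lab. It assumes the TF 1.x-style API used at the time of this talk, and every name in it (x_train, true_w, and so on) is my own illustration rather than the notebook's actual code; the 0.1 and 0.3 are just example values consistent with the x = 0.2, y = 0.32 sample mentioned below.

    import numpy as np
    import tensorflow as tf  # 1.x-style API

    # Some training samples that secretly follow y = w*x + b.
    # The "true" values here are made up; the lab keeps its own hidden.
    true_w, true_b = 0.1, 0.3
    x_train = np.random.rand(100).astype(np.float32)
    y_train = true_w * x_train + true_b

    # The inference graph: trainable variables plus the prediction op.
    w = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
    b = tf.Variable(tf.zeros([1]))
    y_pred = w * x_train + b

The loss, the optimizer, and the session that actually runs the graph come next in the notebook.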
However, let's see what we are producing here. 00:17:46.900 |
You're going to say, so you know what kind of data 00:17:49.940 |
So in this case, when you hit Return here, what are we seeing? 00:17:58.100 |
when your friend tell me, oh, give me x and y. 00:18:01.380 |
So this is when your x is 0.2, your y is 0.32. 00:18:11.980 |
If at any point you're kind of lost, raise your hand, 00:18:14.980 |
and your buddy next to you will be able to help you. 00:18:18.020 |
So now-- oh, OK, I want to say one more thing. 00:18:25.540 |
So today, the labs are all on really core TensorFlow APIs. 00:18:33.880 |
use Keras, or another thing that we heavily advertise, 00:18:41.900 |
So I feel like I'm giving you all the ingredients. 00:18:53.860 |
So I'm giving you all your lobsters, your Kobe beef, 00:19:21.220 |
So actually, I wanted you all to commit this little graph 00:19:28.340 |
to your memory, because you'll be seeing this over and over 00:19:37.540 |
So in TensorFlow, the way we hold all the data, 00:19:42.140 |
the weights and the biases associated with your network, is in Variable objects. 00:19:57.820 |
We are building those square nodes in your network 00:20:07.100 |
That's where the gradients will be applied to, 00:20:09.380 |
so that they will eventually resemble the target network 00:20:36.980 |
I have a link, which is our Google 3 docs, the API docs, 00:20:54.840 |
For example, I can say here, what's the name of this? 00:21:04.300 |
Oh, it's because when I create this variable, 00:21:15.420 |
but so see, now my variable is called Sherry weight. 00:21:25.480 |
so this would be a good practice, because later-- 00:21:27.800 |
Sherry bias is-- oh, because I ran this so many times. 00:21:45.400 |
Every single time you run, if you don't restart, 00:21:47.680 |
that is going to continue to grow your current graph. 00:21:51.000 |
So to avoid that confusion, let me restart it. 00:22:24.600 |
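A rough sketch of what that looks like, with made-up names that only mirror the demo (TF 1.x-style API):

    # Giving variables explicit names makes them easy to identify later.
    weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name="sherry_weights")
    biases = tf.Variable(tf.zeros([1]), name="sherry_biases")
    print(weights.name)  # "sherry_weights:0", or "sherry_weights_3:0" after several re-runs

    # Re-running a notebook cell keeps adding nodes to the same default graph,
    # which is why the demo restarts the kernel; you can also clear it explicitly:
    tf.reset_default_graph()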
Now we can actually build our training graph. 00:22:27.920 |
And as you have all learned, we need to define a loss function. 00:22:39.560 |
And your ultimate goal is to minimize your loss. 00:22:46.440 |
You can uncomment all these things that you have created 00:22:53.280 |
And I can tell you these are different operations. 00:22:55.840 |
So that's how you actually get to learn about the network 00:23:00.680 |
In the next line, I'm also not going to uncomment, 00:23:25.560 |
And this is how we connect all these nodes together. 00:23:31.480 |
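Continuing the sketch from above (again, an illustration of the 1.x API rather than the notebook's exact cell):

    # Mean squared error between the prediction and the observed y values,
    # plus a gradient-descent step that nudges w and b to reduce it.
    loss = tf.reduce_mean(tf.square(y_pred - y_train))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)
    train_op = optimizer.minimize(loss)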
So what you're seeing right now is your neural net 00:23:47.920 |
So in TensorFlow, do you remember in the architecture 00:23:50.920 |
that I showed, you have the front end, C++ and Python 00:24:12.680 |
So this is different from the other machine learning 00:24:23.640 |
And then you create a session to talk to your runtime 00:24:26.500 |
so that it knows how to run on your different devices. 00:24:29.440 |
That's a very important concept because people constantly 00:24:38.000 |
So now you can also uncomment this to see what the initial values are. 00:25:01.000 |
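A minimal sketch of that session step, reusing the hypothetical names from the earlier snippets:

    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)               # variables only get concrete values here
        print(sess.run([w, b]))      # the random initial values
        for step in range(200):
            sess.run(train_op)       # one gradient-descent step per call
        print(sess.run([w, b]))      # should now be close to the hidden w and b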
Is everybody following what we are trying to do? 00:25:12.160 |
So what was our objective before I started the lab? 00:25:26.040 |
All right, so now all of you can go to the end 00:25:36.280 |
So the green line was what we had initialized our weights to. 00:25:49.560 |
The blue dots were the initial value, the target values. 00:26:18.400 |
The worst that happens is they would just say, OK, clear all. 00:26:26.600 |
about different loss functions, different optimizers, 00:26:29.640 |
all this crazy different inputs, different data. 00:26:39.120 |
So instead of gradient descent, what are the other optimizers? 00:26:56.440 |
Yes, the GitHub, Google 3, the G3 doc link with the APIs. 00:27:06.280 |
So this is-- when you go there, this is what you can find. 00:27:15.800 |
So you can find all the different optimizers. 00:27:19.400 |
So maybe gradient descent is not the best optimizer you can use. 00:27:24.340 |
So you go there and say, what are the other optimizers? 00:27:28.400 |
And then you can literally come here and search optimizer. 00:27:32.600 |
Well, you can say, wow, I have Adadelta, Adagrad, 00:27:43.800 |
If you don't like any of these, please do go contribute. 00:27:51.640 |
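Swapping optimizers is typically a one-line change; the classes below all exist under tf.train in the 1.x API, though the learning rates here are arbitrary:

    train_op = tf.train.AdagradOptimizer(0.1).minimize(loss)
    # train_op = tf.train.AdadeltaOptimizer().minimize(loss)
    # train_op = tf.train.MomentumOptimizer(0.01, momentum=0.9).minimize(loss)
    # train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)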
So I would like to say this over and over again. 00:27:57.660 |
We would love to see your code or your models on GitHub. 00:28:39.480 |
Hit Tab to see all the other optimizers, you mean? 00:29:04.640 |
So this is where you can-- all the wonderful things 00:29:20.760 |
So anything else you would like to see with linear regression? 00:30:16.920 |
So it stands for, I think, Modified National Institute 00:30:20.360 |
of Standards and Technology, something like that. 00:30:24.040 |
So they have this giant collection of digits. 00:30:35.040 |
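For reference, the 1.x-era tutorials ship a small helper that downloads that collection; the directory below is just an example:

    from tensorflow.examples.tutorials.mnist import input_data

    # Each image is 28x28 pixels, flattened to 784 floats in [0, 1];
    # each label is the digit 0-9 that the image shows.
    mnist = input_data.read_data_sets("/tmp/mnist_data")
    images, labels = mnist.train.next_batch(100)  # a batch of 100 examples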
But our goal today is to build a little network using TensorFlow 00:30:44.280 |
Once again, we will not have all the answers. 00:30:47.040 |
So all we know is that the network, the input 00:30:59.560 |
And then they will look at it and say, no, you're wrong. 00:31:14.600 |
So can anybody tell me what are the three or four things that are 00:31:23.280 |
really important whenever you build a network? 00:31:36.680 |
And with this lab, I'm going to teach you a little bit more. 00:31:42.400 |
Like when you go to a restaurant, I not only give you 00:31:46.720 |
also going to give you a little rock so you can cook it. 00:31:50.000 |
So in this lab, I'll also teach you some absolutely critical 00:32:09.520 |
So those are the three new pieces of information 00:32:14.560 |
And also, I'll teach you a really, really useful concept. 00:32:22.840 |
We didn't use to have it, but they all came to us and said, 00:32:26.520 |
when I train, I want to be able to feed my network any data 00:32:34.600 |
Whenever you start writing real training code, 00:32:46.240 |
how to save checkpoint, how to load from checkpoint, 00:32:48.800 |
how to run evaluation, and how to use placeholders. 00:32:54.160 |
So once again, we have our typical boilerplate stuff. 00:32:57.760 |
So you hit Return, you import a bunch of libraries. 00:33:02.920 |
The second one, this is just for convenience. 00:33:10.040 |
Some of them you can play with, such as the maximum number 00:33:13.000 |
of steps, where you're going to save all your data, 00:33:16.320 |
how big the batch sizes are, but some other things 00:33:41.120 |
if you don't have /tmp, it might be an issue, 00:33:46.000 |
If you don't have /tmp, change the directory name. 00:34:12.080 |
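The knobs she mentions are exposed as flags; a sketch with made-up defaults might look like this:

    flags = tf.app.flags
    flags.DEFINE_integer("max_steps", 2000, "Number of training steps to run.")
    flags.DEFINE_integer("batch_size", 100, "Examples per training batch.")
    flags.DEFINE_string("train_dir", "/tmp/mnist_train", "Where checkpoints are saved.")
    FLAGS = flags.FLAGS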
And I also have a linear layer, which will produce logits. 00:34:20.520 |
So that's what all the inference graphs will always do. 00:34:28.520 |
So once again, here you can uncomment it and see 00:34:34.440 |
Once you have done the whole tutorial by yourself, 00:34:41.480 |
and you can actually load this graph that you have saved. 00:34:44.920 |
And you can visualize it, like what I have shown in the slide. 00:34:53.160 |
So you can see the connection of all your nodes. 00:35:01.560 |
that you have indeed built a graph that you thought. 00:35:10.400 |
So being able to visualize is really important. 00:35:20.640 |
Once again, the hidden layer 1, hidden layer 2. 00:35:22.960 |
They all have weights and biases, weights and biases. 00:35:30.160 |
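A sketch of such an inference graph; the layer sizes and variable names here are my own guesses, not the notebook's:

    def inference(images, hidden1_units=128, hidden2_units=32):
        """Two ReLU hidden layers plus a linear layer that produces logits."""
        w1 = tf.Variable(tf.truncated_normal([784, hidden1_units], stddev=0.1),
                         name="hidden1_weights")
        b1 = tf.Variable(tf.zeros([hidden1_units]), name="hidden1_biases")
        hidden1 = tf.nn.relu(tf.matmul(images, w1) + b1)

        w2 = tf.Variable(tf.truncated_normal([hidden1_units, hidden2_units], stddev=0.1),
                         name="hidden2_weights")
        b2 = tf.Variable(tf.zeros([hidden2_units]), name="hidden2_biases")
        hidden2 = tf.nn.relu(tf.matmul(hidden1, w2) + b2)

        w3 = tf.Variable(tf.truncated_normal([hidden2_units, 10], stddev=0.1),
                         name="softmax_weights")
        b3 = tf.Variable(tf.zeros([10]), name="softmax_biases")
        return tf.matmul(hidden2, w3) + b3  # logits: one unnormalized score per digit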
So here is-- actually, here, there's no new concept. 00:35:38.720 |
We once again pick gradient descent as our optimizer. 00:35:44.560 |
That's what we will use later when we save our checkpoints. 00:35:48.000 |
So you actually know at which point, what checkpoint 00:35:53.480 |
Otherwise, if you always save it to the same name, 00:35:55.960 |
then later you say, wow, this result is so wonderful. 00:36:04.040 |
So that's a training concept that we introduced. 00:36:13.080 |
so you know which checkpoint has the best information. 00:36:33.040 |
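The new piece here is a step counter handed to the optimizer; assuming `loss` stands for the cross-entropy loss the notebook builds, a sketch looks like:

    # A non-trainable counter; minimize() increments it on every step,
    # and its value ends up in the checkpoint file names (e.g. model.ckpt-2000),
    # so you can tell which checkpoint came from which point in training.
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    global_step = tf.Variable(0, name="global_step", trainable=False)
    train_op = optimizer.minimize(loss, global_step=global_step)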
So we are going to define two, one to hold your image 00:36:43.880 |
And we will be able to use it for both training, inference, 00:37:09.120 |
But you see there it says, after you create your placeholders, 00:37:13.440 |
I said, add to collection and remember this op. 00:37:17.680 |
And later we'll see how we're going to call this op back up. 00:37:23.160 |
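A sketch of those two placeholders and the add-to-collection step (the names are illustrative, and `inference` is the hypothetical function sketched earlier):

    # Placeholders are fed with real data at run time, so the same graph
    # can serve training, evaluation, and one-off inference.
    images_placeholder = tf.placeholder(tf.float32, shape=[None, 784], name="images")
    labels_placeholder = tf.placeholder(tf.int32, shape=[None], name="labels")

    # Stash the ops we will want back after restoring from a checkpoint.
    tf.add_to_collection("images", images_placeholder)
    tf.add_to_collection("labels", labels_placeholder)

    logits = inference(images_placeholder)
    tf.add_to_collection("logits", logits)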
And the next one, we're going to call our inference, 00:37:33.920 |
And then we create our train op and our loss op, 00:37:52.560 |
that's the second new concept that I'm introducing, 00:38:10.120 |
rather than always reinitialize all your variables 00:38:15.200 |
When you're training really big networks, such as Inception, 00:38:19.500 |
Because I think when I first trained Inception, 00:38:28.240 |
it took still-- like, state of the art is still 2 and 1/2 00:38:31.520 |
You don't want to have to start from scratch every single time. 00:39:28.500 |
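Continuing the sketch, a Saver writes the variables out periodically inside the training loop; the loop structure and save frequency below are illustrative, and the names reuse the earlier hypothetical snippets:

    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(FLAGS.max_steps):
            image_batch, label_batch = mnist.train.next_batch(FLAGS.batch_size)
            _, loss_value = sess.run([train_op, loss],
                                     feed_dict={images_placeholder: image_batch,
                                                labels_placeholder: label_batch})
            if step % 1000 == 0:
                # global_step becomes part of the file name, e.g. model.ckpt-1000
                saver.save(sess, FLAGS.train_dir + "/model.ckpt",
                           global_step=global_step)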
But what if I really want to see what it's doing? 00:39:43.460 |
Oh, I think my training is going really well. 00:40:51.820 |
So as you train, your loss actually goes down. 00:40:55.900 |
So this is how, when you do large-scale training, 00:41:06.420 |
and we know, oh, which one is doing really, really well. 00:41:09.660 |
So of course, that's just when you are prototyping. 00:41:14.580 |
But I'm going to show you something even better. 00:41:27.580 |
that you guys are welcome to cut and paste into a cell. 00:41:34.940 |
sets against your checkpoint so that you know 00:41:53.700 |
Very often, our researchers will cut and paste their Colab code 00:41:58.380 |
and put it in a file, and that's basically their algorithm. 00:42:04.740 |
They would send it to our data scientists or production 00:42:10.060 |
We would actually prototype some of their research. 00:42:13.300 |
This is how easy, literally, from research to prototyping 00:42:17.740 |
Really streamlined, and you can do it in no time. 00:42:27.980 |
wherever you saved that, wherever you declare 00:42:42.460 |
That's after all this work, all this training 00:42:48.580 |
That's where all your weights, your biases are stored, 00:42:52.180 |
so that later you can load this network up and do your inference 00:42:58.540 |
to recognize images, to reply to email, to do art, 00:43:10.700 |
All right, let's move on to 2.8, if you are not already there. 00:43:15.140 |
So can somebody tell me what we are trying to do first? 00:43:24.140 |
And you remember all the things that we told our program 00:43:28.700 |
to remember, the logits, and the image placeholder, 00:43:36.900 |
We're going to feed it some images from our evaluation 00:43:41.980 |
So now if you hit Return, what's the ground truth? 00:44:13.100 |
So you can hit Return again in the same cell. 00:44:24.500 |
You can keep hitting Return and see how well it's doing. 00:44:31.940 |
of hitting Return 100 times and counting how many times 00:44:34.860 |
it has gotten it wrong, as I said in one of the exercises, 00:44:42.980 |
do a complete validation on the whole validation set. 00:44:50.980 |
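A sketch of what that evaluation step does under the hood, assuming the collection names used in the earlier snippets:

    tf.reset_default_graph()  # pretend we're starting from a fresh process
    ckpt = tf.train.latest_checkpoint(FLAGS.train_dir)
    saver = tf.train.import_meta_graph(ckpt + ".meta")

    with tf.Session() as sess:
        saver.restore(sess, ckpt)
        images_placeholder = tf.get_collection("images")[0]
        logits = tf.get_collection("logits")[0]
        predict_op = tf.argmax(logits, 1)

        # Count how many validation examples the network gets right.
        correct = 0
        for _ in range(100):
            image, label = mnist.validation.next_batch(1)
            prediction = sess.run(predict_op, feed_dict={images_placeholder: image})
            correct += int(prediction[0] == label[0])
        print("accuracy over 100 samples:", correct / 100.0)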
So you can actually handwrite a different digit. 00:44:54.460 |
But the trick is that a lot of people actually tried that 00:44:59.460 |
So remember on the slide, I said this is what the machine sees. 00:45:03.300 |
This is what your eye sees, and this is what the machine sees. 00:45:11.820 |
I could be wrong, but I believe it's between 0 and 1. 00:45:14.300 |
So if you just use a random tool like your phone, 00:45:17.100 |
you write a number and you upload it, number one, 00:45:20.100 |
the picture might be too big and you need to scale it down. 00:45:25.060 |
Number two, it might have a different representation. 00:45:40.220 |
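One possible way to massage your own photo into that representation (the file name is hypothetical, and Pillow is used only as an example tool):

    from PIL import Image
    import numpy as np

    # 28x28 grayscale, digit light on a dark background, values in [0, 1].
    img = Image.open("my_digit.png").convert("L").resize((28, 28))
    pixels = 1.0 - np.asarray(img, dtype=np.float32) / 255.0  # invert: MNIST digits are light on dark
    flat = pixels.reshape(1, 784)  # shaped like the images placeholder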
the same set of data, just like when we teach a baby, right? 00:45:46.860 |
you are not going to be able to recognize it. 00:45:49.580 |
Just like with the OREO, one of our colleagues 00:45:59.100 |
Any time when it sees something that it doesn't recognize-- 00:46:02.100 |
have anybody played with that captioning software? 00:46:14.740 |
But any time it sees something that it has never 00:46:17.380 |
been trained on, it would say, man talking on a cell phone. 00:46:26.060 |
and it would say, man talking on a cell phone. 00:46:28.140 |
You put a bunch of furniture in the room with nothing, 00:46:31.220 |
and it would say, man talking on a cell phone. 00:46:34.100 |
But just like with your numbers, if you have never 00:46:44.100 |
But this is pretty fun, so you can play with it. 00:46:49.620 |
See, so far, it's 100% other than the first one, 00:46:54.740 |
So what are some of the exercises that we can do here? 00:47:03.260 |
didn't know that you guys are all experts by now. 00:47:05.860 |
Otherwise, I would have done a much harder lab. 00:47:21.820 |
Can you guys try saving the checkpoint, say, every 100 steps, 00:47:32.380 |
but they're tiny, tiny checkpoints, so it's OK. 00:47:34.860 |
And try the run evaluation with a different checkpoint 00:47:41.260 |
So the idea is that when you run the evaluation, 00:47:59.380 |
So as it trains, every so often, say, every half an hour, 00:48:05.140 |
depending on your problem, so with the inception, 00:48:08.220 |
every 10 minutes, we would also run evaluation 00:48:13.300 |
So if our model gets to, say, 78.6%, which I believe 00:48:16.900 |
is the state of the art, it would be like, oh, 00:48:20.260 |
So that's why you want to save checkpoints often and then 00:48:35.980 |
If you try to load from a really early checkpoint, 00:48:41.620 |
how good is it when it tries to identify the digits? 00:49:01.420 |
If you only have one layer, maybe it won't get it right. 00:49:13.940 |
is really try to learn to run evaluation from scratch 00:49:28.420 |
as you build bigger models and you need to run validation. 00:49:44.500 |
The bottom line is that TensorFlow is really-- 00:49:48.700 |
It's really from research to prototyping to production. 00:49:53.540 |
And I really hope everybody in the audience can give it a try. 00:49:58.340 |
And if there are any features that you find it lacking 00:50:08.980 |
Or talk to my wonderful product manager, Zach, 00:50:27.660 |
We have time for questions for those who actually tried it. 00:50:36.900 |
They're all ready to go make art now, right? 00:50:36.900 |
And first of all, thank you for introducing TensorFlow 00:51:04.980 |
So the first question is, I know that TensorFlow 00:51:11.620 |
So let's say if I use Keras or any of the Python front end, 00:51:16.620 |
Does TensorFlow support that I can pull out the C++ model 00:51:27.340 |
So even if I use, for example, Keras custom layer 00:51:31.060 |
that I code using Python, I still can get those things? 00:51:38.540 |
But we are not as complete on our C++ API design. 00:51:48.380 |
But for the simple models, yes, you can do it. 00:51:54.100 |
but let's say if I just want the testing part. 00:51:57.660 |
I mean, the training I can always do in Python. 00:52:05.820 |
I think that's literally just loading from Checkpoint 00:52:08.620 |
and run the inference in C. That's all written in C++. 00:52:16.900 |
So another thing that I noticed that you support almost 00:52:50.780 |
when I look at the roadmap, I didn't see a clear timeline 00:52:55.980 |
But the thing I know that just like the reason why you cannot 00:53:02.620 |
So let's say, theoretically, I mean, what you think, 00:53:15.700 |
can I expect like just like immediately do TensorFlow, 00:53:37.660 |
Are they available right now for testing and playing 00:53:54.940 |
Do you know when it might be available in the Google Cloud? 00:54:04.180 |
I'm so glad we have a product boss here so that he can-- 00:54:18.620 |
with the open source frameworks, like Mesos and HDFS, 00:54:30.500 |
We are also always actively working on new features. 00:54:33.540 |
But we cannot provide a solid timeline right now. 00:54:49.180 |
So I cannot give you a time saying, yes, by November, 00:55:06.060 |
I was wondering, does TensorFlow have any examples 00:55:15.900 |
Are there examples out there to load your own data set? 00:55:41.220 |
If you go to TensorFlow, we have an example to do retraining. 00:55:50.420 |
So you can definitely download whatever pictures 00:56:16.340 |
Do you provide anything to move the model to Android? 00:56:20.220 |
Because generally, you program in Java there. 00:56:27.700 |
You build a model, and then just send it to the runtime. 00:56:30.180 |
It's the same model running on any of the different platforms. 00:56:36.340 |
Do you have your own specific format for the model? 00:56:43.860 |
Because the model is just a bunch of matrices and values. 00:56:58.340 |
Inception on your phone, because all the convolution 00:57:00.940 |
and the backprop will probably kill it 10 times over. 00:57:04.700 |
So definitely-- so there will be that type of limitation. 00:57:10.260 |
I think you guys talked about the number of parameters. 00:57:12.500 |
If it blows the memory footprint on your phone, 00:57:18.620 |
especially for convolution, it uses a lot of compute. 00:57:28.060 |
There actually are examples like label image. 00:57:39.380 |
So any of these, you can write your own and load a checkpoint 00:57:51.780 |
I have a question related to TensorFlow serving. 00:57:57.900 |
and currently, I think it requires some coding in C++ 00:58:04.740 |
Is there only going to be only Python solution that's 00:58:10.860 |
I think you need to do some first step to create a module 00:58:17.660 |
I am actually surprised to hear that because I'm pretty sure 00:58:21.340 |
that you can write the model in just Python or just C++. 00:58:25.860 |
You don't have to write it in one way or the other. 00:58:35.660 |
I think that's probably what you were talking about. 00:58:37.980 |
But you don't have to build it in any specific way. 00:58:52.020 |
So TensorFlow serving, the tutorial, actually, 00:58:55.220 |
if you go on the site, it had those steps, actually. 00:59:04.220 |
I do know that at one point, they were writing the exporter 00:59:10.300 |
because we are doing another version of TensorFlow serving. 00:59:16.420 |
And is there any plan to provide APIs for other languages? 00:59:35.420 |
And once again, if those languages are not our favorite, 00:59:40.780 |
And if you would like us to do it, talk to Zach. 00:59:50.580 |
So I think that's going to help out in integrating 01:00:06.420 |
And I really wanted to work with the TensorFlow, 01:00:08.780 |
but I got to know that it can only run on x86 boards. 01:00:23.100 |
consulted with my product boss, see when we can add that 01:00:36.980 |
when you have the model and you want to run inference, 01:00:41.380 |
is it possible to make an executable out of it 01:00:49.820 |
Is that something that you guys are looking into? 01:01:11.500 |
into a single binary source that you can just pass around. 01:01:23.060 |
It actually converted all the checkpoints into constants. 01:01:26.060 |
So it doesn't even need to do the slow, et cetera. 01:01:29.220 |
It just reads a bunch of constants and runs it. 01:01:41.580 |
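For context, one way that conversion is typically done with the 1.x API is graph_util.convert_variables_to_constants; the output node name and path below are made up:

    from tensorflow.python.framework import graph_util

    # Inside a session whose variables hold the trained values:
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["logits"])          # bake variables into constants
    with tf.gfile.GFile("/tmp/frozen_model.pb", "wb") as f:
        f.write(frozen_graph_def.SerializeToString())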
We're going to take a short break of 10 minutes. 01:01:43.580 |
Let me remind you, for those who haven't noticed yet, 01:01:51.100 |
They will be available at some point, as soon as we 01:01:57.260 |
But in any case, I have a lot of TensorFlow stickers up here. 01:02:00.020 |
If you would like one to proudly display on your laptop,