Developer Keynote: Get to the Fun Part (Cloud Next ’19)

>>It’s been a great year since
we were here last together at
Next, and I’m really happy to see all of the amazing momentum
with our partners and our
customers and all of you. You know, we wanted to make this
keynote for all of you, the
developers, the operators, the analysts, the data scientists,
like this is for you. Who here makes apps? Raise your hand. You make apps? Yeah? How about mobile apps? Mobile apps? A lot of hands. How about who is on pager duty right now? [Laughter]
>>I found out I was, earlier today. Who works with data, data scientists and analysts? Yeah,
right on. Okay. Good news, you
are all in the right place, and there is incredible, incredible
demand for all of your skills.
And, in fact, there was recently a job survey, and Google Cloud was the fastest growing technology, with demand up over 66% from last year. So
we’ve been out in the community
learning with you. The community has been running Cloud Study Jams. Have you been to one of those? Pretty cool. More than 900 of these community-run learning events have happened so far this year, and more than 22,000 of you have gotten involved,
and you’re learning from your peers which I think is really
wonderful. Now, some of you are
newly certified. Do we have some certified people in the house,
with the blue jackets? All
right. Good job. Good job. [Applause]
>>Actually, you’re going to
have great jobs. You’re going to have awesome jobs. So okay.
This keynote today is all about
that feeling you get when you make something new, kind of that
magic you saw in the opening
video. And really, no matter what title you have, if you get
your hands on the keyboard and
you make things and you do the building and the testing and the
monitoring and the operating,
this keynote is for you and you’ve come to the right place.
So there’s so much going on
right now if you think about it. Serverless, functions, service
mesh, containers, AI, APIs, Auto
ML, hybrid cloud, it’s kind of a lot of noise, right? What’s the core behind all of that? It’s about enabling you, the developers, to be the real stars, getting all of you back to the heart of development and operations, coding apps and automating infrastructure. As a
developer, you’re being flooded with messages and you probably
feel like there’s always a new
buzzword to chase, right? So what we want to do today is cut
through all of that noise,
because, in the end, it’s really simple. It’s about helping you
get great work done but have a
great time doing it. So we call that getting to the fun part. So
with that, let’s get started
with one of our awesome developer advocates, Mandy
Waite, come on out and walk us
through it. Mandy? [Applause]
>>Hey.
>>Hi, Mandy.
>>How you doing?
>>Thanks for coming.
>>Thank you, yeah, thanks for
having me. So you’re a developer and operations background, and
you spent lots of time inside
and outside of Google talking to developers. What do developers really want?
>>Well, that’s a really good question. We think it’s fairly
simple, or at least from my
point of view. It’s that sense of excitement that everyone
feels when they build and run
their first program. That’s the feeling that we want every day,
the feeling of learning
something new, trying something new, and then building something
new. And it’s not just that. We
want our work to matter, to make a difference to our team or
our company or maybe even the
world. When stuff gets in the way, we end up spending a whole
bunch of our time on things that
not only don’t really matter to the business, they’re also kind
of boring. So whether you call it microservices, DevOps, managed databases, serverless or CI/CD, what these things all have in common is they let us as developers get to the fun part, to have less sword fighting, more creating, less hassle and more reward, to get to the fun part.
>>So you’re saying when, like, I get my code done and checked
in and I have a really long
build process and I’m waiting for the compiler, that’s not the fun part?
>>That’s not the fun part, no.
>>How about, like, I need my app to scale and not fall over
so I got to like configure
sharding the database?>>That’s not the fun part.
>>Okay. Got a big Java app and
I have to get like the VM size right so I don’t pay too much
but it doesn’t break.
>>Not the fun part.>>Okay. I got one. I got one.
How about you have to rewrite
the entire app because management said move to a
different cloud.
>>That’s definitely not the fun part. [Laughter]
>>Mandy, what’s the fun part?
>>I’m so glad you asked. Let me bring on my team, my demo
team, and we’ll show everyone.
>>All right.>>Thank you. [Applause]
>>You know what’s the most fun
is working together as a team to build amazing apps, so let’s
do that together. At Google,
we’re developers too and we want to make it as easy as possible
for you to stay in the fun zone
of coding apps by automating away all of the boring parts.
This way, we can all spend our
time on innovation. So today, we’re going to show you how
Google Cloud Platform helps you
get to the fun. We’ll show you how to make deploying apps to
virtual machines in the cloud
easy, or use brand new Google products to make Kubernetes
simpler, and we’ll show you how
you can use serverless to go faster. Some apps have more
traditional needs, while others
benefit from modern technologies such as kubernetes and
serverless. So we’ll work
through each of these in our demos. All right. So let’s stop
talking and have some fun
building applications. Nick is one of our developers and he’s
been leading our Cloud migration
efforts. Hey, Nick, what are you working on?
>>Hey, Mandy. My team and I
have been building an implementation of Conway’s Game
of Life, and we built it in a
few different pieces. We have a web tier front end, that’s what
you access from your laptop or
your mobile phone. We have a back end that runs on Google
Compute Engine, that’s the part
I’ve been working on, I wrote it in Go, that’s my favorite
language. We have an API layer
that runs in kubernetes, and finally, we have a few other
pieces that use some of the new
serverless technologies, like Cloud Run.
>>Okay. So can you tell us a
little bit more about how the app is built, tested, and deployed?
>>Oh, yeah. So my team have been using Jenkins for years. We
just stuck with what we knew.
We carried on using the same tools.
>>Okay. All right. So
basically you’re using Jenkins. You’ve got Google Compute
Engine. That sounds complicated,
configuration, authentication, all of that. How do you make
that work?
>>Well, actually we use the Ansible plugin for Jenkins. That’s got great support for GCP, takes care of all the authentication for us, and it even restarts the service at the end once we’ve deployed the binary.
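As a rough illustration, the kind of play Nick describes (copy the new binary to the VM, then restart the service) might look something like this; the host group, paths, and service name here are hypothetical, not the actual demo configuration:

```yaml
# Hypothetical sketch of a deploy play like the one described.
# copy and service are standard Ansible modules; names are illustrative.
- hosts: game-backend
  become: true
  tasks:
    - name: Copy the freshly built Go binary to the Compute Engine VM
      copy:
        src: ./bin/game-backend
        dest: /opt/game/game-backend
        mode: "0755"

    - name: Restart the service once the binary is deployed
      service:
        name: game-backend
        state: restarted
```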
>>Okay. So why don’t you show
us that?
>>Yeah, I will show you right now.
>>Okay. Cool.
>>So what I’m going to do is
I’m going to speed up the
processing on the back end to take advantage of all the extra
CPU and memory we get on Compute
Engine. So this right here is the main loop of our program.
I’m going to change that to five
times the speed. Let’s go big right here. I’m going to commit
that change. I’m going to push
it up to GitHub. That’s where we store our code. And then I am
going to kick off a build in
Jenkins. So that takes about 25 seconds. It’s going to copy down
the source code from GitHub,
it’s going to build that Go binary, copy it across the
Compute Engine, and then restart
our service.
>>Okay. So I seem to remember this is the workflow that we used to use when the app was running in the datacenter, you know, using Jenkins and such like, so the actual plugin is the only thing that’s different?
>>Yeah, it’s almost exactly
the same. And I even use IntelliJ. It’s the same front-end IDE that I used to use.
>>Okay. That’s brilliant.>>Really simple.
>>Okay. And you program in Go.
Is that correct?>>Yeah, I absolutely love Go. I
find multithreading so much
easier than Python, and it has great support for a lot of new
technologies too.
>>That’s great. Okay. It looks like it is finished.
>>Yeah, I think our build is
finished, and if you look up there, it’s even faster.
>>Okay. So that’s much, much
faster than it was before, and so that’s pretty cool.
>>So Nick was able to migrate an existing application to Google Compute Engine just by adding the Ansible plugin to Jenkins, and that took care of all the boring config and made our deployments more reliable and secure, so thanks, Nick.
>>So moving to the cloud not only allows you to perform the
kind of app migrations that Nick
has just shown us but you can also take advantage of many
other new technologies,
including kubernetes. In fact, customers using kubernetes tell
us that they’re able to go
faster, go from code to production in under ten minutes.
It’s not unusual to see 30 to
50 deployments a day. So Kubernetes helps manage and scale your microservice-based apps, but it’s not always easy. There’s a fair amount of
complexity that can get in the
way of this coding. So that’s why we’re really excited today to be announcing Cloud Code, an IDE add-on that makes it easy to rapidly build, debug, and deploy Kubernetes apps directly from your IDE, giving you the local development experience you’re used to. [Applause]
>>So hey, Priya. You’ve been working on our kubernetes front
end. And can you show us what
you’re building and give us some insight into the tools that
you’re using to do the build?
>>For sure, Mandy. So I’ve been working on a new share
button to our new Game of Life
game. Now that Nick has made the game so much faster, I think it
would be really cool if the
people playing the game had a way to share it with their
friends. So I’ve been working on
our API layer, which lives in Kubernetes. And as a team we actually decided to host the game in Kubernetes because we wanted it to scale really
easily as more and more people
started to play. And luckily kubernetes does all of that hard
work for us. So I actually got
started developing this feature completely locally with tools
like Minikube. I just started
writing some code in my IDE and I was able to get the feature up
and running without even needing
a network connection which was really nice for me. So I’m at
the point now where I feel like
the feature looks pretty good, and I want to deploy it to our
staging environment before I
deploy to production, so I’m just going to go ahead and do
that right now. So if I remove
these comments, that will add my new share button. And then I
can deploy with Cloud Code.
>>Okay. So wait, you just edit in your IDE and then build, debug, and run on Google Kubernetes Engine with Cloud Code deploy? Is that all you had to do?
>>That’s all I had to do. All I had to do was hit the Cloud Code
deploy button and then Cloud Code took over the rest of the deployment for me. If you take a look at the logs, you can see that Cloud Code rebuilt my images, pushed them to my registry, and then redeployed my
application for me. It’s really nice because it takes care of
all of this hard work for me and
I can just keep on writing code or do something else while it
does all of the hard work. So it
looks like the application is deployed successfully. So if we take a look at our staging environment, we can see the new share button, and if I just
click on that
>>All right.>>Yep. It looks pretty good,
and I think it’s ready for
production.>>That’s perfect. Did you see
that? [Applause]
>>So our team has been working on Cloud Code for a long time, and we’re thrilled to say that it’s available today. Most of you have probably heard about it already. There will be a deep dive on Cloud Code in the DevOps spotlight tomorrow and several breakout sessions, so a big round of applause for Cloud Code.
>>So thank you. So come along to that tomorrow, to the DevOps spotlight. Priya, what more can
you show us?>>So now that we’ve deployed
the new button to staging, I
think it looks pretty good and I want to deploy to production as
well. And luckily deploying to
production is just as easy. All I have to do is commit these
changes to GitHub and as soon as
they’re merged into master, cloud build will pick up my new
feature and then deploy it to
our production cluster, and we should be able to see the
feature in production in just a
couple of minutes.>>Okay. So just one thing
before you do that. Now that
you’ve created the share button, when we’re in production, we’re
expecting users to share the
app with their friends, so how are we going to handle the
potential increase in traffic
that might come about if the application goes viral?
>>That’s a really good point,
so we actually have a dashboard that tracks our GKE cluster, and
looking at CPU usage in the
cluster right now, it looks pretty good, but you’re right. A
lot more people might start
playing the game. Hopefully they do once we have our share
button. So just to be safe, I’m
going to increase the number of pods in our cluster from one to
ten just so that it can handle
any increased traffic. And luckily this is a really simple
change in my deployment YAML. I
just have to up the number of replicas from one to ten. And it
looks like I may have made a bit of a mistake, but it looks like Cloud Code also does YAML validation, which is really nice.
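For reference, the manifest change Priya describes might look roughly like this; the names and image are hypothetical, not the actual demo manifest:

```yaml
# Hypothetical sketch of the replicas change; names and image
# are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-api
spec:
  replicas: 10  # was 1; must be a bare integer, not a quoted string
  selector:
    matchLabels:
      app: game-api
  template:
    metadata:
      labels:
        app: game-api
    spec:
      containers:
        - name: game-api
          image: gcr.io/example-project/game-api:latest
```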
So I guess it was expecting an integer. Let me fix that. And at
this point, all I have to do is
push these changes to GitHub. And like I mentioned earlier,
cloud build will pick up our new
feature and deploy it to production, and we should have
our share button in just a
minute or two.>>Okay. So with a single
command, Priya was able to scale
the deployed app to handle the increased load. So Priya, now
that we’re in production, how do
we tie our front end and back end components of the app
together? We need to make sure
that the app is healthy and working as intended, right?
>>Yeah, so you may have
noticed that I was tracking our GKE cluster in Stackdriver
earlier, so our entire team
actually monitors all of the different pieces of the
application with Stackdriver.
And it’s really nice because Stackdriver integrates really
well with all of the different
GCP services we use, so you can see there’s a dashboard tracking
our GCE VM which Nick was just
working on, and I was looking at the dashboard for our GKE cluster.
It was actually really easy for
our team to set this up. I think Dan made the whole thing in 10
or 15 minutes.
>>Okay. So Stackdriver gives us a single pane of glass for observing and monitoring multiple metrics from the same or from different apps and
services. And here, we’re seeing
metrics in both Nick and Priya’s part of the apps
combined into a single
dashboard. Nick and Priya are not just monitoring the app on
their own, they’re monitoring
the app, the entire app, as a team. They want to understand
what’s happening within the app
and how the users of the app are responding to our changes
together. And this is all
completely automated. It’s very, very easy to create the
dashboard together as a team. So
thank you, Priya. That was awesome.
>>For sure. [Applause]
>>So we’ve seen how we migrated the existing back end service to the cloud. We then built a brand new scalable front end on Kubernetes with Cloud Code. I wonder now how we might extend this application with some bots or third party integrations.
>>Hey, Mandy. Let me show you a Slack bot we’ve been working on to interact with the game board, built with Google Cloud Run.
>>Hey, it’s Dan. Dan is the guy who looks after our production services, and he also does all of our Slack integrations as well. So what are you working on, Dan?
>>Hey, Mandy. So I’m actually right in the middle of adding a new feature to this
chatbot. Let me get this merged and deployed and then I’ll
explain how it all works. So
like Priya before, I’ll just save this and commit this and
publish it up to GitHub and
cloud build will automatically deploy it. Normally I would have
done a code review with someone
on the team for this but there are a few thousand people
watching today so I’ll skip that
step. So the app I built here is a way to interact with the
game board inside of Slack. We
use Slack to communicate on our team, and I wanted to add some
administrative and fun features
with the chatbot. So I can do some things here like clearing
the game board, as well as
randomizing it to get some feedback on it to start out
with. The feature that I just
added is a way to create some special shapes that the users of
the app don’t have access to. I
can create one of these here with this command. I can pick a
couple of shapes from the menu
here, I’m going to create a line, I’ll enter some coordinates.
What’s your favorite color
today, Mandy?>>What do you think, dev rel?
Red? [Cheering]
>>Awesome. And we can see this appear on the board. So I carry
a pager for my day job, and I
use Google Cloud Run to deploy this application because the
last thing I want to have to
worry about for a fun app like this is to have to configure
monitoring, scaling, YAML and
deployments. So Cloud Run is designed to run containers in a fully managed environment, which let me use the language and tooling I was familiar with to get started. Or in this case, one I just decided to try out for fun, Node.js. It provides a totally serverless experience, abstracting away configuration, monitoring, scaling, and deployment.
>>All right.
>>Yeah, so Cloud Run lets me
focus on just the code, and best
of all, it’s designed to work seamlessly with Cloud Build. All I had to do was push my code to GitHub and Cloud Build automatically packaged up my application and got it deployed and running in Cloud Run.
>>All right. Okay. So that’s
fantastic. Dan just added a completely new feature to our production app really quickly. So well done.
Let’s hear it for Dan, everyone. [Applause]
>>Okay. So to summarize, Nick has walked us through deploying a back end component of our application to a virtual machine running on Google Compute Engine, using IntelliJ and Jenkins with the Ansible plugin. Priya then showed us how Cloud Code helps her build, test, and deploy the Kubernetes component of our application, including a full local development workflow. Priya then pushed the changes to production using Cloud Build.
And we’re also able to monitor
and observe all of the app components in Stackdriver. Dan
was then able to add a nifty new
feature to the app using Cloud Run which allows developers to
run any stateless containers in a serverless environment. To learn
join us at the dev ops spotlight
tomorrow morning. Also, check out our dev ops booth in the
showcase, and please, please,
download Cloud Code today. So let’s have one more round of
applause for Nick, Priya, and
Dan, our demo team. Great job, team. [Applause]
>>All right. It’s not all
great though. All of this development stuff has been
great, but this is software. You
know, things sometimes fall apart. Maybe we completely
underestimated the amount of traffic that the share button will generate for our app, so we need
to plan for that as well, and
we do that as a team. So to talk about sustainable dev ops
practices, I want to bring out
Aja Hammerly. [Applause]
>>Thanks, Mandy. I was having a great time playing the game backstage, go red team, but I used to be a tester in my previous life and it’s kind of built into my DNA to break stuff. After all, nothing goes as smoothly as this demo has actually gone in production. Like it just doesn’t happen. So
let’s bring out the wheel of
misfortune. [Music playing]>>Today, our incident causing
wheel says that the demo team on
stage is going to have to deal with some kind of network error.
So my friends backstage, the
gremlins, are going to cause some sort of network error, and
we’ll see what happens.
>>I’m getting a bad feeling about this part.
>>I would be too. So the wheel of misfortune is actually an educational tool used internally at Google, and it’s basically D&D or LARPing for developers and operators. Tools
like this are part of our
culture because we know that incidents happen. Every
application has bugs or
configuration issues that you don’t find until after you’ve
deployed to production. And
annoyingly, some of those bugs don’t seem to show up until it’s
late at night and it’s your
turn to carry the pager, which leads to crazy stories that we
tell each other when we get
together at events like this, like the time, before Google, when I was restarting services in a
parking lot of a burger joint because that’s where I was when
the pager went off. Incidents
happen. They happen to everyone. And sometimes they even happen
to our dev team on stage. [Music
playing] Is that the pager I hear?
>>I think so. It sounds like
the wheel of misfortune took down the app. I’ll do some
debugging and get back on it.
>>You get that and I’ll be here rooting for you. So whether
you’re a small start up or a
big company, you have incidents. Even if you’re absolutely
stellar at your production
practices, some stuff is just beyond your control.
Dependencies and operating
systems occasionally have critical security updates, and
even when you test your code
thoroughly, complex systems, by definition, have complex
integration and sometimes you
don’t find those integration issues until the code is live.
Incidents are going to happen
whatever we do, and if we don’t want to have very many
incidents, the only thing that I
know to do is to just not ever deploy anything, and that, that
is boring. It leads to very,
very boring applications. So if incidents are inevitable, what
can we do to make weathering
them, if not fun, at least less stressful maybe? And the only
answers I know to that are good
tooling, solid practices, and an absolutely great team. By good
tooling, I mean having ways to
be quickly notified about incidents and one place where
you track and coordinate all the
folks involved in an incident response. Significant incidents
can involve multiple teams, from
your operators and developers trying to figure out is it a configuration issue, is it a code issue, to your support and PR teams trying to help your
customers deal with it and
letting them know about any workarounds you might have. And
keeping track of all aspects of a
response like this in one place saves time because you’re not
playing some crazy game of
telephone, having to repeat current status to nine people in
a row. That leaves you with
much more time to address the incident and the issue at hand.
But incident management and
tracking is only one part of tooling. To address issues
quickly, most teams also have
things like crash reporting, service monitoring, aggregated
logging, performance profiling,
and each of you probably has even more tools than that to
deal with stuff like this. And,
of course, if you’re using Stackdriver, you also have
things like live production
debugging, which, after several years at Google, still feels
like it’s slightly magic to me.
Speaking of debugging, how is it going over there, Dan?
>>It looks like the wheel of
misfortune was right and the little gremlins backstage broke
our network connection to Redis.
I’m going to do a failover to another instance.
>>I’m glad that we had
backups. So I’ll give you a few more minutes to get things
sorted out. Earlier, I mentioned
that solid practices are important too. When I say solid
practices, I’m talking about DevOps best practices, like the lessons in the SRE books. That is, things like
setting an SLO to help you understand the customer impact
of an event and having run books
and procedures for when the pager goes off. And practicing
your response on a regular
basis. A lot of folks I know, they set up their backups for disaster recovery but they never
practice actually going through the procedures and restoring
from a backup. And as the common
saying goes, if you haven’t recently restored from your
backups, you don’t actually have
backups. At Google, in addition to things like the wheel of
misfortune, we have DIRT. DIRT
stands for disaster recovery testing, and what that means is
that on a regular basis, a large
section of the company, or multiple sections sometimes,
practice dealing with different
scenarios, and they’re trying to find the places that are not as resilient as we want them to be, and we practice handling scenarios that thankfully don’t happen very often, like potentially a zombie apocalypse, or have never happened at all, also potentially a zombie apocalypse.
For me, though, the biggest secret to making production
incidents less unfun is having a
good team. You need a team that works together well, a team
that practices blameless
postmortems and assumes that everyone is doing the best they
can with the information they
have available, a team that knows that when things go poorly, it means that your tooling, your procedures, or your documentation need updating, not that one person
messed up. You need a team with a variety of backgrounds and
skills. The applications we work
on have lots and lots of moving parts, lots and lots of
different kinds of storage or
programming languages, and if you have a team that knows all
of those things, you’ll find the issues faster. But most of all, you need a team that
believes we shouldn’t have the
same issue twice because having the same issue twice is boring,
and boring is not fun. So I’m
going to tell you all a secret. Despite the fact that we think about incidents as unfun, for me, at least, they’re actually a weird kind of fun. I remember
the first time that I wrote an
incident report years and years and years ago, and in that
moment, I felt like I was
finally, finally, a real engineer. There’s a very
distinct kind of satisfaction
that comes from detecting an issue in production, debugging
it, rolling out a fix, and then
verifying it. [Applause] After all, when you fix an issue, you
are helping your customers and
you’re helping your team. You’re making people have a better
day. And this contradictory fun
but also unfun nature of incidents is why, when we get
together, big piles of tech
folks, we talk about that one day, when everything went wrong
at my start up or that one day
that someone at the datacenter hit the big red button and everything went down. We know that these issues aren’t supposed to be fun, we know that there
aren’t supposed to be any
heroes, but these things are interesting. And part of the
reason they’re interesting is
that they happen to all of us. Everyone can empathize with
stuff going wrong. And since
everyone can empathize, we can all learn from each other. I’ve
learned a lot by reading incident reports from other companies, and everything I’ve
learned from the Google SREs has
changed how I think about operations. And if this whole
section sounds deeply unfun to
you, that’s fine too. We all get to have things we like. And
there are lots of folks who just
want to get on with making technology do cool stuff, and
that’s the best part of using the cloud. You can choose which parts are fun for you and, if you’re not into infrastructure, you can leave infrastructure and maintenance to other people.
After all, we’re all here and
we’re all in tech because we find using technology to do cool
stuff fun. So let’s see if our
development team is having any fun with the wheel of
misfortune. Were you able to deal with this issue with some of the best practices we just talked about?
>>I think we need a new rule: if you haven’t failed over to your backup Redis live on stage,
you haven’t failed over. The app should be back up. Thanks.
>>Congrats. Very well done. So
next, Adam is going to come out and teach us about a bunch more
cool stuff. [Applause]
>>That was so awesome. Oh, awesome. That was great. So in the other keynotes, all of you heard from customers like Scotiabank and Colgate and how they’re all using Google Cloud. But one thing I know about developers is that you like to hear from other developers, not just the executives, and hear their experiences. So we wanted to introduce to you some real developers, and we’re lucky to have the amazing Lauren Buchman, who’s going to introduce them. Come on out, Lauren. [Applause]
>>Thanks, Adam, let’s get
right into it. Please join me in welcoming our first customer to
the stage. We’ve got Yoav and
Leron from Wix. Hey there, welcome.
>>Hi, Lauren. Happy to be
here.>>So for those in the audience
who are not as familiar with
Wix, they provide tools and services to build websites. Wix
has more than 150 million users
and they serve 2 billion API requests a day. So can you tell
me about what you’re working on
now?>>Wix started off by letting
anyone build professional
websites really efficiently. Now, we are working on a new
product with Wix called Corvid,
and Corvid lets developers build web applications in a much more
efficient way.
>>So why the move to web applications?
>>We’re expanding into applications because consumer expectations of websites are changing, things like the content services, integrations, communications, so the line between web app and website is blurring. With Wix, you can
Corvid, we let developers take
advantage of the same visual builder, adding coding
capabilities, giving them the
most productive way to build web applications.
>>So how can Corvid empower
Google Cloud developers?>>So GCP can empower users of
Corvid in different ways. You
can use, in your Corvid application, any of Google Cloud
databases, Spanner, Cloud SQL,
Firestore. You can monitor all of your applications with
Stackdriver. You can use any of
Google APIs like Pub/Sub or any of the machine learning APIs.
Almost all of the Google Cloud
products can be used from the Corvid application. So with
that, you take all of the power
of Google Cloud and surface it into web applications using Corvid, and then you will be much more effective and much more productive.
>>Sounds great, but maybe we
could stop talking about it and see what you’re going to do. I
hear you’re going to build a
chatbot on stage for us.
>>Sure. So as you know, building a chatbot takes days, if not weeks. Today, Leron and I are going to show you how to build a chatbot application in just 60 seconds using Corvid and our workflow. One of our early adopters of Corvid is a company called Ilango, and they are connecting boat owners with people who want to rent a boat for vacation, and we’re going to use their application as the example.
>>It sounds pretty cool. Can we switch over to the demo now?
>>Sure. So what we see now is the boat search page. Leron, let’s connect Wix chat with the workflow. So in order to do that, Leron is switching to an IDE and he is creating a new events.js file in the Corvid project back end, which runs on Node.js. In the file he is adding an event handler for the chat onMessage event, which is triggered any time there’s a new message in the chat.
>>So you’re adding the integration into Dialogflow now?
>>Yeah, sure. So now with the rest of the code we integrate into Dialogflow. So now any time someone writes anything in the chat, we get a message, send it to Dialogflow, we get back the response, and write it back into the chat. Next, we train Dialogflow to detect the best deal and filtering intents, and we want to create the user experience.
Leron is switching to another file, the search page code, and he’s adding another onMessage event. This one runs on the client in the browser, letting us update the filters of the page or open a lightbox for the best deal. The only thing left for us to do is to deploy the application, we call it publish, and so let’s switch back to the visual builder. And just before we publish the application, this is the visual builder that makes Corvid so much more powerful, giving you all of the visual elements you need to build a modern web application. And you can actually see the code here as well.
>>So can I use any IDE with Corvid?
>>You can use any IDE. You can actually write code here as well. And by clicking the publish button, you get an application available to everyone globally, cross cloud, cross region. And let’s go and try it out.
>>Yeah.
>>So imagine you have a customer who wants to go and find a boat trip to the Bahamas. So we write in the chat, and the message is sent to Dialogflow. Dialogflow detects what is there, and it is going to ask us for more information: when do we want to go on vacation? So let's try the summer. Again, it is sent to Dialogflow. It will detect the filter intent, we'll get back the result, and we'll update the filters on the page, the location and dates. We'll update the search, so the whole page has been updated, and we get the results. Although it looks a little bit pricey, so let's try and search for the best deal. So we're writing: what's the best deal you can offer me? It is sent to Dialogflow. It detects the best-deal intent, we get back the result, we have an algorithm to figure out what is the best deal, and we show that in a lightbox. And that looks way more reasonable. That easy.
>>Hmm. So yeah. [Applause]
So this is so cool. What we just saw is so much better than the typical chat window. I mean, it's usually separate from the actual web app, but this actually brought up dynamic content within the app itself. How else can developers use Corvid?
>>Developers can use Corvid to build any kind of data or logic
rich web applications. Corvid
technical preview is available today, and the full version will
be rolled out later this
summer. Go and check it out. Thank you.
>>Thanks, Yoav and Leron. Next
I’d like to welcome on stage Kyle and Shea, cofounders of
>>Thanks for joining us. So I hear you’ve been up to some
really interesting stuff over
the last couple of days regarding the new release of
Anthos. You guys are a premier Google services partner and you've been a design partner with us on GKE On-Prem and the recently released Anthos. Can you tell us a little bit about
what you’ve been talking about,
the projects you’ve been working on?
>>Yeah, thanks for having us
first of all, excited to be here. Our main focus is working
with customers to embrace
automation, micro services and all things kubernetes, along the
way, trying to make everyone’s
job a little bit more fun.
>>And I hear you have a demo of an Anthos use case for us. Can you set that up?
>>Yeah, we do. I'll tee it up
for you. So let’s face it, we’re
all developers in the room and developers want to write code
and release at a high velocity.
Operations and security, though, want stability and they want
to reduce change. These goals
are sometimes at odds but can be solved with policy driven
automation, leveraging GCP
services, including config management. So let’s kick it
over to Shea and show off some
of the cool things he’s been working on. Over to you, Shea.
>>Well, maybe we should really
start with what my challenge is most of the time as a
developer. My automated builds
are just too slow. And really, the issue with that is that I
don't have enough scalable
infrastructure on my on prem site so what I really want is
cloud. However, my platform and
ops teams, they just have too many rules and I can’t really
fit into them. So really, as a
developer, I don’t want to know about kubernetes, I don’t want
to know how to configure it, and
I don’t want to deal with on prem infrastructure. So really,
what we’ve done is we’ve built
this environment to show how Anthos config management can
ease the pain. In this
environment, I get my own ephemeral kubernetes clusters.
We’ve already cut over to the
demo which is apparently where we are. Inside of each one of
those clusters we’re pulling
from a base config which is my single source of truth and
that’s controlled my operations
team and my security team so I don’t have to know anything
about that. So really what I
want to do is I just want to commit my code and then see it
come up on the other side. And
so what I’m going to do is I’m just going to make a minor
modification to our application
here, super simple, I’m going to change my background
>>You don't usually use the online editor, right?
>>I do not, no. This is all
web browser based. We’re going
to commit our change. Now, what that’s going to do for me is
that's going to kick off an
automated pipeline that’s going to go through Jenkins. I can’t
show it to you right now
because, frankly, the password was not saved in the demo and it
showed up there and so I’m kind
of logged out and I don't have it. But that's okay, because this happens in live demos, guys, let's be honest. [Applause]
>>Nice. If you haven't seen Jenkins before, I mean, this is just, you know, a window here, but that's fine. It
is going to build us a pipeline
that’s actually going to build a new set of GKE infrastructure
for us, it is going to go ahead
and test that infrastructure for us and then wait to deploy into
my on premise environment. But
the core here is that I don’t want to know much about the
kubernetes side here. So in
here, I’ve got my nice little app. We’re going to take a look
at Toronto. It’s nice and cold
in Toronto. You see the nice cold background. And what should
happen when we finish this off
or if I was able to execute my pipeline, it would change to a
very nice different background.
But what I want to show you and what is most important is inside
of the kubernetes console, I’ve
got a series of ephemeral clusters, so these clusters,
they’re going to go away when
I’m done. I don’t need to know anything about them. I haven’t
had to configure them. They were
just given to me by my ops team through my pipeline. But if I
jump a little bit deeper, I can
go ahead and take a look at my actual apps or my workloads. I
did not configure any namespace. I haven't configured any services or anything else. I just configured my Dockerfile, and that was it, and modified my code. What you can see here is the separation between my prod namespace, which is in my core on-prem environment, and my test namespace, which gets applied to every ephemeral cluster that I build. This is the core piece
that’s really important for me.
Now, for those of you who know what this looks like in the
kubernetes world, we have a
series of YAML files that I would typically have to build. I
didn’t have to build this. This
is coming from a single repository, single source of
truth, it gets deployed in every
cluster that comes up. For me, all I really cared about was
actually my kubectl command, I
just do one patch. I just patch and deploy the update, but this
is the pipeline given to me by
ops. Again, all I did was actually change the background
of my application, and had I
been able to log into Jenkins, I would have actually shown you
the background as well but I
can’t. So I’m going to kick it back over to you, Kyle.
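The one developer-owned step Shea describes, a single patch that swaps the application image while namespaces and services come from the ops-managed config repo, might look roughly like this. Deployment and image names below are made up for illustration.

```python
# Hypothetical sketch of the single developer-owned step: one
# `kubectl patch` that updates the container image of a deployment.
# Everything else (namespaces, services) comes from config management.
import subprocess

def build_patch_cmd(deployment: str, image: str, namespace: str = "test"):
    """Build the kubectl patch command that swaps the image."""
    patch = ('{"spec":{"template":{"spec":{"containers":'
             '[{"name":"app","image":"%s"}]}}}}' % image)
    return ["kubectl", "patch", "deployment", deployment,
            "-n", namespace, "--patch", patch]

def apply_patch(deployment: str, image: str, namespace: str = "test"):
    """Run the patch against the current cluster context."""
    subprocess.run(build_patch_cmd(deployment, image, namespace), check=True)
```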
>>Thanks, Shea. So they talked
a little about DR earlier. We also plan for the worst, so Shea
posted a blog up on our site,
it’s up there now, so if you do want to dive in a little bit
deeper, you can certainly do
that, so go check it out. Good job, Shea.
>>Thanks. So it is one thing
to have a use case demo, but can you talk a little bit about the
value customers are getting
from all of this?
>>Sure. Well, I think everybody in here knows, it's no secret that private cloud never really delivered the automation capabilities development teams needed. The developers we work with are looking for a common experience across GCP and the private cloud, and what they've been experiencing is different in GCP than in on-prem environments, not to mention that operating custom kubernetes environments at scale has its fair share of challenges. Anthos is here to address that, starting with GKE On-Prem and config management, and we've been excited to be working on it.
>>This is a great description
of what this does for the organization, but for the
individual developers and the
audience here, what should they care about most?
>>Similar to what I said
before, extending GKE into your data centers delivers the value
of cloud, provides flexibility
and choice for deployment. It’s just another deployment target,
just another region, and
extending a common toolkit across both lets us move
faster, with more reliability
and security.>>It’s almost like a white
flag for dev ops.
>>I think so. And the best part about it all, no kubernetes
experience required.
>>No kubernetes. All right. Thank you so much for joining us, and for all of your help as a design partner.
>>Thanks.
>>All right.
>>Good to be here. [Applause]>>Joining us next on stage is
Twinkle Desai, VP of technology
for Kohl’s. Come on out, Twinkle. [Applause]
>>So Kohl's is a retail giant with thousands of stores, hundreds of thousands of associates, millions of online customers, and billions in sales. Hey, Twinkle.
>>Thank you for having me,
Lauren.>>So you started out, you are a
VP of technology today but you
started off as a developer. I heard you used to be a Unix sysadmin?
>>That's right, Lauren. I started as a system
administrator, but before we go
there, I would like to take this opportunity to give a shout out
to my women in technology.
[Applause] As you can see, I'm about to be a mother of two, and I still pursue my passion in the corporate world, which is highly demanding and high pressure. I still enjoy being on my production calls. So if I can do it, you all can too. [Applause]
>>I never thought I'd get to say this at an enterprise tech conference but: yes, queen. Okay.
So tell us about how you moved into your leadership role?
>>During my days as a system administrator, I spent a lot of time gathering requirements, procuring environments, and propagating code, and my least favorite part was the ticketing process. It slowed down the entire delivery pipeline. So I came up with the idea of automating the entire pipeline with the click of a button, which would reduce operations overhead and speed up delivery. My manager loved it, and that was my first gig as a leader. Today, at Kohl's, we have 200-plus developers and SREs working together on our store and digital strategy to give the seamless customer experience which is our goal. We are running hundreds of microservices and thousands of containers on Google Kubernetes Engine, and we have delivered two successful holiday seasons.
>>So is there anyone here in
the audience who is an SRE or
manages SRE? Can you raise your hands? Okay. So do you have any
tips on how to make those teams
happy?>>It’s a very high pressure
environment, so as a manager, it
is very important to know that we support our team and give
them what they need to deliver.
We need to make sure they are engaged and they’re supported.
At Kohl's, we schedule a lot of hackathons, and I have personally worked with my
leadership team to invest in
training and tech conferences.>>So what did you learn from
your team about what they needed
to be more effective at work?>>The hackathon was a lot of
fun. We learned what developers
needed: innovation, freedom, no ops. We knew we had to build a
developer experience.
>>So you come up with this happy developer checklist and
then you set out to implement it
across your org. Can you talk a little bit more about some of
those specific fixes?
>>Sure. So we got this checklist. The first thing we did was to break our monolithic code into microservices, because that gave developers the freedom to choose the framework and the language they desire to code in. Today, we are using AngularJS, Node.js, and Java Spring Boot. Also, as a developer, I don't want to worry about operations. So my team came up with a solution where we can deploy the environment, propagate the code, and do the integration testing. We used languages like Python and Go to build the solution. Also, we integrated some of the great GKE features like fault injection and circuit breaking. It's all one click. It's self-serve and fully automated.
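Fault injection and circuit breaking are provided in their stack by the platform rather than by application code, but the circuit-breaker pattern itself is simple enough to sketch. This is a minimal, hypothetical in-process version for illustration only:

```python
# Minimal in-process circuit breaker, illustrating the pattern mentioned
# above. In the Kohl's setup this is a platform feature, not app code.
class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False

    def call(self, fn, *args):
        """Invoke fn, failing fast once too many consecutive errors occur."""
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True  # stop hammering the failing dependency
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

Once the breaker opens, callers get an immediate error instead of piling load onto a struggling service.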
>>It is a great thing you've been focused on helping your developers stay productive and
happy, but for the managers of dev teams in the audience, do
you have any final tips for them
and their teams?>>Throughout my journey, I
learned a lot. I had great
leaders and team to support me along the way. As a manager, it
is very important to know your
team. So my mantra for a happy team would be: simple solutions for complex problems, complement
technology with innovators, and find the right balance between
work and fun.
>>I think we can learn a lot about finding balance from you.
Thank you so much for joining us
today. This was great.>>Thank you for having me.
>>All right. So we’ve been hearing from some customers. Are
you ready for some Googlers?
We’ve got a demo derby for you coming up. Greg, come on out and
get us started. [Applause]
>>Thanks, Lauren. So I have to tell you all, Next is my
absolute favorite time of the
year. In fact, I’m such a nerd about it, you probably can tell
I’m actually wearing some custom
Google themed Next sneakers at the moment. I know, right? But
if Next is my favorite time of
the year, this next block of the keynote is my favorite part of
Next. And it’s because we’re
going to spend some quality time with some Google engineers and
we’re going to walk you through
some live demos and technical drill downs of some of our
favorite parts of Google Cloud.
But you didn’t come here to sit and hear me talk, you came to
see the demos, so let’s get
started and please welcome up my demo all star team, Emma, Eric,
Felipe, Steren. Come on up,
folks. [Applause] Hard to upstage a man in a cape. Let’s
get started. You know, it’s not
often that I get an opportunity to be on stage with someone who
broke a world record. [Applause]
>>As you heard yesterday
from Sundar, back on March 14th,
we announced that Emma had done just that. She’d set the world
record for the most precise
calculation of pi ever. And there’s a lot we can learn about
that about building
infrastructure and applications. So please welcome Emma up and
we’re going to walk through
exactly how she did it. Come on up. Emma. [Applause] So I mean,
world record holder, that’s
pretty cool. Can you tell me exactly how many digits you calculated?
>>Sure. It's 31 trillion, 415 billion, 926 million, 535 thousand, 897 digits.
>>So that probably looks a little familiar to some of you,
right, that number?
>>It should. It's pi times ten trillion digits.
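The quip is easy to verify: the record digit count is exactly the integer part of pi times ten trillion.

```python
# Sanity check: the record digit count equals floor(pi * 10^13).
import math

record_digits = 31_415_926_535_897          # digits in the March 2019 record
pi_times_ten_trillion = int(math.pi * 10**13)
```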
>>I know, very Googley,
right.>>So maybe can we run that
calculation now on the stage?
>>Yeah, I would love to, but the entire calculation would
take four months, which I
believe would be too long for a live demo.
>>So I know you love live
demos, but I think four months is probably pushing it a bit
between you and Gwen Stefani. So
how about maybe a smaller number.
>>Sure, how about five billion
digits.>>Five billion is the small
version that you’re going to do
right now in the demo? All right. Go for it.
>>Sure. So here's the machine we are going to use for the demo. It uses the latest compute infrastructure available. It has 60 vCPUs that can run at 3.8 gigahertz, and it has 240 gigabytes of memory. Now, I'm going to run a program called y-cruncher. That's the same program we used for the world record. I'm going to do everything in memory, so here I select the in-memory calculation, and I'm going to choose five billion digits. Now it started.
>>So while that starts, backstage, Emma, you had told me
that before you did this, it
was generally thought to be impossible to use cloud to break
a world record. Can you sort of
explain why that was?
>>Sure. Because we needed a lot of storage and very high network throughput. Y-cruncher is one of the most efficient publicly available programs for calculating pi, and it is designed for one large computer with a lot of storage; we needed 170 terabytes.
>>170 terabytes of storage.
That’s a fair bit of storage.
And I think you have a picture, right, of what one of the
previous systems looked like?
>>Yep. So here's a picture of the computer they used for the record in 2013.
>>And you can see from the picture, folks, this is exactly
what we were describing. It’s
basically a great big computer connected to a shed load of
disks. But GCP has sort of two
unique advantages in terms of Cloud. One is, you can attach
very large disks per VM, 64
terabytes per VM, so that helps. And the other is we have a lot
of fault tolerance and
resilience built into the disk system and the network, right?
>>Yep. But even then, everyone I talked to said the network throughput would be the problem. So I ran some experiments, estimated the amount of data we would read and write, did the math to figure out how much bandwidth we needed, and it seemed possible to finish it in four to five months.
>>So four to five months, we’ve gotten it down to at least
it seems feasible, so I think
you have the architecture, right?
>>Yep. So to optimize the disk throughput, I used this architecture here. There's one node running the program, with 96 vCPUs and 1.4 terabytes of memory, and there are 24 additional nodes hosting the storage and providing bandwidth. With this configuration, I saw 20 gigabytes per second of throughput for reads and 16 gigabytes per second for writes. If you are interested in learning more about the technical details, I'm giving a talk at this Next with Alexander Yee, the developer of y-cruncher, and the video will be available on YouTube.
>>Now, there's something interesting about this architecture I want you to notice. Usually we think of scale-up and scale-out as two different approaches, but Emma is actually doing both. The core machine is scale-up: it's a very large VM with a lot of vCPUs and a lot of RAM. But the storage is classic scale-out, using many nodes and the network to get incredibly high throughput.
>>Yep. So the scale was one
challenge, and the other
challenge was reliability.>>I can imagine, because
remember that architecture,
there’s one node that’s actually doing all the math. If
something goes wrong there,
you're really in trouble, right?
>>Yep. The limiting factor in scaling a pi calculation is the reliability of components such as disks and memory, and
every part in this architecture
is a single point of failure. Even a single bit failure
anywhere during the calculation
will invalidate the entire computation.
>>So how long was the actual
calculation period?
>>We ran the program for 111.8
days or if we count all the 25
machines, it’s 7.6 machine years.
>>7.6 machine years. I don’t
know how you do the math, that’s a lot of uptime.
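The math is straightforward to check: total machine time is days times machine count, converted to years.

```python
# 25 machines running for 111.8 days, expressed in machine-years.
days = 111.8
machines = 25
machine_years = days * machines / 365.25   # ~7.65, quoted on stage as 7.6
```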
>>Yep. And because Compute
Engine supports live migration, we were able to confidently run
these machines for four months
without any unexpected shutdowns or reboots.
>>And doing this in the cloud
also had a personal benefit, right?
>>Yep. For me, it was great
that I could move from Tokyo to Seattle while the calculation
was running and I didn’t have to
move the physical machines from Japan to the U.S.
>>Yeah, try to get that
through the TSA. [Applause]
>>So I want to ask the rest of our demo panel here: what do you think, being able to run a VM for 111 days without a reboot, fun part or not fun part?
>>Fun part.
>>Oh, and let me show you
another fun thing. So I’m an engineer which means I don’t
trust anything. No matter how
reliable the system is, I always want backups, right? So here’s
the actual disk we used for the
calculation. And I wrote a simple script with a gcloud command to take snapshots of the persistent disks. I'm going to run this script right now. With this, working in the background, I was able to take snapshots and backups of 240 terabytes of disks in less than 15 minutes, and it happened without any visible performance impact.
>>So again, think about that: 240 terabytes of backups in 15 minutes with no impact on performance. Yeah, absolutely. [Applause]
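A backup script of the kind Emma describes might look roughly like this. The disk names and zone are made up, but `gcloud compute disks snapshot` is the real command; snapshots run on Google's side, which is why 240 TB can complete in minutes without a visible performance hit.

```python
# Hedged sketch of a persistent-disk backup script using gcloud.
# Disk names and zone are hypothetical.
import subprocess

def snapshot_cmds(disks, zone="us-central1-a"):
    """Build one `gcloud compute disks snapshot` command per disk."""
    return [["gcloud", "compute", "disks", "snapshot", disk,
             "--zone", zone, "--snapshot-names", f"{disk}-backup"]
            for disk in disks]

def take_snapshots(disks, zone="us-central1-a"):
    """Run each snapshot command; the heavy lifting happens server-side."""
    for cmd in snapshot_cmds(disks, zone):
        subprocess.run(cmd, check=True)
```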
>>When we started, you kicked
off a calculation of pi. Are we still calculating pi?
>>Yep. Actually, we just finished. It took 97 seconds to calculate pi to five billion digits, and this is actually a 4.7 gigabyte text file. Let's take a look inside; I'm going to cat the file. Yep. So here are the results. This is pi.
>>Thanks very much, Emma.
>>Thank you. [Applause] Now,
I'm going to go out on a limb here and guess that most of you don't
have a critical business need
to calculate 30 trillion digits of pi. But I’ll bet some of you
do have workloads where somebody
has said we can’t do that in cloud, we need too many big VMs,
we need too much uptime, I’ve
got old software that can only scale up. And I have yet to meet a
developer who says they like
their machines to go down. So that’s a quick example of Google
infrastructure being able to
power not just an interesting calculation but really something
breaking the world record. So
let’s move up the stack. We did infrastructure. Let’s move up
the stack to data. Let’s face
it, we're all digital pack rats. I don't know the last time I deleted a photo or an email. I just assume I can store it forever. And you know what, our companies are exactly the same. So now the problem is, how do I actually make sense out of all of that data, how do
I find meaning in all of that noise? And typically, that’s
where the pain comes in. If
there’s one thing I know, it’s that managing, sharding, scaling
and patching large scale
databases is definitely not the fun part. But luckily we’ve got
this almost magical tool called
BigQuery. So to talk about BigQuery and have some fun with
it, let me bring up our own
Captain BigQuery, Felipe Hoffa.
>>Hello. Hello. Hi everyone.
>>Hey, Felipe. So Felipe,
we’re used to seeing and reading about how big data is really
targeting some of the big
questions we face as a species. Is there life on other planets?
What’s the nature of the human
genome? Do developers want tabs or spaces? Why don’t you just
pick one of those at random and
solve it for us.>>Hey, hello, everyone. How
are you doing? Let’s yes.
Let’s have some fun with data, let’s analyze data, and let’s
answer some questions that are
important for developers. So how about tabs versus spaces.
>>I think you picked a good
one.>>Well, can I get applause for
>>Team tabs? [Applause]>>Spaces? [Applause]
>>I think we have the answer.
>>I’m hoping you do something a little more scientific than just
a clap contest.
>>Okay. Let’s use data.>>Okay.
>>So we have this public
datasets program that has a lot of datasets ready for you to be
queried. We have hundreds of
these datasets here, ready for you. You can see the whole
catalog in the console. And
today, to answer this question, I’m going to use the copy of
GitHub that we have. We have
files that we took out of GitHub, all of the open source
files, and we have them inside
of BigQuery so we can run queries over them. If you have
never used BigQuery, it also has a web UI, so anyone can jump right now, right into the web UI, and start running queries. And
in this case, I have this table that has two terabytes of source
code and more than 250 million
files, and I’m going to extract from it all of the files I’m
interested in, like I’m going to
look at this question language by language. And once I have
extracted all of the files, I
can write a query like this. Let me start running it right now.
>>Okay. So while this query
runs, I want to pause for a second and point out a couple of
things. First of all, BigQuery
is already incredibly easy to use. You just write queries. But
in this case, he didn’t even
have to upload the data because the public dataset program made
it available. The second thing: I want to go to the team here and ask, what do we think? Having to set up a whole bunch of indexes just to get good query performance, fun part or not fun part? Not fun. The last thing I want to point out is, think
about what Felipe is doing. He’s
writing a SQL query that analyzes source code. Most of us
think of SQL for structured
data: financial records, business transactions. I've seen
a lot of developer source code
in my life, and a lot of it is not exactly what I’d call
structured. But he's
using SQL to actually analyze it.
>>Exactly. We don't need any of that here, we can just start writing SQL queries. In this case, I have a SQL query that is going file by file, splitting each file into lines, looking at the first character, is it a space or a tab, and then we are tallying up our numbers.
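The per-file logic of the query Felipe describes can be sketched in Python: examine each line's first character and tally tabs versus spaces.

```python
# Sketch of the per-file tally logic from the tabs-vs-spaces query:
# count lines whose first character is a tab or a space.
def tally(source: str) -> dict:
    counts = {"tab": 0, "space": 0}
    for line in source.split("\n"):
        if line.startswith("\t"):
            counts["tab"] += 1
        elif line.startswith(" "):
            counts["space"] += 1
    return counts
```

In BigQuery the same idea runs as SQL across 250 million files at once; the logic per line is identical.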
>>And what are our results?
>>These are our numbers. Let me visualize them. I have it here.
>>So it turns out that generally people like spaces.
You Java folks, you’re a little
unsure. And man, I don’t want to put a space in one of your
gopher files, like you are
really team tabs.>>Yes. That is cool.
>>So that was analyzing a bit
about source code. What else can you learn about developers by
using BigQuery?
>>Sure. So is tabs versus spaces the most important
question for developers?
>>Probably not.>>Can we somehow measure
what’s the most important
question? Where do you all go when you’re lost?
>>Where do you look for info?
>>When you need help?>>Stack Overflow? I heard it.
>>Stack Overflow. Yeah. And we have a copy inside BigQuery too, so now we can analyze it. We can look for the top questions just by the number of views, or we can do something a little more special and look at the top questions right now. In this case, I have a snapshot of 17 million questions, and I'm going to compare this snapshot with the previous snapshot to find what were the top questions during this quarter alone. Let me run the query.
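The snapshot-comparison trick can be sketched as a simple diff of view counts between the two snapshots; question titles and numbers here are illustrative, not the real data.

```python
# Sketch of the two-snapshot comparison: views gained this quarter is
# current views minus the previous snapshot's views, then rank by gain.
def top_this_quarter(current: dict, previous: dict, n: int = 3):
    gained = {q: views - previous.get(q, 0) for q, views in current.items()}
    return sorted(gained, key=gained.get, reverse=True)[:n]
```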
>>So while it runs, any
guesses on what’s the most commonly asked question? I heard
how to exit Vim somewhere down
here. What is the top question, Felipe?
>>Okay. I got my results here,
and with more than 400,000 views, this last quarter, is how
do I undo the most recent
commit in Git. Yes, that's regret, and we've all felt it.
>>We’ve all had regret, yes.
So so far we figured out that most people like spaces, except
gophers who are delightfully
different, and that everybody has a moment of regret of having
to back out of a code change.
But BigQuery is a product for developers, so can you get more
specific and find out about
BigQuery developers?>>Let’s go deeper. Let’s use
BigQuery to query Stack Overflow
to see what people are asking about BigQuery.
>>That’s an inception moment,
but I’m going to trust you.>>It’s basically the same
query that I just had but now
I’m filtering here for only questions start with the query.
And I have the answers here. And
you can see that the top question for BigQuery this last
quarter was can BigQuery delete
rows?>>And can it?
>>BigQuery can now delete rows
>>Now it can.>>Yes.
>>We use data to listen to
you. [Applause]>>Yes.
>>So what you saw was BigQuery
engineers using BigQuery to improve BigQuery. Now, of
course, Stack Overflow isn’t
just about asking questions, it’s about answering questions.
What can you tell us about that?
>>Yes. So let's see the answers part: who is answering the BigQuery questions? So now I can look at the answers. I will run a join, a slightly longer query, to take the questions tagged BigQuery, find who answered them, and then find who got the most upvotes this quarter. And in the results, oh, having answered all of these questions about joins, I'm there.
>>Well, I hope you're there.
>>Yes. But what’s really,
really interesting for me, and I love this, is everyone here
involved, they are not Googlers.
They are just experts that love the community, they love
sharing what they know, they
love answering questions, and they are my real heroes. Thank
you. [Applause]
>>Thank you, William, and thank you, Ben.
>>Thanks, Felipe. [Applause]
>>Felipe may wear the cape, but you all who answer the
questions are the real heroes. So
we’ve seen how BigQuery is kind of the ultimate serverless data
analytics platform, but there’s
another kind of serverless, isn’t there? Serverless compute.
And here at Next, we’ve
announced Cloud Run, bringing serverless to containers. And
again, we thought let’s do a
deep demo and actually show you Cloud Run in action. So to do
that, please welcome Steren up
and Ryan. Steren and Ryan, come on up. [Applause]
>>Hi. We as developers, we
love serverless. We love that we can focus on our code, deploy,
and let the platform take care
of the rest. But, you know, developers keep asking us, can
we run more things serverless?
What about going beyond simple functions? Introducing Cloud
Run. Cloud Run is a brand new
product that allows you to run any stateless HTTP container on
serverless, and it’s available
today. [Applause] You know what, let’s see how easy it is to run
a container on Cloud Run. So
from the cloud console, Ryan simply navigates to Cloud Run,
opens the deployment page, paste
the URL of the content or image, and clicks create.
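The console clicks have a rough CLI equivalent. Something like the following, where the service and image names are illustrative and the exact flags may have changed since the beta:

```python
# Hypothetical helper that builds the equivalent `gcloud beta run deploy`
# command for the console deployment above. Names are illustrative;
# check `gcloud run deploy --help` for current flags.
import subprocess

def deploy_cmd(service: str, image: str, region: str = "us-central1"):
    return ["gcloud", "beta", "run", "deploy", service,
            "--image", image, "--region", region]

def deploy(service: str, image: str, region: str = "us-central1"):
    subprocess.run(deploy_cmd(service, image, region), check=True)
```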
>>That's it. That's all?
>>Sometimes clicking the mouse is the hardest part of a demo, folks, just so you know.
>>That's all we needed to deploy the container to Cloud Run: no infrastructure to provision in advance, no YAML file, and no servers. Cloud Run has imported your image, made sure that it started, and given us a stable and secure HTTPS endpoint for what we just deployed. Pretty cool.
>>Thank you. So what we just deployed is a scalable microservice that transforms documents into PDFs. Let's see it in action. Let's give it a doc to convert. Upload. Converting. And we get a PDF back. [Applause]
>>So what is very interesting here is that we only pay when we use it. That means that when our microservice is not processing any documents, well, we just don't pay anything.
>>So that’s really cool. But I think it would be interesting
for people to see how you
actually implemented the service. So how did you actually
do the doc to PDF?
>>Easy. We looked around and decided to add OpenOffice to our container image.
>>Now, wait a second, OpenOffice is not exactly a modern piece of software. It's about a 15-year-old binary, it is about 200 megs, and you're saying you just took that binary and deployed it into a serverless workload with Cloud Run?
>>Yep, Cloud Run supports Docker containers. That means you can run any programming language you want, or any software, in a serverless way.
>>Well, let's look
at the code then.
>>Exactly. Let's take a look at the code. So here, what we have is a small piece of Python code that listens for incoming HTTP requests and then calls OpenOffice to convert our document. We also have a very small file named a Dockerfile, and if you're not familiar with this, it's very simple. It starts by defining our base image; in our case, it's the official Python base image. Then we install OpenOffice and we specify our start command. Then we package all of this into a container image using Cloud Build and deploy it to Cloud Run. On Cloud Run, our microservice can automatically be scaled to thousands of container instances in just a few seconds if needed, and, as I said, we only pay for the exact resources that we use.
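A plausible shape for the handler described above: the demo's exact code isn't shown, so this is a sketch. The `soffice` flags are the standard OpenOffice/LibreOffice headless CLI, and the HTTP layer is omitted.

```python
# Hedged sketch of the doc-to-PDF conversion core: shell out to the
# office suite's headless converter. The HTTP request handling that
# wraps this in the demo is omitted; names are illustrative.
import subprocess

def convert_cmd(src_path: str, outdir: str):
    """Build the headless doc-to-PDF conversion command."""
    return ["soffice", "--headless", "--convert-to", "pdf",
            "--outdir", outdir, src_path]

def pdf_path(src_path: str, outdir: str) -> str:
    """Path of the PDF that the converter writes for src_path."""
    stem = src_path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    return f"{outdir}/{stem}.pdf"

def to_pdf(src_path: str, outdir: str) -> str:
    """Convert src_path to PDF, returning the output file path."""
    subprocess.run(convert_cmd(src_path, outdir), check=True)
    return pdf_path(src_path, outdir)
```

Because the binary is just part of the container image, nothing about it needs to be "serverless aware".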
>>I'm going to go to the judges one more time on this. So taking a legacy app, bringing it into a microservices environment, and deploying it with no code change in a serverless way, is that fun or not fun?
>>But wait, wait. We are not
done yet. Developers love serverless, but sometimes you
might want to have a little bit
more control. For example, you want bigger CPU sizes, you want
access to GPUs, you want more
memory, or maybe just your corporate IT wants you to run in
a kubernetes engine cluster.
That’s why we are also introducing Cloud Run on GKE.
Let's take a look. Using the exact same user interface, Ryan is going to deploy the exact same container image, this time in our kubernetes engine cluster. Instead of a fully managed region, Ryan is now picking our GKE cluster. We get the same Cloud Run developer experience. It's deploying, and our microservice is being created. As before, we get a stable and secure endpoint that automatically scales our microservice.
>>That’s cool, Steren. Are you
done yet?>>Nope. Nope. There is more.
>>All right. Keep going.
>>So behind the scenes, Cloud Run and Cloud Run on GKE are
powered by Knative, a project to
run serverless workloads that we launched last year. This means
we can actually deploy the exact
same microservice to any Kubernetes cluster running
Knative. Let’s take a look. To
do this, Ryan exports our microservice into a file. Then,
using the kubectl command, he’s
going to deploy it to managed Knative on IBM Cloud. And like
before, this is creating the
exact same microservice, autoscaled and giving us an endpoint
to invoke it. We have it on IBM
Cloud. This portability is enabled by Knative.
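For reference, the exported file is a Knative Service manifest along these lines; the service name and image are hypothetical, and the exact API version depends on the Knative release in use.

```yaml
# A minimal Knative Service manifest of the kind exported in the demo.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: converter
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/converter
```

Applying a manifest like this with `kubectl apply -f service.yaml` against any Knative-enabled cluster recreates the same autoscaled microservice, which is what makes the IBM Cloud deployment in the demo possible.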
>>That’s awesome. Thanks,
Steren. Thanks, Ryan. [Applause]>>So Cloud Run gives you
everything you love about
serverless. There’s no servers to manage. As a developer, you
get to stay in the code which is
kind of where you want to be. Fast scale up. Even more
importantly, scale down to zero.
You literally pay nothing when there’s no cycles being run. And
it lets you do that using any
binary or language because it’s built on the flexibility of
containers. You get a consistent
experience wherever you want it, and that can be on a fully
managed environment or on GKE.
This is in beta today. You can all go and try it now. So
please, I encourage you to do
that. So let’s come back to that theme of data. But let’s shift
the focus just a little bit.
Some of you heard yesterday about the work that we did with
student developers and the NCAA around
March Madness analysis, but in case you didn’t, we have a quick
little series of video snippets
that we put together around the program. So let’s go ahead and
run those.
>>Google Cloud and the NCAA have teamed up to find new ways
to measure college basketball.
This March Madness, see how Google Cloud student developers
are using data to analyze games
all tournament long.>>When student developer Izana
isn’t mixing data science with
college basketball, he’s mixing something like this. See how
Google Cloud is being used to
analyze March Madness.>>Student developer Belisa
enjoys executing complex SQL,
deploying logistic regression models, and knitting.
>>It’s true.
>>See how Google Cloud is being used to analyze March
Madness. [Applause]
>>You know, it was a real pleasure to work with that next
generation of software
developers, and we’re really excited that they’ve been here
at Next with us. So please give
us one more round of applause for these student developers who
have done such amazing work. So
let’s take the idea of sports and data and let’s turn it up to
11. And to do that, let me
bring up Eric Schmidt, one of our developer advocates and a
pretty special customer, Pabail
Sidhu, the director of analytics and basketball innovation for
the Golden State Warriors. Come
on up, Eric and Pabail. [Applause]
>>Wow, Pabail, thanks for
being here today, and congratulations on making the
playoffs. You are in contention
for your third consecutive world championship.
>>Thanks, Eric. I’m a huge fan
of all things Google, so thanks for having me.
>>You bet. And I’m a huge fan
of the Warriors. Now, getting to the playoffs isn’t easy,
winning championships isn’t
easy, and I know data science is a really, really big part of
the equation for the success of
the Warriors. Can you talk a little bit about how much data
you have to manage and also
share some light on how much data is generated in an NBA
game? I’m sure most of the folks
in the audience may not have an understanding of that.
>>Yeah, absolutely. So we know
on the court, we have two teams, five players each, two
baskets and one ball. What you
probably didn’t know is that we have tracking cameras in place
that are capturing player and
ball movement. We’re looking at every single dribble, every
single pass, every single catch
and shoot, every single dive to the basket. And by the time the
game is over, we’ve collected
over 1 million events for the game. Now, if you do the math
on that, it’s 82 regular season
games, 30 NBA teams, so we’re looking at big data. And it is
my job to get this raw data and
make it actionable so I can provide key insights to our
coaches, players, front office,
and ownership group.>>Now, you and I have been
working on your migration to the
cloud over the past couple of weeks, and one of the things I
found interesting is that since
the NBA provides the same data to all the teams, how do you end
up competing?
>>That’s a good question. We realized our biggest advantage
would be creative analysis and
unique insights crafted around our philosophies, but in order
to do so, we needed all of our
data in our own cloud environment.
>>Great. So let’s talk data.
Now, inside of the Cloud Shell over here, I’ve pulled up some
of our sample raw data in our
Cloud Storage data lake, and what do we see here on the screen?
>>Yeah, this is a raw event file showing what happened on
the court and the players
involved. It’s super powerful information. We’re talking tens
of thousands of files across
seven different schemas.>>And how much data are we
talking in aggregate?
>>Raw, close to a terabyte.>>Wow. So how did you manage
all of this data, like we’re
going to get to analysis here in a minute, but before you get to
analysis, you have to solve the
management problem.>>Yeah, well, it was really
hard. We used lots of manual
scripts and only pulled the data that we thought we needed. You
know what, we never really had
the full picture.>>Got it. So I’ve been keeping
score here a little bit, no pun
intended, you’ve talked about siloed data, you’ve talked about
high fidelity data, you’ve
talked about time series data at high volume. It’s all very
powerful if you can get your
hands on it. I hate to break it to you, and don’t take this
personally, but you look and
sound like a classic enterprise.>>You know, in many ways, we
are like any other enterprise,
except we just get bigger trophies. But you know what, we
deal with the classic 80/20
problem, where we spend 80% of our time wrangling
the data, which only allows us
20% of the time to analyze the data.
>>All right. So you and I,
we’ve been working on your migration to the Google Cloud,
so let’s take a look at your new
serverless data analytics environment. We’re going to do a
little bit of analysis to
attack some problems in the playoffs, and we’ll try to flip
this 80/20 problem around. Now,
I’m going to pull up the Cloud Dataflow console. Now, before we
could get to Cloud Dataflow, we
had to do a couple of things. One, we had to work on data
movement. So for that, we use
the Cloud’s storage transfer service which provides an
automated way to migrate all of
the data in S3 where it’s sitting raw over to Cloud
storage. So we’ve solved that
problem. Then we use Cloud composer to automate all of our
ETL processes inside of
Dataflow. Now, for my fellow data engineers out there, your
ETL processes probably look at
things like orders and line items and genomes and quick
stream data. For the Warriors.
>>We’re looking at events.>>It is all about the event
data. So in Dataflow, as you can
see here, we express a series of transformations. So the first
one is I go ahead and read, I’ll read not just one file but
hundreds of thousands of files
from GCS, and then apply additional transforms to parse
and then load that JSON, in
this case, into over 35 different schemas. So the beauty here is
that there’s no more manual
scripts involved, right, so we’re going to spend less time
on management and more time on analysis.
>>Yep.>>So with our data warehouse
in place, which is the output of
these final transforms, we’re going to jump over to BigQuery
and start looking at some data.
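The parse-and-load step just described can be sketched in plain Python; the real job runs equivalent transforms in Dataflow, and the field names here are hypothetical.

```python
import json

def parse_event(line):
    # Stand-in for the pipeline's parse transform: one raw JSON line
    # becomes a (schema, record) pair routed to the matching table.
    record = json.loads(line)
    return record.get("event_type", "unknown"), record

raw_lines = [
    '{"event_type": "pass", "passer": "Payton", "receiver": "Kemp"}',
    '{"event_type": "dribble", "player": "Kemp"}',
]
tables = {}
for line in raw_lines:
    schema, rec = parse_event(line)
    tables.setdefault(schema, []).append(rec)

print(sorted(tables))  # ['dribble', 'pass']
```

Each event type ends up in its own table, which is exactly the per-event-type layout that shows up in BigQuery next.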
So Pabail, what do we see here?>>Love this part. What we have
here is distinct tables, one
for each logical event type in the game. We’re looking at
passes, we’re looking at picks,
we’re looking at dribbles. We have 4.62 gigs of dribbles.
That’s a lot of dribbles.
>>Yeah. In fact, we actually now have 4.63 because we just
updated the database this
morning from games last night, so it is a lot of dribbles. Now,
the fun part is once we have
all of the data actionable, we can start writing some queries.
So I’ve written a quick query to
analyze how many times Steph Curry has dribbled the ball this
season. Hey, Felipe, by the
way, dribbles versus passes is the tabs versus spaces of
basketball. So there you go. All
right. So let’s go ahead and run this query. Scan, scan,
scan. And there you go. We just
scanned 608 megs of data in 0.8 seconds. Steph Curry has
dribbled the ball 18,710 times.
That’s pretty awesome.>>That’s cute, Eric. That’s
something you can share with
your friends at Google, but it’s not going to help you win any games.
>>Why not? What do you need, man?
>>I need some more meaningful
analysis, and I also need the ability to look it up without
using SQL queries.
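The shape of that dribble-count query is just a filtered, grouped aggregate. Here is the same idea in pure Python over a few hypothetical event rows; the real query runs in BigQuery over the full season.

```python
from collections import Counter

# Hypothetical event rows, mirroring the per-event-type tables above.
events = [
    {"type": "dribble", "player": "Curry"},
    {"type": "pass",    "player": "Curry"},
    {"type": "dribble", "player": "Curry"},
    {"type": "dribble", "player": "Durant"},
]

def count_events(rows, event_type):
    # SELECT player, COUNT(*) FROM events WHERE type = @t GROUP BY player
    return Counter(r["player"] for r in rows if r["type"] == event_type)

print(count_events(events, "dribble"))  # Counter({'Curry': 2, 'Durant': 1})
```

The BigQuery version is the same aggregate, just scanning hundreds of megs of dribbles in under a second.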
>>You mean every Warrior isn’t writing SQL?
>>Not that I know of, but I
wouldn’t put it past Steph.>>You’re talking
about Steph Curry.
>>Yeah.>>That guy can shoot the ball,
shoot the three, can write SQL,
and I’m pretty sure by now he probably has Anthos running on
his mobile phone or something
like that so. All right. So I’ll be the first to admit that
yeah, BigQuery can be kind of a
lonely place for data, you’re just basically querying, you’re
looking at information, we need
to bring this data to life and BigQuery has amazing integration
with different visualization
tools like Looker, Tableau. In this case we’re going to go
ahead and use Data Studio. So if
I click on this button, I can jump over into visualization.
Now, for the sake of the demo,
we fast forward a little bit, it took us about six, seven
minutes to put this
visualization together, and I have access to all of that data
inside of BigQuery. Another
thing to point out is that, to
help protect competitiveness as
we get into the playoffs, all of
this data is real, except we’ve masked out a couple of the teams
that we think you’re going to play. So congratulations, you’re
going to play what I believe to
be one of the greatest NBA teams of all time, may they rest
in peace, the Seattle
Supersonics.>>Yes, my hometown team. It is
always exciting to go back
home. But this right here, what we’re looking at is a team
profile page for the Seattle
Supersonics. What immediately stands out is how efficient they
are in offense, ranked sixth
overall. But the two things that really pop out to me is, one,
they’re number one in the NBA in
transition offense. And also, they’re bottom of the league in
half court offense, 30th.
>>Dead last.>>Dead last.
>>So you just looked at this
report, you start ripping off a bunch of information. This is
what you do every day. Like you
can glean insights like this once data comes to life and then
you end up having to
collaborate and share this with your teammates and coaches. How
have you typically shared this
information in the past?>>Well, we share lots of
screenshots and one off
documents, death by PDFs.>>Death by PDF may be the
worst thing I have heard all
day. So let’s go ahead and solve that problem. Inside of Data
Studio, it’s really, really easy
to share information. So I’m going to go in and share
this report, and we’ll type in
Coach Steve and we’ll paste in a message: need to look into
transition offense. Pretty cool.
I’m going to go ahead and send. This is going to end up in his
inbox. Our ETL processes are
running, BigQuery is up to date, boom, he comes in, he can see
exactly what’s going on, problem
solved.>>Eric, that is not cool.
>>What, why?
>>You sent him a problem but no solution.
>>This is true.
>>It’s okay though.>>All right.
>>So what we know with the
Sonics is they have the number one transition offense, that’s
good to know, but what I’m
interested in seeing is what players, specifically which trio
of players, make up their
biggest strengths.>>Got it. So we’re going to
start kind of drilling into this
problem from a data science perspective. Here we have some
high level descriptive
information kind of looking at the whats. Yes, we kind of see
where you’re going to have
potentially some challenges with the Sonics. So to dive into our
data science environment, we’re
going to get out of visualizations and we’re going
to get into an IPython
environment, and to do that, we will use AI Platform Notebooks,
which provides a managed
environment for Jupyter. So once I go ahead and deploy one of
these JupyterLab instances, I no longer
have to do any installation locally. I’m done. So I have
this fully managed, running in
the cloud. If I click Open in JupyterLab, here’s the
notebook that you and I and some
of the folks on my team have been working on for the past
couple of days. Now, at the top,
we’ll go ahead and layer in a couple of the traditional
libraries that we feel good
about, things like Pandas and NumPy and Matplotlib. We also add a
reference to BigQuery. So inside
of this notebook we have access to all of the queries and views
that we have previously built.
So in this case, I’ve put a query together
to identify which three-man
combo is the most dangerous on the Sonics. Let’s go ahead and
run that. It’s going to go out
to BigQuery, and there’s the answer to the question, Shawn
Kemp, Gary Payton, and Detlef
Schrempf. Problem solved.>>We’re getting there, but not
quite. That’s a great trio
right there, very efficient, but what I’m interested to see
is what makes them so efficient?
>>Got it. So we want to get deeper into the kind of how. And
for that, we need to do deeper
analysis. In this case, I took the output from the previous
cell and we’ve done a couple of
things. One we’re laying in some additional SQL. Felipe, I hope
you like this, this query is 508
lines long. It does multiple joins. It uses most of the
analytical functions, etc. We
also have some Python code that we’ll go ahead and fuse together
to do a little bit more
statistical analysis. Let’s run this and see if we can answer
the question, what makes these
guys so threatening? So the output of that is two rows. So
I’m looking at the trio, 38
different metrics, compared to the team and all of their
average metrics. Now, this data
is a little intractable, right, this is not going to help us.
>>Visuals.>>Next thing we’re going to do
is pipe this frame through a
little bit more pandas code and build a visual. See what this
looks like. All right. Pabail,
what do we see here about the Sonics?
>>Okay. Now, this is fun. I
like this. What we have here is the difference between the trio
and the team average across a
bunch of stats. Let’s take a look at the red bar on the
right, in transition. So we know
the Sonics have the number one transition offense, and this
trio is right around that
average, slightly better, so I need to make sure that I
emphasize to our coaching staff
that our transition defense needs to be on point
throughout the whole game.
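The quantity behind that bar chart is simple: per-metric differences between the trio and the team average. Here is a pure-Python stand-in with made-up numbers; the notebook computes the real values with pandas from BigQuery, and the metric names here are hypothetical.

```python
# Hypothetical per-game averages; real values come from BigQuery.
team_avg = {"transition_pts": 12.0, "half_court_pts": 8.0, "passes": 280.0}
trio_avg = {"transition_pts": 12.5, "half_court_pts": 11.0, "passes": 310.0}

# Difference between the trio and the team average for each stat;
# the demo plots one bar per metric from exactly this kind of frame.
diff = {k: round(trio_avg[k] - team_avg[k], 2) for k in team_avg}
print(diff)  # {'transition_pts': 0.5, 'half_court_pts': 3.0, 'passes': 30.0}
```

A small positive bar in transition and a large one in half court is what drives the insight that follows: the trio is barely above the team in transition but far better in the half court.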
>>So this is kind of interesting, like this is the
problem we were trying to solve,
we kept looking at transition, transition, and it turns out
this trio is really no better
than the rest of the team.>>Yes. But you know what
stands out to me is on the left,
half court offense. Now, if you recall from the team profile
page, they were ranked 30th in
the league. This trio is significantly better. This is
great information. Because now
I’m starting to see more of a complete picture.
>>Got it.
>>But I’m not done yet.>>Okay.
>>Can we take a look at
their efficiency, what makes them so efficient?
>>Sure. So the beauty of the results from that last cell is
that we can then dive into
additional information. So in this case, we’re going to start
looking at their tendencies. So
what do you see here?>>So now I’m starting to see a
much clearer picture. This team
is pretty unselfish when it comes to comparing them to the
rest of the team. They do an
excellent job with ball movement, and they utilize the off-ball
screen at a high rate, and they
don’t dribble the ball much. A very efficient style of basketball.
>>Yeah, I love this type of
data analysis. We started with one problem, attempting to solve
it, then we found another one,
which ended up leading to real, actionable insight. And the fun
part is we were able to flip
this 80/20 challenge around, providing you more time for
insight discovery. And I got one
question for you.>>Sure.
>>How are you going to address
this threat in the playoffs?>>Well, I know what I’m doing.
>>Okay. Can you share?
>>I can’t tell you guys. I’ve shared enough today. I have a
playoff series I’m trying to
win.>>All right, man. Pabail,
thanks for being here today and
partnering with us on your smart analytics journey. Good luck
against the Supersonics, and if
you need any help writing queries, give me a call.
>>I appreciate that. Thanks,
Eric.>>Thanks, man. [Applause]
>>So before you go, Pabail,
we’ve been talking a lot and asking people about the fun part
and the not fun part, so when
you think about the whole journey as a data scientist, for
you, what’s the real fun part?
>>You know what, it would be... it probably would be... that’s
a good question.
>>Might it be this?>>That’s what it is.
>>Thank everybody. Thank Pabail, thank Eric, thank
Felipe, Steren, and Emma. I
can’t top the trophy so we’re just going to wrap the demo
derby and bring Adam back up.
Thanks, everybody.>>Oh, fantastic. All right.
Thanks to Greg and everyone on
the demo derby team. Over the course of the afternoon, I hope a
couple of big ideas came across.
Number one, we love developers. Whether you’re building an app,
running an ops team, migrating a
database or figuring out how to win championships, the people
that put their hands on the
keyboards, you’re the MVPs in any company. The second is we
want to help you get to the fun
part of your job, the part where you’re excited, back when you
first started being a developer.
And then maybe the last thing, we want to help you, everything
you see here, we want to help
you spend less time on the everyday hassles that get in the
way of your just being
creative, solving problems, learning new stuff and growing
your career. So we’ve actually
got hundreds and even thousands of engineers that work at Google
on our products and in dev rel
and they’re eager to talk to you and hear about your challenges
and your opportunities and help
you with all of this technology.
