12 Comments
Nov 30, 2020 · Liked by Doug O'Laughlin

Let's say that, as a software engineer, I have the next 10 years to invest my free time in building a "moat" for my own career by "going down the stack" and integrating specialized hardware with my software. Where should I start my learning journey, and what skills do you think will become essential?

I'm tempted to go to Apple, Google, FB, and AWS's career pages to find such info, but I'm really curious about what you have to say!

Thank you!


Hey man, I'm curious if you plan to provide an update on how these compute platforms have evolved over the past three years. Very curious to better understand Google's TensorFlow platform given the drama since ChatGPT.

What's your preferred way to track each platform and its capabilities? I'd assume that some of the in-house silicon will be used for internal workloads rather than offered as cloud compute for clients (e.g., the video chip from Google).


What do you think about the ARM-NVIDIA integration? Will that spook the market and, worse, lock the two into each other the same way Intel's fab and design did?

On spooking the market: many IaaS providers are looking at RISC-V architectures now. I think NVDA's ambition with CUDA was lauded, but further integration with ARM made that ambition too overt, and customers realized that NVDA will (or at least can) strong-arm them into NVDA's own roadmap.

On locking each other in, what if one of the roadmaps is off? Can NVDA, while pursuing very specific verticals on the design side, still maintain the relevance of ARM in the long run?
