Seth Bertalotto

Senior Principal Software Engineer

Code Journey

All developers take different paths throughout their coding careers. I like to call this their “Code Journey”. This is the story of how I learned to program and what keeps me doing it to this day.

1996: Discovery

1996 Packard Bell D160

My first computer was my family’s 1996 Packard Bell D160. With a 133MHz processor and only 8MB of memory pre-installed, I remember thinking this was the best computer ever made! However, playing a video game like Madden 98 was impossible without upgrading it to 16MB of memory.

At this time, the internet was in full swing. I would go to my friend’s house after school and try to squeeze as much out of the internet as we could in the free 200 minutes of AOL usage. I also had access to WebTV, an early TV-based web surfing device that was many years ahead of its time, but horribly slow and hard to use.

Soon after getting online, I got more curious about how the various websites I visited were made. I would see an interesting interaction, like a drop-down menu, and try to figure out how it was done.

1998 Microsoft.com Screenshot

One of my favorite sites to peek under the hood of was the Microsoft homepage, which had the aforementioned drop-down menus. Viewing source on this page presented a mess of tables and div tags that was almost incomprehensible. My tried and true tactic was to copy the source code into MS Notepad and, little by little, delete code and check that the menus still worked. I would repeat this until I was left with the “minimal” amount of code to make it work.

Eventually, I wanted to understand the code rather than just copy and paste other people’s work. So I ditched Notepad and started using Macromedia’s Dreamweaver editor. I still wrote my HTML from scratch, but dabbled in their interactive libraries that added “mm_” prefixes all over the code base. Learning how to do image mouse-over effects and mouse trail animations was a good way to learn how interactive and expressive websites could be.

1998: Static Websites

Fast forward a year or two and I was building all sorts of websites. I remember having sites for my favorite music bands, a site of hundreds of animated GIFs I found on the web (all loading at once and bringing the browser to a crawl), and even a site of cheat codes for my favorite video games.

Keeping all these sites on my computer wouldn’t work, since I couldn’t share them with my friends and random internet strangers. This is when I found the website hosting service Tripod. Most other developers I knew were on Geocities, but I didn’t like its community aspect. At the time, I thought Tripod was easier to use and just worked.

My tooling at the time was still Dreamweaver, but with a simple FTP setup to automatically push my changes to the server whenever I saved locally. There was no testing or CI in place to catch issues; these were the early days of just pushing to production and debugging live on the site.

2000: Semantic Markup

Up until this point, if you “Viewed Source” on any of my sites, you would find a mess of capitalized tag names, table layouts, spacer.gif hacks and invalid markup that would make accessibility experts faint.

This is when I discovered something called Semantic Markup. The idea that tags actually had meaning and could be used in the correct context was an “a-ha” moment for me. I loved the idea of decorating a website with proper tags that would help with SEO and accessibility.

I dove deep into A List Apart, semantic HTML books from SimpleBits, and other sites that evangelized a web accessible to everyone. Building websites with CSS 2 was a completely different thought process and really sparked a new creative direction for my websites.

2004: Dynamic Websites

Static sites were fun, but I was tired of creating hundreds of .html files, and copying header and footer markup into each one was tedious. I also tried to use frames to make this easier, but they were buggy and error-prone across browsers.

PHP logo

This is when I discovered PHP and MySQL databases. I wasn’t quite sure what either was; however, I knew they would let me build more dynamic and complex websites. After more research, I wanted to build something that would leverage these technologies beyond just a simple static site.

At the time, mobile phones were becoming popular and, with that, ringtone usage. I had been using other websites to download MIDI ringtone files, but I found them hard to navigate or riddled with pop-up ads and flashing banner images.

I decided to build my own ringtone website that would solve all these issues. That site became MIDI Delight, still active to this day and something that helped me land a job later in my career. It let me leverage MySQL to build a database of artists, songs, user profiles, favorites, polls, and uploads. It really helped me learn how to build a more complex, data-driven website from scratch.

2009: Server-side JS

Node.js logo

At this point, I had been working at Yahoo for a few years, and we had been building all our sites with PHP. Things were working well, but PHP was showing its age and getting hard to work with (this was before the improvements made in PHP 5+). We were looking for something to replace our aging PHP stacks and started to look into a new server-side JavaScript technology called Node.js.

Node.js was a game changer. It allowed me to use my frontend JavaScript skills on the backend and removed the need to context-switch between languages. It also allowed us to leverage open source technology more broadly and reuse libraries that had been battle-tested by hundreds of other websites. We started building all new websites with Node.js at Yahoo.
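To give a rough sense of the appeal, here is a minimal sketch (in TypeScript, and not our actual Yahoo code) of a tiny HTTP server built with Node’s built-in http module. The same language I had been writing in the browser now ran the server:

```ts
// Minimal sketch of a Node.js HTTP server (illustrative only).
import { createServer } from 'http';

const server = createServer((req, res) => {
  // The same JavaScript skills used in the browser apply here on the server.
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ path: req.url, message: 'Hello from Node.js' }));
});

server.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});
```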

2013: Universal Webapps

Even though we were able to leverage the same language on the server and client, we still found ourselves rewriting the same business logic in each runtime. This led to more work and more bugs, as we had to maintain two different frameworks in our applications.

React logo

Around this time, Facebook released React to the world. At first, we were skeptical of it and as confused as everyone else about mixing HTML and JS. We prototyped a few projects and started to discover how it could be used not only to make highly interactive and dynamic sites, but also to render on the server, sharing much of the same code between the browser and the server.
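The core idea looks roughly like the sketch below, written with modern React APIs rather than the ones we used at the time; in a real app the server and client halves would live in separate entry points:

```tsx
// Sketch of sharing one component between server and client (modern React 18 APIs).
import React from 'react';
import { renderToString } from 'react-dom/server';
import { hydrateRoot } from 'react-dom/client';

function Greeting({ name }: { name: string }) {
  return <h1>Hello, {name}</h1>;
}

// Server: render the component to an HTML string and embed it in the page response.
const html = renderToString(<Greeting name="reader" />);
console.log(html); // roughly "<h1>Hello, reader</h1>"

// Client: reuse the exact same component to attach event handlers to that markup.
// (Guarded so this sketch also runs outside a browser.)
if (typeof document !== 'undefined') {
  hydrateRoot(document.getElementById('root')!, <Greeting name="reader" />);
}
```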

Leveraging React for templating was a step in the right direction, but we still needed to figure out how to manage data and state. At that point, Facebook had published the Flux architecture, but no official companion library. This led to a proliferation of client-based Flux libraries, but none satisfied our business requirements within Yahoo (this was before Redux).
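For anyone unfamiliar with the pattern, here is a minimal, generic sketch of the Flux idea in TypeScript: actions flow through a dispatcher into stores, and views subscribe to store changes. This illustrates the pattern only, not Fluxible’s actual API:

```ts
// Generic sketch of the Flux pattern (not Fluxible's API).
type Action = { type: 'ADD_TODO'; text: string };
type Listener = () => void;

class Dispatcher {
  private callbacks: Array<(action: Action) => void> = [];
  register(cb: (action: Action) => void) { this.callbacks.push(cb); }
  dispatch(action: Action) { this.callbacks.forEach((cb) => cb(action)); }
}

class TodoStore {
  private todos: string[] = [];
  private listeners: Listener[] = [];

  constructor(dispatcher: Dispatcher) {
    // Stores register with the dispatcher and update themselves on actions.
    dispatcher.register((action) => {
      if (action.type === 'ADD_TODO') {
        this.todos = [...this.todos, action.text];
        this.listeners.forEach((listen) => listen());
      }
    });
  }

  getTodos() { return this.todos; }
  subscribe(listener: Listener) { this.listeners.push(listener); }
}

// Usage: views dispatch actions and read state back from the store.
const dispatcher = new Dispatcher();
const store = new TodoStore(dispatcher);
store.subscribe(() => console.log(store.getTodos()));
dispatcher.dispatch({ type: 'ADD_TODO', text: 'Learn Flux' });
```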

Fluxible logo

Therefore, we decided to build our own open-source universal Flux framework, which we called Fluxible. Fluxible was a truly universal library that handled routing, state management, hydration on the client, and much more. It solved many of the application requirements we had internally.

2016: Typed JS

With Fluxible and React in tow, our applications got more sophisticated. We were now able to share business logic across runtimes. This allowed us to break the application down into smaller chunks (or modules) and share responsibility for various parts of the application across teams.

This worked exceedingly well for the most part, but JavaScript’s lack of a type system and its dynamic nature led to scalability and maintainability issues. Changes in one team would break another team’s code, and refactoring was challenging since there were so many decoupled parts of the system. Developers were afraid to make changes for fear of breaking a part of the application they were not familiar with.

TypeScript logo

All these issues led us to TypeScript. The static typing, the ability to catch errors before committing, and the ease of refactoring were big wins for our projects. Selling developers on these advantages took some time, but once they got over the learning curve, the benefits outweighed the doubts.
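A toy example (not from our codebase) of the kind of mistake TypeScript surfaces at compile time instead of at runtime:

```ts
// Toy example of an error TypeScript catches before the code ever ships.
interface User {
  id: number;
  displayName: string;
}

function greet(user: User): string {
  return `Hello, ${user.displayName}`;
}

greet({ id: 1, displayName: 'Seth' }); // OK

// The call below would fail to compile: 'name' is not a property of User,
// and 'displayName' is missing. In plain JavaScript this would only show up
// at runtime as "Hello, undefined".
// greet({ id: 2, name: 'Typo' });
```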

Over the past few years, every new project we have worked on has been started with TypeScript. It has made our code more readable, more maintainable, and easier to work with than our past untyped efforts.

Future

I’m not quite sure what the future holds for web development. The industry has really blossomed over the last 10 years, radically changing what can be done with HTML, CSS and JS. With all this progress, it does seem like the industry has settled on React and its ecosystem of libraries and components over the past few years. I like React, but I would prefer to see the industry focus on open standards rather than proprietary technology governed by one company.

Web Components seem like the next natural phase of evolution, but they have been around for a few years and have yet to see widespread adoption among developers and applications.

I’m also excited about wider adoption of ES modules in modern browsers; removing the need for complicated bundling tools like Webpack is a win for users and developers alike.
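As a small illustration, and assuming two hypothetical files, math.ts and main.ts, compiled to plain JavaScript, a modern browser can load the modules directly via a script tag with type="module", with no bundler involved:

```ts
// math.ts — a plain ES module, loadable natively by modern browsers once compiled to JS.
export function add(a: number, b: number): number {
  return a + b;
}
```

```ts
// main.ts — the import resolves natively in the browser (note the .js extension,
// which matches the compiled output the browser will actually fetch).
import { add } from './math.js';

console.log(add(2, 3)); // 5
```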

Technologies like service workers and the advent of PWAs have really pushed webapps toward a more app-like experience. I hope that Apple and Google continue to push the industry forward to give the web a fighting chance.

The web has changed drastically since I started way back on my Packard Bell, but I’m excited to see what the future has in store. I look forward to adding more to this story as the years pass…