Computers touch almost every aspect of our lives today. We take the way they work for granted, along with the unsung heroes who built the technology, protocols, philosophies, and circuit boards, patched them all together - and sometimes willed amazingness out of nothing. Not in this podcast. Welcome to the History of Computing. Let's get our nerd on!
Lotus: From Yoga to Software
Nelumbo nucifera, or the sacred lotus, is a plant that grows in flood plains, rivers, and deltas. Their seeds can remain dormant for years and, when floods come along, blossom into a colony of plants and flowers. Some of the oldest seeds can be found in China, where they're known to represent longevity. No surprise, given their level of nutrition and connection to the waters that irrigated crops by then. They also grow in far away lands, all the way to India and out to Australia. The flower is sacred in Hinduism and Buddhism, and further back in ancient Egypt. Padmasana is a Sanskrit term combining Padma, or lotus, and Asana, or posture. The Pashupati seal from the Indus Valley civilization shows a deity in what's widely considered the first documented yoga pose, from around 2,500 BCE. 2,700 years later (give or take a century), the Hindu author and mystic Patanjali wrote a work referred to as the Yoga Sutras. Here he outlined the original asanas, or sitting yoga poses. The Rig Veda, from around 1,500 BCE, is the oldest currently known Vedic text. It is also the first to use the word "yoga". It describes songs, rituals, and mantras the Brahmans of the day used - as well as the Padma. Further Vedic texts explore how the lotus grew out of Lord Vishnu with Brahma in the center. He created the Universe out of lotus petals. Lakshmi went on to grow out of a lotus from Vishnu as well. It was only natural that humans would attempt to align their own meditation practices with the beauty of the lotus. By the 300s, art and coins showed people in the lotus position. It was described in texts that survive from the 8th century. Over the centuries contradictions in texts were clarified in a period known as Classical Yoga, then Tantra and Hatha Yoga were developed and codified in the Post-Classical Yoga age, and as empires grew and India became a part of the British Empire, Yoga began to travel to the west in the late 1800s.
By 1893, Swami Vivekananda gave lectures at the Parliament of Religions in Chicago. More practitioners meant more systems of yoga. Yogendra brought asanas to the United States in 1919, as more Indians migrated to the United States. Babaji's kriya yoga arrived in Boston in 1920. Then, as we've discussed in previous episodes, the United States tightened immigration in the 1920s and people had to go to India to get more training. Theos Bernard's Hatha Yoga: The Report of a Personal Experience brought some of that knowledge home when he came back in 1947. Indra Devi opened a yoga studio in Hollywood and wrote books for housewives. She brought a whole system, or branch, home. Walt and Magana Baptiste opened a studio in San Francisco. Swamis began to come to the US and more schools were opened. Richard Hittleman began to teach yoga in New York and began to teach on television in 1961. He was one of the first to separate the religious aspect from the health benefits. By 1965, the immigration quotas were removed and a wave of teachers came to the US to teach yoga. The Beatles went to India in 1966 and 1968, and for many Transcendental Meditation took root, which has now grown to over a thousand training centers and over 40,000 teachers. Swamis opened meditation centers and institutes, and even started magazines. Yoga became so big that Rupert Holmes even poked fun at it in his song "Escape (The Piña Colada Song)" in 1979. Yoga had become part of the counter-culture, and the generation that followed represented a backlash of sorts. A common theme of the rise of personal computers is that the early pioneers were a part of that counter-culture. Mitch Kapor graduated high school in 1967, just in time to be one of the best examples of that. Kapor built his own calculator as a kid before going to camp to get his first exposure to programming on a Bendix. His high school got one of the IBM 1620 minicomputers and he got the bug.
He went off to Yale at 16 and learned to program in APL and then found Computer Lib by Ted Nelson and learned BASIC. Then he discovered the Apple II. Kapor did some programming for $5 per hour as a consultant, started the first east coast Apple User Group, and did some work around town. There are generations of people who did and do this kind of consulting, although now the rates are far higher. He met a grad student through the user group named Eric Rosenfeld who was working on his dissertation and needed some help programming, so Kapor wrote a little tool that took the idea of statistical analysis from the Time Shared Reactive Online Library, or TROLL, and ported it to the microcomputer, which he called Tiny Troll. Then he enrolled in the MBA program at MIT. He got a chance to see VisiCalc and meet Bob Frankston and Dan Bricklin, who introduced him to the team at Personal Software. Personal Software was founded by Dan Fylstra and Peter Jennings when they published Jennings' Microchess for the KIM-1 computer. That led to ports for the 1977 Trinity of the Commodore PET, Apple II, and TRS-80, and by then they had taken Bricklin and Frankston's VisiCalc to market. VisiCalc was the killer app for those early PCs and helped make the Apple II successful. Personal Software brought Kapor on, as well as Bill Coleman, later of BEA Systems, and Electronic Arts cofounder Rich Melmon. Today, software developers get around 70 percent royalties to publish software on app stores, but at the time, fees were closer to 8 percent, a model pulled from book royalties. Much of the rest went to production of the box and disks, the sales and marketing, and support. Kapor was to write a product that could work with VisiCalc. By then Rosenfeld was off to the world of corporate finance, so Kapor moved to Silicon Valley, learned how to run a startup, moved back east in 1979, and released VisiPlot and VisiTrend in 1981. He made over half a million dollars in royalties in the first six months.
By then, he bought out Rosenfeld's shares in what he was doing and hired Jonathan Sachs, who had been at MIT earlier, where he wrote the STOIC programming language, before going to work at Data General. Sachs worked on spreadsheet ideas at Data General with a manager there, John Henderson, but after they left Data General and the partnership fell apart, he worked with Kapor instead. They knew that for software to be fast, it needed to be written in a lower level language, so they picked Intel 8088 assembly language, given that C wasn't fast enough yet. The IBM PC came in 1981 and everything changed. Mitch Kapor and Jonathan Sachs started Lotus in 1982. Sachs got to work on what would become Lotus 1-2-3. Kapor turned out to be a great marketer and product manager. He listened to what customers said in focus groups. He pushed to make things simpler and use less jargon. They released their new spreadsheet tool in 1983 and it worked flawlessly on the IBM PC. While Microsoft had Multiplan and VisiCalc was the incumbent spreadsheet program, Lotus quickly took market share from them and from SuperCalc. Conceptually it looked similar to VisiCalc. They used the letter A for the first column, B for the second, etc. That has now become a standard in spreadsheets. They used the number 1 for the first row, the number 2 for the second. That too is now a standard. They added a split screen, also now a standard. They added macros, with branching if-then logic. They added different video modes, which could give color and bitmapping. They added an underlined letter so users could pull up a menu and quickly select the item they wanted once they had those orders memorized, now a standard in most menuing systems. They added the ability to add bar charts, pie charts, and line charts. One could even spread their sheet across multiple monitors like in a magazine.
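The column-letter and row-number conventions Lotus helped make standard amount to a small algorithm: columns count in a letters-only system (A through Z, then AA, AB, and so on), sometimes called bijective base 26. A minimal sketch of that mapping - the function names here are my own for illustration, not anything from Lotus:

```python
def column_label(n: int) -> str:
    """Map a zero-based column index to a spreadsheet-style letter label:
    0 -> A, 25 -> Z, 26 -> AA, and so on (bijective base-26 counting)."""
    label = ""
    n += 1  # shift to one-based for the bijective base-26 arithmetic
    while n > 0:
        n, rem = divmod(n - 1, 26)
        label = chr(ord("A") + rem) + label
    return label

def cell_name(row: int, col: int) -> str:
    """Combine both conventions: column letter first, then a one-based row."""
    return f"{column_label(col)}{row + 1}"
```

So `cell_name(0, 0)` gives the familiar "A1", and the scheme extends past 26 columns without ambiguity - the same convention nearly every spreadsheet since has kept.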
They refined how fields are calculated and took advantage of larger amounts of memory to make Lotus far faster than anything else on the market. They went to Comdex towards the end of the year and introduced Lotus 1-2-3 to the world. The software could be used as a spreadsheet, but the 2 and 3 referred to graphics and database management. They did $900,000 in orders there before they went home. They couldn't even keep up with the duplication of disks. Comdex was still invitation only. It became so popular that it was used to test for IBM compatibility by clone makers, and where VisiCalc became the app that helped propel the Apple II to success, Lotus 1-2-3 became the app that helped propel the IBM PC to success. Lotus was rewarded with $53 million in sales for 1983 and $156 million in 1984. Mitch Kapor suddenly found himself running one of the fastest-growing software companies around. They quickly scaled from less than 20 to 750 employees. They brought in Freada Klein, who had her PhD, to be the Head of Employee Relations and charged her with making them the most progressive employer around. After her success at Lotus, she left to start her own company and later married Kapor. Sachs left the company in 1985 and moved on to focus solely on graphics software. He still responds to requests on the phpBB forum at dl-c.com. They ran TV commercials. They released a suite of Mac apps they called Lotus Jazz. More television commercials. Jazz didn't go anywhere and only sold 20,000 copies. Meanwhile, Microsoft released Excel for the Mac, which sold ten times as many. Some blamed the lack of sales on the stringent copy protection. Others blamed the lack of memory to do cool stuff. Others blamed the high price. It was the first major setback for the young company. After a meteoric rise, Kapor left the company in 1986, at about the height of their success. He replaced himself with Jim Manzi. Manzi pushed the company into network applications.
These would become the center of the market but were just catching on and didn't prove to be a profitable venture just yet. A defensive posture, rather than expanding into an adjacent market, would have made sense - at least it would have if anyone had known how aggressive Microsoft was about to get. Manzi was far more concerned about the millions of illegal copies of the software in the market than innovation, though. As we turned the page to the 1990s, Lotus had moved to a product built in C and introduced the ability to use graphical components in the software, but it wouldn't be ported to the new Windows operating system until 1991, for Windows 3. By then there were plenty of competitors, including Quattro Pro, and while Microsoft Excel began on the Mac, since its release for Windows in 1987 it had been a showcase of the cool new features a windowing operating system could provide an application - especially what they called 3D charts, and tabbed spreadsheets. There was no catching up to Microsoft by then and sales steadily declined. By then, Lotus had released Lotus Agenda, an information manager that could be used for time management, project management, and as a database. Kapor was a great product manager, so it stands to reason he would build a great product to manage products. Agenda never found commercial success though, so it was later open sourced under the GPL. Bill Gross wrote Magellan there before he left to found GoTo.com, which was renamed Overture, pioneered the idea of paid search advertising, and was acquired by Yahoo!. Magellan cataloged the contents of internal drives and so became a search engine for them. It sold half a million copies and should have been profitable, but was cancelled in 1990. They also released a word processor called Manuscript in 1986, which never gained traction and was cancelled in 1989, just when a suite of office automation apps needed to be more cohesive.
Ray Ozzie had been hired at Software Arts to work on VisiCalc and then helped Lotus get Symphony out the door. Symphony shipped in 1984 and expanded from a spreadsheet to add text with the DOC word processor, charts with the GRAPH graphics program, FORM for a table management solution, and COM for communications. Ozzie dutifully shipped what he was hired to work on, but had a deal that when they were done he could build a company that would design software Lotus would then sell. A match made in heaven, as Ozzie had worked on PLATO and borrowed the ideas of PLATO Notes, a collaboration tool developed at the University of Illinois Urbana-Champaign, to build what he called Lotus Notes. PLATO was more than productivity. It was a community that spanned decades, and Control Data Corporation had failed to take it to the mass corporate market. Ozzie took the best parts for his company, Iris Associates, and built the product in isolation from the rest of Lotus. They finally released it as Lotus Notes in 1989. It was a huge success and Lotus bought Iris in 1994. Yet they never found commercial success with other socket-based client-server programs, and IBM acquired Lotus in 1995. That product is now known as Domino, the name of the Notes 4 server, released in 1996. Ozzie went on to build a company called Groove Networks, which was acquired by Microsoft, who appointed him one of their Chief Technology Officers. When Bill Gates left Microsoft, Ozzie took the position of Chief Software Architect he vacated. He and Dave Cutler went on to work on a project called Red Dog, which evolved into what we now know as Microsoft Azure. Few would have guessed that Ozzie and Kapor's handshake agreement on Notes could have become a real product. Not only could people not understand the concept of collaboration and productivity on a network in the late 1980s, but that type of deal hadn't been done. But Kapor by then realized that larger companies had a hard time shipping net-new software properly.
Sometimes those projects are best done in isolation. And all the better if the parties involved are financially motivated with shares, like Kapor wanted in Personal Software in the 1970s before he wrote Lotus 1-2-3. VisiCalc had sold about a million copies, but production ceased the same year Excel was released. Lotus hung on longer than most who competed with Microsoft on any beachhead they blitzkrieged. Microsoft released Exchange Server in 1996, and Notes had a few good years before Exchange moved in to become the standard in that market. Excel began on the Mac but eventually took the market from Lotus, after Charles Simonyi stepped in to help make the product great. Along the way, the Lotus ecosystem created other companies, just as they were born in the Visi ecosystem. Symantec became what we now call a "portfolio" company in 1985 when they introduced NoteIt, a natural language processing tool used to annotate docs in Lotus 1-2-3. Bill Gates mentioned Lotus by name multiple times as a competitor in his Internet Tidal Wave memo in 1995. He mentioned specific features, like how they could do secure internet browsing and that they had a web publisher tool - Microsoft's own FrontPage was released in 1995 as well. He mentioned an internet directory project with Novell and AT&T. Active Directory was released a few years later, in 1999, after Jim Allchin had come in to help shepherd LAN Manager. Notes itself survived into the modern era, but by 2004 Blackberry released their Exchange connector before they released the Lotus Domino connector. That's never a good sign. Some of the history of Lotus is covered in Scott Rosenberg's 2007 book, Dreaming in Code. Others are documented here and there in other places. Still others are lost to time. Kapor went on to invest in UUNET, which became a huge early internet service provider. He invested in RealNetworks, who launched the first streaming media service on the Internet.
He invested in the creators of Second Life. He never seemed vindictive with Microsoft, but after AOL acquired Netscape and Microsoft won the first browser war, he became the founding chair of the Mozilla Foundation and so helped bring Firefox to market. By 2006, Firefox took 10 percent of the market and went on to be a dominant force in browsers. Kapor has also sat on boards and acted as an angel investor for startups ever since leaving the company he founded. He also flew to Wyoming in 1990 after he read a post on The WELL from John Perry Barlow. Barlow was one of the great thinkers of the early Internet. They worked with John Gilmore - Sun Microsystems alum, GNU Debugger maintainer, and cypherpunk - to found the Electronic Frontier Foundation, or EFF. The EFF has since been the nonprofit that leads the fight for "digital privacy, free speech, and innovation." So not everything is about business.
6/27/2023 • 24 minutes, 22 seconds
Section 230 and the Concept of Internet Exceptionalism
We covered computer and internet copyright law in a previous episode. That type of law began with interpretations that tried to take the technology out of cases so they could be interpreted as though what was being protected was a printed work, or at least it did for a time. But when it came to the internet, laws, case law, and their knock-on effects, the body of jurisprudence began to diverge. Safe Harbor mostly refers to the Online Copyright Infringement Liability Limitation Act, or OCILLA for short, a law passed in 1998 as part of the DMCA that shields online portals and internet service providers from copyright infringement claims. Immunity from copyright infringement is one form of protection, but more was needed. Section 230 is another law, one that protects those same organizations from being sued for third-party content uploaded to their sites. That's the law Trump wanted overturned during his final year in office, but given that the EU has Directive 2000/31/EC, Australia has the Defamation Act of 2005, Italy has the Electronic Commerce Directive 2000, and lots of other countries like England and Germany have had courts find similarly, it is now part of being an Internet company. Although the future of "big tech" cases (and the damage many claim is being done to democracy) may find it refined or limited. That's because the concept of Internet Exceptionalism itself is being reconsidered now that the internet is here to stay. Internet Exceptionalism is a term for laws that diverge from precedents for other forms of media distribution. For example, a newspaper can be sued for libel or defamation, but a website is mostly shielded from such suits because the internet is different. Pages are available instantly, changes can be made instantly, and the reach is far greater than ever before. The internet has arguably become the greatest tool to spread democracy and yet potentially one of its biggest threats.
Which some might have argued about newspapers, magazines, and other forms of print media in centuries past. The very idea of Internet Exceptionalism has eclipsed the original intent. Chris Cox and Ron Wyden initially intended to help fledgling Internet Service Providers (ISPs) jumpstart content on the internet. The internet had been privatized in 1995 and companies like CompuServe, AOL, and Prodigy were already under fire for the content on their closed networks. Cubby v. CompuServe in 1991 had found that online providers weren't considered publishers of content and couldn't be held liable for free speech practiced on their platforms, in part because they did not exercise editorial control of that content. Stratton Oakmont v. Prodigy found that Prodigy did have editorial control (and in fact advertised themselves as having a better service because of it) and so could be found liable like a newspaper would. Cox and Wyden were one of the few conservative and liberal pairs of lawmakers who could get along in the divisive era when Newt Gingrich came to power and tried to block everything Bill Clinton tried to do. Yet there were aspects of the United States that were changing outside of politics. Congress spent years negotiating a telecommunications overhaul bill that came to be known as The Telecommunications Act of 1996. New technology led to new options. Some saw content they found to be indecent, and so the Communications Decency Act (or Title V of the Telecommunications Act) was passed in 1996, but in Reno v. ACLU it was found to be a violation of the First Amendment and struck down by the Supreme Court in 1997. Section 230 of that act was specifically about the preservation of free speech, and so it was severed from the act and stood alone. It would be adjudicated time and time again and eventually became an impenetrable shield that protects online providers from the need to scan every message posted to a service to see if it would get them sued.
Keep in mind that society itself was changing quickly in the early 1990s. Tipper Gore wanted to slap a label on music to warn parents that it had explicit lyrics. The "Satanic Panic," as it's called by history, reused tropes such as cannibalism and child murder to give the moral majority an excuse to try to restrict that which they did not understand. Conservative and progressive politics have always been a two-steps-forward, one-step-back truce. Heavy metal would seem like nothin' once parents heard the lyrics of gangster rap. But Section 230 continued on. It stated that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." It only took 26 words to change the world. They said that the people who host the content can't be sued for the content because, as courts interpreted it, it's free speech. Think of a public forum like a hall on a college campus that might restrict one group from speaking and so suppress speech or censor a group. Now, Section 230 didn't say providers weren't allowed to screen material, but instead shielded them from being held liable for that material. The authors of the bill felt that if providers would be held liable for any editing, they wouldn't do any. Now providers could edit some without reviewing every post. And keep in mind, the volume of posts on message boards and of new websites had already become too much by the late 1990s to be manually monitored. Further, as those companies became bigger business, they became more attractive targets for lawsuits. Section 230 had some specific exclusions. Any criminal law could still be applied, as could state, sex trafficking, and privacy laws. Intellectual property laws also remained untouched, thus OCILLA. To be clear, reading the law, the authors sought to promote the growth of the internet - and it worked. Yelp gets sued over reviews but cases are dismissed.
Twitter can get sued over a Tweet when someone doesn't like what is said, but it's the poster and not Twitter who is liable. Parody sites, whistleblower sites, watchdog sites, review sites, blogs - an entire industry was born, in which each player of what would later be known as the Web 2.0 market could self-regulate. Those businesses grew far beyond the message boards of the 1990s. This was also a time when machine learning became more useful. A site like Facebook could show a feed of posts not in reverse chronological order, but instead by "relevance." Google could sell ads and show them based on the relevance of a search term. Google could buy YouTube and put ads on videos. Case after case poked at the edges of what could be used to hold a site liable. The fact that the courts saw a post on Reddit as free speech, no matter how deplorable the comments, provided a broad immunity to liability that was, well, exceptional in a way. Some countries could fine or imprison people if they posted something negative about the royal family or the party in charge. Some of those countries saw the freedom of speech as so important that it was a weapon that could be used against the US, in a way. The US became a safe haven of sorts for free speech, and many parts of the internet were anonymous. In this way (as was previously done with films and other sources of entertainment and news) the US began to export the culture of free speech. But every country also takes imports. Some of those were real, true ideas, homegrown or brought in from abroad. Early posters on message boards might claim the Armenian Genocide was a hoax - or the Holocaust. A single post could ruin a career. Craigslist allowed for sex trafficking and while they eventually removed that, sites like Backpage have received immunity. So even some of the exceptions are, um, not. Further, extremist groups use pages to spread propaganda and even recruit soldiers to spread terror.
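That shift from reverse-chronological feeds to "relevance" ranking can be sketched with a toy scorer. The fields and weights below are invented purely for illustration and bear no relation to any platform's actual algorithm - they just show how a ranked feed can surface an older, heavily engaged post above a brand-new one:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often the viewer interacts with this author, 0..1
    engagement: int         # likes and comments so far
    age_hours: float        # hours since the post went up

def relevance(post: Post) -> float:
    # Toy score: reward affinity and engagement, decay with age.
    # The weights are made up for illustration only.
    return (2.0 * post.author_affinity + 0.1 * post.engagement) / (1.0 + post.age_hours)

def rank_feed(posts: list[Post]) -> list[Post]:
    # "Relevance" order instead of reverse-chronological order.
    return sorted(posts, key=relevance, reverse=True)

# An older post with heavy engagement can outrank a brand new quiet one.
old_viral = Post(author_affinity=0.2, engagement=500, age_hours=12.0)
fresh_quiet = Post(author_affinity=0.2, engagement=0, age_hours=0.1)
feed = rank_feed([fresh_quiet, old_viral])
```

The legal significance is in the design choice itself: once a site picks what to amplify rather than just listing posts in order, the question becomes whether that recommendation step is still a "neutral tool."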
The courts found that sites were immune to suits over fake profiles on dating sites - even if the profile was of a famous person and that person was getting threatening calls. The courts initially found sites needed to take down content if they were informed it was libelous - but sites have since received broad immunity even when they don't, due to the sheer amount of content. Batzel v. Smith saw a lawyer's career ruined over false reports that she was the granddaughter of the Nazi Heinrich Himmler and a beneficiary of Nazi art theft; even though she wasn't, she too lost her case. Sites provide neutral tools and so are shielded from defamation - and even when they're only neutral-ish, you rarely see them held to account. In Goddard v. Google, the Google Keyword Tool recommended that advertisers include the word "free" in mobile content, which Goddard claimed led to fraudulent subscription service recruitment. These were machine learning-based recommendations. The court again found that, provided the Keyword Tool was neutral, advertisers could adopt or reject the recommendation. Still, time and time again the idea of safe harbor for internet companies, and whether internet exceptionalism should continue, comes up. The internet gave a voice to the oppressed, but also to the oppressors. That's neutrality in a way, except that the oppressors (especially when state-sponsored actors are involved) often have more resources to drown out other voices, just like in real life. Some have argued a platform like Facebook should be held accountable for their part in the Capitol riots, which is to say as a place where people practiced free speech. Others look to Backpage as facilitating the exploitation of children or as a means of oppression. Others still see terrorist networks as existing and growing because of the ability to recruit online. The Supreme Court was set to hear docket number 21-1333 in its 2022 term. Gonzalez v. Google was brought by Reynaldo Gonzalez, and looks at whether Section 230 can immunize Google even though they have made targeted recommendations - in this case when ISIS used YouTube videos to recruit new members - through the recommendation algorithm. An algorithm that would be neutral. But does a platform that powerful have a duty to do more, especially when there's a chance that Section 230 bumps up against anti-terrorism legislation? Again and again the district courts in the United States have found Section 230 provides broad immunization to online content providers. Now, the Supreme Court will weigh in. After that, billions of dollars may have to be pumped into better content filtration, or the courts may continue to apply broad First Amendment guidance. The Supreme Court is packed with "originalists". They still have phones, which the framers did not. The duty that common law places on those who can disseminate negligent or reckless content has lost the requirement for reasonable care due to the liability protections afforded purveyors of content by Section 230. This has given rise to hate speech and misinformation. John Perry Barlow's famous A Declaration of the Independence of Cyberspace, written in protest of the CDA, was itself supported by Section 230 of that same law. But the removal of the idea and duty of reasonable care, along with the exemptions, has now removed accountability from what seems like any speech. Out of the ashes of accountability, the very concept of free speech and where the duty of reasonable care lies may be reborn. We now have the ability to monitor via machine learning, we've now redefined what it means to moderate, and there's now a robust competition for eyeballs on the internet. We've also seen how a lack of reasonable standards can lead to real life consequences and that an independent cyberspace can bleed through into the real world.
If the Supreme Court simply upholds findings from the past, then the movement towards internet sovereignty may accelerate or may stay the same. Look to where venture capital flows for clues as to how the First Amendment will crash into the free market, and see if its salty waters leave data and content aggregators with valuations far lower than where they once were. The asset of content may some day become a liability, with injuries that could provide an existential threat to the owner. The characters may walk the astral plane but eventually must return to the prime material plane along their tether to take a long rest or face dire consequences. The world simply can't continue to become more and more toxic - and yet there's a reason the First Amendment is, well, first. Check out Jeff Kosseff's book The Twenty-Six Words That Created the Internet. What will it take to save it?
6/5/2023 • 19 minutes, 9 seconds
Bluetooth: From Kings to Personal Area Networks
The King
Ragnar Lodbrok was a legendary Norse king, conquering parts of Denmark and Sweden. And if we're to believe the songs, he led some of the best raids against the Franks and the loose patchwork of nations Charlemagne put together called the Holy Roman Empire. We use the term legendary as the stories of Ragnar were passed down orally and don't necessarily reconcile with other written events. In other words, the man in the songs sung by the bards of old is likely in fact a composite of deeds from many a different hero of the Norse. Ragnar supposedly died in a pit of snakes at the hands of the Northumbrian king, and his six sons formed a Great Heathen Army to avenge their father. His sons ravaged modern England in the wake of their father's death before becoming leaders of various lands they either inherited or conquered. One of those sons, Sigurd Snake-in-the-Eye, returned home to rule his lands and had children, including Harthacnut. He in turn had a son named Gorm. Gorm the Old was a Danish king who lived to be nearly 60 in a time when life expectancy for most was about half that. Gorm raised a Jelling stone in honor of his wife Thyra. As did his son, in honor of his wife. That stone is carved with runes that say: "King Haraldr ordered this monument made in memory of Gormr, his father, and in memory of Thyrvé, his mother; that Haraldr who won for himself all of Denmark and Norway and made the Danes Christian." That stone was erected by a Danish king named Harald Gormsson, better known as Harald "Bluetooth". He converted to Christianity as part of a treaty with the Holy Roman Emperor of the day. He united the tribes of Denmark into a kingdom. One that would go on to expand the reach and reign of the line. Just as Bluetooth would unite devices. Even the logo is a combination of the runes that make up his initials, HB. Once united, their descendants would go on to rule Denmark, Norway, and England. For a time.
Just as Bluetooth would go on to be an important wireless protocol. For a time.
Personal Area Networks
Many early devices shipped with infrared so people could use a mouse or keyboard. But those never seemed to work so great. And computers with a mouse and keyboard and drawing pad and camera and Zip drive and everything else meant that not only did devices have to be connected to sync, but they also had to pull a lot of power and create an even bigger mess on our desks. What the world needed instead was an inexpensive chip that could communicate wirelessly and not pull a massive amount of power, since some devices would be in constant communication. And if we needed a power cord then we might as well just use USB or those RS-232 interfaces (serial ports) that were initially developed in 1960 - and that were slow and cumbersome. We could call this a Personal Area Network, or PAN. The Palm Pilot was popular, but docking and plugging in that serial port was not exactly optimal. Yet every ATX motherboard had a port or two. So a Bluetooth Special Interest Group was formed to conceive and manage the standard in 1998, and while it initially had half a dozen companies, it now has over 30,000. The initial development started in the late 1990s at Ericsson. The protocol would use short-range UHF radio waves in the 2.402 to 2.480 GHz band to exchange data with computers and cell phones, which were evolving into mobile devices at the time. The technology was initially showcased at COMDEX in 1999. Within a couple of years there were phones that could sync, kits for cars, headsets, and chips that could be put into devices - or cards or USB adapters - to get a device to sync at 721 Kbps. A primary device could pair with up to seven secondary devices. The devices then frequency hopped using a sequence derived from the primary's Bluetooth device address and clock, which the primary shares with the secondaries when they connect, so every device in the piconet changes channels in lockstep.
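That lockstep channel hopping can be illustrated with a toy model. The real hop-selection kernel is defined in the Bluetooth Core Specification; this sketch only demonstrates the key property, that any device knowing the primary's address and clock computes the same channel at the same tick. The one real constant borrowed here is that classic Bluetooth hops across 79 one-MHz channels starting at 2402 MHz; the hashing scheme and the address value are invented for illustration:

```python
import hashlib

# Classic Bluetooth channels: 2402 MHz through 2480 MHz, 1 MHz apart.
CHANNELS = [2402 + k for k in range(79)]

def hop_channel(primary_address: int, clock: int) -> int:
    """Toy stand-in for the hop-selection kernel: deterministically pick
    a channel (in MHz) from the primary's device address and clock tick."""
    digest = hashlib.sha256(f"{primary_address:012x}:{clock}".encode()).digest()
    return CHANNELS[digest[0] % len(CHANNELS)]

# The primary shares its address and clock when devices connect, so a
# secondary independently computes the identical hop sequence.
primary_addr = 0x001122334455  # hypothetical 48-bit device address
primary_view = [hop_channel(primary_addr, t) for t in range(10)]
secondary_view = [hop_channel(primary_addr, t) for t in range(10)]
```

Hopping like this also gives the protocol some resilience: if one channel is noisy (say, from a nearby microwave oven or Wi-Fi), the piconet only lingers there for a single slot.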
And unlike a lot of other wireless technologies, it just kinda’ worked. And life seemed good. Bluetooth went to the IEEE, which had assigned networking the 802 standard, with Ethernet being 802.3 and Wi-Fi being 802.11. So Personal Area Networks became 802.15, with Bluetooth 1.1 becoming 802.15.1. The first Bluetooth phone, the Ericsson T39, shipped in 2001. Bluetooth 2 came in 2005 and gave us 2.1 Mbps speeds and increased the range from 10 to 30 meters. By then, over 5 million devices were shipping every week. More devices meant a larger attack surface. And security researchers were certainly knocking at the door. Bluetooth 2.1 added secure simple pairing. Then Bluetooth 3 came in 2009, bringing those speeds up to 24 Mbps and allowing Wi-Fi to pick up connections once established. But we were trading energy for speed, and this wasn’t really the direction Bluetooth needed to go. Even if a billion devices had shipped by the end of 2006.
Bluetooth 4
The mobility era was upon us and it was increasingly important, not just for the ARM chips, but also for the rest of the increasing number of devices, to use less power. Bluetooth 4 came along in 2010 and was slower at 1 Mbps, but used less energy. This is when the iPhone 4S line fully embraced the technology, helping make it a standard. While not directly responsible for the fitness tracker craze, it certainly paved the way for a small coin cell battery to run these types of devices for long periods of time. And it allowed for connecting devices up to 100 meters, or well over 300 feet, away. So leave the laptop in one room and those headphones should be fine in the next. And while we’re at it, maybe we want those headphones to work on two different devices. This is where Multipoint comes into play. That’s the feature of Bluetooth 4 that allows those devices to pass seamlessly between the phone and the laptop, maintaining a connection to each. Apple calls their implementation of this feature Handoff. 
Bluetooth 5 came in 2016, allowing for connections up to 240 meters, or around 800 feet - depending, as with other protocols, on what’s between us and our devices. We also got up to 2 Mbps, which dropped as we moved further away from devices. Thus we might get buffering issues or slower transfers with weaker connections, but not an outright dropped connection.
Bluetooth Evolves
Bluetooth was in large part developed to allow our phones to sync to our computers. Most don’t do that any more. And the developers wanted to pave the way for wireless headsets. But it also allowed us to get smart scales, smart bulbs, wearables like smart watches and glasses, Bluetooth printers, webcams, keyboards, mice, GPS devices, thermostats, and even a little device that tells me when I need to water the plants. Many home automation devices, or IoT devices as we seem to call them these days, began as Bluetooth, but given that we want them to work when we take all our mostly mobile computing devices out of the home, many of those have moved over to Wi-Fi these days. Bluetooth was initially conceived as a replacement for the serial port. Higher throughput needs moved to USB and USB-C. Lower throughput has largely moved to Bluetooth, with the protocol split between Low Energy and higher-bandwidth applications, which with high-definition audio now include headphones. Once, the higher throughput needs went to parallel and SCSI, but now there are so many other options. And the line is blurred between what goes where. Billions of routers and switches have been sold, and billions of wireless access points. Systems on a Chip now include Wi-Fi and Bluetooth together on the same chip. The programming languages for native apps have also given us frameworks and APIs where we can establish a connection over 5G, Wi-Fi, or Bluetooth, and then hand them off where the needs diverge. Seamless to those who use our software and elegant when done right. 
Today over four billion Bluetooth devices ship per year, growing at about 10 percent a year. The original needs that various aspects of Bluetooth were designed for have moved to other protocols, and the future of the Personal Area Network may at least in part move to Wi-Fi or 5G. But for now it’s a standard that has aged well and continues to make life easier for those who use it.
5/17/2023 • 13 minutes, 10 seconds
One History Of 3D Printing
One of the hardest parts of telling any history is which innovations are significant enough to warrant mention. Too much, and the history is so vast that it can't be told. Too few, and it's incomplete. Arguably, no history is ever complete. Yet there's a critical path of innovation to get where we are today, and hundreds of smaller innovations that get missed along the way, or are out of scope for this exact story. Children have probably been placing sand into buckets to make sandcastles since the beginning of time. Bricks have survived from around 7500 BCE in modern-day Turkey, where humans made molds to allow clay to dry and bake in the sun until it formed bricks. Bricks that could be stacked. And it wasn’t long before molds were used for more. Now we can just print a mold on a 3D printer. A mold is simply a block with a hollow cavity that allows putting some material in there. People then allow it to set and pull out a shape. Humanity has known how to do this for more than 6,000 years, initially with lost wax casting, with statues surviving from the Indus Valley Civilization, which stretched between parts of modern-day Pakistan and India. That evolved to allow casting in gold and silver and copper, and then flourished in the Bronze Age when stone molds were used to cast axes around 3,000 BCE. The Egyptians used plaster to cast molds of the heads of rulers. So molds and then casting were known throughout the time of the earliest written works, and so from the beginning of civilization. The next few thousand years saw humanity learn to pack more into those molds, to replace objects from nature with those we made synthetically, and ultimately molding and casting did their part on the path to industrialization. As we came out of the industrial revolution, the impact of all these technologies gave us more and more options, both in terms of free time for humans to think as well as new modes of thinking. 
And so in 1868 John Wesley Hyatt invented injection molding, patenting the machine in 1872. And we were able to mass produce not just with metal and glass and clay but with synthetics. More options came, but that whole idea of a mold to avoid manual carving and to be able to produce replicas stretched back far into the history of humanity. So here we are on the precipice of yet another world-changing technology becoming ubiquitous. And yet not quite. 3D printing still feels like a hobbyist's journey rather than a mature technology like we see in science fiction shows like Star Trek, with their replicators, or printing a gun in the Netflix show Lost in Space. In fact, the initial idea of 3D printing came from a story called Things Pass By, written all the way back in 1945! I have a love-hate relationship with 3D printing. Some jobs just work out great. Others feel very much like personal computers in the hobbyist era - just hacking away until things work. It’s usually my fault when things go awry. Just as it was when I wanted to print things out on the dot matrix printer on the Apple II. Maybe I fed the paper crooked, or didn’t check that there was ink first, or sent the print job using the wrong driver. One of the many things that could go wrong. But those fast prints don’t match with the reality of leveling and cleaning nozzles, waiting for them to heat up, and pulling filament out of weird places (how did it get there, exactly?). Or printing 10 add-ons for a printer to make it work the way it probably should have out of the box. Another area where 3D printing is similar to the early days of the personal computer revolution is that there are a few different types of technology in use today. These include color-jet printing (CJP), direct metal printing (DMP), fused deposition modeling (FDM), laser additive manufacturing (LAM), multi-jet printing (MJP), stereolithography (SLA), selective laser melting (SLM), and selective laser sintering (SLS). 
Each could be better for a given type of print job. Some forms have flourished while others are either in their infancy or have been abandoned, like extinct languages. Language isolates are languages that don’t fit into other families. Many are the last in a branch of a larger language family tree. Others come out of geographically isolated groups. Technology also has isolates. Konrad Zuse built computers in pre-World War II Germany that aren't considered to have influenced other computers. In other words, every technology seems to have a couple of false starts. Hideo Kodama filed the first patent to 3D print in 1980 - but his method of using UV light to harden material was never commercialized. Another type of 3D printing involved inkjet printers that shot metal alloys onto surfaces. Inkjet printing was invented by Ichiro Endo at Canon in the 1970s, supposedly when he left a hot iron on a pen and ink bubbled out. Thus the “Bubble Jet” printer. And John Vaught at HP was working on the same idea at about the same time. These were patented and used to print images from computers over the coming decades. Johannes Gottwald patented a printer like this in 1971. Experiments continued through the 1970s, when companies like Exxon were trying to improve various prototyping processes. Some of their engineers joined inventor Robert Howard in the early 1980s to found a company called Howtek, and they produced the Pixelmaster, which used hot-melt solid inks. That work went on to be used by Sanders Prototype, which evolved into a company called Solidscape, to market the Modelmaker. Variants of the technique have since been used to print solar cells, living cells, tissue, and even edible birthday cakes. The same technique is available in a number of different solutions today but isn’t the most widely marketable amongst the types of 3D printers available.
SLA
There’s often a root from which most technology of the day is derived. 
Charles, or Chuck, Hull coined the term stereolithography, where he could lay down small layers of an object and then cure them with UV light, much as dentists do with fillings today. This is made possible by photopolymers, or plastics that are easily cured by ultraviolet light. He then invented the stereolithography apparatus, or SLA for short, a machine that printed from the bottom to the top by focusing a laser on photopolymer in liquid form to cure the plastic into place. He worked on it in 1983, filed the patent in 1984, and was granted the patent in 1986. Hull also developed a file format for 3D printing called STL. STL files describe the surface of a three-dimensional object geometrically, using Cartesian coordinates. Describing coordinates and vectors means we can make objects bigger or smaller when we’re ready to print them. 3D printers print using layers, or slices. Those can change based on the filament in the head of a modern printer, the size of the liquid being cured, and even the heat of a nozzle. So the STL file gets put into a slicer, which converts the coordinates on the outside to the polygons that are cured. These are polygons in layers, so they may appear striated rather than perfectly curved, depending on the size of the layers. However, more layers take more time and energy. Such is the evolution of 3D printing. Hull then founded a company called 3D Systems in Valencia, California to take his innovation to market. They sold their first printer, the SLA-1, in 1988. New technologies start out big and expensive. And that was the case with 3D Systems. They initially sold to large engineering companies, but when solid-state lasers came along in 1996 they were able to provide better systems for cheaper. Languages also have other branches. Another branch in 3D printing came in 1987, just before the first SLA-1 was sold. 
Carl Deckard and his academic adviser Joe Beaman at the University of Texas worked on a DARPA grant to experiment with creating physical objects with lasers. They formed a company called DTM to take their solution to market and filed a patent for what they called selective laser sintering. This compacts and hardens a material with a heat source without having to liquify it. So a laser, guided by a computer, can move around a material and harden areas to produce a 3D model. Now in addition to SLA we had a second option, with the release of the Sinterstation 2500plus. 3D Systems then acquired DTM for $45 million in 2001.
FDM
After Hull published his findings for SLA and created the STL format, other standards we use today emerged. FDM is short for fused deposition modeling and was created by Scott Crump in 1989. He then started a company, Stratasys, with his wife Lisa to take the product to market, taking the company public in 1994. Crump’s first patent expired in 2009. In addition to FDM, there are other formats and techniques. AeroMat made the first 3D printer that could produce metal in 1997. These use a laser additive manufacturing process, where lasers fuse powdered titanium alloys. Some go the opposite direction and create out of bacteria or tissue. That began in 1999, when the Wake Forest Institute for Regenerative Medicine grew a 3D printed urinary bladder in a lab to be used as a transplant. We now call this bioprinting, and it can take tissue and lasers to rebuild damaged organs or even create a new organ. Printed organs are still in their infancy, with successful trials on smaller animals like rabbits. Another aspect is printing dinner using cell fibers from cows or other animals. There are a number of types of materials used in 3D printing. Most printers today use a continuous feed of one of these filaments, or small coiled fibers of thermoplastics that melt instead of burn when they’re heated up. 
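Hull's STL format, described earlier, is simple enough to sketch by hand. Here is a minimal ASCII STL writer in Python; note that a real printable model needs a closed, manifold surface, so a single triangle only demonstrates the format itself:

```python
# Minimal ASCII STL writer: each facet is a surface normal plus
# three vertices. Slicers read a list of these facets and cut the
# resulting surface into layers.

def facet(normal, v1, v2, v3):
    lines = [f"  facet normal {normal[0]} {normal[1]} {normal[2]}",
             "    outer loop"]
    for v in (v1, v2, v3):
        lines.append(f"      vertex {v[0]} {v[1]} {v[2]}")
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def ascii_stl(name, facets):
    body = "\n".join(facet(*f) for f in facets)
    return f"solid {name}\n{body}\nendsolid {name}"

# One upward-facing triangle in the z=0 plane, 10 units on a side.
stl = ascii_stl("triangle", [
    ((0.0, 0.0, 1.0),
     (0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)),
])
print(stl)
```

Because the vertices are plain Cartesian coordinates, scaling the model up or down is just a matter of multiplying every vertex by a factor before writing the file, which is exactly why STL models resize so cleanly.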
The most common in use today is PLA, or polylactic acid, a plastic initially created by Wallace Carothers of DuPont, the same person who brought us nylon, neoprene, and other plastic derivatives. It typically melts between 200 and 260 degrees Celsius. Printers can also take ABS filament, which is short for acrylonitrile butadiene styrene. Other filament types include HIPS, PET, CPE, PVA, and their derivative forms. Filament is fed into a heated extruder assembly that melts the plastic. Once melted, filament extrudes into place through a nozzle as a motor moves the nozzle along the x and y axes for each layer. Once a layer of plastic is finished being delivered to the areas required to make up the desired slice, the motor moves the extruder assembly up or down on the z axis between layers. Filament typically comes in diameters between 1.75 millimeters and 3 millimeters, on spools between half a kilogram and two kilograms. These thermoplastics cool very quickly. Once all of the slices are squirted into place, the print is removed from the bed and the nozzle cools off. Filament comes in a number of colors and styles. For example, wood fibers can be added to filament to get a wood-grained finish. Metal can be added to make prints appear metallic and be part metal. Printing isn’t foolproof, though. Filament often gets jammed, or the spool gets stuck. Filament also needs to be stored in a temperature and moisture controlled location, or it can cause jobs to fail. Sometimes the software used to slice the .stl file has an incorrect setting, like the wrong filament size. But in general, 3D printing using the FDM process is pretty straightforward these days. Yet this is technology that should have moved faster in terms of adoption. The past 10 years have seen more progress than the previous ten, though - primarily due to the maker community.
Enter the Makers
The FDM patent expired in 2009. In 2005, a few years before that, Dr. 
Adrian Bowyer started a project to bring inexpensive 3D printers to labs and homes around the world. That project evolved into what we now call the Replicating Rapid Prototyper, or RepRap for short. RepRap evolved into an open source concept to create self-replicating 3D printers, and by 2008 the Darwin was the first printer to come out of RepRap. As a community started to form, more collaborators designed more parts. Some were custom parts to improve the performance of the printer, or to replicate the printer to become other printers. Others held the computing mechanisms in place. Some even wrote code to make the printer able to boot off a MicroSD card, and then added a network interface so files could be uploaded to the printer wirelessly. There was a rising tide of printers. People were reading about what 3D printers were doing and wanted to get involved. There was also a movement in the maker space, so people wanted to make things themselves. There was a craft to it. Part of that was wanting to share, whether at a maker space or by sharing ideas and plans and code online, like the RepRap team had done. One of those maker spaces was NYC Resistor, founded in 2007. Bre Pettis, Adam Mayer, and Zach Smith from there took some of the work from the RepRap project and had ideas for a few new projects they’d like to start. The first was a site that Zach Smith created called Thingiverse. Bre Pettis joined in, and they allowed users to upload .stl files and trade them. It’s now the largest site for trading designs, with hundreds of thousands of files to print about anything imaginable. Well, everything except guns. Then came 2009. The patent for FDM expired, and a number of companies responded by launching printers and services. Almost overnight the price for a 3D printer fell from $10,000 to $1,000 and continued to drop. Shapeways had been founded the year before to take files and print them for people. 
Pettis, Mayer, and Smith from NYC Resistor also founded a company called MakerBot Industries. They’d already made a bit of a name for themselves with the Thingiverse site. They knew the mind of a maker. And so they decided to make a kit to sell to people who wanted to build their own printers. They sold 3,500 kits in the first couple of years. They had a good brand and knew the people who bought these kinds of devices. So they took venture funding to grow the company, raising $10M in 2011 in a round led by the Foundry Group, along with Bezos, RRE, 500 Startups, and a few others. They hired and grew fast. Smith left in 2012, and they were getting closer and closer with Stratasys, who if we remember were the original creators of FDM. Stratasys ended up buying out the company in 2013 for $403M. Sales were disappointing, so there was a changeup in leadership, with Pettis leaving, and they’ve become much more about additive manufacturing than a company built to appeal to makers. And yet the opportunity to own that market is still there. This was also an era of Kickstarter campaigns. Plenty of 3D printing companies launched through Kickstarter, including some to take PLA (a biodegradable filament) and ABS materials to the next level: the ExtrusionBot, the MagicBox, ProtoPlant's Protopasta, Mixture, Plybot, Robo3D, Mantis, and so many more. Meanwhile, 3D printing was in the news. 2011 saw the University of Southampton design a 3D printed aircraft. Kor Ecologic printed cars, and practically every other car company followed suit, fabricating prototypes with 3D printers - even full cars that ran. Some on their own, some accidentally, when parts were published in .stl files online, violating various patents. Ultimaker was another RepRap company that came out of the early Darwin reviews, founded by Martijn Elserman, Erik de Bruijn, and Siert Wijnia, who couldn’t get the Darwin to work, so they designed a new printer and took it to market. 
After a few iterations, they came up with the Ultimaker 2 and have since been growing and releasing new printers. A few years later, a team of Chinese makers - Jack Chen, Huilin Liu, Jingke Tang, Danjun Ao, and Dr. Shengui Chen - took the RepRap designs and started a company called Creality to manufacture DIY (do it yourself) kits. They have maintained the open source ethos of 3D printing that they inherited from RepRap and developed version after version, even raising over $33M to develop the Ender 6 on Kickstarter in 2018, then building a new factory; they now have the capacity to ship well over half a million printers a year.
The future of 3D Printing
We can now buy 3D printing pens, and there are over 170 3D printer manufacturers, including 3D Systems, Stratasys, and Creality, but also down-market solutions like Fusion3, Formlabs, Desktop Metal, Prusa, and Voxel8. There’s also a RecycleBot concept, and additional patents expire every year. There is little doubt that at some point, instead of driving to Home Depot to get screws or basic parts, we’ll print them. Need a new auger for the snow blower? Just print it. Cover on the weed eater broke? Print it. Need a dracolich mini for the next Dungeons and Dragons game? Print it. Need a new pinky toe? OK, maybe that’s a bit far. Or is it? In 2015, the Swedish company Cellink released a bio-ink made from seaweed and algae, which could be used to print cartilage, and later released the INKREDIBLE 3D printer for bioprinting. The market in 2020 was valued at $13.78 billion, with 2.1 million printers shipped. That’s expected to grow at a compound annual growth rate of 21% for the next few years. But a lot of that is still healthcare, automotive, aerospace, and prototyping. Apple made the personal computer simple and elegant. But no Apple has emerged for 3D printing. Instead it still feels like the Apple II era, where there are 3D printers in a lot of schools and many offer classes on generating files and printing. 
3D printers are certainly great for prototypers and additive manufacturing. They’re great for hobbyists, which we call makers these days. But there will be a time when there is a printer in most homes, the way we have electricity, televisions, phones, and other critical technologies. A few things have to happen first, though, to make the printers easier to use. Every printer needs to automatically level; this is one of the biggest reasons jobs fail and new users become frustrated. We need more consistent filament; spools are still all just a little bit different. Printers need sensors in the extruder that detect if a job should be paused because the filament is jammed, humid, or caught - adding the ability to resume print jobs and waste less filament and time. We need automated slicing in the printer microcode that senses the filament and slices. We need better system boards (e.g. there’s a tool called Klipper that moves the math from the system board on a Creality Ender 3 to a Raspberry Pi). Cameras on the printer should watch jobs and use TinyML to determine as early as possible if they are going to fail, halting printing so it can start over. Most of the consumer solutions don’t have great support; maybe users are limited to calling a place in a foreign country where support hours don’t make sense for them, or maybe the products are just too much of a hacker/maker/hobbyist solution. There needs to be an option for color printing - this could, at first, be a really expensive sprayer or ink like inkjet printers use. We love to paint the minis we make for Dungeons and Dragons, but with automated coloring we could get amazingly accurate resolutions to create amazing things. For a real game changer, the RecycleBot concept needs to be merged with the printer. Imagine if we dropped our plastics into a recycling bin that the 3D printers of the world used to create filament. This would help reduce the amount of plastic used in the world in general. 
And when combined with less moving around of cheap plastic goods that could instead be printed at home, this also means less energy consumed by transporting goods. 3D printing technology is still a generation or two away from being truly mass-marketed. Most hobbyists don’t necessarily think of building an elegant, easy-to-use solution because they are so experienced that it’s hard to understand what the barriers to entry are for everyone else. But the company that finally manages to crack that nut might just be the next Apple, Microsoft, or Google of the world.
5/3/2023 • 30 minutes, 59 seconds
Adobe: From Pueblos to Fonts and Graphics to Marketing
The Mogollon culture was an indigenous culture in the Western United States and Mexico that ranged from New Mexico and Arizona to Sonora, Mexico and out to Texas. They flourished from around 200 CE until the Spanish showed up and claimed their lands. The cultures that pre-existed them date back thousands more years, although archaeology has yet to pinpoint exactly how those evolved. Like many early cultures, they farmed and foraged. As they farmed more, their homes became more permanent, and around 800 CE they began to create more durable homes that helped protect them from wild swings in the climate. We call those homes adobes today, and the people who lived in those pueblos and irrigated water, often moving higher into the mountains, we call the Puebloans - or Pueblo Peoples. Adobe homes are similar to those found in ancient cultures in what we call Turkey today. It’s an independent evolution. Adobe Creek was once called Arroyo de las Yeguas by the monks from Mission Santa Clara, and was then renamed to San Antonio Creek by a soldier, Juan Prado Mesa, when the land around it was given to him by the governor of Alta California at the time, Juan Bautista Alvarado. That’s the same Alvarado as the street, if you live in the area. The creek runs for over 14 miles north from Black Mountain and through Palo Alto, California. The ranchers built their adobes close to the creeks. American settlers led the Bear Flag Revolt in 1846 and took over the garrison of Sonoma, establishing the California Republic - which covered much of the lands of the Puebloans. There were only 33 of them at first, but after John Fremont (yes, he for whom that street is named as well) encouraged the Americans, they raised an army of over 100 men and Fremont helped them march on Sutter’s fort, now with the flag of the United States, thanks to Joseph Revere of the US Navy (yes, another street in San Francisco bears his name). James Polk had pushed to expand the United States. Manifest Destiny. 
Remember the Alamo. Etc. The fort at Monterey fell, and the army marched south. Admiral Sloat got involved. They named a street after him. General Castro surrendered - he got a district named after him. Commodore Stockton announced the US had taken all of California soon after that. Manifest destiny was nearly complete. He’s now basically the patron saint of a city, even if few there know who he was. The forts along the El Camino Real that linked the 21 Spanish missions - a 600-mile road once walked by their proverbial father, Junípero Serra, following the Portolá expedition of 1769 - fell. Stockton took each, moving into Los Angeles, then San Diego. Practically all of Alta California fell with few shots. This was nothing like the battles for the independence of Texas, like when Santa Anna reclaimed the Alamo Mission. Meanwhile, the waters of Adobe Creek continued to flow. The creek was renamed in the 1850s after Mesa built an adobe on the site. Adobe Creek it was. Over the next 100 years, the area evolved into a paradise with groves of trees, and then groves of technology companies. The story of one begins a little beyond the borders of California. Utah was initially explored by Francisco Vázquez de Coronado in 1540 and settled by Europeans in search of furs, along with others who colonized the desert, including those who established the Church of Jesus Christ of Latter-day Saints, or the Mormons, who settled there in 1847, just after the Bear Flag Revolt. The United States officially acquired the territory in 1848; Utah became a territory and, after a number of map changes where the territory got smaller, was finally made a state in 1896. The University of Utah had been founded all the way back in 1850, though - and was re-established in the 1860s. 100 years later, the University of Utah was a hotbed of engineers who pioneered a number of graphical advancements in computing. John Warnock went to grad school there and then went on to co-found Adobe and help bring us PostScript. 
Historically, PS, or postscript, was a message placed at the end of a letter, following the signature of the author. The PostScript language was a language to describe a page of text computationally. It was created at Adobe by Warnock, Doug Brotz, Charles Geschke, Bill Paxton (who worked on the Mother of All Demos with Doug Engelbart during the development of the oN-Line System, or NLS, in the late 1960s, and then at Xerox PARC), and Ed Taft. Warnock invented the Warnock algorithm while working on his PhD and went to work at Evans & Sutherland with Ivan Sutherland, who effectively created the field of computer graphics. Geschke got his PhD at Carnegie Mellon in the early 1970s and then went off to Xerox PARC. They worked with Paxton at PARC, and before long these PhDs and mathematicians had worked out the algorithms, and then the languages, to display images on computers while working on Interpress graphics at Xerox. Geschke left Xerox and started Adobe. Warnock joined him, and they went to market with the ideas behind Interpress as PostScript, which became a foundation for the Apple LaserWriter to print graphics. Not only that, PostScript could be used to define typefaces programmatically and, later, to display any old image. Those technologies became the foundation for the desktop publishing industry. Apple released the Mac in 1984, other vendors brought in PostScript to describe graphics in their proprietary fashion, and by 1991 Adobe released PostScript Level 2, then PostScript 3 in 1997. Other vendors made their own standards or furthered them in their own ways, and Adobe could have faded off into the history books of computing. But Adobe didn’t create one product, they created an industry, and the company they created to support that young industry created more products in that mission. Steve Jobs tried to buy Adobe before that first Mac was released, for $5,000,000. But Warnock and Geschke had a vision for an industry in mind. 
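PostScript describes a page as a program rather than as a bitmap: operators in postfix notation push coordinates onto a stack and paint with them. As a rough illustration (not production PostScript generation), here is a Python sketch that emits a tiny but valid PostScript page with a line of Helvetica text and a rule under it:

```python
# Emit a minimal PostScript program: set a font, move to a point on
# the page (coordinates are in points, origin at the bottom left),
# paint a string, then stroke a line and emit the page.
def simple_page(text):
    return "\n".join([
        "%!PS",
        "/Helvetica findfont 24 scalefont setfont",  # pick a typeface
        "72 700 moveto",                      # 1 inch from the left, near the top
        f"({text}) show",                     # paint the string
        "72 680 moveto 300 680 lineto stroke",  # rule under the text
        "showpage",                           # emit the finished page
    ])

print(simple_page("Hello, PostScript"))
```

Saved to a file, this renders on any PostScript interpreter or printer; the key idea is that the same short program produces the same marks at any resolution, which is what made device-independent printing possible.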
They had a lot of ideas, but development was fairly capital intensive, as were go-to-market strategies. So they went public on the NASDAQ in 1986. They expanded their PostScript distribution and sold it to companies like Texas Instruments for their laser printer, and to other companies who made IBM-compatible computers. They got up to $16 million in sales that year. Warnock’s wife was a graphic designer. This is where we see a diversity of ideas help us think about more than math. He saw how she worked and could see a world where Ivan Sutherland’s Sketchpad could be much more, given how far CPUs had come since the TX-0 days at MIT. So Adobe built and released Illustrator in 1987. By 1988 they broke even on sales, and it raked in $19 million in revenue. Sales were strong in the universities, but PostScript was still the hot product, selling to printer companies, typesetters, and other places where Adobe signed license agreements. At this point, we see where the math - Cartesian coordinates drawn by geometric algorithms - put pixels where they should be. But while this was far more efficient for larger images than just drawing a dot at each coordinate, drawing a dot in a pixel location was still the easier technology to understand. They created Adobe Streamline in 1989 and Collector's Edition to create patterns. They listened to graphic designers and built what they heard humans wanted.
Photoshop
Nearly every graphic designer raves about Adobe Photoshop. That’s because Photoshop is the best-selling graphics editing tool, one that has matured far beyond most other traditional solutions and now has thousands of features that allow users to manipulate images in practically any way they want. Adobe Illustrator was created in 1987 and quickly became the de facto standard in vector-based graphics. Photoshop began life in 1987 as well, when Thomas and John Knoll wanted to build a simpler tool to create graphics on a computer. Rather than vector graphics, they created a raster graphical editor. 
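The vector-versus-raster distinction can be shown in a few lines. A hedged sketch: scaling vector art just multiplies its coordinates and stays exact at any size, while scaling a raster means resampling its pixel grid (crude nearest-neighbor here), which is why enlarged rasters get blocky:

```python
# Vector vs raster in miniature.

def scale_vector(points, factor):
    """Vector shapes scale by multiplying coordinates; no detail is lost."""
    return [(x * factor, y * factor) for x, y in points]

def scale_raster(pixels, factor):
    """Rasters must be resampled; nearest-neighbor copies pixels into blocks."""
    h, w = len(pixels), len(pixels[0])
    return [[pixels[int(r / factor)][int(c / factor)]
             for c in range(int(w * factor))]
            for r in range(int(h * factor))]

triangle = [(0, 0), (4, 0), (0, 4)]
print(scale_vector(triangle, 10))   # still exact at any size

checker = [[0, 1],
           [1, 0]]
print(scale_raster(checker, 2))     # each pixel becomes a 2x2 block
```

An Illustrator file stores something like the first representation; a Photoshop file stores something like the second, which is also why rasters can capture photographic detail that has no tidy geometric formula.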
They made a deal with Barneyscan, a well-known scanner company, which distributed over two hundred copies of Photoshop with their scanners, and Photoshop became a hit as it was the first editing software people heard about. Vector images are typically generated with Cartesian coordinates based on geometric formulas and so scale more easily. Raster images are comprised of a grid of dots, or pixels, and can be more realistic. Great products are rewarded with competition. CorelDRAW was created in 1989 when Michel Bouillon and Pat Beirne built a tool to create vector illustrations. Sales got slim after other competitors entered the market, so the Knoll brothers got in touch with Adobe and licensed the product through them. The software was then launched as Adobe Photoshop 1 in 1990. They released Photoshop 2 in 1991. By now they had support for paths and, given that Adobe also made Illustrator, EPS and CMYK rasterization - still features in Photoshop. They launched Adobe Photoshop 2.5 in 1993, the first version that could be installed on Windows. This version came with a toolbar for filters and 16-bit channel support. Photoshop 3 came in 1994 and Thomas Knoll created what was probably one of the most important features ever added, and one that's become a standard in graphical applications since: layers. Now a designer could create a few layers that each had their own elements, and hide layers or make layers more transparent. These could separate the subject from the background and led to entirely new capabilities, like an almost faux three-dimensional appearance of graphics. Then came version 4 in 1996, one of the more widely distributed versions, and very stable. 
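That vector-versus-raster distinction can be sketched in a few lines of Python - a hedged illustration, not Adobe's code; the triangle and the two-by-two image here are made up for the example:

```python
# Vector: a shape is coordinates plus math - scaling just multiplies
# the points, with no information lost.
triangle = [(0, 0), (4, 0), (2, 3)]
scaled = [(x * 10, y * 10) for x, y in triangle]

# Raster: a shape is a fixed grid of pixels - scaling has to invent
# pixels. Nearest-neighbor doubling simply repeats rows and columns,
# which is what produces the blocky look at larger sizes.
image = [
    [0, 1],
    [1, 0],
]
doubled = [
    [row[c // 2] for c in range(4)]  # repeat each column twice
    for row in image
    for _ in range(2)                # repeat each row twice
]

print(scaled)   # the triangle, ten times larger, still perfectly crisp
print(doubled)  # each source pixel is now a blocky 2x2 square
```

The asymmetry is the whole story: the vector version carries the formula, so any scale factor works; the raster version only carries the samples, so enlarging it can never add detail back.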
They added automation, which was later considered part of becoming a platform - open up a scripting language or a subset of a language so others can build tools that integrate with or sit on top of a product, thus locking people into using that product once they've automated tasks to increase their own efficiency. Adobe Photoshop 5.0 added editable type - text no longer had to be rasterized the moment it was placed. Keep in mind that Adobe owned technology like PostScript and so could bring technology from Illustrator to Photoshop or vice versa, and integrate with other products - like export to PDF by then. They also added a number of undo options, a magnetic lasso, and improved color management, and it was now a great tool for more advanced designers. Then in 5.5 they added a save-for-web feature in a sign of the times. They could create vector shapes and continued to improve the user interface. Photoshop 5 was also a big jump in complexity. Layers were easy enough to understand, but Photoshop was meant to be a subset of Illustrator features and had become far more than that. So in 2001 they released Photoshop Elements. By now they had a large portfolio of products and Elements was meant to appeal to the original customer base - the ones who were beginners and maybe not professional designers. By now, some people spent 40 or more hours a week in tools like Photoshop and Illustrator.
Adobe Today
Adobe had released PostScript, Illustrator, and Photoshop. But they have one of the most substantial portfolios of products of any company. They also released Premiere in 1991 to get into video editing. They acquired Aldus Corporation to get into more publishing workflows with PageMaker. They used that acquisition to get into motion graphics with After Effects. They acquired dozens of companies and released their products as well. Adobe also released the PDF format to describe full pages of information (or files that spread across multiple pages) in 1993, and Adobe Acrobat to use those. 
Acrobat became the de facto standard for page distribution so people didn't have to download fonts to render pages properly. They dabbled in audio editing when they acquired Cool Edit Pro from Syntrillium Software, and so now sell Adobe Audition. Adobe's biggest acquisition was Macromedia in 2005. Here, they added a dozen new products to the portfolio, which included Flash, Fireworks, the WYSIWYG web editor Dreamweaver, ColdFusion, Flex, and Breeze, which is now called Adobe Connect. By now, they'd also created what we call Creative Suite: packages of applications that could be used for given tasks. Creative Suite also signaled a transition into a software-as-a-service, or SaaS, mindset. Now customers could pay a monthly fee for a user license rather than buy large software packages each time a new version was released. Adobe had always been a company who made products to create graphics. They expanded into online marketing and web analytics when they bought Omniture in 2009 for $1.8 billion. Those products are now normalized into the naming convention used for the rest as Adobe Marketing Cloud. Flash fell by the wayside and so the next wave of acquisitions was for more mobile-oriented products. This began with Day Software and then Nitobi in 2011. They furthered their Marketing Cloud support by acquiring one of the larger competitors, Marketo, in 2018, and acquiring Workfront in 2020. Given how many people started working from home, they also extended their offerings into pure-cloud video tooling with an acquisition of Frame.io in 2021. And here we see a company started by a bunch of true computer scientists from academia in the early days of the personal computer that has become far more. They could have been rolled into Apple but had a vision of a creative suite of products that could be used to make the world a prettier place. Creative Suite, then Creative Cloud, shows a move of the same tools into a more online delivery model. 
Other companies come along to do similar tasks, like the infinite digital whiteboard Miro - so they have to innovate to stay marketable. They have to continue to increase sales, so they expand into other markets, like the most adjacent Marketing Cloud. At 22,500+ employees and with well over $12 billion in revenue, they have a lot of families dependent on maintaining that growth rate. And so the company becomes more than the culmination of their software. They become more than graphic design, web design, video editing, animation, and visual effects. Because in software, if revenues don't grow at a rate greater than 10 percent per year, the company simply isn't outgrowing the size of the market and likely won't be able to justify a stock price at an inflated price-to-earnings ratio that implies explosive growth. And once a company saturates sales in a given market, they still have shareholders to justify their existence to. Adobe has survived many an economic downturn and boom time with smart, measured growth and is likely to continue doing so for a long time to come.
4/16/2023 • 22 minutes, 2 seconds
The Evolution of Fonts on Computers
Gutenberg shipped the first working printing press around 1450 and the typeface was born. Before then most books were handwritten, often in blackletter calligraphy. And they were expensive. The next few decades saw Nicolas Jenson develop the Roman typeface and Aldus Manutius and Francesco Griffo create the first italic typeface. This represented a period when people were experimenting with making type that would save space. The 1700s saw the start of a focus on readability. William Caslon created the Old Style typeface in 1734. John Baskerville developed Transitional typefaces in 1757. And Firmin Didot and Giambattista Bodoni created two typefaces that would become the Modern family of serifs. Then slab serif, which we now call Antique, came in 1815, ushering in an era of experimenting with using type for larger formats, suitable for advertisements in various printed materials. These were necessary as more presses were printing more books, and made possible by new levels of precision in metal-casting. People started experimenting with various forms of typewriters in the mid-1860s, and by the 1920s we got Frederic Goudy, the first real full-time type designer. Before him, it was part of a job. After him, it was a job. And we still use some of the typefaces he crafted, like Copperplate Gothic. And we saw an explosion of new fonts like Times New Roman in 1931. At the time, most typewriters put typefaces on the end of a metal shaft. Hit a key, and the shaft hammers onto a strip of ink and leaves a letter on the page. Kerning, or the space between characters, and letter placement were often there to reduce the chance that those metal hammers jammed. And replacing a font would have meant replacing tons of precision parts. Then came the IBM Selectric typewriter in 1961. Here we saw precision parts that put all those letters on a ball. Hit a key, the ball rotates and presses the ink onto the paper. And the ball could be replaced. 
A single document could now have multiple fonts without a ton of work. Xerox had exploded a couple of years earlier with the Xerox 914, one of the most successful products of all time. Now we could type amazing documents with multiple fonts in the same document quickly - and photocopy them. And some of the numbers on those fancy documents were being spat out by those fancy computers, with their tubes. But as computers became transistorized heading into the 60s, it was only a matter of time before we put fonts on computer screens. Here, we initially used bitmaps to render letters onto a screen. By bitmap we mean that a series, or an array, of pixels on a screen is a map of bits, defining where each should be displayed. We used to call these raster fonts, but the drawback was that to make characters bigger, we needed a whole new map of bits. To go to a bigger screen, we probably needed a whole new map of bits. As people thought about things like bold, underline, and italics - guess what, also a new file. But through the 50s, transistor counts weren't nearly high enough to do something different than bitmaps; they rendered very quickly, and you know, displays weren't very high quality, so who could tell the difference anyways. Whirlwind was the first computer to project real-time graphics on the screen, and the characters were simple blocky letters. But as the resolution of screens and the speed of interactivity increased, so did what was possible with drawing glyphs on screens. Rudolf Hell was a German experimenting with using cathode ray tubes to project a CRT image onto photosensitive paper, and thus print using a CRT. He designed a simple font called Digital Grotesk in 1968. It looked good on the CRT and on the paper. And so that font, loosely based on Neuzeit Book, would be used to digitize typesetting. We quickly realized bitmaps weren't efficient for drawing fonts to the screen, and by 1974 moved to outline, or vector, fonts. 
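The raster approach described above - a glyph as a literal map of bits - can be sketched in a few lines of Python. The 5x7 pattern for "A" here is invented for illustration, not taken from any real font:

```python
# A hypothetical 5x7 bitmap for the letter "A": each row is one byte,
# and the low five bits mark which pixels are "on".
GLYPH_A = [
    0b01110,
    0b10001,
    0b10001,
    0b11111,
    0b10001,
    0b10001,
    0b10001,
]

def render(glyph, width=5):
    """Turn the bit rows into text: '#' for a set bit, '.' for a clear one."""
    lines = []
    for row in glyph:
        line = "".join(
            "#" if row & (1 << (width - 1 - col)) else "."
            for col in range(width)
        )
        lines.append(line)
    return "\n".join(lines)

print(render(GLYPH_A))
```

Notice there is no "scale" parameter: to draw this letter bigger, bolder, or italic, you would need a whole new array of bits - exactly the limitation that pushed the industry toward outline fonts.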
Here a Bézier curve was drawn onto the screen using an algorithm that created the character, or glyph, as an outline and then filled in the space within it. These took up less memory and so drew on the screen faster. Those could be defined in an operating system and were used not only to draw characters but also by some game designers to draw entire screens of information, by defining a character as a block and so taking up less memory for graphics. These were scalable, and by 1979 another German, Peter Karow, used spline algorithms to write Ikarus, software that allowed a person to draw a shape on a screen and rasterize it. Now we could graphically create fonts that were scalable. In the meantime, the team at Xerox PARC had been experimenting with different ways to send pages of content to the first laser printers. Bob Sproull and Bill Newman created the Press format for the Star. But this wasn't incredibly flexible like what Karow would create. John Gaffney, who was working with Ivan Sutherland at Evans & Sutherland, had been working with John Warnock on an interpreter that could pull information from a database of graphics. When Warnock went to Xerox, he teamed up with Martin Newell to create JaM, which harnessed the latest chips to process graphics and character type for printers. As it progressed, they renamed it to Interpress. Chuck Geschke started the Imaging Sciences Laboratory at Xerox PARC and eventually left Xerox with Warnock to start a company called Adobe in Warnock's garage, which they named after a creek behind his house. Bill Paxton had worked on "The Mother of All Demos" with Doug Engelbart while getting his PhD at Stanford, then moved to Xerox PARC. There he worked on bitmap displays, laser printers, and GUIs - and so he joined Adobe as a co-founder in 1983 and worked on the font algorithms and helped ship a page description language, along with Chuck Geschke, Doug Brotz, and Ed Taft. Steve Jobs tried to buy Adobe in 1982 for $5 million. 
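The outline idea boils down to a little math. Here is a minimal sketch, assuming a quadratic Bézier (the curve type TrueType later standardized on; the control points are made up for the example):

```python
def quad_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bézier curve at parameter t in [0, 1].

    p0 and p2 are the on-curve endpoints, p1 the off-curve control
    point - the basic building block of an outline-font glyph."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

# Flatten one curve of a glyph outline into points a rasterizer could
# connect and then fill. Scale the control points by any factor and the
# same formula draws the glyph at any size - the advantage over bitmaps.
outline = [quad_bezier((0, 0), (50, 100), (100, 0), i / 10) for i in range(11)]
print(outline[5])  # the midpoint of the arch
```

A real font has many such curves per glyph plus filling and hinting rules, but storing the control points instead of the pixels is the whole trick.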
But instead they sold him just shy of 20% of the company and gave him a five-year license for PostScript. This allowed them to focus on making the PostScript language more extensible and on creating the Type 1 fonts. These had two parts: one a set of bitmaps, and another a font file that could be used to send the font to a device. We see this time and time again: the simpler an interface, and the more down-market the science gets, the faster we see innovative industries come out of the work done. There were lots of fonts by now. The original 1984 Mac saw Susan Kare work with Jobs and others to ship a bunch of fonts named after cities like Chicago and San Francisco. She would design the fonts on paper and then conjure up the hex (that's hexadecimal) for graphics and fonts, manually typing the hexadecimal notation for each letter of each font. Previously, custom fonts were reserved for high-end marketing and industrial designers. Apple considered licensing existing fonts but decided to go their own route. She painstakingly created new fonts and gave them the names of towns along train stops around Philadelphia, where she grew up. Steve Jobs went for the city approach but insisted they be cool cities. And so the Chicago, Monaco, New York, Cairo, Toronto, Venice, Geneva, and Los Angeles fonts were born - with her personally developing Geneva, Chicago, and Cairo. And she did it in 9 x 7. I can still remember the magic of sitting down at a computer with a graphical interface for the first time. I remember opening MacPaint and changing between the fonts, marveling at the typefaces. I'd certainly seen different fonts in books. But never had I made a document and been able to set my own typeface! Not only that, they could be in italics, outline, and bold. Those were all her. And she inspired a whole generation of innovation. 
Here, we see a clean line from Ivan Sutherland and the pioneering work done at MIT, to the University of Utah, to Stanford through the oNLine System (or NLS), to Xerox PARC, and then to Apple. But then came the rise of Windows and other graphical operating systems. As Apple's five-year license for PostScript came and went, they started developing their own font standard as a competitor to Adobe, which they called TrueType. Here we saw Times Roman, Courier, and symbols that could replace the PostScript fonts, and updates to Geneva, Monaco, and others. They may not have gotten along with Microsoft, but they licensed TrueType to them nonetheless to make sure it was more widely adopted. And in exchange they got a license for TrueImage, a page description language that was compatible with PostScript. Given how high-resolution screens had gotten, it was also time for the birth of anti-aliasing. Here we could clean up the blocky "jaggies," as the gamers call them. Vertical and horizontal lines in the 8-bit era looked fine but distorted at higher resolutions, and so spatial anti-aliasing and then post-processing anti-aliasing were born. By the 90s, Adobe was looking for the answer to TrueImage. So 1993 brought us PDF, now an international standard in ISO 32000-1:2008. And PDF Reader and other tools were good to Adobe for many years, along with Illustrator and then Photoshop and then the other products in the Adobe portfolio. By this time, even though Steve Jobs was gone, Apple was hard at work on new font technology that resulted in Apple Advanced Typography, or AAT. AAT gave us ligature control, better kerning, and the ability to write characters on different axes. But negotiations between Apple and Microsoft to license AAT broke down. They were bitter competitors and Windows 95 wasn't even out yet. So Microsoft started work on OpenType, their own standardized font language, in 1994, and Adobe joined the project to ship the next generation in 1997. 
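Spatial anti-aliasing is simple enough to sketch. This is a hedged toy version, not any vendor's renderer: it supersamples each pixel and uses the coverage fraction as a gray level, and the diagonal-line shape is made up for the example:

```python
def coverage(px, py, inside, samples=4):
    """Spatial anti-aliasing by supersampling: test a grid of sub-sample
    points within pixel (px, py) and return the fraction that land
    inside the shape. That fraction becomes the pixel's gray level, so
    an edge fades smoothly instead of stair-stepping into "jaggies"."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            # Sample at the center of each sub-cell of the pixel.
            x = px + (i + 0.5) / samples
            y = py + (j + 0.5) / samples
            if inside(x, y):
                hits += 1
    return hits / (samples * samples)

# A hypothetical shape: everything below the diagonal line y = x.
under_diagonal = lambda x, y: y < x

row = [coverage(px, 0, under_diagonal) for px in range(3)]
print(row)  # the pixel the edge crosses gets a partial gray value
```

A hard threshold would have rendered that first pixel as all-or-nothing; the fractional value is what smooths the staircase.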
And that would evolve into an open standard by the mid-2000s - and once an open standard, sometimes the de facto standard, as opposed to those that need to be licensed. By then the web had become a thing. Early browsers, and the wars between them to increment features, meant developers had to build and test on potentially four or five different computers and often be frustrated by the results. So the W3C began standardizing how a lot of elements worked, including in Extensible Markup Language, or XML: images, layouts, colors, even fonts. SVGs are XML-based vector images - in other words, the browser interprets a language that describes the image. Fonts got a web standard of their own as well: the Web Open Font Format, or WOFF 1, was published in 2009 with contributions by Dutch educator Erik van Blokland, Jonathan Kew, and Tal Leming. This built on the CSS font styling rules that had shipped in Internet Explorer 4 and would slowly be added to every browser shipped, including Firefox since 3.6, Chrome since 6.0, Internet Explorer since 9, and Apple's Safari since 5.1. Then WOFF 2 added Brotli compression to get sizes down and render faster. WOFF has been a part of the W3C open web standard since 2011. Out of Apple's TrueType came TrueType GX, which added variable fonts. Here, a single font file could contain a number or range of variants of the initial font. So a family of fonts could be in a single file. OpenType added variable fonts in 2016, with Apple, Microsoft, and Google all announcing support. And of course the company that had been there since the beginning, Adobe, jumped on board as well. Fewer font files, faster page loads. So here we've looked at the progression of fonts from the printing press, becoming more efficient to conserve paper, through the advent of the electric typewriter, to the early bitmap fonts for screens, to the vectorization led by Adobe into the Mac and then Windows. 
We also see a rethinking of the font entirely, so multiple scripts, character sets, and axes can be represented and rendered efficiently. I am now converting all my user names into pig Latin for maximum security. Luckily those are character sets that are pretty widely supported. And OpenType-SVG will allow me to add spiffy color to my glyphs. It makes us wonder what's next for fonts. Maybe being able to design our own, or more to the point, customize those developed by others to make them our own. We didn't touch on emoji yet. But we'll just have to save the evolution of character sets and emoji for another day. In the meantime, let's think on the fact that fonts are such a big deal because Steve Jobs took a calligraphy class from a Trappist monk named Robert Palladino while enrolled at Reed College. Today we can painstakingly choose just the right font with just the right meaning because Palladino left the monastic life to marry and have a son. He taught Jobs about serif and sans serif and kerning and the art of typography. That style and attention to detail was one aspect of the original Mac that taught the world that computers could have style and grace as well. It's not hard to imagine a world where entire computers still only supported one font, or even one font per document. Palladino never owned or used a computer, though. His influence can be felt through the influence his pupil Jobs had. And it's actually amazing how many people who had such dramatic impacts on computing never really used one. Because so many smaller evolutions came after them. What evolutions do we see on the horizon today? And how many who put a snippet of code on a service like GitHub may never know the impact they have on so many?
4/10/2023 • 20 minutes, 4 seconds
Flight Part II: From Balloons to Autopilot to Drones
In our previous episode, we looked at the history of flight - from dinosaurs to the modern aircraft that carry people and things all over the world. Those helped to make the world smaller, but UAVs and drones have had a very different impact on how we lead our lives - and will have an even more substantial impact in the future. That might not have seemed so likely in the 1700s, though.
Unmanned Aircraft
Napoleon conquered Venice in 1797 and then ceded control to the Austrians the same year. He took it back as part of a treaty in 1805 and established the first Kingdom of Italy, then lost it in 1814. And so the Venetians revolted in 1848. The Austrians crushed the revolt in part by employing balloons, which had been invented in 1783, packed with explosives. Two hundred bomb-laden balloons later, one found a target. Not a huge surprise that such techniques didn't get used again for some time. The Japanese tried a similar tactic to bomb the US in World War II - then there were random balloons in the 2020s, just for funsies. A few other inventions needed to find one another in order to evolve into something entirely new. Radio was invented in the 1890s. Nikola Tesla built a radio-controlled boat in 1898. Airplanes came along in 1903. Then came airships moved by radio. So it was just a matter of time before the cost of radio equipment came down enough to match the cost of building smaller airplanes that could be controlled with remote controls as well. The first documented occurrence of that was in 1907, when Percy Sperry filed a patent for a kite fashioned to look and operate like a plane, but glide in the wind. The kite string was the first remote control. Then electrical signals went through those strings, and eventually the wire turned into radio - the same progression we see with most manual machinery that needs to be mobile. Technology moves upmarket, and so the Sperry Corporation equipped aircraft with autopilot features in 1912. 
At this point, that was just a gyroscopic heading indicator and attitude indicator connected to hydraulically operated elevators and rudders, but over time autopilots would be able to react to all types of environmental changes to save pilots from having to constantly manually react while flying. That helped to pave the way for longer and safer flights, as automation often does. Then came World War I. Tesla discussed aerial combat using unmanned aircraft in 1915, and Charles Kettering (who developed the electric cash register and the electric car starter) gave us the Kettering Bug, a flying, remote-controlled torpedo of sorts. Elmer Sperry worked on a similar device. British war engineers like Archibald Low were also working on attempts, but the technology didn't evolve fast enough, and by the end of the war there wasn't much interest in military funding. But a couple of decades can do a lot, both for miniaturization and for the maturity of technology. 1936 saw the development of the first navy UAV, by the name of Queen Bee, under Admiral William H. Standley, then the QF2. It was primarily used for aerial target practice as a low-cost radio-controlled drone. The idea was an instant hit, and later on the military called for the development of similar systems, many of which came from Hollywood of all places. Reginald Denny was a British gunner in World War I. They shot things from airplanes. After the war he moved to Hollywood to be an actor. By the 1930s he got interested in model airplanes that could fly, and joined up with Paul Whittier to open a chain of hobby shops. He designed a few planes and eventually grew them to be sold to the US military as targets. The Radioplanes, as they would be known, even got joysticks, and they sold tens of thousands during World War II. War wasn't the only use for UAVs. Others were experimenting, and 1936 brought the first radio-controlled model airplane competition, a movement that continued to grow and evolve into the 1970s. 
We got the Academy of Model Aeronautics (or AMA) in 1936, which launched a magazine called Model Aviation and continues to publish, provide insurance, and act as the UAV, RC airplane, and drone community's representative to the FAA. Their membership still runs close to 200,000. Most of these model planes were managed from the ground using radio remote controls. The Federal Communications Commission, or FCC, was established in 1934 to manage the airwaves. They stepped in to manage what frequencies could be used for different use cases in the US, including radio-controlled planes. Where there is activity, there are stars. The Big Guff, built by brothers Walt and Bill Guff, was the first truly successful RC airplane in that hobbyist market. Over the next decades, solid state electronics got smaller, cheaper, and more practical, as did the ways we could transmit bits over those wireless links. 1947 saw the first radar-guided missile, the subsonic Firebird, which over time evolved into a number of programs. Electro-mechanical computers had been used to calculate trajectories for ordnance during World War II, so with knowledge of infrared we got infrared homing, then television cameras mounted in missiles, combined with the proximity fuse, which came with small pressure, magnetic, acoustic, radio, and then optical transmitters. We got much better at blowing things up. Part of that was studying the German V-2 rocket program, which used an analog computer to control the direction and altitude of missiles. The US Polaris and Minuteman missile programs added transistors, then microchips, to missiles to control the guidance systems. Rockets had computers, and so they showed up in airplanes to aid humans in guiding those, often replacing Sperry's original gyroscopic automations. The Apollo Guidance Computer from the 1969 moon landing was an early example of humans putting their lives in the hands of computers - with manual override capabilities, of course. 
Then, as the price of chips fell in the 1980s, we started to see them in model airplanes.
Modern Drones
By now, radio-controlled aircraft had been used for target practice, to deliver payloads and blow things up, and even for spying. Aircraft without humans to weigh them down could run on electric motors rather than combustion engines, so they were quieter. This allowed UAVs to fly undetected, laying the very foundation for the modern depiction of drones used by the military for covert operations. As the costs fell and carrying capacity increased, we saw them used in filmmaking, surveying, weather monitoring, and anywhere else a hobbyist could use their hobby in their career. But the cameras weren't that great yet. Then Fairchild developed the charge-coupled device, or CCD, in 1969. The first digital camera arguably came out of Eastman Kodak in 1975, when Steven Sasson built a prototype using a mixture of batteries, movie camera lenses, Fairchild CCD sensors, and Motorola parts. Sony came out with the Magnetic Video Camera in 1981 and Canon put the RC-701 on the market in 1986. Fuji, Dycam, even the Apple QuickTake came out in the next few years. Cameras were getting better resolution, and as we turned the page into the 1990s, those cameras got smaller and used CompactFlash to store images and video files. The first aerial photograph is attributed to Gaspard-Félix Tournachon, but the militaries of the world used UAVs that were B-17s and Grumman Hellcats from World War II, converted to drones full of sensors to study nuclear radiation clouds when testing weapons. Those evolved into reconnaissance drones like the Aerojet SD-2, with mounted analog cameras, in the 50s and 60s. During that time we saw the Ryan Firebees and DC-130As run thousands of flights snapping photos to aid intelligence gathering. Every country was in on it: the USSR, Iran, North Korea, Britain. 
And the DARPA-instigated Amber and then Predator drones might be considered the modern precursors to the drones we play with today. Again, we see larger military uses come down-market once secrecy and cost meet a cool factor. DARPA spent $40 million on the Amber program. Manufacturers of consumer drones have certainly made far more than that. Hobbyists started to develop Do It Yourself (DIY) drone kits in the early 2000s. Now that there were websites, we didn't have to wait for magazines to show up; we could take to World Wide Web forums and trade ideas for how to do what the US CIA had done when they conducted the first armed drone strike in 2001 - just maybe without the weapon systems, since this was in the back yard. Lithium-ion batteries were getting cheaper and lighter, as were much faster chips. Robotics had come a long way as well, and moving small parts of model aircraft was much simpler than avoiding all the chairs in a room at Stanford. Hobbyists turned into companies that built and sold drones of all sizes, some of which got in the way of commercial aircraft. So the FAA started issuing drone permits in 2006. Every technology has a point where the confluence of all these technologies meets in a truly commercially viable product. We had Wi-Fi, RF (or radio frequency), iPhones, mobile apps, tiny digital cameras in our phones and even in spy teddy bears; we understood flight and propellers; and plastics were heavier than air, but lighter than metal. So in 2010 we got the Parrot AR Drone. This was the first drone sold to the masses that was just plug and play. And an explosion of drone makers followed, with consumer products now ranging from around $20 to hundreds. 
Drone races, drone aerogymnastics, drone footage on our Apple and Google TV screens, and, with TinyML projects for every possible machine learning need we can imagine, UAVs that stabilize cameras, find objects based on information we program into them, and any other use we can imagine. The concept of drones, or unmanned aerial vehicles (UAVs), has come a long way since the Austrians tried to bomb the Venetians into submission. Today there are mini drones, foldable drones, massive drones that can carry packages, racing drones, and even military drones programmed to kill. In fact, right now there are debates raging in the UN around whether to allow drones to autonomously kill. Because Skynet. We're also experimenting with passenger drone technology, because autonomous driving is another convergence just waiting in the wings. Imagine going to the top of a building and getting in a small pod, then flying a few buildings over - or to the next city. Maybe in our lifetimes, but not as soon as some of the companies who have gone public to do just this thought.
4/3/2023 • 19 minutes, 6 seconds
Flight: From Dinosaurs to Space
Humans have probably considered flight since they first saw birds. As far back as 228 million years ago, the pterosaurs used flight to rain down onto other animals from above and eat them. The first known bird-like dinosaur was the Archaeopteryx, which lived around 150 million years ago. It's not considered an ancestor of modern birds - but other dinosaurs from the same era, the theropods, are. 25 million years later, in modern China, the Confuciusornis sanctus had feathers and could have flown. The first humans wouldn't emerge from Africa until 23 million years later. By the 2300s BCE, the Sumerians depicted shepherds riding eagles, as humanity looked to the skies in our myths and legends. These were creatures, not vehicles. The first documented vehicle of flight goes as far back as the 7th century BCE, when the Rāmāyana told of the Pushpaka Vimāna, a palace made by Vishwakarma for Brahma, complete with chariots that flew the king Rama high into the atmosphere. The Odyssey was written around the same time and tells of the Greek pantheon of gods, but doesn't reference flight as we think of it today. Modern interpretations might move floating islands to the sky, but it seems more likely that the floating island of Aeolia is really the islands off Aeolis, or Anatolia, which we might refer to as the modern land of Turkey. Greek myths from a few hundred years later introduced more who were capable of flight. Icarus flew into the sun with wings that had been fashioned by Daedalus. By then, the Greeks could have been aware, through trade routes cut by Alexander and later rulers, of kites from China. The earliest attempts at flight trace their known origins to 500 BCE in China. Kites were, like most physical objects, heavier than air, and could still be used to lift an object into flight. Some of those early records even mention the ability to lift humans off the ground with a kite. 
The principle used in kites was applied later in the development of gliders, and then, when propulsion was added, modern aircraft. Any connection between these is conjecture, as we can’t know how well the whisper net worked in those ages. Many legends are based on real events. The history of humanity is vast and many of our myths are handed down through the generations. The Greeks had far more advanced engineering capabilities than some of the societies that came after. They were still wary of what happened if they flew too close to the sun. In fact, emperors of China are reported to have forced some to leap from cliffs on a glider as a means of punishment. Perhaps that was where the fear of flight for some originated. Chinese emperor Wang Mang used a scout outfitted with bird features to glide on a scouting mission around the same time as the Icarus myth might have been documented. Whether this knowledge informed the storytellers Ovid documented in his story of Icarus is lost to history, since he didn’t post it to Twitter. Once the Chinese took the string off the kite, and kites got large enough to fly with a human, they had also developed hang gliders. In the third century BCE, Chinese inventors added the concept of rotors for vertical flight when they developed helicopter-style toys. Those were then used to frighten off enemies. Some of those evolved into the beautiful paper lanterns that fly when lit. There were plenty of other evolutions and false starts with flight after that. Abbas ibn Firnas also glided with feathers in the 9th century. A Benedictine monk did so again in the 11th century. Both were injured when they jumped from towers in the Middle Ages that spanned the Muslim Golden Age to England. Leonardo da Vinci studied flight for much of his life. His studies produced a human-powered ornithopter and other contraptions; however, he eventually realized that humans would not be able to fly on their own power alone.
Others attempted the same old wings made of bird feathers, wings that flapped on the arms, wings tied to legs, different types of feathers, higher places to jump from, and anything else they could think of. Many broke bones, which continued until we found ways to supplement human power to propel us into the air. Then a pair of brothers in the Ottoman Empire had some of the best luck. Hezârfen Ahmed Çelebi crossed the Bosphorus strait on a glider. That was 1633, and by then gunpowder had already helped the Ottomans conquer Constantinople. That ended the last vestiges of ancient Roman influence, along with the Byzantine empire, as the conquerors renamed the city to Istanbul. That was the power of gunpowder. His brother then built a rocket using gunpowder and launched himself high into the air, before gliding back to the ground. The next major step was the hot air balloon. The modern hot air balloon was built by the Montgolfier brothers in France and first ridden in 1783 (Petrescu & Petrescu, 2013). 10 days later, the first gas balloon was invented by Nicolas-Louis Robert and Jacques Alexandre Charles. The gas balloon used hydrogen and, in 1785, was used to cross the English Channel. That trip sparked the era of dirigibles. We built larger balloons to lift engines with propellers, a period that culminated with the Zeppelin. From the 1700s on, much of what da Vinci realized was rediscovered, but this time published, and the body of knowledge built out. The physics of flight were then studied as new sciences emerged. Sir George Cayley started to actually apply physics to flight in the 1790s.
Powered Flight
We see this over and over in history; once we understand the physics and can apply science, progress starts to speed up. That was true when Archimedes defined force multipliers with the simple machines in the 3rd century BCE, true with solid state electronics far later, and true with Cayley’s research.
Cayley conducted experiments, documented his results, and proved hypotheses. He finally codified bird flight and why it worked. He studied the Chinese tops that worked like modern helicopters. He documented gliding flight and applied math to why it worked. He defined drag and measured the force of windmill blades. In effect, he got to the point that he knew how much power was required, relative to weight, to actually sustain flight. Then, to achieve that, he explored the physics of fixed-wing aircraft, complete with an engine, tail assembly, and fuel. His work culminated in “On Aerial Navigation,” published in 1810. By the mid-1850s, there was plenty of research flowing into the goal of sustained air travel. Ideas like rotors led to rotorcraft. Those were all still gliding. Even with Cayley’s research, we had triplane gliders and gliders launched from balloons. After that, the first aircraft that looked like the modern airplanes we think of today were developed. Cayley’s contributions were profound. He even described how to mix air with gasoline to build an engine. Influenced by his work, others built propellers. Some of those were steam powered and others powered by tight springs, like clockworks. Aeronautical societies were created, wing contours and cambering were experimented with, and wheels were added to try to lift off. Some even lifted a little off the ground. By the 1890s, the first gasoline-powered biplane gliders were developed and flown, even if those early experiments crashed. Humanity was finally ready for powered flight. The Smithsonian housed some of the earliest experiments. They hired their third director, Samuel Langley, in 1887. He had been interested in aircraft for decades and, as with many others, had studied Cayley’s work closely. He was a consummate tinkerer and had already worked in solar physics and developed the Allegheny Time System.
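Cayley’s core insight - that sustaining flight is a question of lift versus weight at a given speed and wing area - survives today as the lift equation, L = ½ρv²SC_L. A minimal sketch with purely illustrative numbers (Cayley, of course, never wrote it this way):

```python
def lift_newtons(rho, v, area, cl):
    """Lift force in newtons: 0.5 * air density * velocity^2 * wing area * lift coefficient."""
    return 0.5 * rho * v**2 * area * cl

# Sea-level air density is about 1.225 kg/m^3; a small glider-like wing of 12 m^2
# moving at 15 m/s with a lift coefficient of 1.0:
force = lift_newtons(1.225, 15.0, 12.0, 1.0)
print(force)  # ~1,654 N - enough to hold up roughly 168 kg of glider and pilot
```

The equation makes Cayley’s power-to-weight problem concrete: doubling airspeed quadruples lift, which is why gliders needed either speed or very large wings long before engines were light enough to provide either.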
The United States War Department gave him grants to pursue his ideas to build an airplane. By then, there was enough science that humanity knew it was possible to fly, and so there was a race to build powered aircraft. We knew the concepts of drag, rudders, and thrust from some of the engineering built into ships. Some of that had been successfully used in the motorcar. We also knew how to build steam engines, which is what he used in his craft. He called it the Aerodrome and built a number of models. He was able to make it further than anyone at the time. He abandoned flight in 1903, when someone beat him to the finish line. That’s the year humans stepped beyond gliding and into the first controlled, sustained, and powered flight. There are reports that Gustave Whitehead beat the Wright Brothers, but he didn’t keep detailed notes or logs, and so the Wrights are usually credited with the breakthrough. They managed to solve the problem of how to roll, built steerable rudders, and built the first biplane with an internal combustion engine. They flew their first airplane out of North Carolina when Orville Wright went 120 feet, and his brother went 852 feet later that day. That plane now lives at the National Air and Space Museum in Washington, DC, and December 17th, 1903 represents the start of the age of flight. The Wrights spent two years testing gliders and managed to document their results. They studied in wind tunnels, tinkered with engines, and were methodical if not scientific in their approach. They didn’t manage a public demonstration until 1908, though, and so there was a lengthy battle over the patents they filed. Turns out it was a race, and a lot of people flew within months of one another. Decades of research culminated into what had to be: airplanes. Innovation happened quickly. Flight improved enough that planes could cross the English Channel by 1909.
There were advances after that, but patent wars over the invention dragged on, and so investors stayed away from the unproven technology.
Flight for the Masses
The superpowers of the world were at odds for the first half of the 1900s. An Italian pilot flew a reconnaissance mission in Libya in the Italo-Turkish war in 1911. It took only 9 days before they went from mere reconnaissance to dropping grenades on Turkish troops from the planes. The age of aerial warfare had begun. The Wrights had received an order for the first military plane back in 1908. Military powers took note, and by World War I there was an air arm in every military power. Intelligence wins wars. The innovation was ready for the assembly lines, so during and after the war the first airplane manufacturers were born. Dutch engineer Anthony Fokker was inspired by Wilbur Wright’s exhibition in 1908. He went on to start a company and design the Fokker M.5, which evolved into the Fokker E.I after World War I broke out in 1914. They mounted a machine gun and synchronized it to the propeller in 1915. Manfred von Richthofen, also known as the Red Baron, flew an Albatros before he upgraded to the famous Fokker triplane. Fokker made it all the way into the 1990s before they went bankrupt. Albatros was founded in 1909 by Enno Huth, who went on to found the German Air Force before the war. The Bristol Aeroplane Company was born in 1910 after Sir George White, who was involved in transportation already, met Wilbur Wright in France. Previous companies were built to serve hobbyists, similar to how many early PC companies came from inventors as well. This can be seen with people like Maurice Mallet, who helped design gas balloons and dirigibles. He licensed airplane designs to Bristol, who later brought in Frank Barnwell and other engineers that helped design the Scout. They based the Bristol Fighters used in World War I on those designs.
Another British manufacturer was Sopwith, started by Thomas Sopwith, who taught himself to fly and then started a company to make planes. They built over 16,000 by the end of the war. After the war they pivoted to make ABC motorcycles and were eventually absorbed into Hawker Aircraft in 1920, a lineage that later sold to Raytheon. The same paradigm played out elsewhere in the world, including the United States. Once those patent disputes were settled, plenty knew flight would help change the world. By 1917 the patent wars in the US had to end, as the country’s contributions to flight suffered. No investor wanted to touch the space, and so there was a lack of capital to expand. Wilbur Wright passed away in 1912 and Orville sold his rights to the patents, so the Assistant Secretary of the Navy, Franklin D. Roosevelt, stepped in and brought all the parties to the table to develop a cross-licensing organization. After almost 15 years, we could finally get innovation in flight back on track globally. In rapid succession, Loughead Aircraft, Lockheed, and Douglas Aircraft were founded. Then Jack Northrop left those and started his own aircraft company. Boeing was founded in 1916 as Pacific Aero Products and later became part of United Aircraft, which spun off United Airlines as a carrier in the 1930s, with Boeing continuing to make planes. United was only one of many commercial airlines created. Passenger air travel started soon after the first flights, with the first airline ferrying passengers in 1914. With plenty of airplanes assembled at all these companies, commercial travel was bound to explode into its own big business. Delta started as a cropdusting service in Macon, Georgia in 1925 and has grown into an empire. The world’s largest airline at the time of this writing is American Airlines, which started in 1926 when a number of smaller airlines banded together. Practically every country had at least one airline.
Pan American (Pan Am for short) in 1927, Ryan Air in 1926, Slow-Air in 1924, Finnair in 1923, Qantas in 1920, KLM in 1919, and the list goes on. Enough that the US passed the Air Commerce Act in 1926, which over time led to the department of Air Commerce, which evolved into the Federal Aviation Administration, or FAA, we know today. Aircraft were refined and made more functional. World War I brought with it the age of aerial combat. Plenty of surplus after the war, and then the growth of manufacturers, brought further innovation as they competed with one another, and commercial aircraft and industrial uses (like cropdusting) enabled more investment into R&D. In 1926, the first flying boat service was inaugurated from New York to Argentina. Another significant development in aviation came in the 1930s, when the jet engine was invented. The invention came from Frank Whittle, who registered a turbojet engine patent. A jet plane was also developed by Hans von Ohain and was called the Heinkel He 178 (Grant, 2017). That plane first flew in 1939, but the Whittle jet engine is the ancestor of those found in planes in World War II and beyond. And from there to the monster airliners, stealth fighters, or the X-15 becomes a much larger story. The aerospace industry continued to innovate both in the skies and into space. The history of flight entered another phase in the Cold War. The RAND Corporation developed the concept of Intercontinental Ballistic Missiles (or ICBMs), and the Soviet Union launched the first satellite into space in 1957. Then in 1969, Neil Armstrong and Buzz Aldrin made the first landing on the moon, and we continued to launch into space throughout the 1970s to 1990s, before opening up space travel to private industry. Those projects got bigger and bigger and bigger. But generations of enthusiasts and engineers were inspired by devices far smaller, and without pilots in the device.
3/25/2023 • 22 minutes, 57 seconds
SABRE and the Travel Global Distribution System
Computing has totally changed how people buy and experience travel. That process seemed to start with sites that made it easy to book travel, but as with most things we experience in our modern lives, it actually began far sooner and moved down-market as generations of computing led to more consumer options for desktops, the internet, and the convergence of these technologies. Systems like SABRE did the original work to re-think travel - to take logic and rules out of the heads of booking and travel agents and put them into a digital medium. In so doing, they paved the way for future generations of technology, and to this day Sabre retains a valuation of over $2 billion. SABRE is short for Semi-Automated Business Research Environment. It’s used to manage over a third of global travel, to the tune of over a quarter trillion US dollars a year. It’s used by travel agencies and travel services to reserve car rentals, flights, hotel rooms, and tours. Since Sabre was released, services like Amadeus and Travelport were created to give the world a Global Distribution System, or GDS. Passenger air travel began when airlines ferrying passengers cropped up in 1914, but the big companies began in the 1920s, with KLM in 1919, Finnair in 1923, Delta in 1925, American Airlines and Ryan Air in 1926, Pan American in 1927, and the list goes on. They grew quickly, and by 1926 the Air Commerce Act led to a new department in the government called Air Commerce, which evolved into the FAA, or Federal Aviation Administration, in the US. And each country, given the possible dangers these aircraft posed as they got bigger and loaded with more and more fuel, also had their own such departments. The aviation industry blossomed in the roaring 20s as people traveled and found romance and vacation. At the time, most airlines were somewhat regional, and people found travel agents to help them along their journey to book travel, lodgings, and often food.
The travel agent naturally took over air travel much as they’d handled sea travel before. But there were dangers in traveling in those years between the two World Wars. Nazis rising to power in Germany, Mussolini in Italy, communist cleansings in Russia and China. Yet a trip to the Great Pyramid of Giza could now take a week instead of months. Following World War II, there was a fracture in the world between Eastern and Western powers, or those who aligned with the former British empire and those who aligned with the former Russian empire, now known as the Soviet Union. Travel within the West exploded as those areas were usually safe and often happy to accept the US dollar. Commercial air travel boomed not just for the wealthy, but for all. People had their own phones now, and could look up a phone number in a phone book and call a travel agent. The travel agents then spent hours trying to build the right travel package. That meant time on the phone with hotels and time on the phone with airlines. Airlines like American had to hire larger and larger call centers of humans to help find flights. We didn’t just read about Paris; we wanted to go. Wars had connected the world, and now people wanted to visit the places they’d previously just seen in art books or read about in history books. But those call centers grew. A company like American Airlines couldn’t handle all of its ticketing needs, and the story goes that the CEO was sitting beside a salesman from IBM when they came up with the idea of a computerized reservation system. And so SABRE was born in the 1950s, when American Airlines agreed to develop a real-time computing platform. Here, we see people calling in and pressing buttons to run commands on computers. The tones weren’t that different than a punch card, really. The system worked well enough for American that they decided to sell access to other firms.
The computers used were based loosely on the IBM mainframes used in the SAGE missile air defense system. Here we see the commercial impacts of the AN/FSQ-7, which the US government hired IBM to build, as IBM added transistorized options to the IBM 704 mainframe in 1955. That gave IBM the interactive computing technology that evolved into the 7000 series mainframes. Now that IBM had the interactive technology, and a thorough study had been done to evaluate the costs and impacts of a new reservation system, American and IBM signed a contract to build the system in 1957. They went live to test reservation booking shortly thereafter. But it turns out there was a much bigger opportunity here. See, American and other airlines had paper processes to track how many people were on a flight and quickly find open seats for passengers, but it could take an hour or three to book tickets. This was fairly common before software ate the world. Everything from standing in line at the bank, booking dinner at a restaurant, reserving a rental car, booking hotel rooms, and the list goes on. There were a lot of manual processes in the world - people weren’t just going to punch holes in a card to program their own flight and wait for some drum storage to tell them if there was an available seat. That was the plan American initially had in 1952 with the Magnetronic Reservisor. That never worked out. American had grown to one of the largest airlines and knew the perils and costs of developing software and hardware like this. Their system cost $40 million in 1950s money to build with IBM. They also knew that as other airlines grew to accommodate more people flying around the world, the more flights there were, the longer that hour or three took. So they should of course sell the solution they built to other airlines. Thus, parlaying the SAGE name, famous as a Cold War shield against the nuclear winter, Sabre Corporation began.
It was fairly simple at first, with a pair of IBM 7090 mainframes that could take over 80,000 calls a day in 1960. Some travel agents weren’t fans of the new system, but those who embraced it found they could get more done in less time. Sabre sold reservation systems to airlines and soon expanded to become the largest data processor in the world. Far better than the Reservisor would have been, and now able to help bring the whole world into the age of jet airplane travel. That exploded to thousands of flights an hour in the 1960s and even turned over all booking to the computer. The system got busy, and over the years IBM upgraded the computers to the S/360. They also began to lease systems to travel agencies in the 1970s, after Max Hopper joined the company and began the plan to open up the platform as TWA had done with their PARS system. Then they went international and opened service bureaus in other cities (given that we once had to pay a toll charge to call a number). And by the 1980s, Sabre was how the travel agents booked flights. The 1980s brought easySABRE, so people could use their own computers to book flights, and by then - and through to the modern era - a little over a third of all reservations are made on Sabre. By the mid-1980s, United had their own system called Apollo, Delta had one called Datas, and other airlines had their own as well. But SABRE could be made airline neutral. IBM had been involved with many American competitors, developing Deltamatic for Delta, PANAMAC for Pan Am, and other systems. But SABRE could be hooked to the new online services for a whole new way to connect systems. One of these was CompuServe in 1980, then Prodigy, GEnie, and AOL as we turned the corner into the 1990s. Then they started a site called Travelocity in 1996, which was later sold to Expedia. In the meantime, they got serious competition, which eventually led to a slew of acquisitions to remain competitive.
The competition included Amadeus, Galileo International, and Worldspan, a provider in the Travelport GDS. The first of them originated from United Airlines, and by 1987 was joined by Aer Lingus, Air Portugal, Alitalia, British Airways, KLM, Olympic, Sabena, and Swissair to create Galileo, which was then merged with the Apollo reservation system. The technology was acquired through a company called Videcom International, which initially started developing reservation software in 1972, shortly after the Apollo and Datas services went online. They focused on travel agents and branched out into reservation systems of all sorts in the 1980s. As other systems arose, they provided aggregation by connecting to Amadeus, Galileo, and Worldspan. Amadeus was created in 1987 to be a neutral GDS after the issues with Sabre directing reservations to American Airlines. That came through a consortium of Air France, Iberia, Lufthansa, and SAS. They acquired the assets of the bankrupt System One and eventually added other travel options including hotels, car rentals, travel insurance, and other amenities. They went public in 1999, just before Sabre did, and then were also taken private just before Sabre was. Worldspan was created in 1990 as the result of merging or interconnecting the systems of Delta, Northwest Airlines, and TWA; it was then acquired by Travelport in 2007. By then, SABRE had their own programming languages. While the original Sabre systems were written in assembly, they wrote their own language called SabreTalk and later transitioned to standard REST endpoints. They also weren’t a part of American any longer. There were too many problems with manipulating how flights were displayed to benefit American Airlines, and they had to make a clean cut. Especially after Congress got involved in the 1980s and outlawed that type of bias in screen placement.
Now that they were a standalone company, Sabre went public, then was taken private by private equity firms in 2007, and relisted on NASDAQ in 2014. Meanwhile, travel aggregators had figured out they could hook into the GDS systems and sell discount airfare without a percentage going to travel agents. Now that the GDS systems weren’t a part of the airlines, they were able to put downward pressure on prices. Hotwire, which used Sabre and a couple of other systems, and TripAdvisor, which booked travel through Sabre and Amadeus, were created in 2000, and Microsoft launched Expedia in 1996, which had done well enough to get spun off into its own public company by 2000. Travelocity operated inside Sabre until sold, and so the airlines put together a site of their own that they called Orbitz, which in 2001 was the biggest e-commerce site to have ever launched. And out of the bursting of the dot com bubble came online travel bookings. Kayak came in 2004. Sabre later sold Travelocity to Expedia, which uses Sabre to book travel. That allowed Sabre to focus on providing the back end travel technology. They now do over $4 billion in revenue in their industry. American Express had handled travel for decades but also added flights and hotels to their site, integrating with Sabre and Amadeus as well. Here, we see a classic paradigm in play. First the airlines moved their travel bookings from paper filing systems to isolated computer systems - what we’d call mainframes today. The airlines then rethought the paradigm and aggregated other information into a single system, or a system intermixed with other data. In short, they enriched the data. Then we expose those as APIs to further remove human labor and put systems on assembly lines. Sites hook into those, and the GDS systems, as with many aggregators, get spun off into their own companies. The aggregated information then benefits consumers (in this case travelers) with more options and cheaper fares.
This helps counteract the centralization of the market where airlines acquire other airlines but in some way also cheapen the experience. Gone are the days when a travel agent guides us through our budgets and helps us build a killer itinerary. But in a way that just makes travel much more adventurous.
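The aggregation paradigm described above - several GDS-style sources each exposing fares, with an aggregator merging them and surfacing the cheapest option for the traveler - can be sketched in a few lines. Every name, route, and fare here is made up purely for illustration; real GDS APIs from Sabre, Amadeus, or Travelport are vastly richer:

```python
# Toy aggregator: each "GDS" is just a dict mapping routes to fares.
def cheapest_fare(route, sources):
    """Return (source_name, fare) for the lowest fare any source offers on a route."""
    offers = [(name, fares[route]) for name, fares in sources.items() if route in fares]
    return min(offers, key=lambda pair: pair[1])

# Hypothetical sources with hypothetical fares in dollars.
sources = {
    "gds_a": {("JFK", "LHR"): 540.0, ("JFK", "CDG"): 610.0},
    "gds_b": {("JFK", "LHR"): 515.0},
}
print(cheapest_fare(("JFK", "LHR"), sources))  # ('gds_b', 515.0)
```

The design point is the one the episode makes: once fares live behind queryable interfaces rather than in travel agents’ heads, comparison shopping becomes a one-line `min()`, and the margin the human intermediary used to capture goes to the consumer instead.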
3/16/2023 • 19 minutes, 16 seconds
The Story of Intel
We’ve talked about the history of microchips, transistors, and other chip makers. Today we’re going to talk about Intel in a little more detail. Intel is short for Integrated Electronics. They were founded in 1968 by Robert Noyce and Gordon Moore. Noyce was an Iowa kid who went off to MIT to get a PhD in physics in 1953. He went off to the Shockley Semiconductor Lab to join up with William Shockley, who’d co-developed the transistor as a means of bringing a solid-state alternative to vacuum tubes in computers and amplifiers. Shockley became erratic after he won the Nobel Prize, and 8 of the researchers left, now known as the “traitorous eight.” From them came over 60 companies, including Intel - but first they went on to create a new company called Fairchild Semiconductor, where Noyce invented the monolithic integrated circuit in 1959, or a single chip that contains multiple transistors. After 10 years at Fairchild, Noyce joined up with coworker and fellow traitor Gordon Moore. Moore had gotten his PhD in chemistry from Caltech and had made an observation while at Fairchild that the number of transistors, resistors, diodes, or capacitors in an integrated circuit was doubling every year, and so coined Moore’s Law: that it would continue to do so. They wanted to make semiconductor memory cheaper and more practical. They needed money to continue their research. Arthur Rock had helped them find a home at Fairchild when they left Shockley and helped them raise $2.5 million in backing in a couple of days. The first day of the company, Andy Grove joined them from Fairchild. He’d fled the Hungarian revolution in the 50s and gotten a PhD in chemical engineering at the University of California, Berkeley. Then came Leslie Vadász, another Hungarian emigrant. Funding and money coming in from sales allowed them to hire some of the best in the business. People like Ted Hoff, Federico Faggin, and Stan Mazor.
That first year they released 64-bit static random-access memory in the 3101 chip, doubling what was on the market, as well as the 3301 read-only memory chip and the 1101. Then came DRAM, or dynamic random-access memory, with the 1103 in 1970, which became the best-selling chip within the first couple of years. Armed with a lineup of chips and an explosion of companies that wanted to buy them, they went public within 2 years of being founded. 1971 saw Dov Frohman develop erasable programmable read-only memory, or EPROM, while working on a different problem. This meant they could reprogram chips using ultraviolet light and electricity. In 1971 they also created the Intel 4004 chip, work that started in 1969 when a calculator manufacturer out of Japan asked them to develop 12 different chips. Instead they made one that could do the tasks of all 12, outperforming the ENIAC from 1946, and so the era of the microprocessor was born. And instead of taking up a basement at a university lab, it took up an eighth of an inch by a sixth of an inch to hold a whopping 2,300 transistors. The chip didn’t contribute a ton to the bottom line of the company, but they’d built the first true microprocessor, which would eventually be what they were known for. Instead they were making DRAM chips. But then came the 8008 in 1972, ushering in the 8-bit CPU. The memory chips were being used by other companies developing their own processors, but Intel knew how to build processors too, and the Computer Terminal Corporation was looking to develop what was a trend for a hot minute: programmable terminals. And given the doubling of speeds, those gave way to microcomputers within just a few years. The Intel 8080 was a 2 MHz chip that became the basis of the Altair 8800, SOL-20, and IMSAI 8080. By then Motorola, Zilog, and MOS Technology were hot on their heels, releasing the 6800, Z80, and 6502 processors.
But Gary Kildall wrote CP/M, one of the first operating systems, initially for the 8080 prior to porting it to other chips. Sales had been good and Intel had been growing. By 1979 they saw the future was in chips and opened a new office in Haifa, Israel, where they designed the 8088, which clocked in at 4.77 MHz. IBM chose this chip to be used in the original IBM Personal Computer. IBM was going to use an 8-bit chip, but the team at Microsoft talked them into going with the 16-bit 8088, and thus created the foundation of what would become the Wintel or Intel architecture, or x86, which would dominate the personal computer market for the next 40 years. One reason IBM trusted Intel is that they had proven to be innovators. They had effectively invented the integrated circuit, then the microprocessor, then coined Moore’s Law, and by 1980 had built a 15,000-person company capable of shipping product in large quantities. They were intentional about culture, looking for openness, distributed decision making, and trading off bureaucracy for figuring out cool stuff. That IBM decision to use that Intel chip is one of the most impactful in the entire history of personal computers. Based on Microsoft DOS and then Windows being able to run on the architecture, nearly every laptop and desktop would run on that original 8088/86 architecture. Based on the standards, Intel and Microsoft would both market that their products ran not only on those IBM PCs but also on any PC using the same architecture, and so IBM’s hold on the computing world would slowly wither. On the back of all these chips, revenue shot past $1 billion for the first time in 1983. IBM bought 12 percent of the company in 1982 and thus gave them the Big Blue seal of approval, something important even today. And the hits kept on coming with the 286 to 486 chips coming along during the 1980s. Intel brought the 80286 to market and it was used in the IBM PC AT in 1984.
This new chip brought new ways to manage addresses: the first Intel chip that could do memory management, and the first where we saw protected mode, so we could get virtual memory and multi-tasking. All of this was made possible with over a hundred thousand transistors. At the time the original Mac used a Motorola 68000, but its sales were sluggish while they flourished at IBM, and slowly we saw the rise of the companies cloning the IBM architecture, like Compaq. Still using those Intel chips. Jerry Sanders had left Fairchild shortly after Noyce and Moore to found AMD, and ended up cloning the instructions in the 80286 after entering into a technology exchange agreement with Intel. This led to AMD making the chips at volume and selling them on the open market. AMD would go on to fast-follow Intel for decades. The 80386 would go on to simply be known as the Intel 386, with over 275,000 transistors. It was launched in 1985, but we didn’t see a lot of companies use them until the early 1990s. The 486 came in 1989. Now we were up to a million transistors as well as a math coprocessor. We were 50 times faster than the 4004 that had come out less than 20 years earlier. I don’t want to take anything away from the phenomenal run of research and development at Intel during this time, but the chips and cores and amazing developments seemed to be on autopilot. The 80s also saw them invest half a billion in reinvigorating their manufacturing plants. With quality manufacturing allowing for a new era of printing chips, the 90s were just as good to Intel. I like to think of this as the Pentium decade, with the first Pentium in 1993. 32-bit here we come. Revenues jumped 50 percent that year, closing in on $9 billion. Intel had been running an advertising campaign around Intel Inside. This represented a shift from the IBM PC to the Intel. The Pentium Pro came in 1995 and we’d crossed 5 million transistors in each chip. And the brand equity was rising fast.
More importantly, so was revenue. 1996 saw revenues pass $20 billion. The personal computer was showing up in homes and on desks across the world and most had Intel Inside - in fact we'd gone from Intel Inside to Pentium Inside. 1997 brought us the Pentium II with over 7 million transistors, the Xeon came in 1998 for servers, and 1999 brought the Pentium III. By 2000 Intel introduced its first gigahertz processor and announced the next generation after the Pentium: Itanium, meant to finally move the world to 64-bit processing. As increases in clock speed slowed, they were able to bring multi-core processors and massive parallelism out of the hallowed halls of research and to the desktop computer in 2005. 2006 saw Intel chips powering not just Windows machines but also the Mac, as Apple moved away from PowerPC. And 45-nanometer logic technology arrived in 2007, using hafnium-based high-k material for transistor gates - a shift from the silicon dioxide gates used since the 60s - which allowed them to pack hundreds of millions of transistors into a single chip. i3, i5, i7, and so on. The chips now have a couple hundred million transistors per core, and with 8 cores on a chip that potentially puts us over 1.7 or 1.8 billion transistors per chip. Microsoft, IBM, Apple, and so many others went through huge growth and sales jumps, then retreated while figuring out how to run a company of the size they had suddenly become. This led each to invest heavily in R&D to end what was effectively a lost decade - like when IBM built the S/360 or Apple developed the iMac and then the iPod. Intel's strategy had been research and development. Build amazing products and they sold. Bigger, faster, better. The focus had been on power. But mobile devices were starting to take the market by storm. And the ARM chip was more popular on those because, with a reduced set of instructions, it could use less power and be a bit more versatile. Intel coined Moore's Law. They know that if they don't find ways to pack more and more transistors into smaller and smaller spaces, then someone else will.
And while they haven't been huge in the RISC-based system-on-a-chip space, they do continue to release new products and look for the right product-market fit - just like they did when they went from DRAM and SRAM to producing the types of chips that made them into a powerhouse. And on the back of a steadily rising revenue stream that's now over $77 billion, they seem poised to weather any storm - not only on the back of R&D but also some of the best manufacturing in the industry. Chips today are so powerful and small that they contain the whole computer from the era of those Pentiums, just as that 4004 chip contained a whole ENIAC. This gives us a nearly limitless canvas to design software. Machine learning on a SoC expands the reach of what that software can process. Technology is moving so fast in part because of the amazing work done at places like Intel, AMD, and ARM. Maybe that positronic brain that Asimov promised us isn't as far off as it seems. But then, I thought that in the 90s as well, so I guess we'll see.
3/7/2023 • 16 minutes, 51 seconds
AI Hype Cycles And Winters On The Way To ChatGPT
Carlota Perez is a researcher who has studied hype cycles for much of her career. She's affiliated with University College London, the University of Sussex, and the Tallinn University of Technology in Estonia, and has worked with some influential organizations around technology and innovation. As a neo-Schumpeterian, she sees technology as a cornerstone of innovation. Her book Technological Revolutions and Financial Capital is a must-read for anyone who works in an industry that includes any of those four words, including revolutionaries. Connecticut-based Gartner Research was founded by Gideon Gartner in 1979. He emigrated to the United States from Tel Aviv at three years old in 1938 and graduated from MIT in the class of 1956, going on to get his Master's at the Sloan School of Management. He went on to work at the software company System Development Corporation (SDC), in the US military defense industry, and at IBM over the next 13 years before starting his first company. After that failed, he moved into analysis work and quickly became known as a top mind among technology industry analysts. He often bucked the trends to pick winners and made banks, funds, and investors lots of money. He was able to parlay that into founding the Gartner Group in 1979. Gartner hired senior people in different industry segments to aid in competitive intelligence, industry research, and of course, to help Wall Street. They wrote reports on industries, dove deeply into new technologies, and got to understand what we now call hype cycles in the ensuing decades. They now boast a few billion dollars in revenue per year and serve well over 10,000 customers in more than 100 countries. Gartner has developed a number of tools to make it easier to take in the types of analysis they create.
One is the Magic Quadrant: reports that identify leaders in categories of companies by vision (or completeness of vision, to be more specific) and ability to execute, which includes things like go-to-market activities, support, etc. They lump companies into a standard four-box as Leaders, Challengers, Visionaries, and Niche Players. There's certainly an observer effect, and those they put in the top right of their four-box often enjoy added growth, as companies want to be with the most visionary and best when picking a tool. Another of Gartner's graphical design patterns to display technology advances is what they call the "hype cycle". The hype cycle simplifies research from career academics like Perez into five phases.
* The first is the Technology Trigger, when a breakthrough is found and PoCs, or proofs-of-concept, begin to emerge in the world and get the press interested in the new technology. Sometimes the new technology isn't even usable, but shows promise.
* The second is the Peak of Inflated Expectations, when the press picks up the story and companies are born, capital is invested, and a large number of projects around the new technology fail.
* The third is the Trough of Disillusionment, where interest falls off after those failures. Some companies succeeded and can show real productivity, and they continue to get investment.
* The fourth is the Slope of Enlightenment, where the go-to-market activities of the surviving companies (or even a new generation) begin to have real productivity gains. Every company or IT department now runs a pilot and expectations are lower, but now achievable.
* The fifth is the Plateau of Productivity, when those pilots become deployments and purchase orders. The mainstream industries embrace the new technology and case studies prove the promised productivity increases. Provided there's enough market, companies now find success.
There are issues with the hype cycle. Not all technologies will follow the cycle.
The Gartner approach focuses on financials and productivity rather than true adoption. It involves a lot of guesswork around subjective, synthetic, and often unsystematic research. There's also the ever-present observer effect. However, more often than not, the hype is separated from the tech that can give organizations (and sometimes all of humanity) real productivity gains. Further, the term cycle denotes a series of events when it should in fact be cyclical, as out of the end of the fifth phase a new cycle is born - or even a set of cycles, if industries grow enough to diverge. ChatGPT is all over the news feeds these days, igniting yet another cycle in the cycles of AI hype that have been prevalent since the 1950s. The concept of computer intelligence dates back at least to 1942, when Isaac Asimov's "Runaround" introduced the Three Laws of Robotics, with Alan Turing formalizing ideas about machine intelligence soon after. By 1952 computers could play themselves in checkers, and by 1955 Arthur Samuel had written a heuristic learning program to play checkers, using techniques later formalized as "temporal-difference learning". Academics around the world worked on similar projects, and in 1956 John McCarthy introduced the term "artificial intelligence" when he gathered some of the top minds in the field together for the Dartmouth workshop. They tinkered and a generation of researchers began to join them. By 1966, Joseph Weizenbaum's "ELIZA" had debuted. ELIZA was a computer program that used early forms of natural language processing to run what he called a "DOCTOR" script that acted as a psychotherapist. ELIZA was one of a few technologies that triggered the media to pick up AI in the second stage of the hype cycle. Others came into the industry and expectations soared, now predictably followed by disillusionment.
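The flavor of that DOCTOR script - ranked pattern rules plus pronoun "reflection" - can be sketched in a few lines of Python. This is an illustrative toy, not Weizenbaum's original code; the rule phrasings and names (`RULES`, `reflect`, `respond`) are invented for the example:

```python
import re

# Pronoun swaps so "my job" is echoed back as "your job".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# Ranked rules: first matching pattern wins; the last is a catch-all.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"(.*)"), "Please, go on."),
]

def reflect(fragment: str) -> str:
    # Swap first and second person word by word.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    sentence = sentence.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please, go on."
```

Which is exactly why the funders' later disappointment made sense: there is no understanding here, just pattern substitution that happens to feel like a conversation.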
Weizenbaum wrote a book called Computer Power and Human Reason: From Judgment to Calculation in 1976 in response to the critiques, and some of the early successes were able to go to wider markets as the fourth phase of the hype cycle began. ELIZA was seen by people who worked on similar software, including some games, for Apple, Atari, and Commodore. Still, in the aftermath of ELIZA, the machine translation movement in AI had failed in the eyes of those who funded the attempts, because going further required more than some fancy case statements. Another movement called connectionism, built mostly on node-based artificial neural networks, is widely seen as the impetus for deep learning. David Hunter Hubel and Torsten Nils Wiesel studied receptive fields in the visual cortex - work that would later inspire convolutional neural networks - culminating in a 1968 paper called "Receptive fields and functional architecture of monkey striate cortex." That built on the foundational deep learning work of Frank Rosenblatt of Cornell University, whose 1962 book was called "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms", and on work done behind the Iron Curtain by Alexey Ivakhnenko on learning algorithms in 1967. After early successes, though, connectionism - which, when paired with machine learning, would be called deep learning once Rina Dechter coined the term in 1986 - went through a similar trough of disillusionment that kicked off in 1970. Funding for these projects shot up after the early successes and petered out after there wasn't much to show for them. Some had so much promise that former presidents can be seen in old photographs going through the models with the statisticians who were moving into computing. But organizations like DARPA would pull back funding, as seen with their speech recognition projects with Carnegie Mellon University in the early 1970s. These hype cycles weren't just seen in the United States.
The British applied mathematician James Lighthill wrote a report for the British Science Research Council, which was published in 1973. The paper was called "Artificial Intelligence: A General Survey" and analyzed the progress made relative to the amount of money spent on artificial intelligence programs. He found that none of the research had resulted in any "major impact" in the fields the academics had undertaken. Much of the work had been done at the University of Edinburgh, and based on his findings, funding for AI research around the UK was drastically cut. Turing, von Neumann, McCarthy, and others had, intentionally or not, set an expectation that became a check the academic research community just couldn't cash. For example, in the 1950s the New York Times claimed Rosenblatt's perceptron would let the US Navy build computers that could "walk, talk, see, write, reproduce itself, and be conscious of its existence" - a goal not likely to be achieved in the near future even seventy years later. Funding was cut in the US, the UK, and even in the USSR, the Union of Soviet Socialist Republics. Yet many persisted. Languages like Lisp had become common by the late 1970s, after engineers like Richard Greenblatt helped make McCarthy's ideas for computer languages a reality. The MIT AI Lab developed a Lisp Machine Project, and as AI work was picked up at other schools like Stanford, researchers began to look for ways to buy commercially built computers ideal for running Lisp. After the post-war spending, the idea that AI could become a more commercial endeavor was attractive to many. But after plenty of hype, the Lisp machine market never materialized. The next hype cycle began in 1983 when the US Department of Defense pumped a billion dollars into AI, but that spending was cancelled in 1987, just after the collapse of the Lisp machine market. Another AI winter was about to begin. Another trend that began in the 1950s but picked up steam in the 1980s was expert systems.
These attempt to emulate the ways that humans make decisions. Some of this work came out of the Stanford Heuristic Programming Project, pioneered by Edward Feigenbaum. Some commercial companies took up the mantle, and after years of running into barriers with CPUs, by the 1980s processors had gotten fast enough. There were inflated expectations after great papers like Richard Karp's "Reducibility Among Combinatorial Problems" out of UC Berkeley in 1972. Countries like Japan dumped hundreds of millions of dollars (or yen) into projects like the "Fifth Generation Computer Systems" initiative in 1982, a 10-year project to build massively parallel computing systems. IBM spent around the same amount on their own projects. However, while these types of projects helped improve computing, they didn't live up to the expectations, and by the early 1990s funding was cut following commercial failures. By the mid-2000s, some of the researchers in AI began to use new terms, after generations of artificial intelligence projects led to subsequent AI winters. Yet research continued, with varying degrees of funding. Organizations like DARPA began to use challenges rather than funding large projects in some cases. Over time, successes were found yet again. Google Translate, Google Image Search, IBM's Watson, AWS options for AI/ML, home voice assistants, and various machine learning projects in the open source world led to the start of yet another AI spring in the early 2010s. New chips have built-in machine learning cores, and programming languages have frameworks and new technologies like Jupyter notebooks to help organize and train data sets. By 2006, academic works and open source projects had hit a turning point, this time quietly. The Association for Computational Linguistics was founded in 1962, initially as the Association for Machine Translation and Computational Linguistics (AMTCL).
As with the ACM, they have a number of special interest groups that include natural language learning, machine translation, typology, natural language generation, and the list goes on. The 2006 proceedings of the Workshop on Statistical Machine Translation began a series of dozens of workshops that drew hundreds of papers and presenters. The academic work was then able to be consumed by all, including contributions to English-to-German and English-to-French translation tasks from 2014. Deep learning models spread and became more accessible - democratic, if you will. RNNs, CNNs, DNNs, GANs. Training data sets was still one of the most human-intensive and slow aspects of machine learning. GANs, or Generative Adversarial Networks, were one of those machine learning frameworks, initially designed by Ian Goodfellow and others in 2014. GANs use zero-sum game techniques from game theory to generate new data sets - a generative model. This allowed for more unsupervised training of data. Now it was possible to get further, faster with AI. This brings us into the current hype cycle. ChatGPT was launched in November of 2022 by OpenAI. OpenAI was founded as a non-profit in 2015 by Sam Altman (cofounder of location-based social network app Loopt and then-president of Y Combinator) and a cast of veritable all-stars in the startup world that included:
* Reid Hoffman, former PayPal COO, LinkedIn founder, and venture capitalist.
* Peter Thiel, cofounder of PayPal and Palantir, as well as one of the top investors in Silicon Valley.
* Jessica Livingston, founding partner at Y Combinator.
* Greg Brockman, former CTO of Stripe, who had studied at Harvard and MIT.
OpenAI spent the next few years as a non-profit and worked on GPT, or Generative Pre-trained Transformer, autoregression models. GPT uses deep learning models to process human text and produce text that's more human than previous models.
Not only is it capable of natural language processing, but the generative pre-training of models has allowed it to take in a lot of unlabeled text so people don't have to hand-label weights, automating much of the fine-tuning of results. OpenAI dumped millions into public betas by 2016 and was ready to build products to take to market by 2019. That's when they switched from a non-profit to a for-profit. Microsoft pumped $1 billion into the company, and they released DALL-E to produce generative images, which helped lead to a new generation of applications that could produce artwork on the fly. Then they released ChatGPT towards the end of 2022, which led to more media coverage and prognostication of world-changing technological breakthroughs than most other hype cycles for any industry in recent memory. This, with GPT-4 to be released later in 2023. ChatGPT is most interesting through the lens of the hype cycle. There have been plenty of peaks and plateaus and valleys in artificial intelligence over the last 7+ decades. Most have been hyped up in the hallowed halls of academia and defense research. ChatGPT has hit mainstream media. The AI winter following each cycle seems to depend on the reach of the audience and the depth of the expectations. Science fiction continues to inflate expectations. Early prototypes that make it seem as though science fiction will be in our hands in a matter of weeks lead the media to conjecture. The reckoning could be substantial. Meanwhile, projects like TinyML - with smaller potential impacts for each use but wider use cases - could become the real benefit to humanity beyond research, when it comes to everyday productivity gains. The moral of this story is as old as time. Control expectations. Undersell and overdeliver. That doesn't lead to massive valuations pumped up by hype cycles. Many CEOs and CFOs know that a jump in profits doesn't always mean the increase will continue. Some intentionally slow expectations in their quarterly reports and calls with analysts.
Those are the smart ones.
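To make the "generative pre-training" idea from this episode concrete: at its core, GPT's training objective is self-supervised next-token prediction over unlabeled text. A toy bigram counter - a deliberately simplified stand-in, nothing like a transformer internally, with function names invented for the example - shows how a model can learn from raw text with no hand labeling at all:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """'Pre-train' on unlabeled text by counting which word follows which."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word: str) -> str:
    """Greedy decoding: the most frequent continuation seen in training."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else ""

# The raw text is its own supervision - nothing here was hand-labeled.
model = train_bigrams("the cat sat on the mat and the cat slept")
```

Scaling this idea up - subword tokens instead of words, a transformer instead of a count table, billions of documents instead of one sentence - is, loosely, the pre-training step; the human effort that remains shifts to fine-tuning and alignment.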
2/22/2023 • 23 minutes, 37 seconds
Hackers and Chinese Food: Origins of a Love Affair
Research into the history of computers sometimes leads down some interesting alleys - or wormholes, even. My family would always go out for Chinese food, or pick it up, on New Year's Day. None of the Chinese restaurants in the area closed for the holiday, so it just made sense. The Christmas leftovers were gone by then and no one really wanted to cook. My dad mentioned there were no Chinese restaurants in our area in the 1970s - so it was a relatively new entrant to the cuisine of my North Georgia town. Whether it's the Tech Model Railroad Club or hobbyists from Cambridge, stories abound of young engineers debating the merits of this programming technique or chipset or that. So much so that while reading Steven Levy's Hackers or Tom Lean's Electronic Dreams, I couldn't help but hop on DoorDash and order up some yummy fried rice. Then I started to wonder: why this obsession? For one, many of these hackers didn't have a ton of money. Chinese food was quick and cheap. The restaurants were often family-owned and small. There were higher-end restaurants, but concepts like P.F. Chang's hadn't sprung up yet. That wouldn't come until 1993. Another reason it was cheap is that many of the proprietors of the restaurants were recent immigrants. Some were from Hunan, others from Taipei or Sichuan, Shanghai, or Peking (the Romanized name for Beijing). Chinese immigrants began to flow into the United States during the California Gold Rush of the late 1840s and early 1850s. The Qing Empire had been at its height at the end of the 1700s, when China ruled over a third of the humans in the world. Not only that - it was one of the top economies in the world. But rapid growth in population meant less farmland for everyone - fewer jobs to go around. Poverty spread, just as colonial powers began to pick away at parts of the empire. Britain had banned the slave trade in 1807 and Chinese laborers had been used to replace the slaves.
The use of opium spread throughout the colonies and, with the laborers, back into China. The Chinese tried to ban the opium trade and seized opium in Canton. The British had better ships and better guns, and when the First Opium War broke out, China was forced to give up Hong Kong to the British in 1842, which began what some historians refer to as a century of humiliation, as China gave up land until it was able to modernize. Hong Kong became a British colony under Queen Victoria and the Victorian obsession with China grew. Art, silks (as with the Romans), vases, and anything else the British could get their hands on flowed through Hong Kong. Then came the Taiping Rebellion, which lasted from 1851 to 1864. A self-proclaimed Christian prophet declared a theocracy and China was forced to wage an internal war, with around 20 million people dying and scores more displaced. The scent of an empire in decay was in the air. Set against a backdrop of more rebellions, the Chinese army was weakened to the point that it lost the First Sino-Japanese War, which began in 1894, inviting still more intervention from colonial powers. By 1900, the anti-colonial and anti-Christian Boxer Uprising saw missionaries slaughtered and foreigners expelled. The great powers of the day sent ships and troops to retrieve their peoples and soon declared war on the empire and seized Beijing. This was all expensive, and led to reparations, a prohibition on importing arms, the razing of forts, and more foreign powers occupying areas of China. The United States put over $10 million of its take from what it called the Boxer Indemnity toward supporting Chinese students who came to the United States.
The Qing court had lost control; the Wuchang Uprising began in 1911, and by 1912 two thousand years of Chinese dynasties were over, with the Republic of China founded in 1912 and internal conflicts for power continuing until Mao Zedong and his followers finally seized power, established the People's Republic of China as a communist nation, and purged the country of detractors in campaigns like the Great Leap Forward, which resulted in an estimated 45 million dead. China itself was diplomatically disconnected from the United States at the time, as the US had backed the Republic of China government, now in exile in its capital city of Taipei, Taiwan. The food, though. Chinese food began to come into the United States during the Gold Rush. Cantonese merchants flowed into the sparkling bay of San Francisco, and emigrants could find jobs mining, laying railroad tracks, and in agriculture. Hard work means you get real hungry, and they cooked food like they had at home. China had a better restaurant and open-market cooking industry than the US at the time (and arguably still does). Some of the Chinese who settled in San Francisco started restaurants - many better than those run by Americans. The first known restaurant owned by a Chinese proprietor was the Canton Restaurant, in 1849. As San Francisco grew, so grew the Chinese food industry. Every group of immigrants faces xenophobia or racism. The use of Chinese laborers had led to laws in England that attempted to limit their use. In some cases they were subjugated into labor. The Chinese immigrants came for the California Gold Rush and many stayed. More restaurants were opened, and some catered to white people more than to the Chinese. The Transcontinental Railroad was completed in 1869 and tourists began to visit San Francisco from the east. Chinatowns began to spring up in other major cities across the United States. Restaurants, laundries, and even Eastern pharmacies.
New people bring new ways, and economies go up and down. Prejudice reared its ugly head. There was an economic recession in the 1870s. There were fears that the Chinese were taking jobs, driving wages down, and causing crime. Anti-Chinese sentiment became law in the Chinese Exclusion Act of 1882, which halted immigration into the US. It would not be repealed until 1943. Conservative approaches to immigration did nothing to limit the growing appeal of Chinese food in the United States. Merchants, like those who owned Chinese restaurants, could get special visas. They could bring relatives and workers. Early Chinese restaurants had been called "chow chow houses," and by the early 1900s there were new Chop Suey restaurants in big cities that were affordable. Chop Suey basically means "odds and ends," and most of the dishes were heavily westernized but still interesting and delicious. The food was fried in ways it hadn't been in China, and sweeter. Ideas from other Asian nations also began to come in, like fortune cookies, initially from Japan. Americans began to return home from World War II in the late 1940s. Many had experienced new culinary traditions in the lands they visited. The cuisine had initially been Cantonese-inspired, but more people flowed in from other parts of China, as well as Taiwan, and they brought food inspired by their native lands. Areas like New York and San Francisco got higher-end restaurants. Once the Chinese Exclusion Act was repealed, plenty of immigrants fled wars and purges in China. Meanwhile, Americans embraced access to different types of foods - like Italian, Chinese, and fast food. Food became a part of the national identity. Further, new ways to preserve food became possible as people got freezers, and canneries helped spread foods - like pasta sauce. This was the era of the spread of Spam and other types of early processed foods. The military helped spread the practice - as did Jeno Paulucci, who founded the Chun King Corporation in 1947.
The Great Depression proved there needed to be new ways to distribute foods, and some capitalized on that. With 4,000+ Chinese restaurants in the US in the 1940s, there were plenty of companies ready to buy those goods rather than make them fresh. Chop Suey itself was possibly created by those early Chinese migrants. A new influx of immigrants would have new opportunities to diversify the American palate. The 1960s saw an increase in legislation to protect human rights. Amidst the civil rights movement, the Hart-Celler Act of 1965 stopped the long-standing practice of effectively controlling immigration by color. The post-war years saw shifting borders and wars throughout the world - especially in Eastern Europe and Asia. Marshall Plan-style American aid helped rebuild the parts of Asia that weren't communist and opened the ability for more diverse peoples to move to the US. Many that we've covered went into computing and helped develop a number of aspects of computing. They didn't just come from China - they came from Russia, Poland, India, Japan, Korea, Vietnam, Thailand, and beyond. Their food came with them. This is the world the hackers that Steven Levy described lived in. The first Chinese restaurant in London opened in 1907, and more followed as people from Hong Kong moved to the UK, especially after World War II. The number of Chinese restaurants in the US grew to tens of thousands in the decades after Richard Nixon visited Beijing in 1972 to reopen relations with China. But the impact at the time was substantial, even on technologists. It wasn't just those hackers from MIT who loved their Chinese food, but those in Cambridge as well in the 1980s, who partook in a more Americanized Chinese cuisine, like "chow mein" - which loosely translates to "fried noodles" and emerged in the US in the early 1900s. Not all dishes have such simple origins to track down. Egg rolls emerged in the 1930s, a twist on the more traditional Chinese spring roll.
Ding Baozhen, a governor of the Sichuan province in the Qing Dynasty, popularized a spicy marinated chicken dish in the mid-1800s that spread quickly. He held the title of Palace Guardian, or Gong Bao - Kung Pao, as the dish is still known. Zuo Zongtang, better known as General Tso, was a Qing Dynasty statesman and military commander who helped put down the Taiping Rebellion in the latter half of the 1800s. Chef Peng Chang-kuei escaped communist China to Taiwan, where he developed General Tso's chicken and named it after the war hero. It came to New York in the 1970s. Sweet and sour pork also got its start in the Qing era, in 18th-century Cantonese cuisine, and spread to the US with the Gold Rush. Some dishes are far older. Steamed dumplings were popular from Afghanistan to Japan and go back to the Han Dynasty - possibly invented by the Chinese doctor Zhang Zhongjing in the centuries before or after the turn of the millennium. Peking duck is far older still, getting its start in the 1300s under the Yuan and early Ming Dynasties - near Nanjing, not far from Shanghai. Otto Reichardt brought the ducks to San Francisco to be served in restaurants in 1901. Chinese diplomats helped popularize the dish in the 1940s as some of their staff stayed in the US, and the dish exploded in popularity in the 1970s - especially after Nixon's trip to China, which included a televised banquet at the Great Hall of the People where he and Henry Kissinger ate the dish. There are countless stories of Chinese-born immigrants bringing their food to the world. Some are emblematic of larger population shifts globally. Cecilia Chiang grew up in Shanghai until Japan invaded, when she and her sister fled to Chengdu, only to flee the Chinese Communists and emigrate to the US in 1959. She opened The Mandarin in San Francisco in 1960 and a second location in 1967. It was an upscale restaurant and introduced a number of new dishes to the US from China.
She went on to serve everyone from John Lennon to Julia Child - and her son Philip took over the restaurant in 1989 before cofounding a more mainstream chain, P.F. Chang's, in 1993. The American dream, as it had come to be known. Plenty of other immigrants from countries around the world were met with open arms: chemists, biologists, inventors, spies, mathematicians, doctors, physicists, and yes, computer scientists. And of course, chefs. Diversity of thought, diversity of ideas, and diversity-driven innovation can only come from diverse peoples. The hackers innovated over their Americanized versions of Chinese food - many making use of technology developed by immigrants from China, their children, or those who came from other nations. Just as those from nearly every industry did.
12/30/2022 • 19 minutes, 37 seconds
The Silk Roads: Then And Now...
The Silk Road - or roads, more appropriately - has been in use for thousands of years. Horses, jade, gold, and of course silk flowed across the trade routes, as did spices - and knowledge. The term Silk Road was coined by a German geographer named Ferdinand von Richthofen in 1877 to describe a network of routes that was somewhat formalized in the second century BCE, and that some theorize dates back 3,000 years, given that silk has been found on Egyptian mummies from that time - or even further. The use of silk itself in China may date back 8,500 years. Chinese silk has been found in Scythian graves and ancient Germanic graves, and along the mountain ranges and waterways around modern India, gold and silk flowed between east and west. These routes gave way to empires along the Carpathian Mountains and the Kansu Corridor. There were Assyrian outposts in modern Iran, and the Sogdians built cities around modern Samarkand in Uzbekistan, an area that has been inhabited since the 4th millennium BCE. The Sogdians developed trading networks that spanned over 1,500 miles - into ancient China. The road expanded with the Persian Royal Road from the 5th century BCE across Turkey, and with the conquests of Alexander the Great in the 300s BCE, the Macedonian Empire pushed into Central Asia and modern Uzbekistan. The satrap Diodotus I claimed independence for one of those areas between the Hindu Kush, the Pamirs, and the Tengri Tagh mountains, which became known by the Hellenized name Bactria and is called the Greco-Bactrian and later the Indo-Greek Kingdom by historians. Their culture also dates back thousands of years further. The Bactrians became powerful enough to push into the Indus Valley, west along the Caspian Sea, and north to the Syr Darya river - known as the Jaxartes at the time - and to the Aral Sea. They also pushed south into modern Pakistan and Afghanistan, and east to modern Kyrgyzstan. To cross the Silk Road was to cross through Bactria, and they were considered a Greek empire in the east.
The Han Chinese called them Daxia in the third century BCE. They grew so wealthy from the trade that they became the target of conquest by neighboring peoples once the thirst for silk in the Roman Empire could not be quenched. The Romans consumed so much silk that silver reserves wore thin, and they regulated how silk could be used - something some of the Muslim empires would do over the next generations. Meanwhile, the Chinese hadn't known where their silk was destined, but had been astute enough to limit who knew how silk was produced. The Chinese general Pan Chao attempted to make contact with the Romans in the first century AD, only to be thwarted by the Parthians, who acted as the middlemen on many a trade route. It wasn't until the Romans pushed east enough to control the Persian Gulf that an envoy sent by Marcus Aurelius made direct contact with China in 166 AD, and from there word spread throughout the kingdom. Justinian even sent monks to bring home silkworm eggs, but they were never able to reproduce silk, in part because they didn't have mulberry trees. Yet the west had perpetrated industrial espionage on the east, a practice that would be repeated in 1712 when a Jesuit priest found out how the Chinese created porcelain. The Silk Road was a place where great fortunes could be found or lost. The Dread Pirate Roberts was a character from a movie called The Princess Bride, who had left home to make his fortune so he could spend his life with his love, Buttercup. The Silk Road had made many a fortune, so Ross Ulbricht used that name on a site he created called the Silk Road, along with the handles Frosty and Altoid. He'd gotten his Bachelor's at the University of Texas and his Master's at Penn State University before he got the idea to start a website he called the Silk Road in 2011. Most people connected to the site via Tor and paid for items in bitcoins. After he graduated from Penn State, he'd started a couple of companies that didn't do that well. 
Given the success of Amazon, he and a friend started a site to sell used books, but Ulbricht realized it was more profitable to be the middleman, as the Parthians had been thousands of years earlier. The new site would be Underground Brokers, later changed to The Silk Road. Cryptocurrencies allowed for anonymous transactions. He got some help from others, including two who went by the pseudonyms Smedley (later suspected to be Mike Wattier) and Variety Jones (later suspected to be Thomas Clark). They started to facilitate transactions in 2011. Business was good almost from the beginning. Then Gawker published an article about the site, and more and more attention was paid to what was sold through this new darknet portal. The United States Department of Justice and other law enforcement agencies got involved. When bitcoins traded at less than $80 each, the United States Drug Enforcement Administration (DEA) seized 11 bitcoins, but couldn't take the site down for good. It was actually an IRS investigator named Gary Alford who broke the case, when he found the link between the Dread Pirate Roberts and Altoid and then a post that included Ulbricht's name and phone number. Ulbricht was picked up in San Francisco, and 26,000 bitcoins were seized, along with another 144,000 from Ulbricht's personal wallets. Two federal agents were arrested when it was found they had traded information about the investigation to Ulbricht. Ulbricht was also accused of murder for hire, but those charges never led to much. Ulbricht now serves a life sentence. The Silk Road of the darknet didn't sell silk. 70% of the 10,000 things sold were drugs. There were also fake identities, child pornography, and, through a second site, firearms. There were scammers. Tens of millions of dollars flowed over this new Silk Road. But the secrets weren't guarded well enough, and a Silk Road 2 was created in 2013, which only lasted a year. Others come and go. It's kinda like playing whack-a-mole. 
The world is a big place and the reach of law enforcement agencies is limited - thus the harsh sentence for Ulbricht.
10/28/2022 • 10 minutes, 7 seconds
Simulmatics: Simulating Advertising, Data, Democracy, and War in the 1960s
Dassler shoes was started by Adolf Dassler in 1924 in Germany, after he came home from World War I. His brother Rudolf joined him. They made athletic shoes and developed spikes to go on the bottoms of the shoes. By 1936, they had convinced Jesse Owens to wear their shoes on the way to his gold medals. Some of the American troops who liked the shoes during World War II helped spread the word. The brothers had a falling out soon after the war was over. Adolf founded Adidas while Rudolf created a rival shoe company called Puma. This was just in time for the advertising industry to convince people that if they bought athletic shoes they would instantly be, er, athletic. The two companies became part of an ad-driven identity that persists to this day - one that most who buy the advertised products hardly understand themselves. A national identity involves concentric circles of understanding. The larger a nation, the more concentric circles and the harder it is to nail down exactly who has what identity. Part of this is that people spend less time thinking about who they are and more time being told who they should want to be like. Woven into the message of who a person should be is a bunch of products a person has to buy to become the ideal. That's called advertising. James White founded the first modern advertising agency, 'R. F. White & Son', in Warwick Square, London in 1800. The industry evolved over the next hundred or so years as more plentiful supplies led to competition and so more of a need to advertise goods. Increasingly popular newspapers, from better printing presses, turned out to be a great place to advertise. The growth of industrialism meant there were plenty of goods and so competition between those who manufactured or trafficked in those goods. The more efficient the machines of industry became, the more the advertising industry helped sell what the world might not yet know it needed. 
Many of those agencies settled onto Madison Avenue in New York as balances of global power shifted, and so by the end of World War II, Madison Avenue had become a synonym for advertising. Many now-iconic brands were born in this era. Manufacturers and distributors weren't the only ones to use advertising. People put out ads to find love in the personals, and by the 1950s advertising even began to find its way into politics. Iconic politicians could be created. Dwight D. Eisenhower served as the United States president from 1953 to 1961. He had overseen the liberation of North Africa in World War II before he took command to plan the invasion of Normandy on D-Day. He was almost universally held as a war hero in the United States. He had not held public office, but the ad men of Madison Avenue were able to craft messages that put him into the White House. Messages like "I like Ike." These were the early days of television and the early days of computers. A UNIVAC was able to predict that Eisenhower would defeat Adlai Stevenson in a landslide election in 1952. The country was not "Madly for Adlai," as his slogan went. ENIAC had first been used in 1945. MIT's Whirlwind was created in 1951, and the age of interactive computing was upon us. Not only could a computer predict who might win an election, but new options in data processing allowed for more granular ways to analyze data. A young senator named John F. Kennedy was heralded as a "new candidate for the 1960s." Just a few years earlier, Stevenson had lambasted Ike for using advertising, but this new generation was willing to let computers help build a platform - just as the advertisers were starting to use computers to help them figure out the best way to market a product. It turns out that words mattered. At the beginning of that 1960 election, many observed they couldn't tell much difference between the two candidates: Richard Nixon and John Kennedy. 
Kennedy's Democrats were still largely fractured between those who believed in philosophies dating back to the New Deal and segregationists. Ike presided over the early days of the post-World War II new world order. This new generation, like new generations before and since, was different. They seemed to embrace the new digital era. Someone like JFK wasn't punching cards and feeding them into a computer, writing algorithms, or out surveying people to collect that data. That was done by a company founded in 1959 called Simulmatics. Jill Lepore called them the What If men in her book If/Then - a fascinating read that goes further into the politics of the day. The founder of the company was a Madison Avenue ad man named Ed Greenfield. He surrounded himself with a cast of characters that included people from Johns Hopkins University, MIT, Yale, and IBM. Ithiel de Sola Pool had studied Nazi and Soviet propaganda during World War II. He picked up on work from the Hungarian Frigyes Karinthy, and with students ran Monte Carlo simulations on people's acquaintances to formulate what would later become the Small World Problem, or the Six Degrees of Separation - a later inspiration for the social network of the same name and, even later, for Facebook. The social sciences had become digital. Political science could then be used to get at the very issues that could separate Kennedy from Nixon. The People Machine, as one called it, was a computer simulation - thus the name of the company. It would analyze voting behaviors. The previous Democratic candidate, Stevenson, had given long-winded, complex speeches. They analyzed the electorate and found that "I Like Ike" resonated with more people. It had, after all, been developed by the same ad man who came up with "Melts in your mouth, not in your hands" for M&Ms. They called the project Project Microscope. They recruited some of the best liberal minds in political science and computer science. 
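The flavor of Pool's acquaintance experiments can be sketched in a few lines of modern code. This is a toy Monte Carlo simulation, not Pool's actual model: we invent a random acquaintance graph (the sizes, the `k` acquaintances per person, and all function names here are illustrative assumptions) and estimate the average number of hops between random pairs of people.

```python
import random
from collections import deque

def random_acquaintance_graph(n, k, seed=0):
    """Toy model: each of n people is linked to at least k random others."""
    rng = random.Random(seed)
    graph = {i: set() for i in range(n)}
    for person in range(n):
        while len(graph[person]) < k:
            other = rng.randrange(n)
            if other != person:
                graph[person].add(other)
                graph[other].add(person)  # acquaintance is mutual
    return graph

def degrees_of_separation(graph, start, goal):
    """Breadth-first search: how many hops from one person to another."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # unreachable

# Monte Carlo estimate: average separation over random pairs of people
rng = random.Random(1)
g = random_acquaintance_graph(n=2000, k=10)
samples = [degrees_of_separation(g, rng.randrange(2000), rng.randrange(2000))
           for _ in range(200)]
avg = sum(s for s in samples if s is not None) / len(samples)
```

Even with only a handful of acquaintances per person, the average separation stays small - the counterintuitive result that made the "small world" idea so striking.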
They split the electorate into 480 groups. A big focus was how to win the African-American vote. It turns out Gallup polls hadn't studied that vote, because Southern newspapers had blocked doing so. Civil rights, and race relations in general, was not unlike a few other issues. There was anti-Catholic, anti-Jewish, and anti-a-lot-of-things sentiment. The Republicans were the party of Lincoln and had gotten a lot of votes over the previous hundred years for that. But factions within the party had shifted. Loyalties were shifting. Kennedy was a Catholic, but many had cautioned that he should downplay the issue. The computer predicted that civil rights and anti-Catholic bigotry would help him, and that became Kennedy's platform. He stood for what was right - but were they his positions, or just what the nerds thought? He gained votes at the last minute. It turns out the other disenfranchised groups saw the bigotry against one group as akin to bigotry against their own, just like the computers thought they would. Kennedy became an anti-segregationist, as that would help win the Black vote in some large population centers. It was the most aggressive, or liberal, civil-rights plank the Democrats had ever taken up. Civil rights are human rights. Catholic rights are as well. Kennedy offered the role of Vice President to Lyndon B. Johnson, the Senate Majority Leader, and was nominated as the Democratic candidate. Project Microscope from Simulmatics was hired in part to shore up Jewish and African-American votes. They said Kennedy should turn the fact that he was a Catholic into a strength: use it to give up a few votes here and there in the South but pick up other votes elsewhere. He also took the Simulmatics information as it came out of the IBM 704 mainframe to shore up his stance on other issues. That confidence helped him out-perform Nixon in the televised debates. They used teletypes and even had the kids' rooms converted into temporary data rooms. CBS predicted Nixon would win. 
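The core mechanics of that 480-group analysis are easy to sketch. The sketch below is a made-up miniature, not Simulmatics' model: the segment names, shares, baseline support numbers, and predicted shifts are all invented for illustration. The idea it shows is real, though - weight each voter segment by its size, apply the predicted reaction to a policy stance, and compare expected vote shares.

```python
# Hypothetical segment data: (share of electorate, baseline support for the
# candidate, predicted shift if he embraces a strong civil-rights plank).
# All numbers invented for illustration.
segments = {
    "northern_black_voters": (0.07, 0.55, +0.20),
    "southern_white_voters": (0.15, 0.60, -0.10),
    "northern_catholics":    (0.20, 0.50, +0.05),
    "everyone_else":         (0.58, 0.48, +0.01),
}

def expected_vote_share(take_plank):
    """Size-weighted average of predicted support across all segments."""
    total = 0.0
    for share, baseline, shift in segments.values():
        support = baseline + (shift if take_plank else 0.0)
        total += share * support
    return total

without_plank = expected_vote_share(False)
with_plank = expected_vote_share(True)
# In this toy data, the losses in one region are outweighed by gains elsewhere.
```

That trade-off - give up a few votes here, pick up more there - is exactly the kind of arithmetic the IBM 704 let the campaign run across hundreds of groups at once.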
Less than an hour later they predicted Kennedy would win. Kennedy won the popular vote by .1 percent of the country, even after two recounts. The Black vote had turned out big for Kennedy. News leaked about the work Simulmatics had done for Kennedy. Some knew that IBM had helped Hitler track Jews, as has been written about in the book IBM and the Holocaust by Edwin Black. Others still had issues with advertising in campaigns and couldn't fathom computers. Despite Stalin's disgust for computers, some compared the use of computers to Stalinist propaganda. Yet it worked - even if, in retrospect, the findings were all things we could now take for granted. They weren't yet. The Kennedy campaign at first denied the "use of an electronic brain," and yet their reports live on in the Kennedy Library. A movement against the use of the computer seemed to die after Kennedy was assassinated. Works of fiction persisted, like The 480 from Eugene Burdick, which got its title from the number of groups Simulmatics used. The company went on to experiment with every potential market their computer simulation could be used in. The most obvious was the advertising industry. But many of those companies went on to buy their own computers. They already had what many now know is the most important aspect of any data analytics project: the data. Sometimes they had decades of purchasing data - and could start over on more modern computers. Simulmatics worked with the Times to analyze election results in 1962, to try and catch newspapers up with television. The project was a failure, and newspapers leaned into more commentary and longer-term analysis to remain a relevant supplier of news in a world of real-time television. They applied their brand of statistics to help simulate the economy of Venezuela in a project called Project Camelot, which LBJ later shot down. Their most profitable venture became working with the Defense Department to do research in Vietnam. 
They collected data, analyzed data, punched data into cards, and fed it into computers. Pool was unabashedly pro-US, and it's arguable that they saw what they wanted to see. So did the war planners in the Pentagon, who followed Robert McNamara. McNamara had been one of the Whiz Kids who turned around the Ford Motor Company with a new brand of data-driven management to analyze trends in the car industry, shore up supply chains, and out-innovate the competition. He became the first president of the company who wasn't a Ford. His family had moved to the US from Ireland to flee the Great Irish Famine. Not many generations later, he got an MBA from Harvard before he became a captain in the United States Army Air Forces during World War II, primarily as an analyst. Henry Ford II hired his whole group to help with the company. As many in politics and the military learn, companies and nations are very different. They did well at first, reducing the emphasis on big nuclear first-strike capabilities and developing other military capabilities. One of those was how to deal with guerrilla warfare and counterinsurgencies. That became critical in Vietnam, a war between the communist North Vietnamese and the South Vietnamese. The North was backed by North Korea, China, and the Soviet Union; the South by the United States, South Korea, and Australia. Others got involved, but those were the main parties. We can think of McNamara's use of computers as providing just-in-time provisioning of armed forces and moving spending to where it could be most impactful, which slashed over $10 billion in military spending. As the Vietnam war intensified, statistically the number of troops killed by Americans versus American casualties made it look, computationally, like the war was being won. In hindsight we know it was not. Under McNamara, ARPA hired Simulmatics to study the situation on the ground. They would merge computers, information warfare, psychological warfare, and social sciences. 
The Vietnamese they interviewed didn't always tell them the truth - after all, maybe the interviewers were CIA agents. Many of the studies lacked true scholars, as the war was unpopular back home. The people who collected data weren't always skilled at the job. They spoke primarily with those they could reach without getting shot at as much along the way. In general, the algorithms might have worked or might not have worked - but they had bad data. Yet Simulmatics sent McNamara reports that the operations were going well. Many in the military would remember this as real capabilities at cyber warfare and information warfare were developed in the following decades. Back home, Simulmatics also became increasingly tied up in things Kennedy might arguably have fought against. There were riots and civil rights protests, and Simulmatics took contracts to simulate racial riots. Some felt they could riot or go die in the jungles of Vietnam. The era of predictive policing had begun, as the hope of the early 1960s turned into the apathy of the late 1960s. Martin Luther King Jr. spoke out against riot prediction, yet Simulmatics pushed on. Whether their insights were effective in many of these situations - just like in Vietnam - was dubious. They helped usher in the era of surveillance capitalism, in a way. But the arrival of computers in ad agencies meant that if they hadn't, someone else would have. People didn't take kindly to being poked, prodded, and analyzed intellectually. Automation took jobs, which Kennedy had addressed in rhetoric if not in action. The war was deeply unpopular as American soldiers came home from a far-off land in caskets. The link between Simulmatics and academia was known. Students protested against them and claimed they were war criminals. The psychological warfare abroad, being on the wrong side of history at home with the race riots, and the disintegrating military-industrial-university complex didn't help. There were technical issues. 
The technology had changed, moving away from languages like FORTRAN. Further, the number of data points required, and how they were processed, demanded what we now call "Big Data" and "machine learning." Those technologies showed promise early, but more mathematics needed to be developed to fully weaponize the surveillance of everything. More code and libraries needed to be developed to crunch the large amounts of statistics. More work needed to be done to get better data and process it. The computerization of the social sciences was just beginning, and while people like Pool predicted the societal impacts we could expect, people at ARPA doubted the results, and the company they created could not be saved as all these factors converged to put it into bankruptcy in 1970. Their ideas and research lived on. Pool and others published some of their findings. Books opened minds to the good and bad of what technology could do. The Southern politicians, or Dixiecrats, fell apart. Nixon embraced a new brand of conservatism after he lost the race to be Governor of California to Pat Brown in 1962. There were charges of voter fraud from the 1960 election. The Mansfield Amendment restricted military funding of basic research in 1969 and went into effect in 1970. Ike had warned of the growing links between universities and the creators of weapons of war - exactly what Simulmatics signified - and the amendment helped pull back funding for such exploits. As Lepore points out in her book, mid-century liberalism was dead. Nixon tapped into the silent majority who countered the counterculture of the 1960s. Crime rose and the conservatives became the party of law and order. He opened up relations with China, spun down the Vietnam war, negotiated with the Soviet leader Brezhnev to warm relations, and rolled back Johnson's attempts at what had been called the Great Society to get inflation back in check. Under him, the incarceration rate in the United States exploded. 
His presidency ended with Watergate, and under Ford, Carter, Reagan, and Bush, the personal computer became prolific and the internet, once an ARPA project, began to take shape. They all used computers to find and weigh issues, thaw the Cold War, and build a new digitally driven world order. The Clinton years saw an acceleration of the Internet, and by the early 2000s companies like PayPal were on the rise. One of their founders was Peter Thiel. Thiel founded Palantir in 2003, then invested in companies like Facebook with his PayPal money. Palantir received backing from In-Q-Tel, which promises "World-class, cutting-edge technologies for National Security." In-Q-Tel was founded in 1999, as the global technological evolution began to explode. While the governments of the world had helped build the internet, it wasn't long before they realized it gave an asymmetrical advantage to newcomers. The more widely available the internet, the more far-reaching attacks could go and the more subversive economic warfare could be. Governmental agencies like the United States Central Intelligence Agency (CIA) needed more data and the long-promised artificial intelligence technologies to comb through that data. Agencies got together and launched their own venture capital fund, similar to those in the private sector - one called In-Q-Tel. Palantir has worked to develop software for US Immigration and Customs Enforcement, or ICE, to investigate criminal activities, and allegedly used data obtained from Cambridge Analytica along with Facebook data. The initial aim of the company was to take technology developed for PayPal's fraud detection and apply it to other areas like terrorism, with help from intelligence agencies. They help fight fraud for nations and have worked with the CIA, NSA, FBI, CDC, and various branches of the United States military on various software projects. Their Gotham project is the culmination of decades of predictive policing work. 
There are dozens of other companies like Palantir. Just as with Pool's work on the Six Degrees of Separation, social networks made the amount of data that could be harvested all the greater. Companies use that data to sell products. Nations use that data for propaganda. Those who get elected to run nations use that data to find out what they need to say to be allowed to do so. The data is more accurate with every passing year. Few of the ideas are all that new, just better executed. The original sin mostly forgotten, we still have to struggle with the impact and ethical ramifications. Politics has always involved a bit of a ruse in the rise to power. Now it's less about personal observation and more about the observations and analyses that can be gleaned from large troves of data. The issues brought up in books like The 480 are as poignant today as they were in the 1960s.
10/14/2022 • 27 minutes, 43 seconds
Taiwan, TSMC, NVIDIA, and Foundries
Taiwan is a country about half the size of Maine with about 17 times the population of that state. Taiwan sits just over a hundred miles off the coast of mainland China. It's home to some 23 and a half million humans - population-wise, roughly halfway between Texas and Florida, or a few more than live in Romania, for the Europeans. Taiwan was connected to mainland China by a land bridge in the Late Pleistocene, and human remains have been found there dating back 20,000 to 30,000 years. About half a million people on the island nation are aboriginal, or their ancestors are from there. But the population became more and more Chinese in recent centuries. Taiwan had not been part of China during the earlier dynastic ages but had been used by dynasties in exile to attack one another, and so became a part of the Chinese empire in the 1600s. Taiwan was won by Japan in the late 1800s and held by the Japanese until World War II. During that time, a civil war had raged on the mainland of China, with the Republic of China eventually formed as the replacement government for the Qing dynasty following a bloody period of turf battles by warlords and then civil war. Taiwan was under martial law from the time the pre-communist government of China retreated there, during the exit of the Nationalists from mainland China in the 1940s, to the late 1980s. During that time, just like the exiled Han dynasty, they orchestrated war from afar. They stopped fighting, much like the Koreans, but have still never signed a peace treaty. And so large parts of the world remained in stalemate. As the years became decades, Taiwan, or the Republic of China as they still call themselves, has always had an unsteady relationship with the People's Republic of China, or China as most in the US call it. The Western world recognized the Republic of China, and the Soviet-aligned countries recognized the mainland government. 
US President Richard Nixon visited mainland China in 1972 to re-open relations with the communist government there, and relations slowly improved. The early 1970s was a time when much of the world still recognized the ruling government of Taiwan as the official Chinese government, and there were proxy wars the two continued to fight. The Taiwanese and Chinese still aren't besties. There are deep scars and propaganda that keep relations from being repaired. Before World War II, the Japanese also invaded Hong Kong. During the occupation there, Morris Chang's family became displaced and moved between a few cities during his teens, before he moved to Boston to go to Harvard and then MIT, where he did everything to get his PhD except defend his thesis. He then went to work for Sylvania Semiconductor and then Texas Instruments, finally getting his PhD from Stanford in 1964. He became a Vice President at TI and helped build an early semiconductor designer-and-foundry relationship when TI designed a chip and IBM manufactured it. The Premier of Taiwan at the time, Sun Yun-suan, played a central role in Taiwan's transformation from an agrarian economy to a large exporter. His biggest win was recruiting Chang to move to Taiwan and found TSMC, or the Taiwan Semiconductor Manufacturing Company. Some of this might sound familiar, as it mirrors stories from companies like Samsung in South Korea. In short: Japanese imperialism, democracies versus communists, then rapid economic development as a massive manufacturing powerhouse - in large part because semiconductor designers were split from semiconductor foundries, where chips are actually created. In this case, a former Chinese national was recruited to return as founder, and he led TSMC for 31 years before he retired in 2018. Chang could see from his time with TI that more and more companies would design chips for their needs and outsource manufacturing. 
They worked with Texas Instruments, Intel, AMD, NXP, Marvell, MediaTek, and ARM, and then came the big success when they started to make the Apple chips. The company started down that path in 2011 with trial runs of the A5 and A6 SoCs for iPhone and iPad, but picked up steam with the A8 and A9 through the A14 and the Intel replacement for the Mac, the M1. They now sit on a half-trillion-US-dollar market cap and are the largest company in Taiwan. For perspective, their market cap only trails the GDP of the whole country by a few billion dollars. TSMC is also a foundry Nvidia uses. As of the time of this writing, Nvidia is the 8th largest semiconductor company in the world. We've already covered Broadcom, Qualcomm, Micron, Samsung, and Intel. Nvidia is a fabless semiconductor company and so designs chips that vendors like TSMC manufacture. Nvidia was founded by Jensen Huang, Chris Malachowsky, and Curtis Priem in 1993 in Santa Clara, California (although it is now incorporated in Delaware). Not all who leave the country they were born in due to war, or during times of war, return. Huang was born in Taiwan, and his family moved to the US right around the time Nixon re-established relations with mainland China. Huang went to grad school at Stanford before he became a CPU designer at AMD and a director at LSI Logic, so he had experience as a do-er, a manager, and a manager's manager. He was joined by Malachowsky and Priem, who had designed the IBM Professional Graphics Adapter and then the GX graphics chip at Sun. They saw the graphical interfaces of the Mac, Windows, and Amiga OS, they saw the games one could play on those machines, and they thought graphics cards would be the next wave of computing. And so for a long time, Nvidia managed to avoid competition with other chip makers by focusing on graphics. That initially meant gaming and higher-end video production, but has since expanded into much more, like parallel programming and even cryptocurrency mining. 
They were more concerned about the next version of the idea or chip or company, and used NV in the naming convention for their files. When it came time to name the company, they looked up words that started with those letters - which of course don't exist - so instead chose Invidia, or Nvidia for short, as it's Latin for envy: what everyone who saw those sweet graphics the cards rendered would feel. They raised $20 million in funding and got to work. First with SGS-Thomson Microelectronics in 1994, to manufacture what they were calling a graphical-user-interface accelerator packaged on a single chip. They worked with Diamond Multimedia Systems to install the chips onto the boards. In 1995 they released the NV1. The PCI card was sold as the Diamond Edge 3D and came with a 2D/3D graphics core with quadratic texture mapping. Screaming fast, and Virtua Fighter from Sega was ported to the platform. DirectX had come in 1995, so Nvidia released DirectX drivers that supported Direct3D, the API Microsoft developed to render 3D graphics. This was a time when 3D was on the rise for consoles and desktops. Nvidia timed it perfectly and reaped the rewards when they hit a million sold in the first four months of the RIVA 128, a 128-bit 3D processor that got used by OEMs in 1997. Then came the 1998 RIVA ZX and the RIVA TNT for multi-texture 3D processing. They also needed more manufacturing support at this point and entered into a strategic partnership with TSMC to manufacture their boards. A lot of vendors had a good amount of success in their niches. By the late 1990s there were companies who made memory - the survivors of the DRAM industry after ongoing price-dumping issues. There were companies that made central processors, like Intel. Nvidia led the charge for a new type of chip, the GPU. They invented the GPU in 1999 when they released the GeForce 256, the first single-chip GPU. 
This meant integrated lighting, triangle setup, and rendering - like the old math coprocessor, but for video. Millions of polygons could be drawn on screens every second. They also released the Quadro Pro GPU for professional graphics and went public in 1999 at an IPO price of $12 per share. Nvidia used some of the funds from the IPO to scale operations, organically and inorganically. In 2000 they released the GeForce2 Go for laptops and acquired 3dfx, closing deals to get their 3D chips into devices from OEMs who made PCs and into the new Microsoft Xbox. By 2001 they hit $1 billion in revenue and released the GeForce 3 with a programmable GPU, using APIs to make their GPU a platform. They also released the nForce integrated graphics, and so by 2002 hit 100 million processors out on the market. They acquired MediaQ in 2003 and partnered with game developer Blizzard on Warcraft. They continued their success in the console market when the GeForce platform was used in the PS3 in 2005, and by 2006 they had sold half a billion processors. They also added the CUDA architecture that year to put a general-purpose GPU on the market, and acquired Hybrid Graphics, which developed 2D and 3D embedded software for mobile devices. In 2008 they went beyond consoles and PCs when Tesla used their GPUs in cars. They also acquired PortalPlayer, which supplied semiconductors and software for personal media players, and launched the Tegra mobile processor to get into the exploding mobile market. There were more acquisitions in 2008, but a huge win came when the GeForce 9400M was put into Apple MacBooks. Then came more smaller chips in 2009, when Tegra processors were used in Android devices. They also continued to expand how GPUs were used. They showed up in ultrasound machines and, in 2010, in Audis. By then they had the Tianhe-1A ready to go, which showed up in supercomputers, and the Optimus. 
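What "hardware transform and lighting" meant can be shown with a toy sketch. This is not Nvidia's pipeline - just a minimal illustration in plain code of the two stages the GeForce 256 pulled off the CPU: transforming vertices (here, a simple rotation, with invented helper names) and computing diffuse (Lambertian) lighting from a surface normal and a light direction. The GPU's trick was doing this for millions of vertices per second in silicon instead of one at a time in software.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def transform(vertex, angle):
    """'Transform' stage: rotate a vertex around the Y axis by angle radians."""
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def lambert_lighting(normal, light_dir):
    """'Lighting' stage: diffuse brightness = max(0, N dot L)."""
    n = normalize(normal)
    l = normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# Push one triangle's vertices through the T&L stages.
triangle = [(0.0, 1.0, 0.0), (-1.0, -1.0, 0.0), (1.0, -1.0, 0.0)]
rotated = [transform(v, math.pi / 4) for v in triangle]
# A surface facing straight at the light gets full brightness (1.0).
brightness = lambert_lighting((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

Per-vertex math like this is embarrassingly parallel, which is why it moved into dedicated hardware first and why, years later, the same chips turned out to be good at general-purpose number crunching.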
All these types of devices that could use a GPU meant they hit a billion processors sold in 2011, which is when they went dual-core with the Tegra 2 mobile processor and entered into cross-licensing deals with Intel. At this point TSMC was able to pack more and more transistors into smaller and smaller spaces. This was a big year for larger jobs on the platform. Nvidia got Kepler-based GPUs out by 2012, and their chips were used in the Titan supercomputer. They also released GRID, a virtualized GPU for cloud processing. It wasn’t all about large-scale computing efforts. The Tegra 3 and GTX 600 came out in 2012 as well. Then in 2013 came the Tegra 4, a quad-core mobile processor; a 4G LTE mobile processor; Nvidia Shield for portable gaming; the GTX Titan; and a GRID appliance. In 2014 came the 192-core Tegra K1, the Shield tablet, and Maxwell. In 2015 came the Tegra X1, with 256 cores and deep learning support, the Titan X and Jetson TX1 for smart machines, and Nvidia Drive for autonomous vehicles. They continued that deep learning work with an appliance in 2016, the DGX-1. Drive got an update in the form of the PX 2 for in-vehicle AI. By then, they were a 20-year-old company working on the 11th generation of the GPU, and most CPU architectures had dedicated cores for machine learning of various types. 2017 brought Volta, the Jetson TX2, and SHIELD support for the Google Assistant. 2018 brought the Turing GPU architecture, the DGX-2, AGX Xavier, and Clara. 2019 brought AGX Orin for robots and autonomous or semi-autonomous piloting of various types of vehicles. They also made the Jetson Nano and Xavier, and EGX for edge computing. At this point there were plenty of people who used GPUs to mine hashes for various blockchains, like with cryptocurrencies, and ARM had finally given Intel a run for their money, with designs from ARM licensees showing up in everything but Windows devices (so Apple and Android). So Nvidia tried to buy ARM from SoftBank in 2020. 
That deal eventually fell through, but it would have been an $8 billion windfall for SoftBank, since they paid $32 billion for ARM in 2016. We probably don’t need more consolidation in the CPU sector. Standardization, yes. Some of Nvidia’s top competitors include Samsung, AMD, Intel, and Qualcomm - and even companies like Apple, who make their own CPUs (but not their own GPUs as of the time of this writing). In their niche they can still make well over $15 billion a year. The invention of the MOSFET came from immigrants Mohamed Atalla, originally from Egypt, and Dawon Kahng, originally from Seoul, South Korea. Kahng was born in Korea in 1931 but immigrated to the US in 1955 to get his PhD at THE Ohio State University and then went to work for Bell Labs, where he and Atalla invented the MOSFET, and where Kahng retired. The MOSFET was an important step on the way to the microchip. That microchip market, with companies like Fairchild Semiconductor, Intel, IBM, Control Data, and Digital Equipment, saw a lot of chip designers get their chips knocked off, either legally in a clean room or illegally outside of one. Some of those cases ended in legal action; some didn’t. But the fact that factories overseas could reproduce chips was a huge part of the movement that came next, which was that companies started to think about whether they could just design chips and let someone else make them. That was in an era of increasing labor outsourcing, when factories could build cars offshore, and so the foundry movement was born - companies that just make chips for those who design them. As we have covered in this section and many others, many of the people who worked on these kinds of projects moved to the United States from foreign lands in search of a better life. That might have been to flee Europe or the Asian theaters of Cold War jackassery, or it might have been a civil war, like in Korea or Taiwan. 
They had contacts and were able to work with places to outsource to, and all of this happened at the same time that Hong Kong, Singapore, South Korea, and Taiwan became safe and stable. And so the Four Asian Tigers’ economies exploded, fueled by exports and a rapid period of industrialization that began in the 1960s and continues through to today with companies like TSMC, a pure-play foundry, or Samsung, a mixed foundry - aided by companies like Nvidia, who continue to effectively outsource their manufacturing operations to companies in those areas. At least, while it’s safe to do so. We certainly hope the entire world becomes safe. But it currently is not. There are nearly a million Rohingya refugees who have fled war in Myanmar. Over 3.5 million have fled the violence in Ukraine. 6.7 million have fled Syria. 2.7 million have left Afghanistan. Over 3 million are displaced between Sudan and South Sudan. Over 900,000 have fled Somalia. Before Ukrainian refugees fled, mostly to Eastern European countries, refugees had mainly settled in Turkey, Jordan, Lebanon, Pakistan, Uganda, Germany, Iran, and Ethiopia. Comparatively few settled in the largest countries in the world: China, India, or the United States. It took decades for the children of those who moved, or who sent their children abroad for a better life, to find that better life. But we hope that history teaches us to get there faster, for the benefit of all.
9/30/2022 • 31 minutes, 3 seconds
The History of Zynga and founder Mark Pincus
Mark Pincus was at the forefront of mobile technology when it was just being born. He is a recovering venture capitalist who co-founded his first company with Sunil Paul in 1995. FreeLoader was at the forefront of giving people the news through push technology, just as the IETF was in the process of standardizing HTTP. He sold that for $38 million, only to watch it get destroyed. But he did invest in a startup that one of the interns founded, when he gave Sean Parker $100,000 to help found Napster. Pincus then started Support.com, which went public in 2000. Then Tribe.net, which Cisco acquired. As a former user, it was fun while it lasted. Along the way, Pincus teamed up with Reid Hoffman, former PayPal executive and founder of LinkedIn, and bought the Six Degrees patent that basically covered all social networking. He also invested in Friendster, Buddy Media, Brightmail, JD.com, Facebook, Snapchat, and Twitter. Investing in all those social media properties gave him pretty good insight into what trends were on the way. Web 2.0 was on the rise and social networks were spreading fast. As they spread, each attempted to become a platform by opening APIs to third-party developers. This created an opening for a new company that could build software that sat on top of these social media companies. Meanwhile, the gaming industry was in a transition from desktop and console games to hyper-casual games played on mobile devices. So Pincus recruited conspirators to start yet another company, and with Michael Luxton, Andrew Trader, Eric Schiermeyer, Steve Schoettler, and Justin Waldron, Zynga was born in 2007. Actually, Zinga was the dog; the company Zynga was born in 2007. Facebook was only three years old at the time but was already at 14 million users to start 2007. That’s when they opened up APIs for integration with third-party products through FBML, or Facebook Markup Language. They would have 100 million users within a year. 
Given his track record selling companies and picking winners, Zynga easily raised $29 million to start what amounts to a social game studio: they make games that people access through social networks. Luxton, Schiermeyer, and Waldron created the first game, Zynga Poker, in 2007. It was a simple enough Texas hold ’em poker game, but it rose to include tens of millions of players at its height, raking in millions in revenue. They’d proven the thesis. Social networks, especially Facebook, were growing. The iPhone came out in 2007. That only hardened their resolve. They sold poker chips in 2008. Then came FarmVille. FarmVille was launched in 2009 and was an instant hit. The game went viral and had a million daily users within a week. It was originally written in Flash and later ported to iPhones and other mobile platforms. It’s now been installed over 700 million times and ran until 2020, when Flash support was dropped by Facebook. FarmVille was free-to-play and simple. It had elements of a 4X game like Civilization, but was co-op, meaning players didn’t exterminate one another but instead earned points and thus rankings. In fact, players could help speed up tasks for one another. Players began with a farm - an empty plot of land. They earned experience points by doing routine tasks: growing crops, upgrading items, plowing more and more land. Players took their crops to the market and sold them for coins. Coins could also be bought. If a player didn’t harvest their crops when they were mature, the crops would die. Thus, they had players coming back again and again. Push notifications helped remind people about the state of their farm. Or the news, in FreeLoader-speak. Some players became what we called dolphins, or players that spent about what they would on a usual game - maybe $10 to $30. Others spent thousands, which we referred to as whales. FarmVille became the top game on Facebook and the top earner. They launched sequels as well, with FarmVille 2 and FarmVille 3. 
They bought Challenge Games in 2010, which was founded by Andrew Busey to develop casual games as well. They bought 14 more companies. They grew to 750 employees. They opened offices in Bangalore, India, and in Ireland. They experimented with other platforms, like Microsoft’s MSN gaming environment and Google TV. They released CastleVille. And they went public towards the end of 2011. It was a whirlwind ride, and just really getting started. They released cute FarmVille toys. They also released Project Z, Mafia Wars, Hanging with Friends, Adventure World, and Hidden Chronicles. And along the way they became a considerable advertising customer for Facebook, with ads showing up for Mafia Wars and Project Z constantly. Not only that, but their ads flooded other mobile ad networks as The Sims Social and other games caught on and stole eyeballs. And players were rewarded for spamming the walls of other players, which helped increase the viral nature of the early Facebook games. Pincus and the team built a successful, vibrant company. They brought in Jeff Karp and launched Pioneer Trail. Then another smash hit: Words with Friends. They bought Newtoy for $53.3 million to get it, going after Paul and David Bettner, who had written a game called Chess with Friends a few years earlier. But revenues dropped as the Facebook ride they’d been on began to transition from people gaming in a web browser to mobile devices. All this growth, and the company was ready for the next phase. In 2013, Zynga hired Donald Mattrick to be the CEO, and Pincus moved to the role of Chief Product Officer. They brought in Alex Garden, the General Manager for Xbox Music, Video, and Reading, who had founded Homeworld creator Relic Entertainment back in the 1990s. The new management didn’t fix the decline. The old games continued to lose market share, and Pincus came back to run the company as CEO and cut the staff by 18 percent. 
In 2015 they brought in Frank Gibeau to the board, and by 2016 moved him to CEO of the company. One challenge with the move to mobile was who got to process the payments. Microtransactions had gone through Facebook for years; they moved to Stripe in 2020. They acquired Gram Games to get Merge Dragons! They bought Small Giant Games to get Empires & Puzzles. They bought Peak Games to get Toon Blast and Toy Blast. They picked up Rollic to get a boatload of action and puzzle games. They got Golf Rival by acquiring StarLark. And as of the time of this writing they have nearly 200 million players actively logging into their games. There are a few things to take from the story of Zynga. One is that a free game alone doesn’t put $2.8 billion in revenues on the board, which is what they made in 2021. Advertising accounts for just north of a half billion, but the rest comes from in-app purchases. The next is that the transition from owner-operators is hard. Pincus and the founding team had a great vision. They executed and were rewarded by taking the company to a gangbusters IPO. The market changed, and it took a couple of pivots to get there. That led to a couple of management shakeups and a transition to more of a portfolio mindset with the fleet of games they own. Another lesson is that larger development organizations don’t necessarily get more done. That’s why Zynga has had to acquire companies to get hits since around the time they bought Words with Friends. Finally, when a company goes public, the team gets distracted. Not only is going through an IPO expensive, and the ensuing financial reporting requirements a hassle to deal with, but it’s distracting. Employees look at stock prices during the day. Higher-ranking employees have to hire a team of accountants to shuffle their money around in order to take advantage of tax loopholes. Growth leads to political infighting and power grabbing. 
There are also regulatory requirements for how code and technology are managed that slow down innovation, even if they eventually make a company better run and a safer partner. All companies go through this. Those who navigate towards a steady state fastest have the best chance of surviving. One more lesson: when the first movers prove a monetization thesis, the ocean gets red fast. Zynga became the top mobile development company again after weathering the storm and making a few solid acquisitions. But as Bill Gates pointed out in the 1980s, gaming is a fickle business. So Zynga agreed to be acquired for $12.7 billion in 2022 by Take-Two Interactive, who owns Civilization, Grand Theft Auto, Borderlands, WWE, Red Dead, Max Payne, NBA 2K, PGA 2K, BioShock, Duke Nukem, Rainbow Six: Rogue Spear, Battleship, Centipede - and the list goes on and on. They’ve been running a portfolio for a long time. Pincus took away nearly $200 million in the deal and about $350 million in Take-Two equity. Ads and loot boxes can be big business. Meanwhile, Pincus and Hoffman from LinkedIn work well together, apparently. They built Reinvent Capital, an investment firm that shows that venture capital has quite a high recidivism rate. They had a number of successful investments and SPACs. Zynga was much more. They exploited Facebook to shoot up to hundreds of millions in revenue. That was revenue Facebook then decided they should have a piece of in 2011, which cut Zynga’s revenues in half over time. This is an important lesson any time a huge percentage of revenue is dependent on another party who can change the game (no pun intended) at any time. Diversify.
8/19/2022 • 16 minutes, 24 seconds
The Evolution Of Unix, Mac, and Chrome OS Shells
In the beginning was the command line. Actually, before that were punch cards and paper tape. But as Multics and RSTS and DTSS came out, programmers and users needed a way to interface with the computer through the teletypes and other terminals that appeared in the early age of interactive computing. Those interfaces were often just a program that sat on a filesystem, eventually as a daemon, listening for input on keyboards. This was one of the first things the team that built Unix needed once they had a kernel that could compile. And from the very beginning it was independent of the operating system. Due to the shell’s independence from the underlying operating system, numerous shells have been developed during Unix’s history, although only a few have attained widespread use. A shell, also referred to as a command-line interpreter (or CLI), processes commands a user sends from a teletype, and later a terminal. This provided a simpler interface for common tasks than programming against the underlying C interfaces. Over the years, a number of shells have come and gone. Some of the most basic and original commands came from Multics, but the shell as we know it today was introduced in the first versions of Unix: Ken Thompson introduced the Thompson shell in 1971, the ancestor of the shell we still find in /bin/sh. The shell ran in the background and allowed for a concise syntax for redirecting the input and output of commands. For example, pass the output of a command to a file with > or read input from a file with <. Others built tools for Unix as well. Bill Joy wrote a different text editor when Berkeley had Thompson out to install Unix on their PDP. And 1977 saw the earliest forms of what we would later call the Bourne shell, written by Steve Bourne. The Bourne shell was designed with two key aims: to act as a command interpreter for interactively executing operating system commands, and to facilitate scripting. 
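That redirection syntax survives unchanged in every Bourne-compatible shell today. A minimal sketch (the file name fruit.txt is just an illustration):

```shell
# Redirect a command's output into a file with >
printf 'cherry\napple\nbanana\n' > fruit.txt

# Read a command's input from a file with <
sort < fruit.txt

# Pipe the output of one command into another with |
sort < fruit.txt | head -n 1   # prints "apple"
```

The same three operators - >, <, and | - are the building blocks every later shell in this story inherits.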
One of the more important aspects of going beyond piping output into other commands, and into a more advanced scripting language, is the ability to use conditional if/then statements, loops, and variables. And thus, rather than learn C to write simple programs, generations of engineers and end users could now use basic functional programming at a Bourne shell. Bill Joy created the C shell in 1978 while a graduate student at the University of California, Berkeley. It was designed for Berkeley Software Distribution (BSD) Unix machines. One of the main design goals of the C shell was to build a scripting language that looked like C. Joy added one of my favorite features of every shell made after that one: command history. I’ve written many shell scripts by just cut-copy-pasting a few commands from my bash history and piping the output or assigning it to variables. Add to that the ability to use the up or down arrow to re-run previous commands, and we got a huge productivity gain for people who did the same tasks over and over, like editing a file: simply scroll up through previous commands to run the same vi command again. That vi editor also shipped first with BSD. There was another huge time saver out there in another operating system. An operating system called Tenex had name and command completion. The Tenex OS first shipped out of BBN, or Bolt, Beranek and Newman, for PDPs. Unix ran on PDPs as well, so a number of early users had experience with both. Tenex had command completion: just hit the tab key, and the command being typed would automatically complete if the text typed so far matched the start of a command in the path. Ken Greer started that project at Carnegie Mellon University in 1975, and it got merged into the C shell in 1981, adding the t for Tenex to the C for C shell and giving us tcsh. Thus tcsh had backwards compatibility with csh. David Korn at Bell Labs added the Korn shell, or ksh, in the early 1980s. 
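The conditionals, loops, and variables described above can be sketched in a few lines of POSIX sh, the direct descendant of the Bourne shell (the word list and count here are purely illustrative, and the $(( )) arithmetic syntax arrived a bit later than Bourne's original shell, though the structure is the same):

```shell
#!/bin/sh
# A variable assignment - no spaces around the =
count=0

# A loop over a list of words
for word in alpha beta gamma; do
    count=$((count + 1))
done

# An if/then conditional using the test ([) command
if [ "$count" -eq 3 ]; then
    echo "counted $count words"
fi
```

Run with sh, this prints "counted 3 words" - and the same script runs unchanged in ksh, bash, and zsh, which is exactly the compatibility story the rest of this episode traces.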
He added the idea that the command line could provide a choice of editors - for example, use emacs or vi keybindings to edit the command line. He borrowed ideas from the C shell and made minor tweaks that provided outsized impacts to productivity. Even Microsoft added a Korn shell option into Windows NT, as though Dave Cutler were paying homage to another great programmer. Brian Fox then built on the Bourne shell with bash. He was working with the Free Software Foundation with Richard Stallman, and they wanted a shell that could do more advanced scripting but whose source code was open source. They started the project in 1988 and shipped bash in 1989. Bash then went on to become the most widely used and distributed shell in the arsenal of the Unix programmer. Bash stands for Bourne Again Shell, and so it was backwards compatible with the Bourne shell but also added features from tcsh, korn, and the C shell, staying mostly backwards compatible with other shells. Due to the licensing, bash became the de facto standard (and often default) shell for GNU/Linux distributions and serves as the standard interactive shell for users, located at /bin/bash. Now we had command history, tabbed auto-completion, command-line editing, multiple paths, multiple options for interpreters, a directory stack, full environment variables, and the modern command-line environment. Paul Falstad created the initial version of zsh, or the Z shell, in 1990. The Z shell can be used interactively as a login shell or as a more sophisticated command interpreter for shell scripting. As with previous shells, it is an optimized Bourne shell that incorporates several features from bash and tcsh and is mostly backwards compatible. Zsh comes with tabbed auto-completion, regex integration (in addition to the standard globbing options available since the 1970s), additional shorthand for command scoping, and a number of security features. 
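Of the features listed above, environment variables are worth a quick sketch: a shell variable stays local to the current shell until it is exported, at which point child processes inherit it (GREETING is just a hypothetical variable name for illustration):

```shell
# A plain shell variable - visible only to the current shell
GREETING="hello"

# Export it to make it an environment variable,
# inherited by any child process the shell launches
export GREETING

# A child shell can now read it
sh -c 'echo "$GREETING"'   # prints "hello"
```

Without the export line, the child sh would print an empty string - the distinction between shell variables and the environment is one of those Bourne-era details every later shell kept.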
The ability to limit memory and privilege escalation became critical in order to avoid some of the same issues we’ve seen for decades with Windows and other operating systems as they evolved to match Unix scripting - borrowing many a feature for PowerShell from cousins in the Unix and Linux worlds. These are just the big ones. Sometimes it feels like every developer with a decent grasp of C and a workflow divergent from the norm (which is most developers) has taken a stab at developing their own shell. This is one of the great parts of having access to source code: the options are endless. At this point, we just take these productivity gains for granted. But it took decades of innovative approaches as Unix, and then Linux, and now macOS and Android reached out to the rest of the world to change how we work.
7/15/2022 • 12 minutes, 43 seconds
St Jude, Felsenstein, and Community Memory
Lee Felsenstein went to the University of California, Berkeley in the 1960s. He worked at the tape manufacturer Ampex, where Oracle was later born, before going back to Berkeley to finish his degree. He was one of the original members of the Homebrew Computer Club, and as with so many inspired by the Altair and its S-100 bus, he designed the Sol-20 in 1976 - arguably the first microcomputer that came with a built-in keyboard and could be hooked up to a television. The Apple II was introduced the following year. Adam Osborne was another of the Homebrew Computer Club regulars, who wrote An Introduction to Microcomputers and sold his publishing company to McGraw-Hill in 1979. Flush with cash, he enlisted Felsenstein to help create another computer, which became the Osborne 1 - the first commercial portable computer, although given that it weighed almost 25 pounds, it’s more appropriate to call it a luggable. Before Felsenstein built computers, though, he worked with a few others on a community computing project they called Community Memory. Judith Milhon was an activist in the 1960s Civil Rights movement who helped organize marches and rallies and went to jail for civil disobedience. She moved to Ohio, where she met Efrem Lipkin, and as with many in what we might now think of as the counterculture, they moved to San Francisco in 1968. St. Jude, as she came to be called, had learned to program in 1967 and ended up at the Berkeley Computer Company after the work on the Berkeley timesharing projects was commercialized. There, she met Pam Hardt of Project One. Project One was a technological community built around an alternative high school founded by Ralph Scott. They brought together a number of non-profits to train people in various skills, and as one might expect in the San Francisco area counterculture, they had a mix of artists, craftspeople, filmmakers, and people with deep roots in technology. So much so that it became a bit of a technological commune. 
They had a warehouse and did day care, engineering, film processing, and documentaries, and many participated in anti-Vietnam War protests. They had all this space, and Hardt called around to find a computer. She got an SDS-940 mainframe donated by TransAmerica in 1971. Xerox had gotten out of the computing business, and TransAmerica’s needs were better suited to other computers at the time. They had this idea to create a bulletin board system for the community, and created a project at Project One they called Resource One. Plenty of people thought computers were evil at the time, given their rapid advancements during the Cold War era, and yet many also thought there was incredible promise to democratize everything. Peter Deutsch then donated time and an operating system he’d written a few years before. Hardt then published a request for help in the People’s Computer Company newsletter and got a lot of people who just made their own things - perhaps an early precursor to microservices, with various people tinkering with data and programs. They were able to do so because of the people who could turn that SDS into a timesharing system. St. Jude’s partner Lipkin took on the software part of the project. Chris Macie wrote a program that digitized information on social services offered in the area, which was maintained by Mary Janowitz, Sherry Reson, and Mya Shone. That database was eventually taken over by the United Way until the 1990s. Felsenstein helped with the hardware. They used teletype terminals and then connected a video terminal and keyboard built into a wooden cabinet so real humans could access the system. The project then evolved into what was referred to as Community Memory, which became the first public computerized bulletin board system, established in 1973 in Berkeley, California. The first Community Memory terminal was located at Leopold’s Records in Berkeley. 
This was the first opportunity for people who were not studying a scientific subject to use computers. The terminal let them extend the timesharing system into the community, and it became a free online community-based resource used to share knowledge, organize, and grow. It became very popular, but the founders faced hurdles replicating the equipment and languages being used and were unable to expand the project. The initial stage of Community Memory, from 1973 to 1975, was an experiment to see how people would react to using computers to share information. Operating from 1973 to 1992, it went from minicomputers to microcomputers as those became more prevalent. Before Resource One and Community Memory, computers weren’t really used for people. They were used for business, scientific research, and military purposes. After Community Memory, Felsenstein and others in the area and around the world helped make computers personal. Community Memory was one aspect of that process, but there were others that unfolded in the UK, France, Germany, and even the Soviet Union - although those were typically impacted by embargoes and a lack of the central government’s buy-in for computing in general. After the initial work was done, many of the core instigators went in their own directions. For example, Felsenstein went on to create the Sol and pursue his other projects in personal computing. Many had families or moved out of the area after the Vietnam War ended in 1975. The economy still wasn’t great, but technical skills made them more employable. Some of the developers and a new era of contributors regrouped and created a new non-profit in 1977. They started from scratch and developed their own software, database, and communication packages. The terminal was very noisy, so they encased it in a cardboard box with a transparent plastic top so they could see what was being printed out. This program ran from 1984 to 1989. 
After more research, a new terminal was released in 1989 in Berkeley. By then the system had evolved into a pre-web social network. The modified keyboard had brief instructions mounted on it, showing the steps to send a message, how to attach keywords to messages, and how to search those keywords to find messages from others. Ultimately, the design went through three generations, ending in a network of text-based browsers running on basic IBM PCs accessing a Unix server. It was never connected to the Internet, and it closed in 1992. By then, it was large, underpowered, and uneconomical to run in an era when servers and graphical interfaces were available. A booming economy also, ironically, meant a shortage of funding. The job market exploded for programmers in the decade that led up to the dot-com bubble, and with inconsistent marketing and outreach, Community Memory shut down in 1992. Many of the people involved with Resource One and Community Memory went on to have careers in computing. St. Jude helped found the cypherpunks and created Mondo 2000, a magazine dedicated to that space where computers meet culture. She also worked with Efrem Lipkin on CoDesign, and he was a CTO for many of the dot-coms in the late 1990s. Chris Neustrup became a programmer for Agilent. The whole operation had been funded by various grants and donations, and while there haven’t been any studies on the economic impact, given how hard it is to attribute inspiration rather than direct influence, the payoff was nonetheless considerable.
6/25/2022 • 11 minutes, 38 seconds
Research In Motion and the Blackberry
Lars Magnus Ericsson was working for a Swedish government agency that made telegraph equipment when he started a little telegraph repair shop in 1876. That was the same year the telephone was invented. After fixing other people’s telegraphs and then telephones, he started a company making his own telephone equipment, and by the 1890s he was shipping gear to the UK. As the roaring ’20s came, the company sold stock to buy other companies and expanded quickly. Early mobile devices used radios to connect mobile phones to wired phone networks, and following projects like ALOHAnet in the 1970s, they expanded to digitize communications, allowing for sending early forms of text messages - the way people might have sent those telegraphs when old Lars was still alive and kicking. At the time, the Swedish state-owned Televerket Radio was dabbling in this space and partnered with Ericsson to deliver first those messages and then, as email became a thing, email, to people wirelessly, using the 400 to 450 MHz range in Europe and 900 MHz in the US. That standard, built on the OSI model, became a 1G wireless packet-switched network we call Mobitex. Mike Lazaridis was born in Istanbul and moved to Canada in 1966 when he was five, enrolling at the University of Waterloo in 1979. He dropped out of school to take a contract with General Motors to build a networked computer display in 1984. He took out a loan from his parents, got a grant from the Canadian government, and recruited another electrical engineering student, Doug Fregin from the University of Windsor, who designed the first circuit boards, to join him in starting a company they called Research in Motion. Mike Barnstijn joined them, and they were off to do research. After a few years of research projects, they built up a dozen employees and a million dollars in revenues. They became the first Mobitex provider in America, and by 1991 they shipped the first Mobitex device. 
They brought in James Balsillie as co-CEO in 1992 to handle corporate finance and business development - a partnership between co-CEOs that would prove fruitful for 20 years. Some of those work-for-hire projects they’d done involved reading bar codes, so they started with point-of-sale, enabling mobile payments, and by 1993 shipped RIMGate, a gateway for Mobitex. Then came a Mobitex point-of-sale terminal, and finally, with the establishment of the PCMCIA standard, a PCMCIA Mobitex modem they called Freedom. Two-way paging had already become a thing, and they were ready to venture out of point-of-sale systems. So in 1995, they took a $5 million investment to develop the RIM 900 OEM radio modem. The next year they also developed a pager capable of two-way messaging, the Inter@ctive Pager 900. Then they went public on the Toronto Stock Exchange in 1997. The next year, they sold a licensing deal for the 900 to IBM for $10 million. That IBM mark of approval is always a sign that a company is ready to play in an enterprise market. And enterprises increasingly wanted to keep executives just a quick two-way page away. But everyone knew there was a technology convergence on the way. They worked with Ericsson to further the technology and over the next few years competed with SkyTel in the interactive pager market. Enter the BlackBerry. They knew there was something new coming - just as the founders now know something is coming in quantum computing and run a fund for that. They hired a marketing firm called Lexicon Branding to come up with a name, and after they saw the keys on the now-iconic keyboard, the marketing firm suggested BlackBerry. They’d done the research and development, and they thought they had a product that was special. So they released the first BlackBerry, the 850, in Munich in 1999. But those were still using radio networks, and more specifically the DataTAC network. The age of mobility was imminent, although we didn’t call it that yet. 
Handspring and Palm each went public in 2000. That year, Research In Motion shipped its first cellular phone product, the BlackBerry 957, with push email and internet capability. But then came the dot-com bubble. Some thought the Internet might have been a fad and might in fact disappear. But instead the world was actually ready for that mobile convergence. Part of that was having developed a great operating system for the time when they released the BlackBerry OS the year before. And in 2000 the BlackBerry was named Product of the Year by InfoWorld. The new devices took the market by storm and shattered the previous personal information manager market, with shares of Palm dropping by over 90% and Palm OS being set up as its own corporation within a couple of years. People were increasingly glued to their email. While the BlackBerry could do web browsing and faxing over the internet, it was really the integrated email access, phone, and text messaging platform, the kind companies like General Magic had been working on as far back as the early 1990s, that set it apart.
The Rise of the BlackBerry
The BlackBerry was finally the breakthrough mobile product everyone had been expecting and waiting for. Enterprise-level security, integration with business email like Microsoft’s Exchange Server, a QWERTY keyboard that most had grown accustomed to, the option to use a stylus, and a simple menu made the product an instant smash success. And by instant we mean after five years of research and development and a massive financial investment. Palm owned the PDA market. But the Palm VII cost $599 and the BlackBerry cost $399 at the time (which was far less than the $675 the Inter@ctive Pager had cost in the 1990s). The BlackBerry also let us know when we had new messages using the emerging concept of push notifications. 2000 had seen the second version of the BlackBerry OS, and their AOL Mobile Communicator had helped them spread the message that the wealthy could have access to their data any time.
But by 2001 other carriers were signing on to support devices and BlackBerry was selling bigger and bigger contracts: 5,000 devices, 50,000 devices, 100,000 devices. And a company called Kasten Chase stepped in to develop a secure wireless interface to the Defense Messaging System in the US, which opened up another potential two million people in the defense industry. They expanded the service to cover more and more geographies in 2001 and revenues doubled, jumping to 164,000 subscribers by the end of the year. That’s when they added wireless downloads, so users could access all those MIME attachments in email and display them. Finally, reading PDFs on a phone, with the help of GoAmerica Communications! And somehow they won a patent for the idea that a single email address could be used on both a mobile device and a desktop. I guess the patent office didn’t understand why IMAP was invented by Mark Crispin at Stanford in the 80s, or why Exchange allowed multiple devices access to the same mailbox. They kept inking contracts with other companies. AT&T added the BlackBerry in 2002 in the era of GSM. The 5810 was the first truly convergent BlackBerry that offered email and a phone in one device with seamless SMS communications. It shipped in the US, and the 5820 in Europe, and Cingular Wireless jumped on board in the US and Deutsche Telekom in Germany, as well as Vivendi in France, Telecom Italia in Italy, etc. The devices had inched back up to around $500 with service fees ranging from $40 to $100, plus pretty limited data plans. The Treo came out that year, but while it was cool and provided a familiar interface to the legions of Palm users, it was clunky and had fewer options for securing communications. The NSA signed on, and by the end of the year they were a truly global operation, raking in revenues of nearly $300 million.
The Buying Tornado
They added web-based applications in 2003, as well as network printing.
They moved to a Java-based interface and added the 6500 series, adding a walkie-talkie function. But that 6200 series at around $200 turned out to be huge. This is when they went into that thing a lot of companies do: they started suing companies like Good and Handspring for infringing on patents they probably never should have been awarded. They eventually lost the cases and paid out tens of millions of dollars in damages. More importantly, they took their eyes off innovating, a common mistake in the history of computing companies. Yet there were innovations. They released BlackBerry Enterprise Server in 2004, then bolted on connectors to Exchange and Lotus Domino, and allowed for interfacing with XML-based APIs in popular enterprise toolchains of the day. They also later added support for GroupWise. That was one of the last solutions I can remember using that worked with symmetric key cryptography; it initially required the devices be cradled to get the necessary keys to secure communications, which then worked over Triple DES, common at the time. One thing we never liked was that messages did end up living at Research in Motion, even if encrypted in transit. This is one aspect that future types of push communications, like Microsoft Exchange’s ActiveSync, would resolve. By 2005 there were CVEs filed for BlackBerry Enterprise Server, racking up 17 in the six years that product shipped, up to 5.0 in 2010, before becoming BES 10 and much later BlackBerry Enterprise Mobility Management, a cross-platform mobile device management solution. Those BES 4 and 5 support contracts, or T-Support, could cost hundreds of dollars per incident. Microsoft had Windows Mobile clients out that integrated pretty seamlessly with Exchange. But people loved their BlackBerries. Other device manufacturers experimented with different modes of interactivity. Microsoft made APIs for pens and keyboards that flipped open. BlackBerry added a trackball in 2006, though it was always kind of clunky.
Nokia, Ericsson, Motorola, and others were experimenting with new ways to navigate devices, but people were used to menus and even styluses. And they seemed to prefer a look and feel like the menuing systems they used on HVAC controls, video games, and even the iPod.
The Eye Of The Storm
A new paradigm was on the way. Apple’s iPhone was released in 2007 and Google’s Android OS in 2008. By then the BlackBerry Pearl was shipping, and it was clear which devices were better. No one saw the two biggest threats coming. Apple was a consumer company. They were slow to add ActiveSync policies, which many thought would be the corporate answer to mobile management, as group policies in Active Directory had become for desktops. Apple and Google were slow to take the market, as BlackBerry continued to dominate the smartphone industry well into 2010, especially once then-president Barack Obama strong-armed the NSA into allowing him to use a special version of the BlackBerry 8830 World Edition for official communiqués. Other world leaders followed suit, as did the leaders of global companies that had previously been luddites when it came to constantly being online. Even Eric Schmidt, then chairman of Google, loved his Crackberry in 2013, five years after the arrival of Android. Looking back, we can see a steady rise in iPhone sales up to the iPhone 4, released in 2010. Many still said they loved the keyboard on their BlackBerries. Organizations had built BES into their networks and had policies dating back to DISA STIGs. Research in Motion owned the enterprise and held over half the US market and a fifth of the global market. That peaked in 2011. BlackBerry put mobility on the map. But companies like AirWatch, founded in 2003, and MobileIron, founded in 2007, had risen to take a cross-platform approach to the device management aspect of mobile devices.
We call them Unified Endpoint Management products today, and companies could suddenly support BlackBerry, Windows Mobile, and iPhones from a single console. Over 50 million BlackBerries were being sold a year and the stock was soaring at over $230 a share. Today, they hold essentially no market share and their stock performance shows it, even though they’ve pivoted to more of a device management company, given their decades of experience working with some of the biggest and most secure companies and governments in the world.
The Fall Of The BlackBerry
The iPhone was beautiful. It had amazing graphics and a full touch screen. It was the very symbol of innovation. The rising tide of the App Store also made it a developer’s playground (no pun intended). It was more expensive than the BlackBerry, but while Apple didn’t cater to the enterprise, they wedged their way in there, first with executives and then anyone. Initially that was because of ActiveSync, which had come along in 1996 mostly to support Windows Mobile, but which by Exchange Server 2003 SP2 could do almost anything Outlook could do, provided software developers like Apple could make the clients work. So by 2011, Exchange clients could automatically locate a server based on an email address (or, more to the point, based on DNS records for the domain) and work just like webmail, which was open in almost every IIS implementation that worked with Exchange. And Office 365 was released in 2011, paving the way to move from on-prem Exchange to what we now call “the cloud.” And Google Mail had been around for seven years by then, and people were putting it on the BlackBerry as well, blending home and office accounts on the same devices at times. In fact, Google licensed Exchange ActiveSync, or EAS, in 2009, so support for Gmail was showing up on a variety of devices. BlackBerry had everything companies wanted. But people slowly moved to that new iPhone. Or to Androids, when decent models of phones started shipping with the OS on them.
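That trick of locating a server from nothing but an email address is worth a quick sketch. The URL and SRV patterns below follow Microsoft's published Autodiscover conventions; the email address is a made-up example, and a real client would also try each candidate over HTTPS and parse the XML response:

```python
# A sketch of how an Autodiscover-style client derives candidate server
# endpoints from an email address alone. A real client probes these in
# order; here we just build the list.
def autodiscover_candidates(email: str) -> list[str]:
    domain = email.rsplit("@", 1)[1].lower()
    return [
        # first guess: the mail domain itself hosts the endpoint
        f"https://{domain}/autodiscover/autodiscover.xml",
        # second guess: a dedicated autodiscover host
        f"https://autodiscover.{domain}/autodiscover/autodiscover.xml",
        # failing those, clients query DNS for an SRV record by this name
        f"_autodiscover._tcp.{domain}",
    ]

for candidate in autodiscover_candidates("someone@example.com"):
    print(candidate)
```

The point is that the user types only an address; DNS and well-known paths do the rest, which is exactly what made webmail-grade setup possible on phones that weren't cradled to a desktop.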
BlackBerry stuck by that keyboard, even though it was clear that people wanted full touchscreens. The BlackBerry Bold came out in 2009. BlackBerry had not just doubled down on the keyboard instead of a full touchscreen, they had tripled down on it. They had released the Storm in 2008 and then the Storm 2 in 2009, but they just had a different kind of customer. Albeit one that was slowly starting to retire. This is the hard thing about being in the buying tornado. We’re so busy transacting that we can’t think ahead to staying in the eye, and we don’t see how the world is changing outside of it. As we saw with companies like Amdahl and Control Data, when we only focus on big customers and ignore the mass market, we leave room for entrants in our industries who have more mass appeal. Since the rise of the independent software market following the IBM antitrust cases, app developers have been a bellwether of successful platforms. And the iPhone revenue split was appealing, to say the least. Sales fell off fast. By 2012, the BlackBerry represented less than 6 percent of smartphones sold, by the start of 2013 that number had dropped in half, and it fell to less than 1 percent in 2014. That’s when the White House tested replacements for the BlackBerry. There was a small bump in sales when they finally released a product with specs competitive with the iPhone, but it was short-lived. The Crackberry craze was officially over. BlackBerry shot into the mainstream and brought the smartphone with them. They made the devices secure and work seamlessly in corporate environments, for those who could pay the money to run BES or BIS. They proved the market and then got stuck in the Innovator’s Dilemma. They became all about features that big customers wanted and needed. And so they missed the personal part of personal computing. Apple, as they did with the PC and then graphical user interfaces, saw a successful technology and made people salivate over it.
They saw how Windows had built a better sandbox for developers and built the best app delivery mechanism the world has seen to date. Google followed suit and managed to take a much larger piece of the market with more competitive pricing. There is so much we didn’t discuss, like the short-lived PlayBook tablet from BlackBerry. Or the Priv. Because for the most part, they are a device management company today. The founders are long gone, investing in the next wave of technology: Quantum Computing. The new face of BlackBerry is chasing device management, following adjacencies into security, and dabbling in IoT for healthcare and finance. Big-ticket types of buys that range from red teaming to automotive management to XDR. Maybe their future is in the convergence of post-quantum security, or maybe we’ll see their $5.5B market cap get tasty enough for one of those billionaires who really, really, really wants their chiclet keyboard back. Who knows, but part of the fun of this is that it’s a living history.
6/17/2022 • 25 minutes, 45 seconds
Colossal Cave Adventure
Imagine a game that begins with a printout that reads: You are standing at the end of a road before a small brick building. Around you is a forest. A small stream flows out of the building and down a gully. In the distance there is a tall gleaming white tower. Now imagine typing some information into a teletype and then reading the next printout. And then another. A trail of paper lists your every move. This is interactive gaming in the 1970s. Later versions had a monitor, so a screen could just show a cursor and the player needed to know what to type. Type N and hit enter and the player travels north. “Search” doesn’t work but “look” does. “Take water” works, as does “Drink water,” but it takes hours to find dwarves and dragons and figure out how to battle or escape. This is one of the earliest games we played, and it was marvelous. The game was called Colossal Cave Adventure and it was one of the first conversational adventure games. Many came after it in the 70s and 80s, in an era before good graphics were feasible. But the imagination was strong. The Oregon Trail was written before it, in 1971, and Trek73 came in 1973, both written for HP minicomputers. Dungeon was written in 1975 for a PDP-10. The author, Don Daglow, went on to work on games like Utopia and Neverwinter Nights. Another game called Dungeon showed up in 1975 as well, on the PLATO network at the University of Illinois Urbana-Champaign. As the computer monitor spread, so spread games. William Crowther got his degree in physics at MIT and then went to work at Bolt Beranek and Newman during the early days of the ARPANET. He was on the IMP team, the people who developed the Interface Message Processor, the first nodes of the packet switching ARPANET, the ancestor of the Internet. They were long hours, but when he wasn’t working, he and his wife Pat explored caves. She was a programmer as well. Or he played the new Dungeons & Dragons game that was popular with other programmers.
The two got divorced in 1975 and, like many suddenly single fathers, he searched for something for his daughters to do when they were at the house. Crowther combined exploring caves, Dungeons & Dragons, and FORTRAN to get Colossal Cave Adventure, often just called Adventure. And since he worked on the ARPANET, the game found its way out onto the growing computer network. Crowther moved to Palo Alto and went to work for Xerox PARC in 1976 before going back to BBN and eventually retiring from Cisco. Crowther loosely based the game mechanics on ELIZA, the natural language processing work done by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the 1960s. That had been a project to show how computers could appear to understand text typed into them, most notably used in tests to have a computer provide therapy sessions. And writing software for the kids, or gaming, can be therapeutic as well. As can replaying happier times. Crowther had explored Mammoth Cave National Park in Kentucky in the early 1970s. The game follows along his notes about the caves, players exploring the area around them using natural language while the computer looked for commands in what was entered. The original FORTRAN code ran about 700 lines on the PDP-10 he had at his disposal at BBN. When he was done he went off on vacation, and the game spread. Programmers in that era just shared code. Source needed to be recompiled for different computers, so they had to. Another programmer was Don Woods, who also used a PDP-10. He went to Princeton in the 1970s and was working at the Stanford AI Lab, or SAIL, at the time. He came across the game and asked Crowther if it would be OK to add a few features, and did. His version got distributed through DECUS, the Digital Equipment Computer Users Society. A lot of people went there for software at the time. The game was up to 3,000 lines of code when it left Woods.
The adventurer could now enter the mysterious cave in search of the hidden treasures. The concept of the computer as a narrator began with Colossal Cave Adventure and is now widely used, although we now have vast scenery rendered and can point and click where we want to go, so we don’t need to type commands as often. The interpreter looked for commands like “move,” “interact” with other characters, “get” items for the inventory, etc. Woods went further and added more words and the ability to interpret punctuation as well. He also added over a thousand lines of text used to identify and describe the 40 locations. Woods continued to update that game until the mid-1990s. James Gillogly of RAND ported the code to C so it would run on the newer Unix architecture in 1977, and it’s still part of many a BSD distribution. Microsoft published a version of Adventure in 1979 that was distributed for the Apple II and TRS-80, and followed that up in 1981 with a version for Microsoft DOS, or MS-DOS. Adventure was now a commercial product. Kevin Black wrote a version for IBM PCs. Peter Gerrard ported it to the Amiga and later implemented the version for the Tandy 1000. Bob Supnik rose to a Vice President at Digital Equipment, not because he ported the game, but it didn’t hurt. And throughout the 1980s, the game spread to other devices as well. The Original Adventure was a version that came out of Aventuras AD in Spain. They gave it one of the biggest updates of all. Colossal Cave Adventure was never forgotten, even as Zork eventually overshadowed it. Zork came along in 1977 and Adventureland in 1979. Ken and Roberta Williams played the game in 1979. Ken had bounced around the computer industry for a while and had a teletype terminal at home when he came across Colossal Cave Adventure in 1979. The two became transfixed and opened their own company to make the game they released the next year, called Mystery House.
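The interpreter described above, matching typed words against a table of verbs and room exits, is simple enough to sketch. This is a minimal illustration in Python rather than Crowther's FORTRAN, and the rooms, messages, and verb list here are invented for the example, not his original data:

```python
# A toy verb-noun interpreter in the spirit of Colossal Cave Adventure.
# Room names, descriptions, and exits are illustrative assumptions.
ROOMS = {
    "road": {
        "desc": "You are standing at the end of a road before a small brick building.",
        "n": "building",
    },
    "building": {
        "desc": "You are inside a small brick building, a wellhouse for a spring.",
        "s": "road",
    },
}

def interpret(state, line):
    """Match the player's input against known verbs, the way the original
    code scanned entered text for words in its command tables."""
    words = line.strip().lower().split()
    if not words:
        return "Huh?"
    verb = words[0]
    if verb in ("n", "s", "e", "w"):          # single-letter movement
        dest = ROOMS[state["room"]].get(verb)
        if dest is None:
            return "You can't go that way."
        state["room"] = dest
        return ROOMS[dest]["desc"]
    if verb == "look":
        return ROOMS[state["room"]]["desc"]
    if verb in ("take", "get") and len(words) > 1:
        state["inventory"].append(words[1])   # "take water" adds to inventory
        return f"OK, you have the {words[1]}."
    return "I don't understand that."         # "search" famously fails

state = {"room": "road", "inventory": []}
print(interpret(state, "look"))
print(interpret(state, "n"))
print(interpret(state, "take water"))
```

Even this toy shows why "look" works and "search" doesn't: the game understands only the words in its tables, and discovering that vocabulary was half the fun.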
And the text adventure genre moved to a new level when they sold 15,000 copies and it became the first hit. Rogue and others followed, increasingly interactive, until fully immersive graphical games replaced the adventure genre in general. That process began when Warren Robinett of Atari created the 1980 game Adventure. Robinett saw Colossal Cave Adventure when he visited the Stanford Artificial Intelligence Laboratory in 1977. He had been inspired into a life of programming by a professor he had in college, Ken Thompson, who was teaching while on sabbatical from Bell Labs. That’s where Thompson, with Dennis Ritchie and one of the most amazing teams of programmers ever assembled, gave the world Unix and the C programming language. Adventure went on to sell over a million copies, and the genre of fantasy action-adventure games moved from text to video.
6/2/2022 • 11 minutes, 28 seconds
MySpace And My First Friend, Tom
Before Facebook, there was MySpace. People logged into a web page every day to write to friends, show off photos, and play music. Some of the things we still do on social networks. The world had been shifting to personal use of computers since the early days when time sharing systems were used in universities. Then came the Bulletin Board Systems of the 80s. But those were somewhat difficult to use and prone to be taken over by people like the ones who went on to found DefCon and hacking collectives. Then in the 1990s computers and networks started to get easier to use. We got tools like AOL Instant Messenger and a Microsoft knockoff called Messenger. It’s different ‘cause it doesn’t say Instant. The rise of the World Wide Web meant that people could build their own websites in online communities. We got online communities like GeoCities in 1994, where users could build their own little web page. Some were notes from classes at universities; others, how to be better at dressing goth. They tried to sort people by communities they called cities, and then each member got an address number in their community. They grew fast and even went public before being acquired by Yahoo! in 1999. Tripod showed up the year after GeoCities came out and got acquired by Yahoo! competitor Lycos in 1998, signaling that portal services in a pre-modern search engine world would be getting into more content to show ads to eyeballs. Angelfire was another that started in 1996 and ended up in the Lycos portfolio as well. More people had more pages, and that meant more eyeballs to show ads to. No knowledge of HTML was really required, but it did help to know some. The GeoCities idea about communities was a good one. Turns out people liked hanging out with others like themselves online. People liked reading thoughts and ideas and seeing photos, if they ever bothered to finish downloading.
But forget to bookmark a page and it could be lost in the cyberbits, or whatever happened to pages when we weren’t looking at them. The concept of six degrees of Kevin Bacon had been rolling around a bit, so Andrew Weinreich got the idea to do something similar to Angelfire, and the next year created SixDegrees.com. It was easy to evolve the concept to bookmark pages by making connections on the site. Except that to get people into the site and signing up, the model appeared to be the flip side: enter real-world friends and family, and they were invited to join up. Accepted contacts could then post on each other’s bulletin boards or send messages to one another. We could also see who our connections were connected to, thus allowing us to say “oh, I met that person at a party.” Within a few years the web-of-contacts model was so successful that it had a few million users and was sold for over $100 million. By 2000 it was shut down, but it had proven there was a model there that could work. Xanga came along the next year as a weblog and social networking site but never made it to the same level of success. Classmates.com is still out there as well, having been founded in 1995 to build a web of contacts for finding those friends from high school we lost contact with. Then came Friendster and MySpace in 2003. Friendster came out of the gate faster but faded away quicker. These took the concepts of SixDegrees.com, where users invited friends and family, but went a little further, allowing people to post on one another’s boards. MySpace went a little further still. They used some of the same concepts GeoCities used and allowed people to customize their own web pages. When some people learned HTML to edit their pages, they got the bug to create. And so a new generation of web developers was created as people learned to lay out pages and do basic web programming in order to embed files, flash content, change backgrounds, and insert little DHTML or even JavaScript snippets.
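The web-of-contacts model that SixDegrees pioneered is, under the hood, a graph problem: how many friend-hops separate two members. A breadth-first search answers it. The names and friendships below are invented for illustration, not anything from the actual site:

```python
from collections import deque

# A toy friend graph in the spirit of SixDegrees.com's web of contacts.
# All members and links here are made up for the example.
FRIENDS = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice", "dave"],
    "dave": ["bob", "carol", "erin"],
    "erin": ["dave"],
}

def degrees(start, goal):
    """Breadth-first search: return the number of friend-hops from
    start to goal, or None if there is no path between them."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, hops = queue.popleft()
        if person == goal:
            return hops
        for friend in FRIENDS.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return None

print(degrees("alice", "erin"))  # alice → bob or carol → dave → erin: 3 hops
```

Showing "who our connections were connected to" is just this search capped at two or three hops, which is why the feature scaled so naturally into the friend lists every later social network copied.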
MySpace was co-founded by Chris DeWolfe, Aber Whitcomb, Josh Berman, and Tom Anderson while working at an incubator, or software holding company, called eUniverse, which was later renamed Intermix Media. Brad Greenspan founded that after going to UCLA and then jumping headfirst into the startup universe. He created Entertainment Universe, then raised $2M in capital from Lehman Brothers and another $5M from others, and bought a young site called CD Universe, which was selling compact discs online. He reverse merged that into an empty public shell company, like a modern SPAC works, and was suddenly the CEO of a public company, expanding into online DVD sales. Remember, these were the days leading up to the dot-com bubble. There was a lot of money floating around. They expanded into dating sites and other membership programs. We’d think of monthly member fees as Monthly Recurring Revenue now, but at the time there was so much free stuff on the internet that most sites just gave it away and built revenue streams on advertising revenues. CDs and DVDs have data on them. Data can be shared. Napster proved how lucrative that could be by then. Maybe that was something eUniverse should get into. DeWolfe created a tool called Sitegeist, which was a site with a little dating, a little instant messaging, and a little hyper-localized search. It was just a school project, but it got him thinking. Then, like millions of us were about to do, he met Tom. Tom was a kid from the valley who’d been tinkering with computers for years as “Lord Flathead,” who’d been busted hacking as a kid before going off to the University of California at Berkeley and then coming home to LA to do software QA for an online storage company. The company he worked for got acquired as a depressed asset by eUniverse in 2002, along with Josh Berman. They got matched up with DeWolfe, saw this crazy Friendster coming out of nowhere, and decided to build something like it.
They had a domain they weren’t using called MySpace.com, which they were going to use for another online storage project. So they grabbed Aber Whitcomb, fired up a ColdFusion IDE, and, given the other properties eUniverse was sitting on, had the expertise to get everything up and running fairly quickly. They launched MySpace internally first and then had little contests to see who could get the most people to sign up. eUniverse had tens of millions of users on the other properties, so they emailed them too. Within two years they had 20 million users and were the centerpiece of the eUniverse portfolio. Wanting in on what the young kids were doing these days, Rupert Murdoch and News Corporation, or NewsCorp for short, picked up the company for $580 million in cash. It’s like an episode of Succession, right? After the acquisition by NewsCorp, MySpace continued its exponential growth. Later in the year, the site started signing up 200,000 new users every day. About a year later, it was registering approximately 320,000 users each day. They localized into different languages and became the biggest website in the US. So they turned on the advertising machine, paying back their purchase price by doing $800 million in revenue for NewsCorp. MySpace had become the first big social media platform that was always free, that allowed users to freely express their minds and thoughts with millions of other users, provided they were 13 years or older. They restricted profiles of people younger than 16 so they couldn’t be viewed by people over 18 years old. That was to keep sexual predators from accessing the profile of a minor. Kids turned out to be a challenge. In 2006, after extensive research, the company began detecting and deleting profiles of registered sex offenders, which had started showing up on the platform.
MySpace partnered with Sentinel Tech Holdings Corporation to build a searchable national database containing names, physical descriptions, and other identity details, known as Sentinel Safe, which allowed them to keep track of over half a million registered sex offenders from U.S. government records. In doing so they developed the first national database of convicted sex offenders built to protect kids on a platform, which they then provided to state attorneys general when the sex offenders tried to use MySpace. Facebook was created in 2004 and Twitter in 2006. They picked up market share, but MySpace continued to do well in 2007, then not as well in 2008. By 2009, Facebook surpassed MySpace in the number of unique U.S. visitors. MySpace began a rapid decline and lost members fast. Network effects can disappear as quickly as they are created. They kept the site simple and basic; people would log in, make new friends, share music and photos, and chat with people. Facebook and Twitter constantly introduced new features for users to explore; this kept the existing users on the site and attracted more users. Then social media companies like Twitter began to target users on MySpace. New and more complicated issues kept coming up. Pages were vandalized, there were phishing attacks, malware got posted to the site, and there were outages, as the ColdFusion code had been easy to implement but proved harder to hyperscale. In fact, few had needed to scale a site like MySpace had to in that era. Not only were users abandoning the platform, but employees started to leave as well. The changes to MySpace’s ranks came quickly: a June 2009 layoff cut 37.5% of the workforce, reducing the employee count from 1,600 to 1,000. MySpace attempted to rebrand itself as primarily a music site to try to regain the audience they had lost.
They changed the layout to make it look more attractive, but the quick decline continued just as Facebook and Twitter were in the midst of a meteoric rise. In 2011 News Corporation sold MySpace to Specific Media and Justin Timberlake for around $35 million. Timberlake wanted to make a platform where fans could go and communicate with their favorite entertainers, listen to new music, watch videos, share music, and connect with others who liked the same things. Like GeoCities, but for music lovers. They never really managed to turn things around. In 2016, MySpace and its parent company were acquired by Time Inc., and later Time Inc. was in turn purchased by the Meredith Corporation. A few months later the news cycle on and about the platform became less positive. A hacker retrieved 427 million MySpace passwords and tried to sell them for $2,800. In 2019, MySpace accidentally deleted over 50 million digital files, including photos, songs, and videos, during a server migration. Everything up to 2015 was erased. In some ways that’s not the worst thing, considering some of the history left on older profiles. MySpace continues to push music today, with shows that include original content, like interviews with artists. It’s more of a way for artists to project their craft than a social network. It’s featured content, either sponsored by a label or artist, or from artists so popular or with such an intriguing story that their label doesn’t need to promote them. There are elements of a social network left, but nothing like the other social networks of the day. And there’s some beauty in that simplicity. MySpace was always more than just a social networking website; it was the social network that kickstarted the web 2.0 experience we know today. Tom was the first friend of everyone who joined the network, and so he became the first major social media star. MySpace became the most visited social networking site in the world, often surpassing Google in number of visitors.
Then the network effect moved elsewhere, and those who inherited the users analyzed what caused them to move away from MySpace and, whether through copying features, out-innovating, or acquisition, have managed to remain dominant for over a decade. But there’s always something else right around the corner. One of the major reasons people abandoned MySpace was to be with those who thought just like them. When Facebook was only available to college kids it had a young appeal. It slowly leaked into the mainstream, and my grandmother started typing the word “like” when I posted pictures of my kid. Because we grew up. They didn’t attempt to monetize too early. They remained stable. They didn’t spend more than they needed to keep the site going, so they never lost control to investors. Meanwhile, MySpace grew to well over a thousand people to support a web property that would take a dozen to support today. Facebook may move fast and break things. But they do so because they saw what happens when we don’t.
5/14/2022 • 18 minutes, 15 seconds
Gateway 2000, and Sioux City
Theophile Bruguier was a fur trader who moved south out of Montreal after a stint as an attorney in Quebec, following the death of his fiancée. He became friends with Chief War Eagle of the Yankton Sioux. We call him Chief, but he left the Santee rather than have a bloody fight over who would be the next chief. The Santee were being pushed down from the Great Lakes area of Minnesota and Wisconsin by the growing Ojibwe and were pushing further and further south. There are two main divisions of the Sioux people: the Dakota and the Lakota. And there are two main ethnic groups of the Dakota: the Eastern, sometimes called the Santee, and the Western, or the Yankton. After the issues with his native Santee, War Eagle was welcomed by the Yankton, where he had two wives and seven children. He then spent time with the white people moving into the area in greater and greater numbers. He acted as a messenger for them in the War of 1812 and then became a messenger for the American Fur Company and a guide along the Missouri. After the war, he was elected a chief and helped negotiate peace treaties. He married two of his daughters off to Theophile Bruguier, with whom he sailed the Missouri on trips between St Louis and Fort Pierre in the Dakota territory. The place where Theophile settled was where the Big Sioux and Missouri rivers meet. Two waterways for trade made his cabin a perfect place to do business. The chief died a couple of years later and was buried in what we now call War Eagle Park, a beautiful hike above Sioux City. His city. Around the same time, the Sioux throughout the Minnesota River valley were moved to South Dakota to live on reservations, having lost their lands, and war broke out in the 1860s. 
Back at the Bruguier land, more French settlers moved into the area after Bruguier opened a trading post. He was one of the 17 white people who voted in the first Woodbury County election, once Wahkaw County was renamed Woodbury to honor Levi Woodbury, a former Supreme Court Justice. Bruguier sold some of his land to Joseph Leonais in 1852. Leonais sold it to a land surveyor, Dr. John Cook, who founded Sioux City in 1854. By 1860, with the westward expansion of the US, the population had already risen to 400. Steamboats, railroads, and livestock yards followed, and by 1880 there were over 7,000 souls, growing to six times that by the time Bruguier died in 1896. Seemingly more comfortable with those of the First Nations, Bruguier was interred with Chief War Eagle and his first two wives on the bluffs overlooking a Sioux City that was totally unrecognizable by then. The goods this new industry brought had to cross the rivers. Before there were bridges to cross the sometimes angry rivers, ranchers had to ferry cattle across. Sometimes cattle fell off the barges, and once the barges were moving, they couldn’t stop for a single head of cattle. Ted Waitt’s ancestors rescued those cattle and sold them, eventually homesteading their own ranch. And that ranch is where Ted started Gateway Computers in 1985 with his friend Mike Hammond. Michael Dell started Dell Computer in 1984 and grew the company on the back of a strong mail order business. He went from selling repair services and upgrades to selling full systems. He wasn’t the only one to build a company on a mail and phone order business model in the 1980s and 1990s. Before the internet, that was the most modern way to transact business. Ted Waitt went to the University of Iowa in Iowa City a couple of years before Michael Dell went to the University of Texas. He started out in marketing and then spent a couple of years working for a reseller and repair store in Des Moines before he decided to start his own company. 
Gateway began life in 1985 as the Texas Instruments PC Network, or TIPC Network for short. They sold products for Texas Instruments computers like modems, printers, and other peripherals. The TI-99/4A had been released in 1979 and was discontinued a year before. It was a niche hobbyist market even by then, but the Texas Instruments Personal Computer had shipped in 1983 and came with an 8088 CPU. It was similar to an IBM PC and came with a DOS. But Texas Instruments wasn’t a clone maker, and the machines weren’t fully PC-compatible. Instead, there were differences. They found some success and made more than $100,000 in just a few months, so they brought in Ted’s brother Norm. Compaq, Dell, and a bunch of other companies were springing up to build computers. Anyone who had sold parts for an 8088 and used DOS on it knew how to build a computer. And after a few years of supplying parts, they had a good idea how to find inexpensive components to build their own computers. They could rescue parts and sell them to meatpacking plants as full-blown computers. They just needed some Intel chips; some boards, which were pretty common by then; and some RAM, which was dirt cheap due to a number of foreign companies dumping RAM into the US market. They built some computers and got up to $1 million in revenue in 1986. Then they built a fully IBM-compatible personal computer once they found the right mix of parts. It was close to what Texas Instruments sold, but came with a color monitor and two floppy disk drives, which were important in that era before all computers came with spinning hard drives. Their first computer sold for just under $2,000, which made it half what a Texas Instruments computer cost. They found the same thing that Dell had found: the R&D and marketing overhead at big companies meant they could be more cost-competitive. They couldn’t call the computers a TIPC Network though. 
Sioux City, Iowa became the Gateway to the Dakotas, and beyond, so they changed their name to Gateway 2000. Gateway 2000 then released an 80286, which we lovingly called the 286, in 1988 and finally left the ranch to move into the city. They also put Waitt’s marketing classes to use and slapped a photo of the cows from the ranch in a magazine ad that said “Computers from Iowa?” And in one of the better tactics for long-term loyalty, they gave cash bonuses to employees based on profits. Within a year, they jumped to $12 million in sales. Then $70 million in 1989, and they moved to South Dakota in 1990 to avoid paying state income tax. The cow turned out to be popular, so they kept Holstein cows in their ads and even added them to the box. Everyone knew what those Gateway boxes looked like. Like Dell, they hired great tech support people who seemed to love their jobs at Gateway and would help with any problems people found. They brought in the adults in 1990. Executives from big firms. They had been the first to make color monitors standard and now, with the release of Windows, they became the first big computer seller to standardize on the platform. They released a notebook computer in 1992. The HandBook was their first computer that didn’t do well. It could have been the timing, but in the midst of a recession, in a time when most households were getting computers, a low-cost computer sold well and sales hit $1 billion. Yet they had trouble scaling to ship hundreds of computers a day. They opened an office in Ireland and ramped up sales overseas. Then they went public in 1993, raising $150 million. The Waitts hung on to 85% of the company and used the capital raised in the IPO to branch into other areas to complete the Gateway offering: modems, networking equipment, printers, and more support representatives. Sales in 1994 hit $2.7 billion. They added another support center a few hours down the Missouri River in Kansas City. They opened showrooms. 
They added a manufacturing plant in Malaysia. They bought Osborne Computer. They opened showrooms, and by 1996 Gateway spent tens of millions a year in advertising. The ads worked and they became a household name. They became a top ten company in computing with $5 billion in sales. Dell was the only direct personal computer supplier that was bigger. They opened a new sales channel: the World Wide Web. Many still called in after looking up prices at first, but by 1997 they did hundreds of millions in sales on the web. By then, Ethernet had become the standard network protocol, so they introduced the E-Series, which came with networking built in. They bought Advanced Logic Research to expand into servers. They launched a dialup provider called gateway.net. By the late 1990s, the ocean of companies who sold personal computers was red. Anyone could head down to the local shop, buy some parts, and build their own personal computer. Dell, HP, Compaq, and others dropped their prices and Gateway was left needing a new approach. Three years before Apple opened their first store, Gateway launched Gateway Country: retail stores that sold the computers and the dialup service. And they went big fast, launching 58 stores in 26 states in a short period of time. With 2000 right around the corner, they also changed their name to Gateway, Inc. Price pressure continued to hammer away at them, and they couldn’t find talent, so they moved to San Diego. 1999 proved a pivotal year for many in technology. The run-up to the dot com bubble meant new web properties popped up constantly. AOL had more capital than they could spend and invested heavily into Gateway to take over the ISP business, which had grown to over half a million subscribers. They threw in free Internet access with the computers, opened more channels into different sectors, and expanded the retail stores to over 200. Some thought Waitt needed to let go and let someone with more executive experience come in. 
So long-time AT&T exec Jeff Weitzen, who had joined the company in 1998, took over as CEO. By then Waitt was worth billions and it made sense that maybe he could go run a cattle ranch. By then his former partner Mike Hammond had a little business fixing up cars, so why not explore something new. Waitt stayed on as chairman as Weitzen reorganized the company. But the prices of computers continued to fall. To keep up, Gateway released the Astro computer in 2000. This was an affordable, small desktop that had a built-in monitor, CPU, and speakers. It ran a 400 MHz Intel Celeron and had a CD-ROM, a 4.3 GB hard drive, 64 megabytes of memory, a floppy, a modem, Windows 98 Second Edition, Norton Anti-Virus, USB ports, and the Microsoft Works Suite. All this came in at $799. Gateway had led the market with Windows and other firsts they jumped on board with. They had been aggressive. The first iMac had been released in 1998, and this seemed like they were following that with a cheaper computer. Gateway Country grew to over 400 stores. But the margins had gotten razor thin. That meant profits were down. Waitt came back to run the company. The US Securities and Exchange Commission filed fraud charges against Weitzen, the former controller, and the former CFO, and that raged on for years. In that time, Gateway got into TVs, cameras, and MP3 players, and in 2004 acquired eMachines, a rapidly growing economy PC manufacturer. Their CEO, Wayne Inouye, then came in to run Gateway. He had been an executive at The Good Guys! and Best Buy before taking the helm of eMachines in 2001, helping them open sales channels in retail stores. But Gateway didn’t get as much of a foothold in retail. That HandBook failure from the early 1990s stuck with Gateway. They never managed to ship a game-changing laptop. Then the market started to shift to laptops. Other companies leapt at that market, but Gateway never seemed able to ship the right device. They instead branched into consumer electronics. 
The dot com bubble burst and they never recovered. The financial woes with the SEC hurt trust in the brand. The outsourcing hurt trust in the brand. The acquisition of a budget manufacturer hurt the brand. Apple managed to open retail stores to great success while preserving relationships with big box retailers. But Gateway lost that route to market when they opened their own stores. Then Acer acquired Gateway in 2007. They can now be found at Walmart, having been relaunched as a budget brand of Acer, a company the big American firms once outsourced to, but which now stands on its own two feet as a maker of personal computers.
5/9/2022 • 18 minutes, 56 seconds
The WYSIWYG Web
4/29/2022 • 24 minutes, 37 seconds
Whistling Our Way To Windows XP
Microsoft had confusion in the Windows 2000 marketing and disappointment with Millennium Edition, which was built on a kernel that had run its course. It was time to phase out the older 95, 98, and Millennium code. So in 2001, Microsoft introduced Windows NT 5.1, known as Windows XP (for eXperience). XP came in a Home or Professional edition. Microsoft developed XP under the codename Whistler, and it shipped with a sleeker interface that made more use of the graphics processors of the day. Jim Allchin was the Vice President in charge of the software group by then and helped spearhead development. XP had even more security options, which were simplified in the Home edition. They did a lot of work to improve the compatibility between hardware and software and added the option for fast user switching so users didn’t have to log off completely and close all of their applications when someone else needed to use the computer. They also improved the digital media experience and added new libraries to incorporate DirectX for various games. The Professional edition added options that were more business focused. This included the ability to join a domain and Remote Desktop, without the need of a third party product to take control of the keyboard, video, and mouse of a remote computer. Users could use their XP Home Edition computer to log into work, if the network administrator could forward the necessary port. XP Professional also came with the ability to support multiple processors, send faxes, an encrypted file system, more granular control of files and other objects (including GPOs), roaming profiles (centrally managed through Active Directory using those GPOs), multiple language support, IntelliMirror (an oft forgotten centralized management solution that included RIS and sysprep for mass deployments), and an option to do an Automated System Recovery, or ASR, restore of a computer. 
Professional also came with the ability to act as a web server, not that anyone should run one on a home operating system. XP Professional was also available as a 64-bit edition, given the right processor. XP Home Edition could be upgraded to from Windows 98, Windows 98 Second Edition, and Millennium; XP Professional could be upgraded to from any operating system released since Windows 98, including NT 4 and Windows 2000 Professional. And users could upgrade from Home to Professional for an additional $100. Microsoft also fixed a few features. One that had plagued users was that they had to gracefully unmount a drive before removing it; Microsoft got in front of this when they removed the warning that a drive was disconnected improperly and had the software take care of that preemptively. They removed some features users didn’t really use, like NetMeeting and Phone Dialer, and removed some of the themes options. The 3D Maze was also sadly removed. Other options just cleaned up the interface or merged technologies that had become similar; for example, the Deluxe CD Player and DVD Player were removed in favor of just using Windows Media Player. And chatty network protocols that caused problems, like NetBEUI and AppleTalk, were removed from the defaults, as was the legacy Microsoft OS/2 subsystem. In general, Microsoft moved from two operating system code bases to one. Although with the introduction of Windows CE, they arguably had no net savings. However, to the consumer and enterprise buyer, it was a simpler licensing scheme. Those enterprise buyers were more and more important to Microsoft. Larger and larger fleets gave them buying power, and the line items with resellers showed it with an explosion in the number of options for licensing packs and tiers. And feature-wise, Microsoft had spent the Windows NT and Windows 2000 era training thousands of engineers on how to manage large fleets of Windows machines as Microsoft Certified Systems Engineers (MCSE) and holders of other credentials. 
Deployments grew, and by the time XP was released, Microsoft had the lion’s share of the market for desktop operating systems and productivity apps. XP would only cement that lead and create a generation of systems administrators equipped to manage the platform, who never knew a way other than the Microsoft way. One step along the path to the MCSE was through servers. For the first couple of years, XP connected to Windows 2000 Servers. Windows Server 2003, which was built on the Windows NT 5.2 kernel, was then released in 2003. Here, we saw Active Directory cement a lead created in 2000 over servers from Novell and other vendors. Server 2003 became the de facto platform for centralized file, print, web, FTP, software, time, DHCP, DNS, event, messaging, and terminal services (or shared Remote Desktop services through Terminal Server). Server 2003 could also be purchased with Exchange 2003. Given the integration with Microsoft Outlook and a number of desktop services, Microsoft Exchange spread quickly. The groupware market in 2003 and the years that followed was dominated by Lotus Notes, Novell’s GroupWise, and Exchange. Microsoft was aggressive. They were aggressive on pricing. They released tools to migrate from Notes to Exchange the week before IBM’s conference. We saw some of the same tactics and some of the same faces that were involved in Microsoft’s Internet Explorer anti-trust suit from the 1990s. The competition to Exchange never recovered, and while Microsoft gained ground in the groupware space through the Exchange Server 4.0, 5.0, 5.5, 2000, 2003, 2007, 2010, 2013, and 2016 eras, by Exchange 2019 over half the mailboxes formerly hosted by on premises Exchange servers had moved to the cloud, predominantly to Microsoft’s Office 365 cloud service. Some still used legacy Unix mail services like sendmail or those hosted by third party providers like GoDaddy with their domain or website - but many of those ran on Exchange as well. 
The only company to put up true competition in the space has been Google. Other companies had released tools to manage Windows devices en masse. Companies like Altiris sprang out of the needs of companies who did third party software testing to manage the state of Windows computers. Microsoft had a product called Systems Management Server, but Altiris built a better product, so Microsoft built an even more robust solution called System Center Configuration Manager, or SCCM for short, and within a few years Altiris lost so much business they were acquired by Symantec. Other similar stories played out across other areas where each product competed with other vendors and sometimes market segments - and usually won. To a large degree this was because of the tight hold Windows had on the market. Microsoft had taken the desktop metaphor and seemed to own the entire stack by the end of the Windows XP era. However, the technology we used shipped a couple of years after the product management and product development teams started to build it. And by the end of the XP era, Bill Gates had been gone long enough, and many of the early stars who almost by pure will pushed products through development cycles were as well. Microsoft continued to release new versions of the operating system, but XP became one of the biggest competitors to later operating systems rather than other companies. This reluctance to move to Vista and other technologies was the main reason Microsoft extended support for XP all the way to 2014, around 13 years after it was released.
4/25/2022 • 11 minutes, 31 seconds
Windows NT 5 becomes Windows 2000
Microsoft Windows 2000 was the successor to Windows NT 4.0, which had been released in 1996. Windows 2000 didn’t have a code name (supposedly because Jim Allchin didn’t like codenames), although its service packs did; Service Pack 1 and Windows 2000 64-bit were codenamed "Asteroid" and "Janus," respectively. 2000 began as NT 5.0, but Microsoft announced the name change in 1998, a signal of when customers might expect the OS. Some of the enhancements were just to match the look and feel of the consumer Windows 98 counterpart. For example, the logo in the boot screens was cleaned up and they added new icons. Some found Windows 2000 to be more reliable; others claimed it didn’t have enough new features. But what it might have lacked in features at a cursory glance, Windows 2000 made up for in stability, scalability, and reliability. This time around, Microsoft had input from some of their larger partners. They released the operating system to partners in 1999, after releasing three release candidates, or developer previews, earlier that year. They needed to, if only so third parties could understand what items needed to be sold to customers. There were enough editions now that it wasn’t uncommon for resellers to have to call the licensing desk at a distributor (similar to a wholesaler for packaged goods) in order to figure out what line items the reseller needed to put on a bid, or estimate. Reporters hailed it as the most stable product ever produced by Microsoft. It was also the most secure version yet. 2000 brought Group Policies forward from NT and enhanced what could be controlled from a central system. The old flat domain concept for managing domains was enhanced to become what Microsoft called Active Directory, a modern directory service that located resources in a database and allowed for finely grained controls of those resources. 
Windows 2000 also introduced NTFS 3.0 and an Encrypted File System, built on top of layers of APIs, each with their own controls. Still, Windows 98 was the most popular operating system in the world by then, and it was harder to move people to 2000 than initially expected. Microsoft released Windows 98 Second Edition in 1999 and then Windows Millennium Edition, or Me, in 2000. Millennium was a flop and helped move more people to 2000, even though 2000 was marketed as a business or enterprise operating system. Windows 2000 Professional was the workstation workhorse. Active Directory and other server services ran on Windows 2000 Server Edition. They also released Advanced Server and Datacenter Server for even more demanding environments, with Datacenter able to support up to 32 CPUs. Professional borrowed many features from both NT and 98 Second Edition, including the Outlook Express email client, expanded file system support, WebDAV support, Windows Media Player, WDM (Windows Driver Model), the Microsoft Management Console (MMC) for making it easier to manage those GPOs, support for new mass storage devices like FireWire, hibernation and passwords to wake up from hibernation, the System File Checker, new debugging options, better event logs, the Windows Desktop Update, Windows Update (which would eventually give us “Patch Tuesday”), a new Windows Installer, Windows Management Instrumentation (WMI), Plug and Play hardware (installing new hardware in Windows NT was a bit more like doing so in Unix than in Windows 95), and all the transitions and animations of the Windows shell, like an Explorer integrated with Internet Explorer. Some of these features were abused. We got Code Red, Nimda, and other malware that became high-profile attacks against vulnerable binaries. These were unprecedented in terms of how quickly a flaw in the code could get abused en masse. Hundreds of thousands of computers could be infected in a matter of days with a well-crafted exploit. 
Even some of the server services were exploited, such as IIS, the Internet Information Services web server. Microsoft responded with security bulletins, but buffer overflows and other vulnerabilities allowed mass infections. So much so that the US and other governments got involved. This wasn’t made any easier by the fact that the source code for parts of 2000 was leaked on the Internet and had been used to help find new exploits. Yet Windows 2000 was still the most secure operating system Microsoft had put out. Imagine how many viruses and exploits would have appeared on all those computers if it hadn’t been. And within Microsoft, Windows 2000 was a critical step toward mass adoption of the far more stable, technically sophisticated Windows NT platform. It demonstrated that a technologically powerful Windows operating system could also have a user-friendly interface and multimedia capabilities.
4/17/2022 • 7 minutes, 53 seconds
The R Programming Language
R is the 18th letter of the Latin alphabet. It represents the rhotic consonant, or the r sound. It goes back to the Greek rho, the Phoenician resh before that, and the Egyptian rêš, which was also the Egyptian word for head, before that. R appears in about seven and a half percent of the words in the English dictionary. And R is probably the best language out there for programming around various statistical and machine learning tasks. We may use tools like TensorFlow imported into languages like Python to prototype, but R is incredibly performant for all the maths. And so it has become an essential piece of software for data scientists. The R programming language was created in 1993 by two statisticians, Robert Gentleman and Ross Ihaka, at the University of Auckland, New Zealand. It has since been ported to practically every operating system and is available at r-project.org. R grew out of the S language, and the one-letter name both nods to S and avoids a trademark conflict with a commercial S package that we’ll discuss in a bit. R is primarily written in C, with parts in Fortran and, these days, in R itself. And there have been statistical packages since the very first computers were used for math. BMDP got its start at the UCLA Health Computing Facility. That was 1957. Then came SPSS out of the University of Chicago in 1968. And the same year, John Sall and others gave us SAS, or the Statistical Analysis System, out of North Carolina State University. Those evolved from the early days into the 80s with the advent of object oriented everything, and thus got not only windowing interfaces but also extensibility, code sharing, and, as we moved into the 90s, acquisitions. BMDP was acquired by SPSS, which was then acquired by IBM, and the products were getting more expensive but not getting a ton of key updates for the same scientific and medical communities. And so we saw the upstarts in the 80s: Data Desk and JMP and others. 
Tools built for windowing operating systems and in object oriented languages. We got the ability to interactively manipulate data, zoom in on and spin three dimensional representations of data, and all kinds of pretty aspects. But they were not a programmer’s tool. S was begun in the seventies at Bell Labs and was supposed to be a statistical MATLAB, a language specifically designed for number crunching. And the statistical techniques went far beyond where SPSS and SAS had stopped. With the breakup of Ma Bell, parts of Bell became Lucent, which sold S to Insightful Corporation, who released S-PLUS and would later get bought by TIBCO. Keep in mind, Bell was testing line quality and statistics, and going back to World War II employed some of the top scientists in those fields, ones who would later create large chunks of the quality movement and implementations like Six Sigma. Once S went to a standalone software company, it basically became less about the statistics and more about porting to different computers to make more money. Private equity and portfolio conglomerates are, by nature, after improving the multiples on a line of business. But sometimes statisticians in various fields might feel left behind. And this is where R comes into the picture. R gained popularity among statisticians because it made it easier to write complicated statistical algorithms without learning an entire general-purpose programming language. Its popularity has grown significantly since then. R has been described as a cross between MATLAB and SPSS, but much faster. R was initially designed to be a language that could handle statistical analysis and other types of data mining, an offshoot of which we now call machine learning. R is also an open-source language and, as with a number of other languages, has plenty of packages available through a package repository, which they call CRAN (the Comprehensive R Archive Network). 
This allows R to be used in fields outside of statistics and data science, or to just get new methods to do math that doesn’t belong in the main language. There are over 18,000 packages for R. One of the more popular is ggplot2, an open-source data visualization package. data.table is another, which performs programmatic data manipulation operations. dplyr provides functions designed to enable data frame manipulation in an intuitive manner. tidyr helps create tidier data. Shiny generates interactive web apps. And there are plenty of packages to make R easier, faster, and more extensible. By 2015, more than 10 million people used R every month, and it’s now the 13th most popular language in use. And the needs have expanded. We can drop R scripts into other programs and tools for processing. And some of the workloads are huge, which led to support for parallel computing, including via MPI (Message Passing Interface). R is one of the most popular languages used for statistical analysis, statistical graphics generation, and data science projects. There are other languages or tools for specific uses, but R has even started being used in those. The latest version, R 4.1.2, was released on November 1, 2021. R development, as with most thriving open source solutions, is guided by a group of core developers supported by contributions from the broader community. It became popular because it provides all the essential features for data mining and graphics needed for academic research and industry applications, and because of its pluggable, robust, and versatile nature. And projects like TensorFlow and NumPy and scikit-learn have evolved for other languages. And there are services from companies like Amazon that can host and process assets from both, whether using unstructured databases like NoSQL or using Jupyter notebooks. 
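To make the statistical bread and butter described above a little more concrete, here is a minimal ordinary least squares fit, the kind of thing R expresses in a single lm() call. It's sketched in plain Python rather than R, purely for illustration; the function name ols_fit and the sample points are made up for this example, but the math is the standard covariance-over-variance formula.

```python
# Minimal ordinary least squares fit of a line y = intercept + slope * x.
# In R this is a one-liner, lm(y ~ x); here the math is spelled out by hand.

def ols_fit(xs, ys):
    """Return (intercept, slope) minimizing the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Points lying exactly on y = 1 + 2x recover intercept 1 and slope 2.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
print(ols_fit(xs, ys))  # (1.0, 2.0)
```

Packages like those above layer model diagnostics, plotting, and formula syntax on top of exactly this kind of arithmetic.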
A Jupyter Notebook is a JSON document, following a versioned schema, that contains an ordered list of input/output cells which can contain code, text (using Markdown), formulas, algorithms, plots, and even media like audio or video. Project Jupyter was a spin-off of IPython, but the goal was to create a language-agnostic tool where we could execute aspects in Ruby or Haskell or Python or even R. This gives us so many ways to get our data into the notebook, in batches or deep learning environments or whatever pipeline needs to be built based on an organization’s stack. Especially if the notebook has a frontend like Amazon SageMaker Notebooks, Google's Colaboratory, or Microsoft's Azure Notebooks. Think about this. 25% of languages lack a rhotic consonant. Sometimes it seems like we’ve got languages that do everything or that we’ve built products that do everything. But I bet no matter the industry or focus or sub-specialty, there’s still 25% more automation or investigation into our own data to be done. Because there always will be.
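Since a notebook really is just a JSON document with an ordered list of cells, here is a sketch that builds a minimal one with nothing but Python's json module. The field names follow the nbformat 4 schema; the cell contents themselves are invented for illustration.

```python
import json

# A minimal notebook: schema version info plus an ordered list of cells,
# one Markdown cell and one code cell, per the nbformat 4 schema.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["# Example\n", "Text cells use Markdown."]},
        {"cell_type": "code", "metadata": {}, "execution_count": None,
         "outputs": [],  # a frontend fills this in when the cell runs
         "source": ["xs = [1, 2, 3]\n", "sum(xs)"]},
    ],
}

# Serializing this dict to a file named something.ipynb yields a notebook
# that frontends like JupyterLab can open; here we just round-trip it.
text = json.dumps(notebook, indent=1)
round_trip = json.loads(text)
print(len(round_trip["cells"]), round_trip["cells"][1]["cell_type"])
```

That plain-text, schema-driven format is a big part of why so many frontends can all speak notebook.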
4/1/2022 • 10 minutes, 50 seconds
The Earliest Days of Microsoft Windows NT
The first operating systems as we might think of them today (or at least anything beyond a basic task manager) shipped in the form of Multics in 1969. Some of the people who worked on that then helped create Unix at Bell Labs in 1971. Throughout the 1970s and 1980s, Unix flowed to education, research, and corporate environments through minicomputers, and many in those environments thought a flavor of BSD, or Berkeley Software Distribution, might become the operating system of choice on microcomputers. But the microcomputer movement had a whole other plan, if only in spite of the elder minicomputers. Apple DOS was created in 1978, in a time when most companies who made computers had to make their own DOS as well, if only so software developers could build disks capable of booting the machines. Microsoft created their Disk Operating System, or MS-DOS, in 1981. They released Windows 1 to sit on top of MS-DOS in 1985; it was built in Intel 8086 assembler and called operating system services via interrupts. That led to programmers poking directly at memory addresses and writing code that assumed a single-user operating system. Then came Windows 2 in 1987 and Windows 3 in 1990, and Microsoft released one of the most anticipated operating systems of all time in 1995 with Windows 95. 95 turned into 98, and then Millennium in 2000. But in the meantime, Microsoft began work on another generation of operating systems based on a fusion of ideas between work they were doing with IBM, work architects had done at Digital Equipment Corporation (DEC), and rethinking all of it with modern foundations of APIs and layers of security sitting atop a kernel. Microsoft worked on OS/2 with IBM from 1985 to 1989. This was to be the IBM-blessed successor of the personal computer. But IBM was losing control of the PC market with the rise of cloned IBM architectures. IBM was also big and corporate, and the small, fledgling Microsoft was able to move quicker. 
Really small companies that find success often don’t mesh well with really big companies that have layers of bureaucracy. The people Microsoft originally worked with were nimble and moved quickly. The ones presiding over the massive sales and go-to-market efforts and the explosion in engineering team size were the old IBM. OS/2 had APIs for most everything the computer could do. This meant that programmers weren’t just calling assembly any time they wanted and invading whatever memory addresses they wanted. They also wanted preemptive multitasking and threading. And a file system, since by then computers had internal hard drives. The Microsoft and IBM relationship fell apart and Microsoft decided to go their own way. Microsoft realized that DOS was old and building on top of DOS was going to someday be a big, big problem. Windows 3 was closer, as was 95, so they continued on with that plan. But they also started something similar to what we’d call a fork of OS/2 today. So Gates went out to recruit the best in the industry. He hired Dave Cutler from Digital Equipment to take on the architecture of the new operating system. Cutler had worked on the VMS operating system and helped lead efforts for a next-generation operating system at DEC that they called MICA. And that moment began the march towards a new operating system called NT, which borrowed much of the best from VMS, Microsoft Windows, and OS/2 - and had little baggage. Microsoft was supposed to make version 3 of OS/2, but NT OS/2 3.0 would become just Windows NT when Microsoft stopped developing on OS/2. It took 12 years, because, um, they had a loooooot of customers after the wild success of first Windows 3 and then Windows 95, but eventually Cutler and team’s NT would replace all other operating systems in the family with the release of Windows 2000. Cutler had wanted to escape the confines of what was by then the second largest computing company in the world. 
Cutler worked on VMS and RSX-11M before he got to Microsoft. There were constant turf battles and arguments about microkernels and system architecture, and meetings weren’t always conducive to actually shipping code. So Cutler went somewhere he could ship. At least, so long as they kept IBM at bay. Cutler brought some of the team from Digital with him and they got to work on that next generation of operating systems in 1988. They sat down to decide what they wanted to build, using the NT OS/2 operating system they had as a starting point. Microsoft had sold Xenix, and the team knew about most every operating system on the market at the time. They wanted a multi-user environment like a Unix. They wanted programming APIs, especially for networking, but different than what BSD had. In fact, many of the paths and structures of networking commands in Windows still harken back to emulating those structures. The system would be slow on the 8086 processor, but ever since the days of Xerox PARC, everyone knew Moore’s Law was real and that processors would double in speed every other year. Especially since Moore was still at Intel and could make his law remain true with the 286 and 386 chips in the pipeline. They also wanted the operating system to be portable, since IBM had selected the Intel CPU but there were plenty of other CPU architectures out there as well. The original name for NT was to be OS/2 3.0. But the IBM and Microsoft relationship fell apart and the two companies took their operating systems in different directions. OS/2 went the direction of Warp and IBM never recovered. NT went in a direction where some ideas came over from Windows 95 or 3.1, but mostly the team just added layers of APIs and focused on making NT a fully 32-bit version of Windows that could be ported to other platforms, including MIPS, PowerPC, and the DEC Alpha that Cutler had exposure to from his days at Digital. 
The name became Windows NT and NT began with version 3, as it was in fact the third installment of OS/2. The team began with Cutler and a few others, grew to eight, and by the time it finally shipped as NT 3.1 in 1993 there were a few hundred people working on the project. Where Windows 95 became the mass-market operating system, NT took lessons learned from the Unix, IBM mainframe, and VMS worlds and packed them into an operating system that could run on a corporate desktop computer, as microcomputers were called by then. The project cost $150 million, about the same as the development of the first iPhone. It was a rough start. But that core team and those who followed did what Apple couldn’t in a time when a missing modern operating system nearly put Apple out of business. Cutler inspired, good managers drove teams forward, some bad managers left, other bad managers stayed, and in an almost agile development environment they managed to break through the conflicts and ship an operating system that didn’t actually seem like it was built by a committee. Bill Gates knew the market and was patient enough to let NT 3 mature. They took parts of OS/2, like LAN Manager. They took parts of Unix, like ping. But those were at the application level. The microkernel was the most important part. And that was a small core team, like it always is. The first version they shipped to the public was Windows NT 3.1. The salespeople often found it easiest to say that NT was the business-oriented operating system. Over time, the Windows NT series was slowly enlarged to become the company’s general-purpose OS product line for all PCs, and thus Microsoft abandoned the Windows 9x family, which might or might not have a lot to do with the poor reviews Millennium Edition got. Other aspects of the application layer the original team didn’t do much with included the GUI, which was much more similar to Windows 3.x. 
But based on great APIs they were able to move faster than most, especially in that era when Unix was in weird legal territory, changing hands from Bell to Novell, and BSD was also in dubious legal territory. The Linux kernel had been written in 1991 but wasn’t yet a desktop-class operating system. So the remaining choices most businesses considered were really Mac, which had serious operating system issues at the time and seemed to lack a vision since Steve Jobs left the company, or Windows. Windows NT 3.5 was introduced in 1994, followed by 3.51 a year later. During those releases they shored up access control lists for files, functions, and services - services being similar in nearly every way to a daemon in Unix. It sported a TCP/IP network stack, but also NetBIOS for locating computers to establish a share, and a file sharing stack in LAN Manager based on the Server Message Block, or SMB, protocol that Barry Feigenbaum wrote at IBM in 1983 to turn a DOS computer into a file server. Over the years, Microsoft and 3Com added additional functionality, and Microsoft later added LDAP (out of the University of Michigan) as a directory backend and Kerberos (out of MIT) to provide single sign-on services. 3.51 also brought a lot of user-mode components from Windows 95. That included the Windows 95 common control library, which included the rich edit control, and a number of tools for developers. NT could already run DOS software; now they were getting it to run Windows 95 software without sacrificing the security of the operating system where possible. It kinda looked like a slightly more boring version of 95. And some of the features were a little harder to use, like configuring a SCSI driver to get a tape drive to work. But they got the ability to run Office 95, and it was the last version that ran the old Program Manager graphical interface. 
Cutler had been joined by Moshe Dunie, who led the management side of NT 3.1 through NT 4 and became the VP of the Windows Operating System Division, so also had responsibility for Windows 98 and 2000. For perspective, that operating system group grew to include 3,000 badged Microsoft employees and about half that number of contractors. Mark Lucovsky and Lou Perazzoli joined from Digital. Jim Allchin came in from Banyan Vines. Windows NT 4.0 was released in 1996, with a GUI very similar to Windows 95. NT 4 became the workhorse of the field that emerged for large deployments of computers we now refer to as enterprise computing. It didn’t have all the animation-type bells and whistles of 95 but did perform about as well as any operating system could. It had the NT Explorer to browse files and a Start menu, for which many of us just clicked Run and typed cmd. It had a Windows Desktop Update and a task scheduler. They released a number of features that would take years for other vendors to catch up with. DCOM, or the Distributed Component Object Model, and Object Linking & Embedding (or OLE) were core aspects any developer had to learn. The Telephony API (or TAPI) allowed access to the modem. The Microsoft Transaction Server allowed developers to build network applications on their own sockets. The Crypto API allowed developers to encrypt information in their applications. The Microsoft Message Queuing service allowed queuing data transfer between services. They also built in DirectX support and already had OpenGL support. The Task Manager in NT 4 was like an awesome graphical version of the top command on Unix. And it came with Internet Explorer 2 built in. NT 4 would be followed by a series of service packs for 4 years before the next generation of operating system was ready. That was NT 5, more colloquially known as Windows 2000. 
In those years NT became known as NT Workstation, the server became known as NT Server, and they built out Terminal Server Edition in collaboration with Citrix. And across 6 service packs, NT became the standard in enterprise computing. IBM released OS/2 Warp version 4.52 in 2001, but never had even a fraction of the sales Microsoft did. By contrast, NT 5.1 became Windows XP and 6.0 became Vista, while OS/2 was cancelled in 2005.
3/24/2022 • 17 minutes, 55 seconds
Qualcomm: From Satellites to CDMA to Snapdragons
Qualcomm is the world's largest fabless semiconductor designer. The name Qualcomm is a mashup of Quality and Communications, and communications has been a hallmark of the company since its founding. They began in satellite communications, and today most every smartphone has a Qualcomm chip. The ubiquity of communications in our devices and everyday lives has allowed them a $182 billion market cap as of the time of this writing. Qualcomm began with far humbler beginnings. They emerged out of a company called Linkabit in 1985. Linkabit was started by Irwin Jacobs, Leonard Kleinrock, and Andrew Viterbi - all three former graduate students at MIT. Viterbi moved to California to take a job with JPL in Pasadena, where he worked on satellites. He then went off to UCLA, where he developed what we now call the Viterbi algorithm, for encoding and decoding digital communications. Jacobs worked on a book called Principles of Communication Engineering after getting his doctorate at MIT. Jacobs then took a year of leave to work at JPL after he met Viterbi in the early 1960s and the two hit it off. By 1966, Jacobs was a professor at the University of California, San Diego. Kleinrock was at UCLA by then, and the three realized they had too many consulting efforts between them, but if they consolidated the requests they could pool their resources. Eventually Jacobs and Viterbi left, and Kleinrock got busy working on the first ARPANET node when it was installed at UCLA. Jerry Heller, Andrew Cohen, Klein Gilhousen, and James Dunn eventually moved into the area to work at Linkabit, and by the 1970s Jacobs was back to help design telecommunications for satellites. They’d been working to refine the theories from Claude Shannon’s time at MIT and Bell Labs and were some of the top names in the industry on the work. And the space race needed a lot of this type of work. They did their work on Scientific Data Systems computers, in an era before that company was acquired by Xerox. 
Much as Claude Shannon got started thinking of data loss as it pertains to information theory while trying to send telegraphs over barbed wire, they refined that work thinking about sending images from Mars to Earth. Others from MIT worked on other space projects as a part of missions. Many of those early employees were Viterbi’s PhD students, and they were joined by Joseph Odenwalder, who took Viterbi’s decoding work and combined it with a previous dissertation out of MIT when he joined Linkabit. That got used in the Voyager space probes and put Linkabit on the map. They were hiring some of the top talent in digital communications and were able to promote not only being able to work with some of the top minds in the industry but also the fact that they were in beautiful San Diego, which appealed to many in the Boston or MIT communities during harsh winters. As solid state electronics got cheaper and transistors were packed more densely into those wafers, they were able to exploit the ability to make hardware and software for military applications by packing digital signal processing that had previously taken an SDS Sigma into smaller and smaller form factors, like the Linkabit Microprocessor, which got Viterbi’s algorithm for decoding data onto a breadboard and then a chip. The work continued with defense contractors and suppliers. They built modulation and demodulation for UHF signals for military communications. That evolved into a Command Post Modem/Processor they sold, or CPM/P for short. They made modems for the military in the 1970s, some of which remained in production until the 1990s. And as they turned the corner into the 1980s, they had more than $10 million in revenue. The UC San Diego program grew in those years, and the Linkabit founders had more and more local talent to choose from. Linkabit developed tools to facilitate encoded communications over commercial satellites as well. 
They partnered with companies like IBM and developed smaller business units they were able to sell off. They also developed a tool they called VideoCipher to encode video, which HBO and others used to do what we later called scrambling on satellite signals. As we rounded the corner into the 1990s, though, they turned their attention to cellular services with TDMA (Time-Division Multiple Access), an early alternative to CDMA. Along the way, Linkabit got acquired by a company called M/A-COM in 1980 for $25 million. The founders liked that the acquirer was run by a fellow MIT PhD, and Linkabit stayed separate but grew quickly with the products they were introducing. As with most acquisitions, the culture changed, and by 1985 the founders were gone. The VideoCipher and other units were sold off, spun off, or people just left and started new companies. Information theory was decades old at this point, plenty of academic papers had been published, and everyone who understood the industry knew that digital telecommunications was about to explode; a perfect storm for defections.
Qualcomm
Over the course of the next few years over two dozen companies were born as the alumni left, and by 2003, 76 companies had been founded by Linkabit alumni, including four that went public. One of those companies, begun in 1985 by Linkabit founders Irwin Jacobs and Andrew Viterbi, was Qualcomm, also based in San Diego. The founders had put information theory into practice at Linkabit and seen that the managers who were great at finance just weren’t inspiring to scientists. Qualcomm began with consulting and research, but this time looked for products to take to market. They merged with a company called Omninet and the two released the OmniTRACS satellite communication system for trucking and logistics companies. They landed Schneider National and a few other large customers and grew to over 600 employees in those first five years. It remained a Qualcomm subsidiary until recently. 
Even with tens of millions in revenue, they operated at a loss while researching what they knew would be the next big thing. Code-Division Multiple Access, or CDMA, is a technology that allows for sending information over multiple channels so users can share not just a single frequency of the radio band, but multiple frequencies, without a lot of interference. The original research began all the way back in the 1930s, when Dmitry Ageyev in the Soviet Union researched the theory of code division of signals at the Leningrad Electrotechnical Institute of Communications. That work was furthered during World War II by German researchers like Karl Küpfmüller and Americans like Claude Shannon, who focused more on the information theory of communication channels. People like Yuk-Wing Lee then took the cybernetics work from pioneers like Norbert Wiener and helped connect it with others like Qualcomm’s Jacobs, a student of Lee’s when Lee was a professor at MIT. They were already working on CDMA jamming resistance in the early 1950s at MIT’s Lincoln Lab. Another Russian, Leonid Kupriyanovich, put the concept of CDMA into practice in the later 1950s so the Soviets could track people using a service they called Altai. That made it perfect for tracking trucks, and within a few years it was released, in 1965, as a pre-cellular radiotelephone network that got bridged to standard phone lines. The Linkabit and then Qualcomm engineers had worked closely with satellite engineers at JPL, then Hughes and other defense and then commercial contractors. They’d come in contact with that work and built their own intellectual property for decades. Bell was working on mobile, or cellular, technologies. Ameritech Mobile Communications launched the first 1G network, the Advanced Mobile Phone System (AMPS), in 1983, and Vodafone launched their first service in the UK in 1984. Qualcomm filed their first patent for CDMA the next year. 
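The core idea - many users sharing the same spectrum, separated by codes rather than by time slots or frequency bands - can be sketched with a toy example. This is a hypothetical illustration with made-up spreading codes, not Qualcomm’s actual IS-95 design:

```go
package main

import "fmt"

// Two users share one channel using orthogonal spreading codes.
var codeA = []int{1, 1, 1, -1} // Walsh-style orthogonal codes: dot product is 0
var codeB = []int{1, -1, 1, 1}

// spread mixes both users' bits (+1 or -1) onto one shared channel:
// each bit is multiplied by its user's chip sequence and the chips are summed.
func spread(bitA, bitB int) []int {
	channel := make([]int, len(codeA))
	for i := range channel {
		channel[i] = bitA*codeA[i] + bitB*codeB[i]
	}
	return channel
}

// despread correlates the channel with one user's code; orthogonality
// cancels the other user's signal, leaving just this user's bit.
func despread(channel, code []int) int {
	sum := 0
	for i, c := range code {
		sum += channel[i] * c
	}
	if sum > 0 {
		return 1
	}
	return -1
}

func main() {
	channel := spread(1, -1) // user A sends +1, user B sends -1, simultaneously
	fmt.Println(despread(channel, codeA), despread(channel, codeB)) // 1 -1
}
```

Both bits travel at the same time over the same spectrum, and each receiver recovers only its own bit - the trick Qualcomm spent a decade getting onto a chip.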
That patent is one of the most cited documents in all of technology. Qualcomm worked closely with the Federal Communications Commission (FCC) in the US and with industry consortiums, such as the CTIA, or Cellular Telephone Industries Association. Meanwhile, Ericsson promoted the TDMA standard, claiming it was more established; however, Qualcomm worked on additional patents and got to the point that they licensed their technology to early cell phone providers like Ameritech, who was one of the first to switch from the TDMA standard Ericsson promoted to CDMA. Other carriers switched to CDMA as well, which gave them data to prove their technology worked. The OmniTRACS service helped with revenue, but they needed more. So they filed for an initial public offering in 1991 and raised over $500 million in funding between then and 1995, when they sold another round of shares. By then, they had done the work to get CDMA encoding on a chip and it was time to go to the mass market. They made double what they raised back in just the first two years, reaching over $800 million in revenue in 1996.
Qualcomm and Cell Phones
One of the reasons Qualcomm was able to raise so much money in two substantial rounds of public funding is that the test demonstrations were going so well. They deployed CDMA in San Diego, New York, Hong Kong, Los Angeles, and within just a few years had over a dozen carriers running substantial tests. The CTIA supported CDMA as a standard in 1993, and by 1995 they went from tests to commercial networks. The standard grew in adoption from there. South Korea standardized on CDMA between 1993 and 1996. The CDMA standard was embraced by PrimeCo in 1995, who used the 1900 MHz PCS band. This was a joint venture between a number of vendors, including two former regional spin-offs from before the breakup of AT&T, and represented interests from Cox Communications and Sprint, and turned out to be a large undertaking. 
It was also the largest cellular launch, with services going live in 19 cities, and the first phones were from a joint venture between Qualcomm and Sony. Most of PrimeCo’s assets were later merged with AirTouch Cellular and Bell Atlantic Mobile to form what we now know as Verizon Wireless. Along the way, there were a few barriers to mass proliferation of the Qualcomm CDMA standards. One is that they made phones. The Qualcomm Q cost them a lot to manufacture, and it was a market with a lot of competitors who had cheaper manufacturing ecosystems. So Qualcomm sold the manufacturing business to Kyocera, who continued to license Qualcomm chips. Now they could shift all of their focus to encoding bits of data to be carried over multiple radio channels, doing their part in paving the way for 2G and 3G networks with the chips that went into most phones of the era. Qualcomm couldn’t have built out a mass manufacturing ecosystem to supply the world with every phone needed in the 2G and 3G era. Nor could they fabricate the chips that went in those phones. The mid and late 1990s saw them outsource, then just license, their patents and know-how to other companies. That led to a quarter of a billion 3G subscribers across over a hundred carriers in dozens of countries. They got in front of what came after CDMA and worked on multiple other standards, including OFDMA, or Orthogonal Frequency-Division Multiple Access. For those they developed the Qualcomm Flarion Flash-OFDM and 3GPP 5G NR, or New Radio. And of course a boatload of other innovative technologies and chips, paving the way for Qualcomm to be instrumental in 5G and beyond. This was really made possible by that hyper-specialization. Many of the same people who developed the encoding technology for the Voyager satellite decades prior helped pave the way for the mobile revolution. 
They ventured into manufacturing, but as with many designers of technology and chips, chose to license the technology in massive cross-licensing deals. These deals are so big that Apple recently sued Qualcomm for a billion dollars in missed rebates. But there were changes happening in the technology industry that would shake up those licensing deals. Broadcom was growing into a behemoth. Many of their designs went from stand-alone chips to being a small part of an SoC, or system on a chip. Suddenly, licensing the ARM architecture gave Qualcomm the ability to make full SoCs. Snapdragon has been the moniker of the current line of SoCs since 2007. Qualcomm has an ARM architectural license and uses the ARM instruction set to create their own CPUs, with incarnations known as Krait and, more recently, Kryo. They also create their own graphics processors (GPUs) and digital signal processors (DSPs), known as Adreno and Hexagon. They recently acquired Arteris' technology and engineering group, and they use Arteris' Network on Chip (NoC) technology. Snapdragon chips can be found in Samsung Galaxy, Vivo, Asus, and Xiaomi phones. Apple designs their own chips, which are based on the ARM architecture, so in some ways compete with the Snapdragon, but still use Qualcomm modems like every other SoC. Qualcomm also bought a new patent portfolio from HP, including the Palm patents and others, so who knows what we’ll find in the next chips - maybe a chip in a stylus. Their slogan is "enabling the wireless industry," and they’ve certainly done that. From satellite communications that required a computer the size of a few refrigerators, to battlefield communications, to shipping trucks with tracking systems, to cell towers, and now the full processor on a cell phone. 
They’ve been with us since the beginning of the mobile era, and one has to wonder if the next few generations of mobile technology will involve satellites - and if Qualcomm will end up right back where they began: encoding bits of information theory into silicon.
3/17/2022 • 28 minutes, 55 seconds
The Short But Sweet History Of The Go Programming Language
The Go Programming Language
Go is an open-source programming language with influences from Limbo, APL, Modula, Oberon, Pascal, Alef, Erlang, and most importantly, C. While relatively young compared to many languages, there are over 365,000 repositories of Go projects on GitHub alone. There are a few reasons it gained popularity so quickly: it’s fast and efficient in the right hands, simple to pick up, doesn’t have some of the baggage of more mature languages, and the name Ken Thompson. The seamless way we can make calls from Go into C, and the fact that Ken Thompson was one of the parties responsible for C, makes it seem in part like a modern web-enabled language that can stretch between the tasks C is still used for all the way to playing fart sounds in an app. And it didn’t hurt that co-author Rob Pike had helped write books, co-created UTF-8, was part of the distributed operating system Plan 9 team at Bell Labs, and had worked on the Limbo programming language there. And Robert Griesemer was another co-author. He’d begun his career studying under Niklaus Wirth, the creator of Pascal, Modula, and Oberon. So it’s no surprise that he’d go on to write compilers and design languages. Before Go, he’d worked on the V8 JavaScript engine at Google and a compiler for the Java HotSpot Virtual Machine. So our intrepid heroes assembled (pun intended) at Google in 2009. But why? Friends don’t let friends write in C. Thompson had done something amazing for the world with C. But that was going on 50 years ago. And others had picked up the mantle with C++. But there were shortcomings the team wanted to address. And so Go has the ability to concatenate string variables without using a preprocessor, has many similarities to languages like BASIC from the Limbo influences, but the most impressive feature of this programming language is its support for concurrent execution. And probably the best garbage collection facility I’ve ever seen. 
The first version of the language wasn’t released to the public right away and wouldn’t be for a few years. The initial compiler was written in C, but over time they got to where it could be self-hosted, which is to say that Go is compiled in Go. Go is a compiled language whose programs can run on a command line, in a browser, or on a server - and the toolchain can even compile itself. Go compiles fast and discourages the global mutable state that tends to clutter memory. This simplicity makes it easy to read through Go code line by line without consulting any parsing tools or syntax charts. Let’s look at a quick Hello World:

// A basic Go program that demonstrates "Hello World!"
package main

import "fmt"

func main() {
	fmt.Println("Hello World!")
}

The output would be a simple Hello World! Fairly straightforward, but the power gets into more of the scripting structures - especially given that a microservice is just a lot of little functional scripts. The language has no direct lineage from the functional programming languages, and while it skips classical object orientation (there is no inheritance), it does support methods on types, interfaces, and reflection. Go compiles ahead of time to native machine code rather than to bytecode for a virtual machine, so programs tend to compile quickly and run very efficiently, executing directly on the hardware without being interpreted first. Additionally, there is no need for a separate interpreter during execution, since everything runs natively. The libraries and sources built using the Go programming language provide developers with a straightforward, safe, and extensible system to build on. We have things like Go kit, GORM, cli, Vegeta, fuzzy, Authboss, Image, Time, gg, and mgo. These can basically provide pre-built functions and APIs to hook into any old type of service, or give a number of things for free. 
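That concurrent execution support comes through goroutines and channels. A minimal sketch (the squareSum function and its inputs are made up for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// squareSum fans work out to one goroutine per input, then sums
// the results that come back over a buffered channel.
func squareSum(n int) int {
	out := make(chan int, n)
	var wg sync.WaitGroup
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(v int) { // the "go" keyword launches a lightweight goroutine
			defer wg.Done()
			out <- v * v
		}(i)
	}
	wg.Wait() // block until every goroutine has finished
	close(out)
	sum := 0
	for v := range out {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(squareSum(4)) // 1 + 4 + 9 + 16 = 30
}
```

Goroutines cost a few kilobytes of stack each, so spinning up thousands is routine - a sharp contrast to the heavyweight threads of the languages that came before it.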
Go was well designed from the outset, and while it’s evolved over the years, it hasn’t changed as much as many other languages, with the latest release being Go 1.17. 1.1 came just a couple of months after the initial release to increase how much memory could be used on 64-bit chips by about 10-fold, add detection for race conditions, and make int a 64-bit integer on those platforms. Oh, and fix a couple of issues in the compiler. 1.2 also came in 2013 and tweaked how slicing of arrays worked in a really elegant way (almost Ruby-like) and allowed developers to call the runtime scheduler for non-inline calls. It added a thread limit, like the ulimit a bash shell would have, of 10,000 threads. And they doubled the minimum size of a goroutine’s stack. Then the changes got smaller. This happens as every language gets more popular. The more people use it, the more havoc the developers cause when they make breaking changes. Bigger changes were contiguous goroutine stacks in 1.3, the addition of internal packages in 1.4, and a redesigned garbage collector in 1.5, when Go was moved away from C and implemented solely in Go and assembler. And 17 releases later, it’s more popular than ever. While C remains the most popular language today, Go is hovering in the top 10. Imagine, one day saying let’s build a better language for concurrent programming. And then voilà: hundreds of thousands of people are using it.
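That Go 1.2 slicing tweak was the three-index slice expression, which lets a program cap a slice’s capacity as well as its length. A quick sketch with a made-up slice:

```go
package main

import "fmt"

func main() {
	nums := []int{1, 2, 3, 4, 5}

	// A plain two-index slice shares all of the backing array's
	// remaining capacity, so appends can clobber nums' elements.
	a := nums[1:3]
	fmt.Println(a, len(a), cap(a)) // [2 3] 2 4

	// A three-index slice (added in Go 1.2) caps capacity explicitly,
	// so appending past it allocates a fresh array instead.
	b := nums[1:3:3]
	fmt.Println(b, len(b), cap(b)) // [2 3] 2 2
}
```

The third index is where the "elegant" part comes in: it lets a function hand out a window into its data without letting callers grow into the rest of it.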
3/13/2022 • 9 minutes, 33 seconds
awk && Regular Expressions For Finding Text
Programming was once all about math. And life was good. Then came strings, or those icky non-numbery things. Then we had to process those strings. And much of that is looking for patterns that wouldn’t be needed with integers, or numbers. For example, a space in a string of text. Let’s say we want to print Hello World to the screen in bash. That would be the echo command, followed by “Hello World!” If we ran that without the quotes, the interpreter would see the space and split the text into separate arguments - fine for echo, which just prints them all, but a problem for commands expecting a single argument or looking for the next operator or verb. Unix was started in 1969 at Bell Labs. Part of that work was the Thompson shell, the first Unix shell, which shipped in 1971. And C was written in 1972. These make up the ancestral underpinnings of the modern Linux, BSD, Android, Chrome, iPhone, and Mac operating systems. A lot of the work the team at Bell Labs was doing was shifting from pure statistical and mathematical operations to connect phones and do R&D faster to more general computing applications. That meant going from math to those annoying stringy things. Unix was an early operating system, and that shell gave them new abilities to interact with the computer. People called files funny things. There was text in those files. And so text manipulation became a thing. Lee McMahon developed sed in 1974, which was great for finding patterns and doing basic substitutions. Another team at Bell Labs, which included Canadian programmer Alfred Aho, Peter Weinberger, and Brian Kernighan, had more advanced needs. Take their last name initials and we get awk. Awk is a programming language they developed in 1977 for data processing, or more specifically for text manipulation. Marc Rochkind had been working on a version management tool for code at Bell that involved some text manipulation, and it served as a good starting point for awk. 
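A basic sed substitution from that toolbox might look like this (a minimal sketch with made-up input):

```shell
# s/pattern/replacement/ swaps the first match on each input line.
printf 'Hello World!\nHello again\n' | sed 's/Hello/Goodbye/'
# prints:
# Goodbye World!
# Goodbye again
```

Add a trailing g flag (s/Hello/Goodbye/g) and sed replaces every match on the line rather than just the first - the "basic substitutions" that made it a staple.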
It’s meant to be concise: given some input, produce the desired output. It’s a nice, short, efficient scripting language to help people who didn’t need to go out and learn C for basic tasks. AWK is a programming language with its own interpreter, so there’s no need to compile AWK scripts into executable programs. Sed and awk are both written to be used as one-line programs, or more if needed. But building in implicit loops and implicit variables made it simple to write short but powerful programs around regular expressions. Think of awk as a pair of objects: a pattern, followed by an action to take in curly brackets. It can be dangerous to call if the pattern is too wide open, especially when piping information. For example, run ls -al at the root of a volume, pipe that to awk to grab $1 or some other position, pipe that into xargs to rm, and a systems administrator could have a really rough day. Those $1, $2, and so on represent the positions of words, so they could be directories. Think about this, though. In a world before relational databases, when we were looking to query the 3rd column in a file with information separated by some delimiter, piping those positions represented a simple way to effectively join tables of information into a text file or screen output. Or to find files on a computer that match a pattern for whatever reason. Awk began powerful. Over time, improvements have enabled it to be used in increasingly complicated scenarios, especially when it comes to pattern matching with regular expressions. Various coding styles for input and output have been added as well, which can be changed depending on the need at hand. Awk is also important because it influenced other languages. After becoming part of the IEEE Standard 1003.1, it is now a part of the POSIX standard. And after a few years, Larry Wall came up with some improvements, and along came Perl. But the awk syntax has always been among the most succinct and usable regular expression engines. 
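That pattern-and-action pair looks like this in practice, with made-up input data: select lines whose third field matches a value, then count them and total their second field - a tiny join-and-aggregate, decades before SQL was on every desktop:

```shell
# Pattern: $3 == "engineering"  ->  Action: accumulate in the braces.
# The END block is a special pattern that runs after the last line.
printf 'alice 34 engineering\nbob 27 sales\ncarol 41 engineering\n' |
  awk '$3 == "engineering" { total += $2; count++ }
       END { print count, total }'
# prints: 2 75
```

Note the implicit loop: awk runs the pattern against every input line without the script ever declaring one, and total and count spring into existence initialized to zero - the implicit variables that keep one-liners one line.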
Part of that is the wildcard, piping, and file redirection techniques borrowed from the original shells. The AWK creators wrote a book called The AWK Programming Language for Addison-Wesley in 1988. Aho would go on to develop influential algorithms, write compilers, and write books (some of which were about compilers). Weinberger continued to do work at Bell before becoming the Chief Technology Officer of the hedge fund Renaissance Technologies with former code breaker and mathematician James Simons and Robert Mercer. His face led to much love from his coworkers at Bell during the advent of digital photography and hopefully some day we’ll see it on the Google Search page, given that he now works there. Brian Kernighan was a contributor to the early Multics and then Unix work, as well as C. In fact, an important C implementation, K&R C, stands for Kernighan and Ritchie C. He coauthored The C Programming Language and has written a number of other books, most recently on the Go programming language. He also wrote a number of influential algorithms, as well as some other programming languages, including AMPL. His 1978 description of how to manage memory when working with those pesky strings we discussed earlier went on to give us the Hello World example we use for pretty much all introductions to programming languages today. He worked on ARPA projects at Stanford, helped with emacs, and now teaches computer science at Princeton, where he can help shape the minds of future generations of programmers and programming language creators.
3/4/2022 • 8 minutes, 40 seconds
Banyan Vines and the Emerging Local Area Network
One of my first jobs out of college was ripping Banyan VINES out of a company and replacing it with LAN Manager. Banyan VINES was a network operating system for Unix systems. It came along in 1984. This was a time when minicomputers running Unix were running at most every university and when Unix offered far more features than the alternatives. Sharing files was as old as the Internet. Telnet was created in 1969. FTP came along in 1971. SMB in 1983. Networking computers together had evolved from just the ARPANET to local protocols like ALOHAnet, which inspired Bob Metcalfe to start work on the PARC Universal Packet protocol with David Boggs, which evolved into the Xerox Network Systems, or XNS, suite of networking protocols that were developed to network the Xerox Alto. Along the way the two of them co-invented Ethernet. But there were developments happening in various locations, in silos. For example, TCP was more of an ARPANET than an NSFNET project, so it wasn’t yet used for computers on their own networks to communicate. Data General was founded in 1968 when Edson de Castro, the project manager for the PDP-8 at Digital Equipment Corporation, grew frustrated that the PDP wasn’t evolving fast enough. He, Henry Burkhardt, and Richard Sogge of Digital would be joined by Herbert Richman, who did sales for Fairchild Semiconductor. They were proud of the PDP-8. It was a beautiful machine. But they wanted to go even further. And they didn’t feel like they could do so at Digital. Within a year, they shipped their next generation machine, which they called the Nova. They released more computers, but then came the explosion of computers that was the personal computing market. Microcomputers showed up in offices around the world and on multiple desks. And it didn’t take long before people started wondering if it wouldn’t be faster to run a cable between computers than it was to save a file to a floppy and get on an elevator.
By the 1970s, Data General had been writing software for customers, mostly for the rising tide of UNIX System V implementations. But that mostly meant giving customers a TCP/IP stack or an application that could open a socket over an X.25 network. (X.25 was later replaced by Frame Relay networks run by the phone companies, with X.25 streamed over TCP/IP for legacy support.) Some of the people from those projects at Data General saw an opportunity to build a company that focused on a common need: moving files back and forth between the microcomputers that were also being connected to these networks. David Mahoney was a manager at Data General who saw what customers were asking for. And he saw that an increasing number of those microcomputers needed a few common services to connect to. So he left to form Banyan Systems in 1983, bringing Anand Jagannathan and Larry Floryan with him. They built Banyan VINES (Virtual Integrated NEtwork Service) in 1984, releasing version 1. Their client software could run on DOS and connect to X.25, Token Ring (which IBM introduced in 1984), or the Ethernet networks Bob Metcalfe, of Xerox and then 3Com, was a proponent of. After all, much of their work resembled the Xerox Network Systems protocols, which Metcalfe had helped develop. They used a 32-bit address. They developed an Address Resolution Protocol (or ARP) and Routing Table Protocol (RTP) that used tables on a server. And they created a file services application, a print services application, and a directory service they called StreetTalk. To help, they brought in Jim Allchin, who eventually did much of the heavy lifting. It was similar enough to TCP/IP, but different. Yet as TCP/IP became the standard, they added that at a cost. The whole thing came in at $17,000 and ran on less bandwidth than other services, and so they won a few contracts with the US State Department, US Marine Corps, and other government agencies.
Many embassies used 300 baud phone lines with older modems and the new VINES service allowed them to do file sharing, print sharing, and even instant messaging throughout the late 80s and early 90s. The Marine Corps used it during the Gulf War and, in an early form of a buying tornado, Banyan went public in 1992, raising $28 million through NASDAQ. They grew to 410 employees and peaked at around $75 million in sales, spread across 7,000 customers. They’d grown through word of mouth, and other companies with strong marketing and sales arms were waiting in the wings. Novell was founded in 1983 in Utah and developed the IPX network protocol. NetWare would eventually become one of the most dominant network operating systems for Windows 3 and then Windows 95 computers. Yet, with incumbents like Banyan VINES and Novell NetWare, this is another one of those times when Microsoft saw an opening for something better and just willed it into existence. And the story is similar to that of dozens of other companies including Novell, Lotus, VisiCalc, Netscape, Digital Research, and the list goes on and on and on. This kept happening for a number of reasons. The field of computing had been made up of former academics, many of whom weren’t aggressive in business. Microsoft ended up owning the operating system and so had selling power when it came to cornering adjacent markets because they could provide the cleanest possible user experience. People seemed to underestimate Microsoft until it was too late. Inertia. Oh, and Microsoft could outspend on top talent and offer them the biggest impact for their work. Whatever the motivators, Microsoft won in nearly every nook and cranny in the IT field that they pursued for decades. The damaging part for Banyan was when Microsoft teamed up with IBM to ship LAN Manager, which ultimately shipped under the name of each company.
Microsoft ended up recruiting Jim Allchin away and, with network interface cards falling below $1,000, it became clear that the local area network was really just in its infancy. He inherited LAN Manager and then NT from Dave Cutler, and the next thing we knew, Windows NT Server was born, complete with file services, print services, and a domain, which wasn’t a fully qualified domain name until the release of Active Directory. Microsoft added Winsock in 1993 and released their own protocols. They supported protocols like IPX/SPX and DECnet but slowly moved customers to their own. Banyan released the last version of Banyan VINES, 7.0, in 1997. StreetTalk eventually became an NT to LDAP bridge before being cancelled in the end. The dot com bubble was firmly here, though, so all was not lost. They changed their name in 1999 to ePresence, shifting their focus to identity management and security, officially pulling out of the VINES market. But the dot com bubble burst, and they were acquired in 2003 by Unisys. There were other companies in different networking niches along the way. Phil Karn wrote KA9Q NOS to connect CP/M and then DOS to TCP/IP in 1985. He wrote it on a Xerox 820, but by then Xerox was putting Zilog chips in computers and running CP/M, seemingly with little of the flair the Alto could have had. But with KA9Q NOS any of the personal computers on the market could get on the Internet, and that software helped host many a commercial dialup connection and would go on to be used for years in small embedded devices that needed IP connectivity. Those turned out to be markets overtaken by Banyan, who was overtaken by Novell, who was overtaken by Microsoft when they added Winsock. There are a few things to take away from this journey. The first is that when IBM and Microsoft team up to develop a competing product, it’s time to pivot while there’s plenty of money left in the bank.
The second is that the era of closed systems was short-lived as vendors increasingly embraced open standards. Open standards like TCP/IP. We also want to keep our most talented team in place. Jim Allchin was responsible for those initial Windows Server implementations. Then SQL Server. He was the kind of person who’s a game changer on a team. We also don’t want to pivot to the new hotness just because it’s the new hotness. Customers pay vendors to solve problems. Putting an e in front of the name of a company seemed really cool in 1998. But surveying customers and thinking more deeply about the problems they face - that’s where magic can happen. Provided we have the right talent to make it happen.
2/27/2022 • 13 minutes, 1 second
The Nature and Causes of the Cold War
Our last episode was on Project MAC, a Cold War-era project sponsored by ARPA. That led to many questions, like what led to the Cold War and just what was the Cold War. We'll dig into that today. The Cold War was a period between 1946, in the days after World War II, and 1991, when the United States and its western allies were technically at peace but engaged in an aggressive arms buildup and proxy wars. Technology often moves quickly when nations or empires are at war. In many ways, the Cold War gave us the very thought of interactive computing and networking, so it is responsible for the acceleration towards our modern digital lives. And while I’ve never seen it referenced as such, this was more of a continuation of wars between the former British Empire and the imperialistic Russian Empire. These make up two of the three largest empires the world has ever seen and a rare pair of empires that were active at the same time. And the third, well, we’ll get to the Mongols in this story as well. These were larger than the Greeks, the Romans, the Persians, or any of the Chinese dynasties. In fact, the British Empire that reached its peak in 1920 was 7 times larger than the land controlled by the Romans, clocking in at 13.7 million square miles. The Russian Empire was 8.8 million square miles. Combined, the two held nearly half the world. And their legacies live on in trade empires, in some cases run by the same families that helped fund the previous expansions. But the Russians and British were on a collision course going back to a time when their roots were not as different as one might think. They were both known to the Romans. Yet they both became feudal powers with lineages of rulers going back to Vikings. We know the Romans battled the Celts, but they also knew of a place that Ptolemy called Sarmatia Europea in around 150 AD, where a man named Rurik settled far later.
He was a Varangian prince, which is the name the Romans gave to Vikings from the area we now call Sweden. The 9th to 11th centuries saw a number of these warrior chiefs flow down rivers throughout the Baltics and modern Russia in search of riches from the dwindling Roman vestiges of empire. Some returned home to Sweden; others conquered and settled. They rowed down the rivers: the Volga, the Volkhov, the Dvina, and the networks of rivers that flow between one another, all the way down the Dnieper river, through the Slavic tribes Ptolemy described, which by then had developed into city-states such as Kiev, past the Romanians and Bulgars, and to the second Rome, Constantinople. The Viking ships rowed down these rivers. They pillaged, conquered, and sometimes settled. The term for rowers was Rus. Some Viking chiefs set up their own city-states in and around the lands. Some did so when their lands back home were taken while they were off on long campaigns. Charlemagne conquered modern day France and much of Germany, from the Atlantic all the way down into the Italian peninsula, north into Jutland, and east to the border with the Slavic tribes. He weakened many, upsetting the balance of power in the area. Or perhaps there was never a balance of power. Empires such as the Scythians and Sarmatians and various Turkic or Iranian powers had come and gone, and each in their wake, crossing the vast and harsh lands, found only what Homer said of the area all the way back in the 8th century BCE: that the land was deprived of sunshine. The Romans never pushed up so far into the interior of the steppes, as they were busy with more fertile farming grounds. But as the Roman Empire fell and the Byzantines flourished, the Vikings traded with them and even took their turn trying to loot Constantinople. And Frankish Paris. And again, settled in the Slavic lands, marrying into cultures and DNA. The Rus Rome retreated from the lands as her generals were defeated.
The Merovingian dynasty rose in the 5th century with the defeat of Syagrius, the last Roman general in Gaul, and lasted until a family of advisors slowly took control of running the country, transitioning to the Carolingian Empire, of which Charlemagne, the Holy Roman Emperor as he was crowned, was the most famous. He conquered and grew the empire. Charlemagne knew the empire had outgrown what one person could rule with the technology of the era, so it was split into three, which his son passed to his grandsons. And so the Carolingian empire had made the Eastern Slavs into tributaries of the Franks. There were hostilities, but by the Treaty of Mersen in 870 the split of the empire generally looked like the borders of northern Italy, France, and Germany - although Germany also included Austria but not yet Bohemia. It split and re-merged and smaller boundary changes happened, but that left the Slavs aware of these larger empires. The Slavic peoples grew and mixed with people from the Steppes and Vikings. The Viking chiefs were always looking for new extensions to their trade networks. Trade was good. Looting was good. Looting and getting trade concessions to stop looting those already looted was better. The networks grew. One of those Vikings was Rurik. Possibly the Danish Rorik, a well-documented ally who tended to play all sides of the Carolingians and a well-respected raider and military mind. Rurik was brought in as the first Viking, or rower, or Rus, ruler of the important trade city that would be known as New City, or Novgorod. Humans had settled in Kiev since the Stone Age, then came the Polans, before another prince, Kyi, took over; later, Rurik’s successor Oleg took Smolensk and Lyubech. Oleg extended the land of the Rus down the trading routes and conquered Kiev. Now they had a larger capital and were the Kievan Rus. Rurik’s son Igor took over after Oleg and centralized power in Kiev.
He took tribute from Constantinople after he attacked, plundered Arab lands off the Caspian Sea, and was killed overtaxing vassal states in his territory. His son Sviatoslav the Brave then conquered the Alans and through other raiding helped cause the collapse of the Khazar and Bulgarian empires. They expanded throughout the Volga River valley, then to the Balkans, and up the Pontic Steppe, and quickly became the largest empire in Europe of the day. His son Vladimir the Great expanded again, with the empire extending from the Baltics down through Belarus, and converted to Christianity, thus Christianizing the lands he ruled. He began marrying and integrating into the Christian monarchies, which his son continued. Yaroslav the Wise married the daughter of the King of Sweden, who gave him the area around modern-day St. Petersburg. He then captured Estonia in 1030, and as with others in the Rurikid dynasty, as they were now known, made treaties with others and then pillaged more Byzantine treasures. He married one daughter to the King of Norway, another to the King of Hungary, another to the King of the Franks, and another to Edward the Exile of England, and thus was the grandfather of Edgar the Aetheling, who later became a king of England. The Mongols The next couple of centuries saw the rise of feudalism and the descendants of Rurik fighting amongst each other. The various principalities were, as with much of Europe during the Middle Ages, semi-independent duchies, similar to city-states. Kiev became one of the many, and around the mid 1100s Yaroslav the Wise’s great-grandson, Yuri Dolgoruki, built a number of new villages and principalities, including one along the Moskva river they called Moscow. They built a keep there, of the kind the Rus called kremlins. The walls of those keeps didn’t keep the Mongols out. They arrived in 1237. The capital moved to Moscow, and Yaroslav II, Yuri’s grandson, was poisoned in the court of Genghis Khan’s grandson Batu.
The Mongols ruled, sometimes through the descendants of Rurik, sometimes deposing them and picking a new one, for 200 years. This is known as the time of the “Mongol yoke.” One of those princes the Mongols let rule was Ivan I of Moscow, who helped them put down a revolt in a rival area in the 1300s. The Mongols trusted Moscow after that, and so we see a migration of rulers of the land up into Moscow. The Golden Horde, like the Viking Danes and Swedes, settled in some lands. Kublai Khan made himself ruler of China. Khanates splintered off to form the ruling factions of weaker lands, such as modern India and Iran - once the cradle of civilization. Those became the Mughal dynasties as they Muslimized and moved south. And so the Golden Horde became the Great Horde. Ivan the Great expanded the Muscovite sphere of influence, taking Novgorod, Rostov, Tver, Vyatka, and up into the land of the Finns. They were finally strong enough to stand up to the Tatars, as they called their Mongol overlords, and made a Great Stand on the Ugra River. And summoning a great army simply frightened the Mongol Tatars off. It turns out they were going through their own power struggles between princes of their realm, and Akhmed was assassinated the next year, with his successor becoming Sheikh instead of Khan. Ivan’s grandson, Ivan the Terrible, expanded the country even further. He made deals with various Khans and then conquered others, pushing east to conquer the Khanate of Sibir and so conquered Siberia in the 1580s. The empire then stretched all the way to the Pacific Ocean. He had a son who didn’t have any heirs and so was the last in the Rurikid dynasty. But Ivan the Terrible had married Anastasia Romanova, whom, when he crowned himself Caesar, or Tsar as they called it, he made Tsaritsa. And so the Romanovs came to power in 1613 and, following the rule of Peter the Great from 1682 to 1725, brought the Enlightenment to Russia.
He started the process of industrialization, built a new capital he called St Petersburg, built a navy, made peace with the Polish king, then the Ottoman king, and so took control of the Baltics, where the Swedes had held control on and off since the time of Rurik. Russian Empire Thus began the expansion as the Russian Empire. They used an alliance with Denmark-Norway and chased the Swedes through the Polish-Lithuanian Commonwealth, unseating the Polish king along the way. He probably should not have allied with them. They moved back into Finland, took the Baltics, so modern Latvia and Estonia, and pushed all the way across the Eurasian continent, across the frozen tundra, and into Alaska. Catherine the Great took power in 1762 and ignited a golden age. She took Belarus, parts of Mongolia, parts of modern day Georgia, overtook the Crimean Khanate, and took modern day Azerbaijan. During her reign she founded Odessa, Sevastopol, and other cities. She modernized the country like Peter had and oversaw nearly constant rebellions in the empire. And her descendants went on to fill the courts of Britain, Denmark, Sweden, Spain, and the Netherlands. She set up a national network of schools, with teachings from Russian and western philosophers like John Locke. She collected vast amounts of art, including many works from China. She set up a banking system and issued paper money. She also started the process to bring about the end of serfdom, even though between her personal holdings and the crown’s she held some 3.3 million serfs herself. She planned on invading the Khanate of Persia, but passed away before her army got there. Her son Paul halted expansion. And probably just in time. Her grandson Alexander I supported other imperial powers against Napoleon and so had to deal with the biggest invasion Russia had seen. Napoleon moved in with his grand army of half a million troops. The Russians used a tactic that Peter the Great had used and mostly refused to engage Napoleon’s troops, instead burning the supply lines.
Napoleon lost 300,000 troops during that campaign. Soon after the Napoleonic wars ended, the railways began to appear. The country was industrializing and, with guns and cannons, growing stronger than ever. The Opium Wars, between China and the UK and then the UK and France, were not good to China. Even though Russia didn’t really help, they ended up with a piece of the Chinese empire, and so in the last half of the 1800s the Russian Empire grew by another 300,000 square miles on the back of a series of unequal treaties, as they came to be known in China following World War I. And so by 1895, the Romanovs had expanded past their native Moscow, driven back the Mongols, followed some of the former Mongol Khanates to their lands and taken them, took Siberia, parts of the Chinese empire, the Baltics, Alaska, and were sitting on the third largest empire the world had ever seen, which covered nearly 17 percent of the world. Some 8.8 million square miles. And yet, still just a little smaller than the British Empire. They had small skirmishes with the British but by and large looked to smaller foes or proxy wars, with the exception of the Crimean War. Revolution The population was expanding and industrializing. Workers flocked to factories on those train lines. And more people in more concentrated urban areas meant more ideas. Rurik came in 862 and his descendants ruled until the Romanovs took power in 1613. They ruled until 1917. That’s over 1,000 years of kings, queens, Tsars, and Emperors. The ideas of Marx slowly spread. While the ruling family was busy with treaties and wars and empire, they forgot to pay attention to the war brewing at home. People like Vladimir Lenin discovered books by people like Karl Marx. Revolution was in the air around the world. France had shown that monarchies could be toppled. Some of the revolutionaries were killed, others put to work in labor camps, others exiled, and still others continued on. Still, the empire was caught up in global empire intrigues.
The German Empire had been growing, and the Russians had the Ottomans and Bulgarians on their southern borders. They allied with France against Germany, just as they would later ally with Germany to take down Poland. And so, after over 1.8 million dead Russians and another 3.2 million wounded or captured, and food shortages back home and in the trenches, the people finally had enough of their Tsar. They went on strike, but Tsar Nicholas ordered the troops to fire. The troops refused. The Duma stepped in and forced Nicholas to abdicate. Russia had revolted in 1917, sued Germany for peace, and gave up more territory than they wanted in the process. Finland, the Baltics, their share of Poland, parts of the Ukraine. It was too much. But the lands took a lot of German time and focus to occupy, and so that helped to weaken them in the overall war effort. Back home, Lenin took a train home and his Bolshevik party took control of the country. After the war Poland was again independent. Yugoslavia, Czechoslovakia, Estonia, Lithuania, Latvia, and the Serbs became independent nations. In the wake of the war the Ottoman Empire was toppled and modern Turkey was born. The German Kaiser abdicated. And socialism and communism were on the rise. In some cases, that was really just a new way to refer to a dictator who pretended to care about the people. Revolution had come to China in 1911 and Mao took power in the 1940s. Meanwhile, Lenin passed in 1924, followed by Rykov, then Molotov, who helped spur a new wave of industrialization. Then Stalin, who led purges of the Russian people in a number of Show Trials before getting the Soviet Union, as the Russian Empire was now called, into World War II. Stalin encouraged Hitler to attack Poland in 1939. Let’s sit on that for a second. He tried to build a pact with the Western powers and, after that broke down, he launched excursions annexing parts of Poland, Finland, Romania, Lithuania, Estonia, and Latvia. Many of those lands were parts of the former Russian Empire.
The USSR had held chunks of Belarus and the Ukraine before, but as of the 1950s brought Poland, East Germany, Czechoslovakia, Romania, and Bulgaria into the Warsaw Pact, a bloc of nations we later called the Soviet Bloc. They even built a wall between East and West Germany. During and after the war, the Americans whisked German scientists off to the United States. The Soviets were in no real danger from an invasion by the US, and the weakened French, Austrians, and military-less Germans were in no place to attack the Soviets. The UK had to rebuild, and the British Empire quickly fell apart. Even the traditional homes of the Vikings who’d rowed down the rivers would cease to be global powers. And thus there were two superpowers remaining in the world, the Soviets and the United States. The Cold War The Soviets took back much of the former Russian Empire, claiming they needed buffer zones or through subterfuge. At its peak, the Soviet Union covered 8.6 million square miles; just a couple hundred thousand shy of the Russian Empire. On the way there, they grew to a nation of over 290 million people with dozens of nationalities. And they expanded the sphere of influence even further, waging proxy wars in places like Vietnam and Korea. They never actually went to war with the United States, in much the same way they mostly avoided the direct big war with the Mongols and the British - and how Rorik of Dorestad played both sides of Frankish conflicts. We now call this period the Cold War. The Cold War was an arms race. This manifested itself first in nuclear weapons. The US is still the only country to have detonated a nuclear weapon in wartime, from the bombings that caused the surrender of Japan at the end of the war. The Soviets weren’t that far behind and detonated a bomb in 1949. That was the same year NATO was founded as a treaty organization between Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States.
The US upped the ante with the hydrogen bomb in 1952. The Soviets got the hydrogen bomb in 1955. And then came the Space Race. Sputnik launched in 1957. The Russians were winning the space race. They further proved that when they put Yuri Gagarin up in 1961. By 1969 the US put Neil Armstrong and Buzz Aldrin on the moon. Each side developed military coalitions, provided economic aid to allies, built large arsenals of weapons, practiced espionage against one another, deployed massive amounts of propaganda, and spread their ideology. Or at least that’s what the modern interpretation of history tells us. There were certainly ideological differences, but the Cold War saw the spread of communism as a replacement for conquest. That started with Lenin trying to lead a revolt throughout Europe but shifted over the decades into, again, pure conquest. Truman saw the rapid expansion of the Soviets, without the context that they were mostly reclaiming lands conquered by the Russian imperial forces, and won support for the Truman Doctrine. With it, he contained Soviet expansion in Eastern Europe. First, they supported Greece and Turkey. But the support extended throughout areas adjacent to Soviet interests. Eisenhower saw how swiftly the Russians were putting science into action with satellites and space missions and nuclear weapons - and responded with an emphasis on American science. The post-war advancements in computing were vast in the US. The industry moved from tubes and punch cards to interactive computing after the Whirlwind computer was developed at MIT, first to help train pilots and then to intercept Soviet nuclear weapons. Packet switching was developed, and so the foundations of the Internet were laid, to build a computer network that could withstand nuclear attack. Graphical interfaces got their start when Ivan Sutherland was working at MIT on the grandchild of Whirlwind, the TX-2 - which would evolve into the Digital Equipment PDP once privatized.
Drum memory, which became the foundation of storage, was developed to help break Russian codes and intercept messages. There isn’t a part of the computing industry that isn’t touched by the research farmed out by various branches of the military and by ARPA. Before the Cold War, Russia and then the Soviet Union were about half for and half against various countries when it came to proxy wars. They tended to play both sides. During the Cold War it was pretty much always the US or UK versus the Soviet Union: Algeria, Kenya, Taiwan, the Sudan, Lebanon, Central America, the Congo, Eritrea, Yemen, Dhofar, Malaysia, the Dominican Republic, Chad, Iran, Iraq, Thailand, Bolivia, South Africa, Nigeria, India, Bangladesh, Angola, Ethiopia, the Sahara, Indonesia, Somalia, Mozambique, Libya, and Sri Lanka. And the big ones were Korea, Vietnam, and Afghanistan. Many of these are still raging on today. The Soviet military grew to over 5 million soldiers. The US started with 2 nuclear weapons in 1945 and had nearly 300 by 1950, when the Soviets had just 5. The US stockpile grew to over 18,000 in 1960 and peaked at over 31,000 in 1965. The Soviets had 6,129 by then but kept building until they got close to 40,000 by 1980. By then the Chinese, France, and the UK each had over 200, and India and Israel had developed nuclear weapons. Since then only Pakistan and North Korea have added warheads, although there are US warheads located in Germany, Belgium, Italy, Turkey, and the Netherlands. Modern Russia The buildup was expensive. Research, development, feeding troops, supporting asymmetrical warfare in proxy states, and trade sanctions put a strain on the government and nearly bankrupted Russia. They fell behind in science after Stalin had been anti-computers. Meanwhile, the US was able to parlay all that research spending into true productivity gains. The venture capital system also fueled increasingly wealthy companies who paid taxes.
Banking, supply chains, refrigeration, miniaturization, radio, television, and everywhere else we could think of. By the 1980s, the US had Apple and Microsoft and Commodore. The Russians were trading blat, an informal black market currency, to gain access to knock-offs of ZX Spectrums while graphical interface systems were being born. The system of government in the Soviet Union had become outdated. There were some who had thought to modernize it into more of a technocracy in an era when the US was just starting to build ARPANET - but those ideas never came to fruition. Instead it became almost feudalistic, with high-ranking party members replacing the boyars, or aristocrats, of the old Kievan Rus days. The standard of living suffered. So many cultures and tribes lived under one roof, but only the Slavs had much say. As the empire over-extended there were food shortages. If there are independent companies, then the finger can be pointed in their direction, but when food is rationed by the Politburo, the blame falls on the state. The decline in agricultural production meant bringing food in from the outside. That meant paying for it. Pair that with uneven distribution and overspending on the military. The Marxist-Leninist doctrine had meant a one-party state. The Communist Party. Mikhail Gorbachev allowed countries in the Bloc to move in a democratic direction with multiple parties. The Soviet Union simply became unmanageable. And while Gorbachev took the blame for much of the downfall of the empire, there was already a deep decay - they were an oligarchy pretending to be a communist state. The countries outside of Russia quickly voted in non-communist governments, and by 1989 the Berlin Wall came down and the Eastern European countries began to seek independence, most moving towards democratic governments. The collapse of the Soviet Union resulted in 15 separate countries and left the United States standing alone as the global superpower.
The Czech Republic, Hungary, and Poland joined NATO in 1999. 2004 saw Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia join. 2009 brought in Albania and Croatia. 2017 led to Montenegro and then North Macedonia. Then came the subject of adding Ukraine - the country the Kievan Rus had migrated throughout the lands from, the stem from which the name and possibly the soul of the country had sprouted. How could Vladimir Putin allow that to happen? Why would it come up? As the Soviets pulled out of the Bloc countries, they left remnants of their empire behind. Belarus, Kazakhstan, and Ukraine were left plenty of weapons that couldn’t be moved quickly. Ukraine alone had 1,700 nuclear weapons, including 16 intercontinental ballistic missiles, plus nearly 2,000 biological and chemical weapons. Those went to Russia or were disassembled once the Ukrainians were assured of their sovereignty. Crimea, which had been fought over in multiple bloody wars, was added to Ukraine. At least until 2014, when Putin wanted the port of Sevastopol, founded by Catherine the Great. Now there was a gateway from Russia to the Mediterranean yet again. So Kievan Rus under Rurik is really the modern Ukraine, and the Russian Empire and then the Romanov Dynasty flowed from that following the Mongol invasions. The Russian Empire freed other nations from the yoke of Mongolian rule but became something entirely different once it over-extended. Those countries in the empire often traded the Mongol yoke for the Soviet yoke. And it was entirely different again from the Soviet Union that fought the Cold War and the modern Russia we know today. Meanwhile, the states of Europe had been profoundly changed since the days of Thomas Paine’s The Rights of Man and Marx. Many moved left of center and socialized parts of their economies. No one ever need go hungry in a Scandinavian country. Health care, education, even child care became free in many countries.
Many of those same ideals that helped lift the standard of living for all in developed countries then spread, including in Canada and some in the US. And so we see socialism to capitalism as more of a spectrum than a boolean choice now. And totalitarianism, oligarchy, and democracy as a spectrum as well. Many could argue reforms in democratic countries are paid for by lobbyists who are paid for by companies, and thus an effective oligarchy. Others might argue the elections in many countries are rigged, and so they aren’t even oligarchies - they’re monarchies. Putin took office in 1999, and while Dmitry Medvedev was the president for a time, he effectively ruled in a tandemocracy with Putin until Putin decided to get back in power. That’s 23 years and counting, just a few months behind when King Abdullah took over in Jordan and King Mohammed VI took over in Morocco. And so while democratic in name, they’re not all quite so democratic. Yet they do benefit from technology that began in Western countries and spread throughout the world. Companies like semiconductor manufacturer Sitronics even went public on the London stock exchange. Hard-line communists might (and do) counter that the US has an empire and that western countries conspire for the downfall of Russia or want to turn Russians into slaves to the capitalist machine. As mentioned earlier, there has always been plenty of propaganda in this relationship. Or gaslighting. Or fake news. Or disinformation. One of those American advancements that ties the Russians to the capitalist yoke is interactive computing. That could have been developed in Glushkov’s or Kitov’s labs in Russia, as they had the ideas and talent. But because of the oligarchy that formed around communism, the ideas were sidelined and interactive computing came out of MIT - and that led to Project MAC, which did as much to democratize computing as Gorbachev did to democratize the Russian Federation.
2/18/2022 • 45 minutes, 53 seconds
Project MAC and Multics
Welcome to the History of Computing podcast. Today we’re going to cover a Cold War-era project called Project MAC that bridged MIT with GE and Bell Labs. The Russians beat the US to space when they launched Sputnik in 1957. Many in the US felt the nation was falling behind, so president Dwight D. Eisenhower appointed then-president of MIT James Killian as his Special Assistant for Science and Technology, and ARPA was created in early 1958. The office was lean and funded a few projects without much oversight. One was Project MAC at MIT, which helped cement the university as one of the top in the field of computing as it grew. Project MAC, short for Project on Mathematics and Computation, was a 1960s collaborative endeavor to develop a workable timesharing system. The concept of timesharing initially emerged during the late 1950s. Scientists and researchers finally went beyond batch processing with Whirlwind and its spiritual successors, the TX-0 through TX-2 computers at MIT. We had computer memory now, and with it interactive computing. That meant we could explore different ways to connect directly with the machine. In 1959, British mathematician Christopher Strachey gave the first public presentation on timesharing at a UNESCO meeting, and John McCarthy distributed an internal memo regarding timesharing at MIT. Timesharing was first demonstrated at the MIT Computation Center in November 1961, under the supervision of Fernando Corbató, an MIT professor. J.C.R. Licklider at ARPA had been involved with MIT for most of his career in one way or another and helped provide vision and funding along with contacts and guidance, including getting the team to work with Bolt, Beranek & Newman (BBN). Yuri Alekseyevich Gagarin went to space in 1961. The Russians were still lapping us. Money. Governments spend money. Let’s do that. Licklider assisted in the development of Project MAC - the backronym was also read as Machine-Aided Cognition - led by Professor Robert M. Fano.
He then funded the project with $3 million per year. That would become the most prominent initiative in timesharing. In 1967, the Information Processing Techniques Office invested more than $12 million in over a dozen timesharing programs at colleges and research institutions. Timesharing then enabled the development of new software and hardware separate from that used for batch processing. Thus, one of the most important innovations to come out of the project was an operating system capable of supporting multiple parallel users - all of whom could have complete control of the machine. The operating system they created would be known as Multics, short for Multiplexed Information and Computing Service. It was created for a GE 645 computer but was modular in nature and could be ported to other computers. The project was a collaborative effort between MIT, GE, and Bell Labs. Multics was the first time we really split files away from objects in memory: files were read into memory for processing and then written back to disk. The team developed the concepts of dynamic linking, daemons, procedure calls, hierarchical file systems, process stacks, a split between user land and the system, and much more. Within six months of Project MAC’s creation, 200 users in 10 different MIT departments had secured access to the system. By 1967, Project MAC had separated from the Department of Electrical Engineering and evolved into an interdepartmental laboratory. Multics progressed from computer timesharing to a networked computer system, integrating file sharing and administration capabilities and security mechanisms into its architecture. The sophisticated design, which within a couple more years could serve 300 daily active users on 1,000 MIT terminals, inspired engineers Ken Thompson and Dennis Ritchie to create their own, simpler system at Bell Labs, which evolved into the Unix operating system and the C programming language.
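The timesharing idea at the heart of the project - multiplexing one processor among many interactive users in small slices of time - can be sketched in a few lines. Here is a toy round-robin scheduler in Python; the quantum and the job names are invented for illustration, not drawn from any Multics source:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Multiplex one 'CPU' among jobs, giving each a fixed time slice.

    jobs: dict mapping a user's job name to the work units it needs.
    Returns the order in which slices were granted.
    """
    queue = deque(jobs.items())
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        slice_used = min(quantum, remaining)
        schedule.append((name, slice_used))
        if remaining - slice_used > 0:
            # Unfinished jobs go to the back of the line for another turn.
            queue.append((name, remaining - slice_used))
    return schedule

# Three users share the machine; each gets up to 2 units per turn.
print(round_robin({"corbato": 3, "fano": 2, "licklider": 5}, quantum=2))
```

Each user sees the machine respond every few slices, which from a terminal feels like having the whole computer to yourself - the illusion timesharing was built to sell.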
See, all the stakeholders with all the things they wanted in the operating system had built something slow and fragile. Solo developers don’t tend to build amazing systems, but neither do large intercompany bureaucracies. GE never did commercialize Multics because they ended their computer hardware business in 1970. Bell Labs dropped out of the project as well. Honeywell acquired the General Electric computer division, and with it the rights to the Multics project. In addition, Honeywell possessed several other operating systems, each supported by its own internal organization. In 1976, Project MAC was renamed the Laboratory for Computer Science (LCS) at MIT, broadening its scope. Michael L. Dertouzos, the lab’s director, advocated developing intelligent computer programs. To increase computer use, the laboratory analyzed how to construct cost-effective, user-friendly systems, along with the theoretical underpinnings of computer science needed to understand space and time constraints. Some of their projects ran for decades afterwards. In 2000, the last Multics sites were shut down. The concept of buying corporate “computer utilities” was a large area of research in the late 60s to 70s. Scientists bought time on computers that universities purchased. Companies did the same. The pace of research at both increased dramatically. Companies like Tymshare and IBM made money selling time or processing credits, and then after an anti-trust case, IBM handed that business over to Control Data Corporation, who developed training centers to teach people how to lease time. These helped prepare a generation of programmers for when the microcomputers came along, often taking people who had spent their whole careers on CDC Cybers or Burroughs mainframes by surprise. That seems to happen with the rapid changes in computing. But it was good to those who invested in the concept early. And the lessons learned about scalable architectures were skills that transitioned nicely into a microcomputer world.
In fact, many environments still run applications built in this era. The Laboratory for Computer Science (LCS) accomplished other ground-breaking work, including playing a critical role in advancing the Internet. It was often larger, but less opulent, than the AI lab at MIT. And their role in developing applications to facilitate online processing and evaluation across various academic fields, such as engineering, medicine, and library science, led to advances in each. In 2003, LCS merged with MIT’s AI laboratory to establish the Computer Science and Artificial Intelligence Laboratory (CSAIL), one of the flagship research labs at MIT. In the meantime, countless computer scientists who contributed at every level of the field flowed through MIT - some because of the name made in those early days. And the royalties from patents have certainly helped the university’s endowment. The Cold War thawed. The US reduced ARPA spending after the Mansfield Amendment was passed in 1969. The MIT hackers flowed out to the world, changing not only how people thought of automating business processes, but how they thought of work and collaboration. And those hackers were happy to circumvent all the security precautions put on Multics, and cultural movements evolved from there. And the legacy of Multics lived on in Unix, which evolved to influence Linux and is in some way now a part of iOS, Mac OS, Android, and Chrome OS.
2/15/2022 • 11 minutes, 31 seconds
Dell: From A Dorm Room to a Board Room
Dell is one of the largest technology companies in the world, and it all started with a small startup that sold personal computers out of Michael Dell’s dorm room at the University of Texas. From there, Dell grew into a multi-billion dollar company, bought and sold other companies, went public, and now manufactures a wide range of electronics including laptops, desktops, servers, and more. After graduating high school, Michael Dell enrolled at the University of Texas at Austin with the idea that he would someday start his own company. Maybe even in computers. He had used an Apple II in school, and Apple and other companies had done pretty well by then in the new microcomputer space. He took one apart and saw that these computers were just a few parts that were quickly becoming standardized - parts that could be bought off the shelf at computer stores. So he opened a little business that he ran out of his dorm room, fixing computers and selling little upgrades. Many a student around the world still does the exact same thing. He also started buying up parts and building new computers. Texas Instruments was right up the road in Dallas. And there had been a price war in the early 80s between Commodore and Texas Instruments. Computers could be big business. And it seemed clear that the IBM PC introduced in 1981 was going to be more of a thing, especially in offices. Especially since there were several companies making clones of the PC, including Compaq, all over the news as the Silicon Cowboys, having gotten to $100 million in sales within just two years. So from his dorm room in 1984, Dell started a little computer company he called PCs Limited. He built PCs using parts and experimented with different combinations. One customer led to another, and he realized that a company like IBM bought a few hundred dollars worth of parts, put them in a big case, and sold it for thousands of dollars.
Any time a company makes too much margin, smaller and more disruptive companies will take the market away. Small orders turned into bigger ones, and he was able to parlay each into being able to build bigger orders. They released the Turbo PC in 1985: a case, a motherboard, a keyboard, a mouse, some memory, and a CPU chip. Those first computers he built came with an 8088 chip. Low overhead meant he could be competitive on price: $795. With no retail storefront and no dealers, who often took 25 to 50 percent of the money spent on computers, the company could run out of a condo. He’d sold newspapers as a kid, so he was comfortable picking up the phone and dialing for dollars. He managed to make $200,000 in sales in that first year. So he dropped out of school to build the company. To keep costs low, he sold through direct mail and over the phone. No high-paid sellers in blue suits like IBM, even if the computers could run the same versions of DOS. He incorporated as Dell Computer Company in 1987 and started to expand internationally on the back of rapid revenue growth and good margins. They hit $159 million in sales that year. So they took the company public in 1988. The market capitalization when they went public was $30 million and quickly rose to $80 million. By then we’d moved past the 8088 chips, and the industry was standardizing on the 80386 chip, following the IBM PS/2. By the end of 1989 sales hit $250 million. They needed more research and development firepower, so they brought in Glenn Henry. He’d been at IBM for over 20 years and had managed multiple generations of mid-range mainframes, then servers, then RISC-based personal computers. He helped grow the R&D team into the hundreds, and the quality of the computers went up, which paired well with costs remaining affordable compared to the rest of the market. Dell was, and to a large degree still is, a direct-to-consumer company.
They experimented with the channel in the early 1990s - which is to say, third parties authorized to sell their computers. They signed deals to sell through distributors, computer stores, warehouse clubs, and retail chains. But the margins didn’t work, so within just a few years they cancelled many of those relationships. Instead they went from selling to companies to the adjacent home market. It seems like that’s the last time in recent memory that direct mailing as a massive campaign worked. Dell was able to undercut most other companies who sold laptops at the time by going direct to consumers. They brought in marketing execs from other companies, like Tandy. The London office was a huge success, bringing in tens of millions in revenue, so they brought on a Munich office and then slowly expanded into other countries. They were one of the best sales and marketing machines in that direct-to-consumer and business market. Customers could customize orders - maybe add a faster CPU, some extra memory, or even a scanner, modem, or other peripheral. They got manufacturing to the point where they could turn computers around in five days. Just a decade earlier, people had waited months for computers. They released their first laptop in 1989, which they called the 316LT. Just a few years earlier, Michael Dell was in a dorm room. If he’d completed a pre-med degree and gotten into medical school, he’d likely be in his first or second year. He was now a millionaire, and just getting started. With the help of their new R&D chief, they were able to get into the server market, where the margins were higher, and that helped get more corporate customers. By the end of 1990, they were the sixth largest personal computer company in the US. To help sales in the rapidly growing European and Middle Eastern offices, they opened another manufacturing location in Ireland. And by 1992, they became one of the top 500 companies in the world.
Michael Dell, instead of being on an internship in medical school and staring down the barrel of school loans, was the youngest CEO in the Fortune 500. The story is almost boring. They just grow and grow. Especially when rivals like IBM, HP, Digital Equipment, and Compaq make questionable finance and management choices that don’t allow those companies to remain competitive. They all had better technology at many points, but none managed to capitalize on the markets. Instead of becoming the best computer makers they could be, they played corporate development games and wandered away from their core businesses. Or, like IBM, they decided that they didn’t want to compete with the likes of Dell and just sold off their PC line to Lenovo. But Dell didn’t make crappy computers. They weren’t physically inspiring like some computers at the time, but they got the job done, and offices that needed dozens or hundreds of machines often liked working with Dell. They continued the global expansion through the 90s and added servers in 1996. By now there were customers buying their second or third generation of computer, going from DOS to Windows 3.1 to Windows 95. And they did something else really important in 1996: they began to sell through the web at dell.com. Within a few months they were doing a million dollars a day in sales, and the next year they hit 10 million PCs sold. Little Dell magazines showed up in offices around the world. Web banners appeared on web pages. Revenues responded, going from $2.9 billion in 1994 to $3.5 billion in 1995. And they were running at margins over 20 percent. Revenue hit $5.3 billion in 1996, $7.8 in 1997, $12.3 in 1998, $18.2 in 1999, and $25.3 billion in 2000. The 1990s had been good to Dell. Their stock split 7 times. It wouldn’t double every other year again, but would double again by 2009. In the meantime, the market was changing. The Dell OptiPlex is one of the best selling lines of computers of all time and offers a glimpse into what was changing.
Keep in mind, this was the corporate enterprise machine; home machines could be better or lesser, according to the vendor. Over the years the processors ranged from a Celeron up to a Core i9. Again, we needed a motherboard, usually an ATX or a derivative. They started with that standard ATX motherboard form factor but later grew into a line that came in tower, micro, and everything in between - including an all-in-one. That Series 1 was beige and just the right size to put a big CRT monitor on top of it. It sported a 100 MHz 486 chip and could take up to 64 megabytes of memory across a pair of SIMM slots. The Series 2 was about half the size, and by then we saw those small early LCD flat panel screens. They were still beige though. As computers went from beige to black with the Series 3, we started to see the iconic metallic accents we’re accustomed to now. They followed along with the Intel replacement for the ATX motherboard, the BTX, and we saw those early PCI form factors traded for PCIe. By the end of the Series 3 in 2010, the OptiPlex 780 could take up to 16 gigs of memory as a max, although that would set someone back a pretty penny in 2009. And the processors ranged from 800 MHz to 1.2 GHz. We’d also gone from PS/2 ports with serial and parallel to USB 2 ports, and from SIMM to DIMM slots, eventually up to DDR4 with the memory nearly as fast as a CPU. But they went back to the ATX and newer Micro ATX with the Series 4. They embraced the Intel i-series chips, and we got all the fun little metal designs on the cases - cases that slowly shifted to being made of recycled parts. The Latitude laptops followed a similar pattern: bigger, faster, and heavier. They released the Dell Dimension and acquired Alienware in 2006, at the time the darling of the gamer market. Higher margin hardware, like screaming fast GPU graphics cards. But also lower R&D costs for the Dell lines, as the higher-end line flowed down to the OptiPlex and then the Dimension.
Meanwhile, there was this resurgent Apple. They’d released the iMac in 1998 and helped change the design language for computers everywhere. Not that everyone needed clear cases. Then came the iPod in 2001. Beautiful design could sell products at higher prices. But it required paying a little more attention to detail. More importantly, those Dells were getting bigger and faster and heavier while the Apple computers were getting lighter, and even the desktops more portable. The iPhone came in 2007. The Intel MacBook Air came 10 years after that iMac, in 2008. The entire PC industry was in a race for bigger power supplies to push more and more gigahertz through a CPU without setting the house on fire, and Apple changed the game. The iPad was released in 2010. Apple finally delivered on the promise of the Dynabook that began life at Xerox PARC. Dell had been in the driver’s seat. They became the top personal computer company in 2003 and held that spot until HP and Compaq merged. But they would never regain that spot, as revenue slowed from the time the iPad was released for almost a decade, even contracting at times. See, Dell had a close partnership with Intel and Microsoft. Microsoft made operating systems for mobile devices, but the Dell Venue was not competitive with the iPhone. They also tried making a mobile device using Android, but the Streak never sold well and was discontinued. While Microsoft retooled their mobile platforms to compete in the tablet space, Dell tried selling Android tablets but discontinued those in 2016. To make matters worse for Dell, they’d ridden a Microsoft Windows alliance where they never really had to compete with Microsoft for nearly 30 years, and then Microsoft released the Surface in 2012. The operating systems hadn’t been pushing people to upgrade their computers, and Microsoft even started selling Office directly and online, so Dell lost revenue bundling Office with computers.
They too had taken their eye off the market. HP bought EDS in 2008, diversifying into a services organization - something IBM had done well over a decade before. Except rather than sell their PC business, they made a go at both. So Dell did the same, acquiring Perot Systems - the company Perot started after he sold EDS and ran for president - for $3.9 billion, which came in at a solid $10 billion less than what HP paid for EDS. The US was in the midst of a recession, so that didn’t help matters either. But it did make for an interesting investment climate. Interest rates were down, so large investors needed to put money to work to show good returns for customers. Dell had acquired just 8 companies before the Great Recession but acquired an average of 5 per year over each of the next four years. This allowed them to diversify. And Michael Dell made another savvy finance move: he took the company private in 2013 with the help of Silver Lake Partners. Five years off the public market was just what they needed. In 2018 they went public again on the backs of revenues that had shot up to $79 billion from a low of around $50 billion in 2016. And they exceeded $94 billion in 2021. The acquisition of EMC, and with it VMware, was probably the most substantial, at $67 billion. That put them in the enterprise server and storage market and gave them a compelling offer at pretty much every level of the enterprise stack. Although at this point maybe it remains to be seen if the enterprise server and storage stack is still truly a thing. A Dell OptiPlex costs about the same amount today as it did when Dell sold that first Turbo PC. They can be had cheaper, but probably shouldn’t be. Adjusted for an average 2.6 percent inflation rate, that brings those first Dell PCs to just north of $2,000 as of the time of this writing. Yet the computer remained the same, with fairly consistent margins. That means the components have gotten half as expensive, because they’re made in places with cheaper labor than they were in the early 1980s.
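That inflation adjustment is just compound growth. Here is a quick check of the arithmetic in Python, taking the episode’s assumed 2.6 percent average rate and the 1985 Turbo PC price of $795 over roughly 37 years:

```python
price_1985 = 795           # Turbo PC launch price in dollars
rate = 0.026               # assumed average annual inflation rate
years = 2022 - 1985        # launch year to roughly the time of writing

# Compound the price forward: price * (1 + rate) ** years
adjusted = price_1985 * (1 + rate) ** years
print(f"${adjusted:,.0f}")  # lands just north of $2,000
```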
It also means there are potentially fewer components - like a fan for certain chips, or separate RAM when memory is integrated into a SoC. But the world is increasingly mobile. Apple, Google, and Microsoft sell computers for their own operating systems now. Dell doesn’t make phones, and they aren’t in the top 10 for the tablet market. People don’t buy products from magazines that show up in the mail any longer. Now it’s a quick search on Amazon. And looking for a personal computer there, the results right this second (that is, while writing this paragraph) showed the exact same order as vendor market share for 2021: Lenovo, followed by HP, then Dell. All of the devices looked about the same. Kinda like those beige injection-molded devices looked about the same. HP couldn’t have such a large company exist under one roof and eventually spun HP Enterprise out into its own entity. Dell sold Perot Systems to NTT Data to get the money to buy EMC on leverage. Not only do many of these companies have products that look similar, but their composition does as well. What doesn’t look similar is Michael Dell. He’s worth just shy of $60 billion (depending on the day and the markets). His book, Direct From Dell, is one of the best looks inside a direct-order mail business making the transition to early e-commerce that one can find. Oh, and it’s not just him and some friends in a dorm room anymore. It’s 158,000 employees who help make up over a $42 billion market cap. And who helped generations of people afford personal computers. That might be the best part of such a legacy.
2/4/2022 • 24 minutes, 24 seconds
Bill Atkinson's HyperCard
We had this Mac lab in school. And even though they were a few years old at the time, we had a whole room full of Macintosh SEs. I’d been using the Apple IIc before that, and these just felt like Isaac Asimov himself had dropped them off just for me to play with. Only thing: no BASIC interpreter. But in the Apple menu, tucked away in the corner, was a little application called HyperCard. HyperCard wasn’t left by Asimov, but instead burst from the mind of Bill Atkinson. Atkinson was the 51st employee at Apple and a former student of Jef Raskin, the initial inventor of the Mac before Steve Jobs took over. Steve Jobs convinced him to join Apple, where he started with the Lisa and then joined the Mac team, until he left with the team who created General Magic and helped bring shape to the world of mobile devices. But while at Apple he was on the original Mac team developing the menu bar, the double-click, Atkinson dithering, MacPaint, QuickDraw, and HyperCard. Those were all amazing tools, and many came out of his work on the original 1984 Mac and the Lisa days before that. But HyperCard was something entirely different. It was a glimpse into the future, even if self-contained on a given computer. See, there had been this idea floating around for a while. Vannevar Bush introduced the world to a device with all the world’s information available in his article “As We May Think” in 1945. Doug Engelbart had a team of researchers working on the oN-Line System and gave “The Mother of All Demos” in 1968, where he showed how that might look, complete with a graphical interface and hypertext, including linked content. Ted Nelson furthered the ideas of linked content in the 1960s, which evolved into what we now call hyperlinks. Nelson even thought ahead to include the idea of what he called transclusions: snippets of text displayed on the screen from their live, original source.
HyperCard built on that wealth of ideas with a database that had a graphical front-end that allowed inserting media, and a programming language they called HyperTalk. Databases were nothing new. But a simple form creator that supported graphics, and again stressed simple, was new. Something else that was brewing was this idea of software economics. Brooks’ law laid some of it out, but Barry Boehm’s book on Software Engineering Economics took the idea of rapid application development another step forward in 1981. People wanted to build smaller programs faster. And so many people wanted to build tools to make that easier, so computers could make us more productive. Against that backdrop, Atkinson took some acid and came up with the idea for a tool he initially called WildCard. Dan Winkler signed onto the project to help build the programming language, HyperTalk, and they got to work in 1986. They changed the name of the program to HyperCard and released it in 1987 at MacWorld. Regular old people could create programs without knowing how to write code. There were a number of User Interface (UI) components that could easily be dropped on the screen, and true to his experience there was a panel of elements like boxes, erasers, and text, just like we’d seen in MacPaint. Suppose you wanted a button: just pick it up from the menu and drop it where it goes. Then make a little script using HyperTalk, which read more like the English language than a programming language like LISP. Each stack might be synonymous with a web page today, and a card was the building block of a stack. Consider the desktop metaphor extended to a rolodex of cards. Those cards could be stacked up. There were template cards, and if the background on a template changed, that change flowed to each card that used the template, like styles in Keynote might today. The cards could have text fields, video, images, buttons, or anything else an author could think of.
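The stack, card, and shared-template model described above maps naturally onto a small data structure. Here’s a sketch in Python; the class and field names are invented for illustration, and HyperCard’s real object model was much richer:

```python
class Background:
    """A template shared by many cards; changes flow to every card using it."""
    def __init__(self, fields):
        self.fields = dict(fields)

class Card:
    """One card in a stack: a shared background plus card-local fields."""
    def __init__(self, background, **local_fields):
        self.background = background
        self.local = local_fields

    def render(self):
        # Card-local fields layer on top of the shared template,
        # much like HyperCard drew card layers over the background.
        merged = dict(self.background.fields)
        merged.update(self.local)
        return merged

# One template, two cards - edit the template and both cards pick it up.
rolodex = Background({"title": "Rolodex", "font": "Geneva"})
stack = [Card(rolodex, name="Bill"), Card(rolodex, name="Dan")]
rolodex.fields["font"] = "Chicago"
print([card.render()["font"] for card in stack])  # both cards now use Chicago
```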
And the word author is important. Apple wanted everyone to feel like they could author a HyperCard stack or program or application or… app. Just as they do with Swift Playgrounds today. That never left the DNA. We can see that ease of use in how scripting is done in HyperTalk. Not only the word scripting rather than programming, but how HyperTalk is weakly typed. This is to say there’s no type safety, so a variable might be used as an integer or a boolean or text. That either involves more work by the interpreter or compiler - or programs tend to crash a lot. HyperCard put the work on the programmers who build programming tools rather than on the authors of HyperCard stacks. The ease of use and visual design made HyperCard popular instantly. It was the first of its kind. It didn’t compile at first, and larger stacks got slow because HyperTalk was interpreted, so the team added a just-in-time compiler with HyperCard 2.0 in 1990. They also added a debugger. There were some funny behaviors. Like, some cards could have objects that other cards in a stack didn’t have. This led to many a migration woe for larger stacks that moved into modern tools. One tool that could almost be considered HyperCard 3 was FileMaker. Apple spun their software business out as Claris, which bought Nashoba Systems, whose interesting little database program Nutshell had become FileMaker in 1985. By the time HyperCard was ready to become 3.0, FileMaker Pro was launched in 1990. Attempts to make HyperCard 3.0 were still made, but HyperCard had had its run by the mid-1990s and died a nice quiet death. The web was here and starting to spread. The concept of a bunch of stacks on just one computer had run its course. Now we wanted pages that anyone could access. HyperCard could have become that, but that isn’t its place in history. It was a stepping stone and yet a milestone and a legacy that lives on. Because it was a small tool in a large company.
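Weak typing meant values moved freely between text and numbers, with the interpreter doing the coercion work so authors didn’t have to. Python is dynamically but strongly typed, so a toy coercing evaluator makes the trade-off visible; the coercion rules here are illustrative, not HyperTalk’s actual semantics:

```python
def coerce_add(a, b):
    """Add two values the way a weakly typed interpreter might:
    try to treat both as numbers first, fall back to text."""
    try:
        return float(a) + float(b)  # "5" + 5 quietly becomes 10.0
    except (TypeError, ValueError):
        return str(a) + str(b)      # otherwise, glue the text together

print(coerce_add("5", 5))      # the interpreter did the extra work: 10.0
print(coerce_add("five", 5))   # or the program limps along: 'five5'
```

The author never sees a type error, which is exactly the ease of use HyperCard was after - at the cost of surprises like the second call silently concatenating instead of adding.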
Atkinson and some of the rest of the team that built the original Mac were off to General Magic. Yet there was still this idea, this legacy. HyperCard’s interface inspired many modern applications we use to create applications. The first was probably Delphi, from Borland. Then over time came Microsoft’s Visual Basic and the Visual Studio we still use today. Even PowerPoint has some similarities with HyperCard’s interface. WinPlus was similar to HyperCard as well. Even today, several applications and tools use HyperCard’s ideas, such as HyperNext, HyperStudio, SuperCard, and LiveCode. HyperCard also certainly inspired FileMaker and every Apple development environment since - and through that, most every tool we use to build software, which we call the IDE, or Integrated Development Environment. The most important IDE for any Apple developer is Xcode. Open Xcode to build an app and look at Interface Builder and you can almost feel Bill Atkinson’s dilated pupils looking back at you, 10 hours into a trip. And within those pupils, visions - visions of graphical elements being dropped onto a card as people digitized CD collections, built repositories for their book collections, put all the Grateful Dead shows they’d recorded into a stack, or even built applications to automate their businesses. Oh, and let’s not forget the zine, or the music and scene magazines that were so popular in the era that saw photocopying come down in price. HyperCard made for a pretty sweet zine. HyperCard sprang from a trip when the graphical interface was still just coming into its own. Digital computing might have been 40 years old, but the information theorists and engineers hadn’t been as interested in making things easy to use. They wouldn’t have been against it, but they weren’t trying to appeal to regular humans. Apple was, and still is. The success of HyperCard seems to have taken everyone by surprise. Apple sold the last copy in 2004, but the legacy lives on. 
Its success made a huge impact at that time on the technology that came after it. Its popularity declined in the mid-1990s and it died quietly when Apple sold its last copy in 2004. But it surely left a legacy that has inspired many - especially old-school Apple programmers, in today’s “there’s an app for that” world.
1/29/2022 • 14 minutes, 22 seconds
How Ruby Got Nice
As with many a programming language, Ruby was originally designed as a teaching language - to teach programming to students at universities. From there, it has come to be used to create all kinds of programs, including games, interfaces for websites, scripts to run on desktop computers, backend REST endpoints, and business software - although Ruby is used for web development more than anything else. It has an elegant syntax that makes the code easy to read; this is one of the reasons why Ruby is so popular, especially with beginners (after all, it was designed to teach programming). Yukihiro Matsumoto, or Matz for short, originally developed the Ruby programming language in the 1990s. Ruby was initially designed as an interpreted scripting language. That first interpreter, MRI, or Matz’s Ruby Interpreter, spread quickly. In part because he’s nice. In fact, he’s so nice that the community motto is MINASWAN, or “Matz is nice and so we are nice.” Juxtapose this against some of the angrier programmers who develop their own languages. And remember, it was a teaching language. And so he named Ruby after a character he encountered in a children’s book. Or because it was a birthstone. Or both. He graduated from the University of Tsukuba and worked on compilers before writing a mail agent in Emacs Lisp. Having worked with Lisp and Perl and Python, he was looking for a language that was truly object-oriented from the ground up. He came up with the idea in 1993 of another Lisp at the core, but something that used objects like Smalltalk. That would allow developers to write less cyclomatically complex code. And yet he wanted to provide higher-order functions for routine tasks like Perl and Python did. Just with native objects rather than objects bolted on the side. And he wanted to do so in as consistent a manner as possible. Believe it or not, that meant dynamic typing. And garbage collection for free. 
And literal notation for some things like arrays and regular expressions, while allowing for dynamic reflection for metaprogramming and allowing for everything to be an expression. The syntax is similar to Python or Perl, and yet whitespace like indentation doesn’t play a part. It’s concise, and the deep thinking that goes into making something concise can be incredible. And yet freeing. The first version of Ruby was released in 1995 and allowed programs to be concise, so written with fewer lines of code than would have been possible with other languages at the time. And yet elegant. In 1996, David Flanagan and Jim Weirich grabbed the MRI interpreter and started using Ruby for real projects. And so Ruby expanded outside of Japan. As the popularity grew, Matz founded his own company called Object Technology Inc. This allowed him to continue developing Ruby while making money. After all, programmers gotta’ eat too. In 2006, Matsumoto committed the first version of what would eventually become Rails to a Version Control System (VCS), a precursor to git. Ruby is written in C, which means it has access to most underlying operating systems given the right API access. It has a vast dictionary with nearly 1 million entries. It can often be found in many event-driven frameworks, with the most popular being Ruby on Rails, a server-side web application framework developed by David Heinemeier Hansson of Basecamp in 2004. Other frameworks include Sinatra (which came in 2007), Roda, Camping (which comes in at a whopping 4k in size), and Padrino. And Ramaze and Merb and Goliath. Each has its own merits. These frameworks help developers code faster, easier, and more efficiently than if they had to write all the server-side code from scratch. Another aspect of Ruby that made it popular is a simple package manager. RubyGems came about in 2003. 
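A few lines of Ruby show the ideas above - everything is an object, typing is dynamic, and arrays and regular expressions have literal notation:

```ruby
# Even an integer literal is an object with methods.
puts 3.times.map { |i| i * 2 }.inspect     # => [0, 2, 4]

# Dynamic typing: a variable holds whatever it's assigned.
x = 42
x = "forty-two"
puts x.class                               # => String

# Literal notation for arrays and regexes, plus higher-order functions.
words = %w[lisp smalltalk perl python]
puts words.grep(/p/).inspect               # => ["lisp", "perl", "python"]
```

No type declarations, no boilerplate - which is exactly the conciseness Matz was after.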
Here, we lay out a simple structure that includes a README, a gemspec with info about the gem, a lib directory (the code for the gem), a test directory, and a makefile for Ruby called a Rakefile. This way the developer of a gem provides everything needed for others to call it in their code. And so there are now well over 100,000 gems out there. Not all work with all the interpreters. Ruby went from 1.0 in 1996 to 1.2 in 1998 to 1.4 in 1999 and 1.6 in 2000. Then to 1.8 in 2003, and by then it was getting popular and ready to get standardized. This always slows down changes. So it went on to become an ISO standard in 2012 - the hallmark, if you will, that a language is too big to fail. Ruby 2 came along the next year with nearly full backwards compatibility. And then 3 came in 2020 in order to bring just-in-time compilation, which can make the runtime faster than just interpreting. And unlike the XRuby variant, no need to do Java-style compilation. Still, not all Ruby tooling needs to be compiled. Ruby scripts can be loaded in tools like Amazon’s Lambda service or Google Cloud Functions. From there, it can talk to tools like MySQL and MongoDB. And it’s fun. I mean, Matz uses the word fun. And Ruby can present a challenge that to experienced programmers might be seen as fun. Because anything you can do with other languages, you can do with Ruby. You might not get as much for free as with, say, Spring Security for Java, but it’s still an excellent language, and sometimes I can’t help but wonder if we shouldn’t get so much for free with certain languages. Matz is now the chief architect of Ruby at Heroku. He has since written a slimmed-down version of Ruby called mruby and another language called streem. He also wrote a few books on Ruby. Because you know, he’s nice.
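The gem structure described above is declared in the gemspec; a minimal sketch looks something like this (the gem name, summary, and author are invented for illustration):

```ruby
# hello_world.gemspec - a minimal, hypothetical gem specification
Gem::Specification.new do |spec|
  spec.name    = "hello_world"          # invented gem name
  spec.version = "0.1.0"
  spec.summary = "A tiny example gem"
  spec.authors = ["Example Author"]
  spec.files   = Dir["lib/**/*.rb"]     # the code for the gem lives in lib/
end
```

Running gem build against a file like this, in a directory with that README/lib/test layout, is what packages the gem for distribution.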
1/24/2022 • 9 minutes, 12 seconds
Email: From Time Sharing To Mail Servers To The Cloud
With over 2.6 billion active users and 4.6 billion active accounts, email has become a significant means of communication in the business, professional, academic, and personal worlds. Before email we had protocols that enabled us to send messages within small splinters of networks. Time sharing systems like PLATO at the University of Illinois Urbana-Champaign, DTSS at Dartmouth College, BerkNet at the University of California Berkeley, and CTSS at MIT pioneered electronic communication. Private corporations like IBM launched VNET. We could create files or send messages that were immediately transferred to other people. The universities that were experimenting with these messaging systems even used some of the words we use today. MIT’s CTSS used the MAIL program to send messages. Glenda Schroeder from there documented that messages would be placed into a MAIL BOX in 1965. She had already been instrumental in implementing the MULTICS shell that would later evolve into the Unix shell. Users dialed into the IBM 7094 mainframe and communicated within that walled garden with other users of the system. That was made possible after Tom Van Vleck and Noel Morris picked up her documentation and turned it into reality, writing the program in MAD, or the Michigan Algorithm Decoder. But each system was different and mail didn’t flow between them. One issue was headers. These are the parts of a message that show what time the message was sent, who sent the message, a subject line, etc. Every team had different formats and requirements. The first attempt to formalize headers was made in RFC 561 by Abhay Bhushan and Ken Pogran from MIT, Jim White at Stanford, and Ray Tomlinson. Tomlinson was a programmer at Bolt Beranek and Newman. He defined the basic structure we use for email while working on a government-funded project at ARPANET (Advanced Research Projects Agency Network) in 1971. 
While there, he wrote a tool called CPYNET to send various objects over a network, then ported that into the SNDMSG program used to send messages between users of their TENEX system so people could send messages to other computers. The structure he chose was username@computername, because it just made sense to send a message to a user on the computer that user was at. We still use that structure today, although the hostname transitioned to a fully qualified domain name a bit later. Given that he wanted to route messages between multiple computers, he had a keen interest in making sure other computers could interpret messages once received. The concept of instantaneous communication between computer scientists led to huge productivity gains and new, innovative ideas. People could reach out to others they had never met and get quick responses. No more walking to the other side of a college campus. Some even communicated primarily through the computers, taking terminals with them when they went on the road. Email was really the first killer app on the networks that would some day become the Internet. People quickly embraced this new technology. By 1975 almost 75% of the ARPANET traffic was electronic mail, which provided the idea to send these electronic mails to users on other computers and networks. Most universities that were getting mail only had one or two computers connected to ARPANET. Terminals were spread around campuses, with even smaller microcomputers in places. This was before the DNS (Domain Name Service), so the name of the computer was still just a hostname from the hosts file, and users needed to know which computer and what the correct username was to send mail to one another. Elizabeth “Jake” Feinler had been maintaining a hosts file to keep track of computers on the growing network when her employer, the Stanford Research Institute, was just starting the NIC, or Network Information Center. 
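The header format and the user@host convention from those early RFCs still parse the same way today; a small Ruby sketch, with an entirely invented message, shows both:

```ruby
# A hypothetical RFC 822-style message: header fields, a blank line, then the body.
raw = "From: glenda@example.edu\n" \
      "To: ray@example.org\n"      \
      "Subject: MAIL BOX\n"        \
      "\n"                         \
      "Message body here.\n"

headers, body = raw.split("\n\n", 2)
fields = headers.lines.to_h { |line| line.split(": ", 2).map(&:strip) }

puts fields["Subject"]                     # => MAIL BOX
user, host = fields["From"].split("@", 2)  # Tomlinson's user@host structure
puts user                                  # => glenda
puts host                                  # => example.edu
```

The only thing that has really changed since is that host grew into a fully qualified domain name.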
Once the Internet was formed, that NIC would be the foundation of the InterNIC, which managed the buying and selling of domain names once Paul Mockapetris formalized DNS in 1983. At this point, the number of computers was increasing and not all accepted mail on behalf of an organization. The Internet Service Providers (ISPs) began to connect people across the world to the Internet during the 1980s, and for many people, electronic mail was the first practical application they used on the internet. This was made easier by the fact that the research community had already struggled with email standards and in 1981 had defined how servers sent mail to one another using the Simple Mail Transfer Protocol, or SMTP, in RFC 788, updated in 1982 with 821 and 822. Still, the computers at networks like CSNET received email and users dialed into those computers to read the email they stored. Remembering the name of the computer to send mail to was still difficult. By 1986 we also got the concept of routing mail in RFC 974 from Craig Partridge. Here we got the first MX record. Those are DNS records that define the computer that receives mail for a given domain name. So stanford.edu had a single computer that accepted mail for the university. These became known as mail servers. As the use of mail grew and reliance on mail increased, some had multiple mail servers for fault tolerance, for different departments, or to split the load between servers. We also saw various messaging roles split up. A mail transfer agent, or MTA, sent mail between different servers. The Received field in the header is stamped with the time the server acting as the MTA got an email. MTAs mostly used port 25 to transfer mail until SSL was introduced, when port 587 started to be used for encrypted connections. Bandwidth and time on these computers was expensive. There was a cost to make a phone call to dial into a mail provider, and providers often charged by the minute. 
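The SMTP those RFCs defined is still a plain-text conversation between a client (C) and a server (S). The hosts and addresses below are invented, but the numeric reply codes are the real ones from RFC 821:

```
S: 220 mail.example.edu Simple Mail Transfer Service ready
C: HELO client.example.org
S: 250 mail.example.edu
C: MAIL FROM:<jake@example.org>
S: 250 OK
C: RCPT TO:<ray@example.edu>
S: 250 OK
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: Subject: hello
C:
C: A one-line message.
C: .
S: 250 OK
C: QUIT
S: 221 mail.example.edu Service closing transmission channel
```

Forty years on, a mail server will still happily hold this exact conversation.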
So people also wanted to store their mail offline and then dial in to send messages and receive messages. Close enough to instant communication. So software was created to manage email storage, which we call a mail client or, more formally, a Mail User Agent, or MUA. This would be programs like Microsoft Outlook and Apple Mail today, or even a webmail client as with Gmail. POP, or Post Office Protocol, was written to facilitate that transaction in 1984. Receive mail over POP and send over SMTP. POP evolved over the years, with POPv3 coming along in 1993. At this point we just needed a username and the domain name to send someone a message. But the number of messages was exploding. As were the needs. Let’s say a user needed to get their email on two different computers. POP mail needed to know to leave a copy of messages on servers. But then those messages all showed up as new on the next computer. So Mark Crispin developed IMAP, or Internet Message Access Protocol, in 1986, which left messages on the server, and by the 1990s it was updated to the IMAPv4 we use today. Now mail clients had a few different options to use when accessing mail. Those previous RFCs focused on mail itself, and the community could use tools like FTP to get files. But over time we also wanted to add attachments to emails, so MIME, or Multipurpose Internet Mail Extensions, became a standard with RFC 1341 in 1992. Those mail and MIME standards would evolve over the years to add better support for encapsulations and internationalization. With the more widespread use of electronic mail, the words were shortened to email, which became common in everyday conversations. With the necessary standards in place, the next few years saw a number of private vendors jump on the internet bandwagon and invest in providing mail to customers. America Online added email in 1993, Echomail came along in 1994, Hotmail added advertisements to messages when it launched in 1996, and Yahoo added mail in 1997. 
All of the portals added mail within a few years. The age of email kicked into high gear in the late 1990s, reaching 55 million users in 1997 and 400 million by 1999. During this time, having an email address went from a luxury or curiosity to a societal and business expectation, like having a phone might be today. We also started to rely on digital contacts and calendars, and companies like HP released Personal Information Managers, or PIMs. Some companies wanted to sync those the same way they did email, so Microsoft Exchange was launched in 1996. That original concept went all the way back to PLATO in the 1960s with David Woolley’s PLATO Notes and was Ray Ozzie’s inspiration when he wrote the commercial product that became Lotus Notes in 1989. Microsoft inspired Google, who in turn inspired Microsoft to take Exchange to the cloud with Outlook.com. It hadn’t taken long after sending mail between computers became possible that we got spam. Then spam blockers and other technology to allow us to stay productive despite the thousands of messages from vendors desperately trying to sell us their goods through drip campaigns. We’ve even had legislation to limit the amount of spam, given that at one point over 9 out of 10 emails were spam. Diligent efforts have driven that number down to just shy of a third at this point. Email is now well over 40 years old and pretty much ubiquitous around the world. We’ve had other tools for instant messaging, messaging within every popular app, group messaging products like bulletin boards online, and now group instant messaging products like Slack and Microsoft Teams. We even have various forms of communication options integrated with one another. Like the ability to initiate a video call within Slack or Teams. Or the ability to toggle the Teams option when we send an invitation for a meeting in Outlook. Every few years there’s a new communication medium that some think will replace email. 
And yet email is as critical to our workflows today as it ever was.
1/15/2022 • 16 minutes, 25 seconds
The Teletype and TTY
Teleprinters, sometimes referred to as teletypes based on the dominance of the Teletype Corporation in their heyday, are devices that send or receive written transmissions over a wire or over radios. Those have evolved over time to include text and images. And while it may seem as though their development corresponds to the telegraph, that’s true only so far as discoveries in electromagnetism led to the ability to send tones or pulses over wires once there was a constant current. The story of the teletype evolved through a number of people in the 1800s. The modern telegraph was invented in 1835 and taken to market a few years later. Soon after that, we were sending written messages encoded and typed on what we called a teletype machine, or teletypewriter if you will. Those were initially invented by a German inventor, Friedrich König, in 1837, the same year Cooke and Wheatstone got their patent on telegraphy in England, and a few years before they patented automatic printing. König figured out how to send messages over about 130 miles. Parts of the telegraph were based on his work. But he used a wire per letter of the alphabet, while Samuel Morse used a single wire and encoded messages with the Morse code he developed. Alexander Bain developed a printing telegraph that used electromagnets that turned clockworks. But keep in mind that these were still considered precision electronics at the time, and human labor to encode, signal, receive, and decode communications was still cheaper. Therefore, the Morse telegraph service that went operational in 1846 became the standard. Meanwhile, Royal Earl House built a device that used piano keyboards to send letters, which had a shift register to change characters being sent. Thus predating the modern typewriter, developed in 1878, by decades. Yet, while humans were cheaper, machines were less prone to error, unless of course they broke down. 
Then David Edward Hughes developed the first commercial teletype machine, known as the Model 11, in 1855 to 1856. A few small telegraph companies then emerged to market the innovation, including the Western Union Telegraph Company. Picking up where Morse left off, Émile Baudot developed a code that consisted of five units, which became popular in France and spread to England in 1897 before spreading to the US. That’s when Donald Murray added punching data into paper tape for transmissions and incremented the Baudot encoding scheme to add control characters like carriage returns and line feeds. And some of the Baudot codes and Murray codes are still in use. The ideas continued to evolve. In 1902, Charles Krum invented something he called the teletypewriter, picking up on the work started by Frank Pearne and funded by Joy Morton of the Morton Salt company. He filed a patent for his work. He and Morton then formed a new company called the Morkrum Printing Telegraph. Edward Kleinschmidt had filed a similar patent in 1916, so they merged the two companies into the Morkrum-Kleinschmidt Company in 1925, but to more easily market their innovation changed the name to the Teletype Corporation in 1928, then selling to the American Telegraph and Telephone Company, or AT&T, for $30M. And so salt was lucrative, but investing salt money netted a pretty darn good return as well. Teletype Corporation produced a number of models over the next few decades. Models 15 through 35 saw an increase in the speed messages could be sent and improved encoding techniques. As the typewriter became a standard, the 8.5 by 11 inch page came about as the size most easily compatible with those typewriters. The A standard was developed so A0 is a square meter, A1 is half that, A2 half that, and so on, with A4 becoming a standard paper size in Europe. 
But teletypes often had continual feeds, and so while they had the same width in many cases, paper moved from a small paper tape to a longer roll of paper cut the same width as letter paper. Decades after Krum was out of the company, the US Naval Observatory built what they called a Krum TTY to transmit data over radio, naming their device after him. Now messages could be sent over a telegraph wire and wirelessly. Thus by 1966, when the Inktronic shipped and printed 1200 characters a minute, it was able to print in Baudot or ASCII, which Teletype had developed for guess who, the Navy. But they had also developed a Teletype they called the Dataspeed with what we think of as a modem today, which evolved into the Teletype Model 33, the first Teletype to be consistently used with a computer. The teletype could send data to a computer and receive information that was printed, in the same way information would be sent to another teletype operator who would respond in a printout. Another teletype on the same line receives that signal. When hooked to a computer, though, the operator presses one of the keys on the teletype keyboard and it transmits an electronic signal. Over time, those teletypes could be installed on the other side of a phone line. And if a person could talk to a computer, why couldn’t two computers talk to one another? ASCII was initially published in 1963 so computers could exchange information in a standardized fashion. Bell Labs was involved, and so it’s no surprise we saw ASCII show up within just a couple of years on the Teletype. ASCII was a huge win. Teletype sold over 600,000 of the 32s and 33s. Early video screens cost over $10,000, so interactive computing meant sending characters to a computer, which translated the characters into commands, and those into machine code. But the invention of the integrated circuit, the MOSFET, and the microchip dropped those prices considerably. 
When screens dropped in price enough, and Unix came along in 1971, also from the Bell system, it’s no surprise that the first shells were referred to as TTY, short for teletype. After all, the developers and users were often literally using teletypes to connect. As computing companies embraced time sharing and added the ability to handle multiple tasks, those evolved into the ability to invoke multiple TTY sessions as a given user, so while waiting for one task to complete we could do another. And so we got tty1, tty2, tty3, etc. The first GUIs were then effectively macros or shell scripts that were called by clicking a button. And those evolved so they weren’t obfuscating the shell; instead, we now open a terminal emulator in most modern operating systems not to talk to the shell directly but to send commands to the emulator that interprets them in more modern languages. And yet run tty and we can still see the “return user’s terminal name” to quote the man page. Today we interact with computers in a very different way than we did over teletypes. We don’t send text and receive the output in a print-out any longer. Instead we use monitors that allow us to use keyboards to type out messages through the Internet, as we do over telnet and then ssh, using either binary or ASCII codes. The Teletype and typewriter evolved into today’s keyboard, which offers a faster and more efficient way to communicate. Those early CTSS and then Unix C programs that evolved into ls and ssh and cat are now actions performed in graphical interfaces or shells. The last remaining teletypes are now used in airline telephone systems. And following the breakup of AT&T, the Teletype Corporation finally ended in 1990, as computer terminals evolved in a different direction. Yet we still see their remnants in everyday use.
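That teletype lineage is still visible from any modern language. In Ruby, for instance, every IO stream can be asked whether it is attached to a terminal - the method is literally called tty?:

```ruby
# tty? (a.k.a. isatty) asks whether a stream is attached to a terminal,
# a question that only exists because terminals descend from teletypes.
puts $stdout.tty?   # true in an interactive terminal, false when piped to a file

r, w = IO.pipe      # a pipe is not a terminal
puts r.tty?         # => false
w.close
r.close
```

Command-line tools still use this check today, for example to decide whether to colorize output or page it.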
1/10/2022 • 13 minutes
A History of Esports
It’s human nature to make everything we do competitive. I’ve played football, run track at times, competed in hacking competitions at Def Con, and even participated in various gaming competitions like Halo tournaments. I always get annihilated by kids with voices that are still cracking, but I played! Humans have been competing in sports for thousands of years. The Lascaux cave paintings in France show people sprinting over 15,000 years ago. The Egyptians were bowling in the 5,000s BCE. The Sumerians were wrestling 5,000 years ago. Mesopotamian art shows boxing in the second or third millennium BCE. The Olmecs in Mesoamerican societies were playing games with balls around the same time. Egyptian monuments show a number of individual sports being practiced in Egypt as far back as 2,000 BCE. The Greeks evolved the games, first with the Minoans and Mycenaeans between 1,500 BCE and 1,000 BCE, and then they first recorded their Olympic games in 776 BCE, although historians seem to agree the games were practiced at least 500 years before that, evolving potentially from funeral games. Sports competitions began as ways to showcase an individual’s physical prowess. Whether individual events like weight lifting and discus or team sports, sports rely on physical strength, coordination, repetitive action, or some other feat that allows one person or team to stand out. Organized team sports first appeared in ancient times, with the ball games of the Olmecs in Mesoamerica. Hurling supposedly evolved past 1,000 BCE, although written records of that only begin around the 16th century, and it could be that it was borrowed through the Greek game harpaston, which the Romans evolved into the game harpastum and spread with their conquests. But the exact rules and timelines of all of these are lost to written history. Instead, written records back up that western civilization team sports began with polo, appearing about 2,500 years ago in Persia. The Chinese gave us a form of kickball they called cuju, around 200 BCE. 
Football, or soccer for the American listeners, started in 9th century England but evolved into the game we think of today in the 1850s, then a couple of decades later into American football. Meanwhile, cricket came around in the 16th century, and then hockey and baseball came along in the mid-1800s, with basketball arriving in the 1890s. That’s also around the same time the modern darts game was born, although that started in the Middle Ages when troops threw arrows or crossbow bolts at wine barrels turned on their sides or at sections of tree trunks. Many of these sports are big business today, netting multi-billion dollar contracts for media rights to show and stream games, naming rights to stadiums for hundreds of millions, and players signing contracts for hundreds of millions across all major sports. There’s been a sharp increase in sports contracts since the roaring 1920s, rising steadily as the television started to show up in homes around the world, until ESPN solidified a new status in our lives when it was created in 1979. Then came the Internet, and the money got crazy town. All that money leads the occasional entrepreneurial-minded sports enthusiast to try something new. We got the wrestling promotion that would become the WWE in the 1950s, which evolved out of Jess McMahon’s boxing promotions and his work with Toots Mondt on what they called Western Style Wrestling. Beating people up has been around since the dawn of life but became an official sport when UFC 1 was launched in 1993. We got the XFL in 1999. So it’s no surprise that we would take a sport that requires hand-eye coordination and turn that into a team endeavor. That’s been around for a long time, but we call it Esports today.
Video Game Competitions
Competing in video games is almost as old as, well, video games. Spacewar! was written in 1962, and students from MIT competed with one another for dominance of deep space, dogfighting little ships, which we call sprites today, into oblivion. 
The game spread to campuses and companies as the PDP minicomputers spread. Countless hours were spent playing, and by 1972 there were enough players that they held the first Esports competition, appropriately called the Intergalactic Spacewar! Olympics. Of course, Stewart Brand would report on that for Rolling Stone, having helped mouse inventor Doug Engelbart with the “Mother of All Demos” just four years before. Pinball had been around since the 1930s, or the 1940s with flippers. Pinball machines could be found around the world by the 1970s, and 1972 was also the first year there was a Pinball World Champion. So game leagues were nothing new. But Brand and others, like Atari founder Nolan Bushnell, knew that video games were about to get huge. Tennis was invented in the 1870s in England and went back to 11th century France. Tennis on a screen would make loads of sense as well when Tennis For Two debuted in 1958. So when Pong came along in 1972, the world (and the ability to mass produce technology) was ready for the first video game hit. And when people flowed into bars, first in the San Francisco Bay Area and then around the country, to play Pong, it’s no surprise that people would eventually compete in the game. Moving from competing in billiards to a big game console just made sense. Now it was a quarter a game instead of just a dart board hanging in the corner. And so when Pong went to home consoles, of course people competed there as well. Then came Space Invaders in 1978. By 1980 we got the first statewide Space Invaders competition, and 10,000 players showed up. The next year there was a Donkey Kong tournament, and Billy Mitchell set the record for the game at 874,300, which stood for 18 years. We got the US National Video Game Team in 1983, and competitions for arcade games sprung up around the world. A syndicated television show called Starcade even ran to show competitions, which now we might call streaming. And Tron came in 1982. Then came the video game crash of 1983. 
But games never left us. The next generation of consoles and arcade games gave us competitions and tournaments for Street Fighter and Mortal Kombat, then first-person games like GoldenEye and other first-person shooters later in the decade, paving the way for games like Call of Duty and World of Warcraft. Then in 1998 a legendary StarCraft tournament was held and 50 million people around the world tuned in on the Internet. That’s a lot of eyeballs. Team options were also on the rise. Netrek had been written to be played over the Internet by 16 players at once. Within a few years, massive multiplayer games could have hundreds of players duking it out in larger battle scenes. Some were recorded and posted to web pages. There was an appetite for tracking scores for games, for competing, and even for watching games, which we’ve all done over the shoulders of friends since the arcades and consoles of old.
Esports and Twitch
As the 2000s came, Esports grew in popularity. Esports is short for the term electronic sports, and refers to competitive video gaming, which includes tournaments and leagues. Let’s set aside the earlier gaming tournaments and think of those as classic video games. Let’s reserve the term Esports for events held after 2001. That’s because the World Cyber Games was founded in 2000 and initially held in 2001, in Seoul, Korea (although there was a smaller competition in 2000). The haul was $300,000, and the events continue through the current day, having been held in San Francisco, Italy, Singapore, and China. Hundreds of people play today. That started a movement. Major League Gaming (MLG) came along in 2002 and is now regarded as one of the most significant Esports hosts in the world. The Electronic Sports World Cup came in 2003. These were the first major tournaments, followed by the introduction of the ESL Intel Extreme Masters in 2007 and many others. The USA Network broadcast their first Halo 2 tournament in 2006. 
We’ve gone from 10 major tournaments held in 2000 to an incalculable number today. That means more teams. Most Esports companies are founded by former competitors, like Cloud9, 100 Thieves, and FaZe Clan. Team SoloMid is the most valuable Esports organization. Launched by League of Legends star Andy Dinh and his brother Dan in 2009, it is now worth over $400 million and has fielded teams like ZeRo for Super Smash Bros., Excelerate Gaming for Rainbow Six Siege, Team Dignitas for Counter-Strike: Global Offensive, and even chess grandmaster Hikaru Nakamura. The analog counterpart would be sports franchises. Most of those were started by athletic clubs or people from the business community. Gaming has much lower startup costs and thus far has been more democratic in the ability to start a team with higher valuations. Teams play in competitions held by leagues, of which there seem to be new ones all the time. The NBA 2K League and the Overwatch League are two new leagues that have had early success. One reason for teams and leagues like this is naming and advertising rights. Another is events like The International 2021, with a purse of over $40M. The inaugural League of Legends World Championship took place in 2011. In 2013 another tournament was held in the Staples Center in Los Angeles (close to their US offices). Tickets for the event sold out within minutes. The purse for that was originally $100,000 and has since risen to over $7M. But others are even larger. The purse for the Honor of Kings World Champion Cup (Honor of Kings being the game known internationally as Arena of Valor) is $7.7M, and the Fortnite World Cup Finals has gone as high as $15M. One reason for the leagues and teams is that companies that make games want to promote their games. The video game business is almost an 86 billion dollar industry. Another is that people started watching other people play on YouTube. But then YouTube wasn’t really purpose-built for gaming.
Streamers made do using cameras to stream images of themselves in a picture-in-picture frame, but that still wasn’t optimal. Esports had been broadcast (the original form of streaming) before, but streaming wasn't all that commercially successful until the birth of Twitch in 2011. YouTube had come along in 2005, and Justin Kan and Emmett Shear created Justin.tv in 2007 as a place for people to broadcast video, or stream, online. They started with just one channel: Justin’s life. Like 24/7 life. They did Y Combinator and managed to land an $8M seed round. Justin had a camera mounted to his hat, and left that outside the bathroom since it wasn’t that kind of site. They made a video chat system, and not only was he streaming, but he was interacting with people on the other side of the stream. It was like the Truman Show, but for real. A few more people joined up, but then came other sites to provide this live streaming option. They added forums, headlines, comments, likes, featured categories of channels, and other features, but just weren’t hitting it. One aspect was doing really well: gaming. They moved that to a new site in 2011 and called it Twitch. This platform allowed players to stream themselves and their games. And they could interact with their viewers, which gave the entire experience a new interactive paradigm. It grew fast, with the whole company being rebranded as Twitch in 2014. Amazon bought Twitch that same year for about $1B. Twitch made $2.3 billion in 2020, with an average of nearly 3 million concurrent viewers watching nearly 19 billion hours of content provided by nearly 9 million monthly streamers. Other services like YouTube Gaming have come and gone, but Twitch remains the main way people watch others game. ESPN and others still have channels for Esports, but Twitch is purpose-built for gaming. And watching others play games is no different from Greeks showing up for the Olympics, or watching someone play pool, or watching Liverpool play Man City.
In fact, the money they make is catching up. Platforms like Twitch allow professional gamers and those who announce the games to become their own unique class of celebrities. The highest paid players have made between three and six million dollars, with the top 10 living outside the US and making their hauls from Dota 2. Others have made over a million playing games like Counter-Strike, Fortnite, League of Legends, and Call of Duty. None are likely to hold a record for any of those games for 18 years. But they are likely to diversify their sources of income. Add a YouTube channel, Twitch stream, product placements, and appearances - and a gamer could be looking at doubling what they bring in from competitions. Esports has come far but has far further to go. The total Esports market was just shy of $1B in 2020 and is expected to reach $2.5B in 2025 (which the pandemic may push even faster). Not quite the 100 million that watch the Super Bowl every year or the half billion that tune into the World Cup finals, but growing at a faster rate than the Super Bowl, which has actually declined in the past few years. And the International Olympic Committee recognized the tremendous popularity of Esports throughout the world in 2017 and left open the prospect of Esports becoming an Olympic sport in the future (although with the number of vendors involved, that’s hard to imagine happening). Perhaps some day when archaeologists dig up what we’ve left behind, they’ll find some Egyptian obelisk or gravestone with a controller and a high score. Although they’ll probably just scoff at the high score, since they already annihilated it when they first got their neural implants and have since moved on to far better games! Twitch is young in the context of the decades of history in computing. But the impact has been fast, and along with Esports it shows us a window into how computing has reshaped not only the way we seek entertainment, but also how we make a living.
In fact, the US government recognized League of Legends as a sport as early as 2013, allowing players to get visas to come into the US and play. And where there’s money to be made, there’s betting and abuse. 2010 saw SaviOr and some of the best StarCraft players to ever play embroiled in a match-fixing scandal that almost destroyed the Esports industry. And yet, as with the video game crash of 1983, the industry has always bounced back, at magnitudes larger than before.
1/8/2022 • 22 minutes, 34 seconds
Of Heath Robinson Contraptions And The Colossus
The Industrial Revolution gave us the rise of factories all over the world in the 1800s. Life was moving faster and we were engineering complex solutions to mass produce items. And many expanded from there to engineer complex solutions for simple problems. Cartoonist Heath Robinson captured ordinary people's reaction to this changing world in cartoons and illustrations of elaborate machines meant to accomplish simple tasks. These became known as “Heath Robinson contraptions” and were a reaction to the changing and increasingly complicated world order as much as anything. Just think of the rapidly evolving financial markets as one sign of the times! Following World War I, other cartoonists made similar cartoons. Like Rube Goldberg, giving us the concept of Rube Goldberg machines in the US. To those who didn’t understand the “why,” the very idea of breaking simple operations down into Boolean logic would have seemed preposterous. I mean, a wheel with 60 teeth or a complex series of switches and relays to achieve the same result? And yet flip-flop circuits could process orders of magnitude faster than that wheel could turn with any semblance of precision. The Industrial Revolution of our data was to come. The world was just waking up to the reality of moving from analog to digital when Robinson passed away in 1944, as a series of electromechanical codebreaking machines named after him gave way to the Colossus. These came just one year after Claude Shannon and Alan Turing, two giants in the early history of computers, met at Bell Labs.
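To make the wheel-versus-circuit contrast concrete, here's a tiny illustrative sketch of the flip-flop idea: two cross-coupled NOR gates that hold a single bit of state, the electronic building block that would eventually out-count any toothed wheel. This is a toy in Python, purely for demonstration - real flip-flops are, of course, circuits, not code.

```python
def nor(a: int, b: int) -> int:
    """A NOR gate: outputs 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int = 0) -> int:
    """One settled state of an SR latch built from two cross-coupled
    NOR gates. s=1 sets the stored bit, r=1 resets it, and with
    s=r=0 the latch simply holds whatever q already was."""
    qbar = nor(s, q)
    for _ in range(4):          # iterate until the feedback loop settles
        q = nor(r, qbar)
        qbar = nor(s, q)
    return q

# Set the bit, hold it, then reset it.
bit = sr_latch(1, 0)        # set   -> 1
bit = sr_latch(0, 0, bit)   # hold  -> still 1
bit = sr_latch(0, 1, bit)   # reset -> 0
```

A real flip-flop settles in a flash of electrons, while a 60-tooth counting wheel needs a full mechanical revolution to do the equivalent bookkeeping - which is the whole point of the contrast above.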
And a huge step in that transition was a 1936 paper by Alan Turing called “On Computable Numbers, with an Application to the Entscheidungsproblem.” This became the basis for the concept of a programmable computing machine - so before the war, Turing had already published papers about the computability of problems using what we now call a Turing machine, or recipes. Some of the work on that paper was inspired by Max Newman, who helped Turing go off to Princeton to work on all the maths, where Turing would get a PhD in 1938. He returned home and started working part-time at the Government Code and Cypher School during the pre-war buildup. Hitler invaded Poland the next year, sparking World War II, and the UK declared war on Germany two days later. The Poles had gotten pretty good at codebreaking, being situated right between the world powers Germany and Russia, and their ability to see troop movements through decrypted communications was one way they were able to keep forces in optimal locations. And yet the Germans stayed a step ahead. The Germans had built a machine called the Enigma that allowed their forces, including the Navy, to encrypt communications. Unable to track their movements, Allied forces were playing a cat and mouse game and not doing very well at it. Turing came up with a new way of decrypting the messages, and that went into a new version of the Polish bomba: the Bombe. Turing’s work resulted in a lot of other advances in cryptanalysis throughout the war. But he also brought home the idea of an electromechanical machine to break those codes - almost as though he’d written a paper on building machines to do such things years before. The Germans accidentally gave away a key to decrypt communications in 1941, and the codebreakers at Bletchley Park got to work on breaking the machines that used the Lorenz cipher in new and interesting ways. The work reduced the losses - but they needed more people.
It was time intensive to go through the possible wheel positions or guess at them, and every week meant lives lost. They needed to automate these human tasks. So they looked to automate the process, and Turing and the others wrote to Churchill directly. Churchill started his memo to General Ismay with “ACTION THIS DAY” and so they were able to get more Bombes up and running. Bill Tutte and the codebreakers worked out the logic to process the work done by hand. The same number of codebreakers were able to do a ton more work. The first pass was a device with uniselectors and relays. Frank Morrell did the engineering design to process the logic. And so we got the alpha test of an automation machine they called the Tunny. The start positions were plugged in by hand and it could still take weeks to decipher messages. Max Newman, Turing’s former advisor and mentor, got tapped to work on the project, and Turing was able to take the work of the Polish codebreakers and others and add sequential conditional probability to guess at the settings of the 12 wheels of a Lorenz machine, getting to the point where they could decipher messages coming out of the German high command on paper. No written records indicate that Turing was involved much in the project beyond that. Max Newman developed the specs, heavily influenced by Turing’s previous work. They got to work on an electromechanical device we now call the Heath Robinson. They needed to be able to store data. They used paper tape - which could be read at a thousand characters per second using photocell readers - but there were two tapes and they had to run in perfect sync. Tape would rip, and two tapes running concurrently meant a lot might rip. Charles Wynn-Williams was a brilliant physicist who had worked with electric waves since the late 1920s at Trinity College, Cambridge, and was recruited from a project helping to develop radar because he’d specifically worked on electronic counters at Cambridge.
That work went into the counting unit, counting how many times a function returned a true result. As we saw with Bell Labs, the telephone engineers were looking for ways to leverage switching electronics to automate processes for the telephone exchange. Turing recommended they bring in telephone engineer Tommy Flowers to design the combining unit, which used vacuum tubes to implement Boolean logic - much as in the paper Shannon wrote in 1937, which he discussed with Turing over tea at Bell Labs earlier in 1943. It’s likely Turing would have also heard of the calculator George Stibitz of Bell Labs built out of relay switches all the way back in 1937. Slow, but more reliable than the vacuum tubes of the era. And it’s likely he influenced those he came to help by collaborating on encrypted voice traffic and likely other projects as much if not more. Inspiration is often best found at the intersection of ideas and cultures. Flowers looked to use vacuum tubes where the wheel patterns were produced. This eliminated one set of paper tapes and brought infinitely more reliability. And a faster result. The programs weren’t stored, but the machine was programmable. Input was made using the shift registers from the paper tape and thyratron rings that simulated the bitstream for the wheels. There was a master control unit that handled the timing between the clock, signals, readouts, and printing. It didn’t implement the von Neumann architecture. But it didn’t not. The switch panel had a group of switches used to define the algorithm being used, with a plug-board defining conditions. The combination provided billions of combinations for logic processing. Vacuum tube valves were still unstable, but they rarely blew while left powered on - it was the switching on and off that killed them. So if they could keep the logic gates running through a known set of wheel settings, the new computer would be more stable. Just one thing - they needed 1,500 valves! This thing would be huge! And so the Colossus Mark 1 was approved by W.G. Radley in 1943.
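The counting idea at the heart of that unit can be sketched in miniature. The toy below is in Python, and the wheel size, bias, and start position are all invented for illustration - the real machine applied Tutte's statistical methods to the Lorenz chi-wheels rather than this simplified scoring. But the principle is the same: try every start position of a simulated wheel against the ciphertext, count how often a test function returns true, and the setting with the statistically skewed count wins.

```python
import random

random.seed(42)
WHEEL = [random.randint(0, 1) for _ in range(41)]   # 41 pins, like chi-wheel 1

def keystream(start: int, length: int):
    """The bits the wheel would emit from a given start position."""
    return [WHEEL[(start + i) % len(WHEEL)] for i in range(length)]

def score(cipher_bits, start: int) -> int:
    """Count how many times cipher XOR key comes out 0. Plaintext bits
    are biased, so the true setting scores noticeably higher than chance."""
    ks = keystream(start, len(cipher_bits))
    return sum(1 for c, k in zip(cipher_bits, ks) if (c ^ k) == 0)

def best_setting(cipher_bits) -> int:
    """Try every start position and keep the one with the highest count."""
    return max(range(len(WHEEL)), key=lambda s: score(cipher_bits, s))

# Demo: a biased plaintext enciphered from a secret start position.
plain = [1 if random.random() < 0.2 else 0 for _ in range(600)]
secret_start = 17
cipher = [p ^ k for p, k in zip(plain, keystream(secret_start, len(plain)))]
recovered = best_setting(cipher)   # recovers secret_start
```

Colossus did this kind of tallying electronically, thousands of characters per second per tape pass - which is exactly why replacing the second paper tape with tube-generated wheel patterns mattered so much.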
It took 50 people 11 months to build and was able to compute wheel settings for ciphered message tapes. Computers automating productivity at its finest. The switches and plugs could be repositioned, so not only was Colossus able to get messages decrypted in hours, but it could be reprogrammed to do other tasks. Others joined and they got the character reading up to almost 10,000 characters a second. They improved on the design yet again by adding shift registers and got over four times the speed. It could now process 25,000 characters per second. One of the best uses was to confirm that Hitler had been tricked into thinking the D-Day attack at Normandy would happen elsewhere. And so the invasion of Normandy was safe to proceed. But the ability to reprogram made it a mostly universal computing machine - proving the Turing machine concept and fulfilling the dreams of Charles Babbage a hundred years earlier. And so the war ended in 1945. After the war, the Colossus machines were destroyed - except the two sent to British GCHQ, where they ran until 1960. So the simple story of Colossus is that it was a series of computers built in England from 1943 to 1945, at the heart of World War II. The purpose: cryptanalysis - or code breaking. Turing went on to work on the Automatic Computing Engine at the National Physical Laboratory after the war and wrote a paper on the ACE - but while England was off to a quick start in computing, having the humans who knew the things, they were slow to document their work given that it was classified. ENIAC came along in 1946, as did the development of cybernetics by Norbert Wiener. That same year Max Newman wrote to John von Neumann (Wiener’s friend) about building a computer in England. He founded the Royal Society Computing Machine Laboratory at the Victoria University of Manchester, got Turing out to help, and built the Manchester Baby along with Frederic Williams and Thomas Kilburn.
In 1946 Newman also declined an OBE, or Officer of the Order of the British Empire, reportedly seeing it as inadequate recognition given how his protégé Turing had been treated. That’s leadership. They’d go on to collaborate on the Manchester Mark I and Ferranti Mark I. Turing would work on furthering computing until his death in 1954, from taking cyanide after going through years of forced estrogen treatments for being a homosexual. He has since been posthumously pardoned. Following the war, Flowers tried to get a loan to start a computer company - but the very idea was ludicrous and he was denied. He retired from the Post Office Research Station after spearheading the move of the phone exchange to an electronic, or what we might think of as a computerized, exchange. Over the next decade, the work from Claude Shannon and other mathematicians would perfect the implementation of Boolean logic in computers. Von Neumann only ever mentioned Shannon and Turing in his seminal 1958 paper called The Computer and the Brain. While the work on Colossus was classified by the British government, it was likely known to von Neumann, who will get his own episode soon - but suffice it to say he was a mathematician turned computer scientist who worked on ENIAC to help study and develop atom bombs - and who codified the von Neumann architecture. We did a whole episode on Turing and another on Shannon, and we have mentioned the 1945 article As We May Think, where Vannevar Bush predicted and inspired the next couple of generations of computer scientists following the advancements in computing around the world during the war. He too would likely have known of the work on Colossus at Bletchley Park. Maybe not the specifics, but he certainly knew of ENIAC - which, unlike Colossus, was run through a serious public relations machine. There are a lot of heroes to this story. The brave men and women who worked tirelessly to break, decipher, and analyze the cryptography.
The engineers who pulled it off. The mathematicians who sparked the idea. The arrival of the computer was almost deterministic. We had work on the Atanasoff-Berry Computer at Iowa State, work at Bell Labs, Norbert Wiener’s work on anti-aircraft guns at MIT during the war, Konrad Zuse’s Z3, Colossus, and other mechanical and electromechanical devices leading up to it. But deterministic doesn’t mean lacking inspiration. And what is the source of inspiration - and, when mixed with perspiration, of innovation? There were brilliant minds in mathematics, like Turing. Brilliant physicists like Wynn-Williams. Great engineers like Flowers. That intersection between disciplines is the wellspring of many an innovation. Equally important, there’s a leader who can take the ideas, find people who align with a mission, and help clear roadblocks. People like Newman. When they have domain expertise and knowledge - and are able to recruit and keep their teams inspired - they can change the world. And then there are people with purse strings who see the brilliance and can see a few moves ahead on the chessboard - like Churchill. They make things happen. And finally, there are the legions who carried on the work in the theoretical, practical, and pure sciences. People who continue the collaboration between disciplines, iterate, and bring products to ever growing markets. People who continue to fund those innovations. It can be argued that our intrepid heroes in this story helped win a war - but that the generations who followed, by connecting humanity and bringing productivity gains that free our minds to solve bigger and bigger problems, will hopefully some day end war. Thank you for tuning in to this episode of the History of Computing Podcast. We hope to cover your contributions. Drop us a line and let us know how we can. And thank you so much for listening. We are so, so lucky to have you.
12/14/2021 • 19 minutes, 46 seconds
Clifford Stoll and the Cuckoo’s Egg
A honeypot is basically a computer made to look like a sweet, yummy morsel that a hacker might find yummy mcyummersons. This is the story of one of the earliest on the Internet. Clifford Stoll has been a lot of things. He was a teacher, and a ham radio operator, and has appeared on television shows. And an engineer at a radio station. And he was an astronomer. But he’s probably best known for being an accidental systems administrator at Lawrence Berkeley National Laboratory who set up a honeypot in 1986 and used it to catch a hacker working for the KGB. It sounds like it could be a movie. And it was - on public television, called “The KGB, the Computer, and Me.” And a book. Stoll was an astronomer who stayed on as a systems administrator when a grant he was working on ran out. Many in IT came to the industry accidentally, especially in the 80s and 90s. Now, accountants are meticulous. The monthly accounting report at the lab had never had any discrepancies. So when the lab had a 75 cent accounting error, his manager Dave Cleveland had Stoll go digging into the system to figure out what happened. And what he found was far more than the missing 75 cents. The error was in the time-sharing accounting system. The lab leased out compute time at $300 per hour - so 75 cents was just nine seconds of unbilled time - and everyone who accessed the system had an account number to bill time to. Well, everyone except a user named hunter. They disabled the user and then got an email that one of their computers had tried to break into a computer elsewhere. This was just a couple of years after the movie WarGames had been released, so of course this was something fun to dig your teeth into. Stoll combed through the logs and found that the account that attempted to break into the computers in Maryland belonged to a local professor named Joe Sventek, now at the University of Oregon. It was doubtful he had made the attempt, though, because he was out of town at the time.
So Stoll set his computer to beep when someone logged in so he could set a trap for the person using the professor's account. Every time someone connected a teletype session, or tty, Stoll checked the machine. Until Sventek connected - and with that, he went to see the networking team, who confirmed the connection wasn’t from a local terminal but had come in through one of the 50 modems via a dial-up session. There wasn’t much in the way of caller ID. So Stoll connected a printer to each of the modems - that gave him the ability to print every command the user ran. A system had been compromised and this user was able to sudo, or elevate their privileges. UNIX System V had been released three years earlier and suddenly labs around the world were all running similar operating systems on their mainframes. Someone with a working knowledge of Unix internals could figure out how to do all kinds of things. Like add a program to routine housecleaning jobs that elevated their privileges. They could also get into the passwd file that at the time housed all the encrypted passwords and delete a user's encrypted password, thus granting access without one. And they even went so far as to run dictionary brute force attacks, similar to a modern rainbow table, to figure out passwords so they wouldn’t get locked out when the user whose password was deleted called in to have it reset. Being root allowed someone to delete the shell history and, given that all the labs and universities were charging for time, remove any record they’d been there from the call accounting systems. So Stoll wired a pager into the system so he could run up to the lab any time the hacker connected. It turns out the hacker was using the network to move laterally into other systems, including going from what was then ARPANET to military systems on MILNET. The hacker used default credentials for systems and left accounts behind so he could get back in later.
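The dictionary attack described above boils down to a loop. Here's a minimal Python sketch of the idea - note that SHA-256 stands in for the crypt(3) function the real passwd file used, and the salt, wordlist, and passwords are all made up for illustration:

```python
import hashlib

def hash_pw(password: str, salt: str) -> str:
    """Stand-in for Unix crypt(3): hash the salted password."""
    return hashlib.sha256((salt + password).encode()).hexdigest()

def dictionary_attack(stolen_hash: str, salt: str, wordlist):
    """Hash every candidate word and compare against the stolen hash.
    A rainbow table is essentially this work precomputed and compressed."""
    for word in wordlist:
        if hash_pw(word, salt) == stolen_hash:
            return word
    return None

# A hypothetical entry lifted from a passwd file: a salt and a hash.
salt = "ab"
stolen = hash_pw("dragon", salt)
cracked = dictionary_attack(stolen, salt, ["password", "hunter", "jaeger", "dragon"])
# cracked == "dragon"
```

The defense, then as now, is the same: don't store password hashes where they can be read, and don't pick passwords that live in a dictionary.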
Jaeger means hunter in German, and both were account names the hacker used. So maybe they were looking for a German. Tymnet and PacBell got involved, and once they got a warrant they were able to get the phone number of the person connecting to the system. The only problem was that the warrant was just for California. Stoll scanned the packet delays and determined the hacker was coming in from overseas. The hacker had come in through Mitre Corporation. After Mitre disabled the connection, the hacker slipped up and came in through International Telephone and Telegraph. Now they knew he was not in the US. In fact, he was in West Germany. At the time, Germany was still divided by the Berlin Wall and was a pretty mature spot for espionage. The account names had already suggested they were dealing with a German. Once they had the call traced to Germany, they needed to keep the hacker online for an hour to trace the actual phone number, because the facilities there still used mechanical switching mechanisms to connect calls. So that’s where the honeypot comes into play. Stoll’s girlfriend came up with the idea to make up a bunch of fake government data and host it on the system. Boom. It worked: the hacker stayed on for over an hour and they traced the number. Along the way, this hippy-esque Cliff Stoll had worked with “the Man.” Looking through the logs, the hacker was accessing information about missile systems, military secrets, members of the CIA. There was so much on these systems. So Stoll called some of the people at the CIA. The FBI and NSA were also involved, and before long, German authorities arrested the hacker. Markus Hess, whose handle was Urmel, was a German hacker who we now think broke into over 400 military computers in the 80s. It wasn’t just one person though. Dirk-Otto Brezinski, or DOB, Hans Hübner, or Pengo, and Karl Koch, or Hagbard, were also involved. And not only had they stolen secrets, but they’d sold them to the KGB, using Peter Carl as a handler.
Back in 1985, Koch was part of a small group of hackers who founded the Computer-Stammtisch in Hanover. That later became the Hanover chapter of the Chaos Computer Club. Hübner and Koch confessed, which gave them espionage amnesty - important in a place with so much of that going around in the 70s and 80s. Koch would later be found burned to death with gasoline, and while it was reported as a suicide, that has very much been disputed - especially given that it happened shortly before the trials. DOB and Urmel received a couple of years of probation for their part in the espionage, likely a lighter sentence given that the investigations took time and the Berlin Wall came down the year they were sentenced. Hübner’s story and interrogation are covered in a book called Cyberpunk - which tells the same story from the side of the hackers. This includes passing into East Germany with magnetic tapes, working with handlers, sex, drugs, and hacker-esque rock and roll. I think I initially read the books a decade apart, but would strongly recommend reading Part II of it either immediately before or after The Cuckoo’s Egg. It’s interesting how a bunch of kids just having fun can become something far more. Similar stories were happening all over the world - another book called The Hacker Crackdown tells many, many of them. Real cyberpunk stories told by one of the great cyberpunk authors. And it continues through to the modern era, except with much larger stakes than ever. Gorbachev may have worked to dismantle some of the more dangerous aspects of these security apparatuses, but Putin has certainly worked hard to build them up. Russian-sponsored and other state-sponsored rings of hackers continue to probe the Internet, delving into every little possible hole they can find. China hacks Google in 2009, Iran hits casinos, the US hits Iranian systems to disable centrifuges, and the list goes on.
You see, these kids were stealing secrets - but after the Morris Worm brought the Internet to its knees in 1988, we started to realize how powerful the networks were becoming. But it all started with 75 cents. Because when it comes to security, there’s no amount or event too small to look into.
12/3/2021 • 11 minutes, 38 seconds
Buying All The Things On Black Friday and Cyber Monday
The Friday after Thanksgiving through the Monday afterwards is a bonanza of shopping in the United States, where capitalism runs wild with reckless abandon. It’s almost a symbol of a society whose identity is as intertwined with rampant consumerism as it is with freedom and democracy. We are free to spend all our gold pieces. And once upon a time, we went back to work on Monday and looked for a raise or bonus to help replenish the coffers. But ever since fast internet connections started to show up in offices in the late 90s, we’ve watched the commodification of holiday shopping go online - the very digitization of materialism. But how did it come to be? The term Black Friday goes back to a financial crisis in 1869, after Jay Gould and Jim Fisk tried to corner the market on gold. That backfired and led to a Wall Street crash in September of that year. As the decades rolled by, Americans in the suburbs of urban centers had more and more disposable income and flocked to city centers the day after Thanksgiving. By 1961, the term showed up in Philadelphia, where police used it to describe the turmoil of the holiday shopping extravaganza. And as economic downturns throughout the 60s and 70s gave way to the 1980s, the term spread slowly across the country until marketers decided to use it to their advantage and run sales just on that day. Especially the big chains that were by now in cities where the term was common. And many retailers spent the rest of the year in the red and made back all of their money over the holidays - thus they got in the black. The term went from a negative to a positive. Stores opened earlier and earlier on Friday. Some even unlocked the doors at midnight, after shoppers got a nice nap in following stuffing their faces with turkey earlier in the day. As the Internet exploded in the 90s and buying products online picked up steam, marketers of online e-commerce platforms wanted in on the action. See, they considered brick and mortar to be mortal competition.
Most of them should have been looking over their shoulder at Amazon rising, but that’s another episode. And so Cyber Monday was born in 2005, when the National Retail Federation launched the term to the world in a press release. And who wanted to be standing in line outside a retail store at midnight on Friday? Especially when the first Wii was released by Nintendo that year and was sold out everywhere early Friday morning. But come Cyber Monday, it was all over the internet. Not only that, but one of Amazon’s top products that year was the iPod. And the DS Lite. And World of Warcraft. Oh, and that was the same year Tickle Me Elmo was sold out everywhere. But available on the Internets. The online world closed the holiday out at just shy of half a billion dollars in sales. But they were just getting started. And I’ve always thought it was kitschy. And yet I joined in with the rest of them when I started getting all those emails. Because opt-in campaigns were exploding as e-tailers honed their skills at appealing to our fear of being the worst parent in the world. And Cyber Monday grew year over year, even through the Great Recession - first to a billion dollar shopping day in 2010, and then, as brick and mortar companies jumped in on the action, to $4 billion by 2017, $6 billion in 2018, and nearly $8 billion in 2019. As Covid-19 spread and people stayed home during the 2020 holiday shopping season, revenues from Cyber Monday grew 15% over the previous year, hitting $10.8 billion. But it came at the cost of brick and mortar sales, which fell nearly 24% over the same time a year prior. I guess it kinda’ did, but we’ll get to that in a bit. Seeing the success of the Cyber Monday marketers, American Express launched Small Business Saturday in 2010, hoping to lure shoppers into small businesses that accepted their cards. And who doesn’t love small businesses? Politicians flocked into malls in support, including President Obama in 2011.
And by 2012, spending was over $5 billion on Small Business Saturday, and it grew to just shy of $20 billion in 2020. To put that into perspective, Georgia, Zimbabwe, Afghanistan, Jamaica, Niger, Armenia, Haiti, Mongolia, and dozens of other countries have smaller GDPs than just one shopping day in the US. Brick and mortar stores are increasingly part of online shopping. Buy online, pick up curb-side. But that trend goes back to the early 2000s, when Walmart was a bigger player on Cyber Monday than Amazon. That changed in 2008, and Walmart fought back with Cyber Week, stretching the field in 2009. Target said “us too” in 2010. And everyone in between hopped in. The sales start at least a week early and spread from online to in-person retail, with hundreds of emails flooding my inbox at this point. This year, Americans are expected to spend over $36 billion during the weekend from Black Friday to Cyber Monday. And the split between all the sales is pretty much indistinguishable. Who knows - or to some degree cares - what bucket each gets placed in at this point. Something else was happening in the decades as Black Friday spread to consume the other days around the Thanksgiving holiday: intensifying globalization. Products flooding into the US from all over the world. Some cheap, some better than what is made locally. Some awesome. Some completely unnecessary. It’s a land of plenty. And yet, does it make us happy? My kid enjoyed playing with an empty toilet paper roll just as much as a Furby. And loved the original Xbox just as much as the Switch. I personally need less, and to be honest want less, as I get older. And yet I still find myself getting roped into spending too much on people at the holidays. Maybe we should create “Experience Sunday,” where instead of buying material goods, we facilitate free experiences for our loved ones. Because I’m pretty sure they’d rather have that than another ugly pair of holiday socks.
Actually, that reminds me: I have some of those in my cart on Amazon so I should wrap this up as they can deliver it tonight if I hurry up. So this Thanksgiving I’m thankful that I and my family are healthy and happy. I’m thankful to be able to do things I love. I’m thankful for my friends. And I’m thankful to all of you for staying with us as we turn another page into the 2022 year. I hope you have a lovely holiday season and have plenty to be thankful for as well. Because you deserve it.
11/26/2021 • 9 minutes, 55 seconds
An Abridged History of Free And Open Source Software
In the previous episodes, we looked at the rise of patents and software and their impact on the nascent computer industry. But a copyright is a right. And that right can be given to others in whole or in part. We have all benefited from software where the right to copy was waived, and it’s shaped the computing industry as much, if not more, than proprietary software. The term Free and Open Source Software (FOSS for short) is a blanket term to describe software that’s free and/or whose source code is distributed for varying degrees of tinkeration. It’s a movement and a choice. We programmers can commercialize our software. But we can also distribute it free of copy protections. And there are about as many licenses as there are opinions about what is unique, types of software, underlying components, etc. But given that many choose to commercialize their work products, how did a movement arise that specifically didn’t? The early computers were custom-built to perform various tasks. Then computers and software were bought as a bundle and organizations could edit the source code. But as operating systems and languages evolved and businesses wanted their own custom logic, a cottage industry for software started to emerge. We see this in every industry - as an innovation becomes more mainstream, the expectations and needs of customers progress at an accelerated rate. That evolution took about 20 years to happen following World War II, and by 1969 the software industry had evolved to the point that IBM faced antitrust charges for bundling software with hardware. And after that, the world of software would never be the same. The knock-on effect was that in the 1970s, Bell Labs pushed away from Multics and developed Unix, which AT&T then gave away as compiled code to researchers. And so proprietary software was a growing industry, and AT&T began charging for commercial licenses as the bushy hair and sideburns of the 70s were traded for the yuppie culture of the 80s. 
In the meantime, software had become copyrightable due to the findings of CONTU and the codifying of the Copyright Act of 1976. Bill Gates sent his infamous “Open Letter to Hobbyists” in 1976 as well, defending the right to charge for software in an exploding hobbyist market. And then Apple v. Franklin led to the ability to copyright compiled code in 1983. There was a growing divide between those who’d been accustomed to being able to copy software freely and edit source code, and those who in an up-market sense just needed supported software that worked - and were willing to pay for it, seeing the benefits that automation was having on the ability to scale an organization. And yet there were plenty who considered copyrighting software immoral. One of the best remembered is Richard Stallman, or RMS for short. Steven Levy described Stallman as “The Last of the True Hackers” in his epic book “Hackers: Heroes of the Computer Revolution.” In the book, he describes the MIT that Stallman joined, where there weren’t passwords and no one yet paid for software, and then goes through the emergence of the LISP language and the divide that formed between Richard Greenblatt, who wanted to keep The Hacker Ethic alive, and those who wanted to commercialize LISP. The Hacker Ethic was born from the young MIT students who freely shared information and ideas with one another and helped push computing forward in an era they thought was purer in a way, as though it hadn’t yet been commercialized. The schism saw the death of the hacker culture, and two projects came out of Stallman’s technical work: emacs, a text editor still included freely in most modern Unix variants, and the GNU project. Here’s the thing: MIT was sitting on patents for things like core memory and thrived in part due to the commercialization, or weaponization, of the technology it was producing. 
The industry was maturing, and since the days when kings granted patents, maturing technology has been commercialized using that system. And so Stallman’s nostalgia gave us the GNU project, born from an idea that the industry moved faster in the days when information was freely shared and that knowledge was meant to be set free. For example, he wanted the source code for a printer driver so he could fix it and was told it was protected by an NDA, so he couldn’t have it. A couple of years later he announced GNU, a recursive acronym for GNU’s Not Unix. He then released the GNU Manifesto, launching the Free Software Foundation - often considered the charter of the free and open source software movement - and went on to build a compiler called GCC. Over the next few years, as he worked on GNU, he found emacs had a license, GCC had a license, and the rising tide of free software was all distributed with unique licenses. And so the GNU General Public License was born in 1989 - allowing organizations and individuals to copy, distribute, and modify software covered under the license, but with a small catch: if someone modified the source, they had to release that with any binaries they distributed as well. The University of California, Berkeley had benefited from a lot of research grants over the years, and many of its works could be put into the public domain. It had brought Unix in from Bell Labs in the 70s, and Sun cofounder and Java author Bill Joy worked under professor Fabry, who brought Unix in. After working on a Pascal compiler that Unix co-author Ken Thompson had left at Berkeley, Joy and others started working on what would become BSD - not exactly a clone of Unix, but with interchangeable parts. They bolted on sockets and TCP/IP to get networking, and through the 80s, as Joy left for Sun and DEC got ahold of that source code, there were variants and derivatives like FreeBSD, NetBSD, Darwin, and others. 
The licensing was pretty permissive and simple to understand: “Copyright (c) <year> <copyright holder>. All rights reserved. Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use acknowledge that the software was developed by the <organization>. The name of the <organization> may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED ‘AS IS’ AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.” By 1990 the Board of Regents at Berkeley accepted a four-clause BSD license that spawned a class of licenses. While it’s matured into other formats like a zero-clause license, it’s one of my favorites as it is truest to the FOSS cause. And the 90s gave us the Apache License, from the Apache Group, loosely based on the BSD License, and then in 2004 the Apache License 2.0, which leaned away from that and was more compatible with the GPL. Given the modding nature of Apache, they didn’t require derivative works to also be open sourced but did require leaving the license in place for unmodified parts of the original work. GNU never really caught on as an OS in the mainstream, although a collection of tools did. The main reason the OS didn’t go far is probably because Linus Torvalds started releasing prototypes of his Linux operating system in 1991. Torvalds used the GNU General Public License v2, or GPLv2, to license his kernel, having been inspired by a talk given by Stallman. GPLv2 had been released in 1991, and something else was happening as we turned into the 1990s: the Internet. Suddenly the software projects being worked on weren’t just distributed on paper tape or floppy disks; they could be downloaded. 
The rise of Linux and Apache coincided, and so many a web server and site ran that LAMP stack with MySQL and PHP added in there. All open source, in varying flavors of what open source was at the time. And collaboration in the industry was at an all-time high. We got the rise of teams of developers who would edit and contribute to projects. One of these was a tool for another aspect of the Internet: email. It was called popclient. Eric S. Raymond, or ESR for short, picked it up and renamed it to fetchmail, releasing it as an open source project. Raymond presented on his work at the Linux Kongress in 1997, expanded that work into an essay, and then the essay into “The Cathedral and the Bazaar” - where the bazaar is meant to be like an open market. That inspired many to open source their own works, including the Netscape team, which resulted in Mozilla and so Firefox - and another book called “Freeing the Source: The Story of Mozilla” from O’Reilly. By then, Tim O’Reilly was a huge proponent of this free, or “source code available,” type of software, as it was known. And companies like VA Linux were growing fast. And many wanted to congeal around some common themes. So in 1998, Christine Peterson came up with the term “open source” in a meeting with Raymond, Todd Anderson, Larry Augustin, Sam Ockman, and Jon “Maddog” Hall, author of the first book I read on Linux. Free software it may or may not be, but open source as a term quickly proliferated throughout the lands. By 1998 there was this funny little company called TiVo that was doing a public beta of a little box with a Linux kernel running on it that bootstrapped a pretty GUI to record TV shows on a hard drive on the box and play them back. You remember when we had to wait for a TV show, right? Or back when some super-fancy VCRs could record a show at a specific time to VHS (but mostly failed for one reason or another)? Well, TiVo meant to fix that. 
We did an episode on them a couple of years ago, but we skipped the term Tivoization and the impact they had on the GPL. As the 90s came to a close, VA Linux and Red Hat went through great IPOs, bringing about an era where open source could mean big business. And true to the cause, they shared enough stock with Linus Torvalds to make him a millionaire as well. And IBM pumped a billion dollars into open source, with Sun open-sourcing OpenOffice.org. Now, what really happened there might be that by then Microsoft had become too big for anyone to effectively compete with, and so they all tried to pivot around to find a niche, but it still benefited the world and open source in general. By Y2K there was a rapidly growing number of vendors out there putting Linux kernels onto embedded devices. TiVo happened to be one of the most visible. Some in the Linux community felt like they were being taken advantage of, because suddenly you had a vendor making changes to the kernel, but their changes only worked on their hardware and they blocked users from modifying the software. So the Free Software Foundation updated the GPL, bundling in some other minor changes, and we got the GNU General Public License, Version 3, in 2007. There was a lot more in GPL 3, given that so many organizations were involved in open source software by then. Here, the full license text and original copyright notice had to be included, along with a statement of significant changes, and source code had to be made available with binaries. And commercial Unix variants struggled, with SGI going bankrupt in 2006 and the use of AIX and HP-UX declining. Many of these open source projects flourished because of version control systems and the web. SourceForge was created by VA Software in 1999 and is a free service that can be used to host open source projects. 
Concurrent Versions System, or CVS, had been written by Dick Grune back in 1986 and quickly became a popular way to have multiple developers work on projects, merging diffs of code repositories. CVS gave way to git in the hearts of many a programmer after Linus Torvalds wrote that new versioning system in 2005. GitHub came along in 2008 and was bought by Microsoft in 2018 for $7.5 billion. Seeing a need for people to ask questions about coding, Stack Overflow was created by Jeff Atwood and Joel Spolsky in 2008. Now we could trade projects on one of the versioning tools, get help with projects or find smaller snippets of sample code on Stack Overflow, or even Google random things (and often find answers on Stack Overflow). And so social coding became a large part of many a programmer’s day. As did dependency management, given how many tools are used to compile a modern web app or app. I often wonder how much of the code in many of our favorite tools is actually original. Another thought is that in an industry dominated by white males, it’s no surprise that we often gloss over previous contributions. It was actually Grace Hopper’s A-2 compiler that was the first software released freely with source for all the world to adapt. Sure, you needed a UNIVAC to run it, and so it might fall into the mainframe era, and with the emergence of minicomputers we got Digital Equipment’s DECUS for sharing software, leading in part to the PDP-inspired need for source that Stallman was so adamant about. General Motors developed the SHARE Operating System for the IBM 701 and made it available through the IBM user group called SHARE. The ARPAnet was free if you could get to it. TeX from Donald Knuth was free. The BASIC distribution from Dartmouth was academic, and yet Microsoft sold it for up to $100,000 a license (see Commodore). 
So it’s no surprise that people avoided paying upstarts like Microsoft for their software, or that it took until the late 70s to get copyright legislation and common law. But Hopper’s contributions were kinda’ like open source v1, the work from RMS to Linux was kinda’ like open source v2, and once the term was coined and we got the rise of a name and more social coding platforms from SourceForge to git, we moved into a third version of the FOSS movement. Today, some tools are free, some are open source, some are free as in beer (as you find in many a gist), some are proprietary. All are valid. Today there are also about as many licenses as there are programmers putting software out there. And here’s the thing: they’re all valid. You see, every creator has the right to restrict the ability to copy their software. After all, it’s their intellectual property. Anyone who chooses to charge for their software is well within their rights. Anyone choosing to eschew commercialization also has that right. And every derivative in between. I wouldn’t judge anyone based on any model they choose. Just as those who distribute proprietary software shouldn’t be judged for retaining their rights to do so. Why not just post things we want to make free? Patents, copyrights, and trademarks are all a part of intellectual property - but as developers of tools we also need to limit our liability, as we’re probably not out there buying large errors and omissions insurance policies for every script or project we make freely available. Also, we might want to limit the abuse of our marks. For example, Linus Torvalds monitors the use of the Linux mark through the Linux Mark Institute. Apparently one William Dell Croce Jr. tried to register the Linux trademark in 1995, and Torvalds had to sue to get it back. He provides use of the mark under a free and perpetual global sublicense. Given that his wife won the Finnish karate championship six times, I wouldn’t be messing with his trademarks. 
Thank you to all the creators out there. Thank you for your contributions. And thank you for tuning in to this episode of the History of Computing Podcast. Have a great day.
11/24/2021 • 22 minutes, 34 seconds
Perl, Larry Wall, and Camels
Perl was started by Larry Wall in 1987. Unisys had just released the 2200 series and had only a few years earlier stopped using the name UNIVAC for any of their mainframes. They had merged with Burroughs the year before to form Unisys. The 2200 was a continuation of the 36-bit UNIVAC 1107, which went all the way back to 1962. Wall was one of the 100,000 employees who helped bring in over ten and a half billion dollars in revenue, making Unisys the second largest computing company in the world at the time. They merged just in time for the mainframe market to start contracting. Wall had grown up in LA and Washington and went to grad school at the University of California at Berkeley. He went to the Jet Propulsion Laboratory after grad school and then landed at System Development Corporation, which had spun out of the SAGE missile air defense system in 1955 and merged into Burroughs in 1986, becoming Unisys Defense Systems. The Cold War had been good to Burroughs after SDC built the timesharing components of the AN/FSQ-32 and the JOVIAL programming language. But changes were coming. Unix System V had been released in 1983, and by 1986 there was a rivalry with BSD, which had been spun out of UC Berkeley, where Wall went to school. And by then AT&T had built up the Unix System Development Laboratory, so Unix was no longer just a tool for academics. Wall had some complicated text manipulation to program on these new Unix systems, and as many of us have run into, when we exceed a certain amount of code, awk becomes unwieldy - both from the sheer amount of impossible-to-read code and from a runtime perspective. Others were running into the same thing, and so he got started on a new language he named Practical Extraction And Report Language, or Perl for short. Or maybe it stands for Pathologically Eclectic Rubbish Lister. Only Wall could know. The rise of personal computers gave way to the rise of newsgroups, and NNTP went to the IETF to become an Internet Draft in RFC 977. 
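That “practical extraction and report” workload is easy to picture: scan line-oriented text with a regular expression, tally what you find, and print a report. Here’s a minimal sketch of the pattern, rendered in Python for illustration - the log format is invented, not anything Wall actually processed:

```python
import re

# Invented sample of line-oriented text: a timestamp, a severity, a message.
log_lines = [
    "1987-12-18 ERROR disk full on /dev/rd0",
    "1987-12-18 INFO  backup complete",
    "1987-12-19 ERROR disk full on /dev/rd1",
]

# Extract the second whitespace-delimited field (the severity) with a regex,
# then tally occurrences - the extraction-and-report idiom in miniature.
pattern = re.compile(r"^\S+\s+(\w+)")
counts = {}
for line in log_lines:
    m = pattern.match(line)
    if m:
        level = m.group(1)
        counts[level] = counts.get(level, 0) + 1

print(counts)  # {'ERROR': 2, 'INFO': 1}
```

In Perl itself this collapses to a one-liner over `<>` with a hash, which is exactly the economy that made the language so popular for this kind of work.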
People were posting tools to this new medium, and Wall posted his little Perl project to comp.sources.unix in 1988, quickly iterating to Perl 2, where he added the language’s own form of regular expressions. This is when Perl became one of the best programming languages for text processing and regular expressions available at the time. Another quick iteration came when more and more people were trying to write arbitrary data into objects with the rise of byte-oriented binary streams. This allowed us to not only read data from text streams, terminated by newline characters, but to read and write with any old characters we wanted to. And so the era of socket-based client-server technologies was upon us. And yet, Perl would become even more influential in the next wave of technology as it matured alongside the web. In the meantime, adoption was increasing, and the only real resource to learn Perl was the manual, or man, page. So Wall worked with Randal Schwartz to write Programming Perl for O’Reilly press in 1991. O’Reilly has always put animals on the front of their books, and this one came with a camel on it. It became known as “the pink camel” because the art was pink; later the art was blue, and so it became just “the Camel Book.” The book became the primary reference for Perl programmers, and by then the web was on the rise. Yet Perl was still more of a programming language for text manipulation. And yet most of what we did as programmers at the time was text manipulation. Linux came around in 1991 as well. Those working on these projects probably had no clue what kind of storm was coming with the web, written in 1990, Linux, written in 1991, PHP in 1994, and MySQL in 1995. It was an era of new languages to support new ways of programming. But this is about Perl - whose fate is somewhat intertwined. Perl 4 came in 1993. It was modular, so you could pull in external libraries of code. And so CPAN came along that year as well. 
It’s a repository of modules written in Perl and then dropped into a location on a file system that was set at the time perl was compiled, like /usr/lib/perl5. CPAN now covers far more than just core Perl libraries, with over a quarter million packages available and mirrors on every continent except Antarctica. The second edition of the Camel Book coincided with the release of Perl 5 and was published in 1996. The changes to the language had slowed down for a bit, but Perl 5 saw the addition of packages, objects, and references, and the authors added Tom Christiansen to help with the ever-growing Camel Book. Perl 5 also brought the extension system we think of today - somewhat based on the module system in Linux. That meant we could load the base perl into memory and call those extensions. Meanwhile, the web had been on the rise, and one aspect of the power of the web was that while front-ends were stateless, cookies had come along to maintain user state. Given the variety of systems HTML was able to talk to, mod_perl came along in 1996, and Gisle Aas and others started working on ways to embed Perl into pages. Ken Coar chaired a working group in 1997 to formalize the concept of the Common Gateway Interface. Here, we’d have a common way to call external programs from web servers. The era of web interactivity was upon us. Pages that were constructed on the fly could call scripts. And much of what was being done was text manipulation. One of the powerful aspects of Perl was that you didn’t have to compile. It was interpreted and yet dynamic. This meant a source control system could push changes to a site without uploading a new jar - as had to be done with a language like Java. And yet, object-oriented programming is weird in Perl. We bless an object and then invoke it with arrow syntax, which is how Perl locates subroutines. That got fixed in Perl 6, but maybe 20 years too late to use a dot notation as is the case in Java and Python. 
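The CGI contract described above is simple enough to sketch: the web server exports the request into environment variables like QUERY_STRING, runs the program, and relays whatever it writes to stdout - headers, a blank line, then the body. A minimal sketch in Python for illustration (the `name` parameter and greeting are invented, and a real CGI setup would involve the server’s own configuration):

```python
import os
import sys
from urllib.parse import parse_qs

def respond(environ, out):
    # The server sets QUERY_STRING before invoking the script
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    # CGI output: headers first, then a blank line, then the body
    out.write("Content-Type: text/html\r\n\r\n")
    out.write(f"<html><body>Hello, {name}!</body></html>\n")

if __name__ == "__main__":
    respond(os.environ, sys.stdout)
```

Because the interface is just environment variables and stdout, the server doesn’t care whether the program behind it is Perl, Python, or a compiled binary - which is exactly why languages could be swapped in and out.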
Perl 5.6 was released in 2000, and the team rewrote the Camel Book from the ground up in the 3rd edition, adding Jon Orwant to the team. This is also when they began the design process for Perl 6. By then the web was huge, and those mod_perl servlets or CGI scripts were, along with PHP and other ways of developing interactive sites, becoming common. And because of CGI, we didn’t have to give the web server daemons access to too many local resources and could swap languages in and out. There are more modern ways now, but nearly every site needed CGI enabled back then. Perl wasn’t just used in web programming. I’ve piped a lot of shell scripts out to perl over the years and used perl to do complicated regular expressions. Linux, Mac OS X, and other variants that followed Unix System V supported using perl in scripting and as an interpreter for stand-alone scripts. But I do that less and less these days as well. The rapid rise of the web meant that a lot of languages slowed in their development. There was too much going on, too much code being developed, too few developers to work on the open source or open standards for a project like Perl. Or is it that Python came along and represented a different approach, with modules in Python created to do much of what Perl had done before? Perl saw small, slow changes. Python moved much more quickly. More modules came faster, and object-oriented programming techniques didn’t have to be retrofitted into the language. As the 2010s came to a close, machine learning was on the rise, and many more modules were being developed for Python than for Perl. Either way, the fourth edition of the Camel Book came in 2012, when Unicode and multi-threading were added to Perl - now with Brian Foy as a co-author. And yet, Perl 6 sat in an “it’s coming so soon” or “it’s right around the corner” or “it’s imminent” state for over a decade. Then 2019 saw Perl 6 finally released. It was renamed to Raku - given how big a change was involved. 
They’d opened up requests for comments all the way back in 2000. The aim was to remove what they considered historical warts - what the rest of us might call technical debt. Rather than a camel, they gave it a mascot called Camelia, the Raku Bug. Thing is, Perl had a solid 10% market share for languages around 20 years ago. It was a niche language maybe, but that popularity has slowly fizzled out and appears to be on a short resurgence with the introduction of Perl 6 - but one that might just be temporary. One aspect I’ve always loved about programming is that the second we’re done with anything, we think of it as technical debt. Maybe the language or server matures. Maybe the business logic matures. Maybe it’s just our own skills. This means we’re always rebuilding little pieces of our code - constantly refining as we go. If we’re looking at Perl 6 today, we have to ask whether we want to try and do something in Python 3 or another language - or try and just update Perl. If Perl isn’t being used in very many micro-services, then given the compliance requirements to use each tool in our stack, it becomes somewhat costly to think of improving our craft with Perl rather than looking to solutions that are possibly more expensive at runtime, but less expensive to maintain. I hope Perl 6 grows and thrives and is everything we wanted it to be back in the early 2000s. It helped so much in an era, and we owe the team that built it and all those modules so much. I’ll certainly be watching adoption with fingers crossed that it doesn’t fade away. Especially since I still have a few Perl-based Lambda functions out there that I’d have to rewrite. And I’d like to keep using Perl for them!
11/21/2021 • 15 minutes
The Von Neumann Architecture
John von Neumann was born in Hungary at the tail end of the Austro-Hungarian Empire. The family was made a part of the nobility, and as a young prodigy in Budapest, he learned languages and by 8 years old was doing calculus. By 17 he was writing papers on polynomials. In his dissertation, written in 1925, he added to set theory with the axiom of foundation and the notion of class, or properties shared by members of a set. He worked on the minimax theorem in 1928, the proof of which established zero-sum games and started another discipline within math: game theory. By 1929 he published the axiom system that led to Von Neumann–Bernays–Gödel set theory. And by 1932 he’d developed foundational work on ergodic theory, which would evolve into a branch of math that looks at the states of dynamical systems, where functions can describe a point’s time dependence in space. And so of course he penned a book on quantum mechanics the same year. Did we mention he was smart? Given the way his brain worked, it made sense that he would eventually gravitate into computing. He went to the best schools with other brilliant scholars who would go on to be called the Martians. They were all researching new areas that required more and more computing - then still done by hand or a combination of hand and mechanical calculators. The Martians included de Hevesy, who won a Nobel Prize for Chemistry. Von Kármán got the National Medal of Science and a Franklin Award. Polanyi developed the theory of knowledge and the philosophy of science. Paul Erdős was a brilliant mathematician who published over 1,500 articles. Edward Teller is known as the father of the hydrogen bomb, working on nuclear energy throughout his life and lobbying for the Strategic Defense Initiative, or Star Wars. Dennis Gabor wrote Inventing the Future and won a Nobel Prize in Physics. Eugene Wigner also took home a Nobel Prize in Physics and a National Medal of Science. 
Leo Szilard took home an Albert Einstein Award for his work on nuclear chain reactions and joined the Manhattan Project as a patent holder for a nuclear reactor. Physicists and brilliant scientists. And here’s a key component to the explosion in science following World War II: many of them fled to the United States and other western powers because they were Jewish, to get away from the Nazis, or to avoid communists controlling science. And then there was Harsanyi, Halmos, Goldmark, Franz Alexander, Orowan, and John Kemeny, who gave us BASIC. They all contributed to the world we live in today - but von Neumann sometimes hid how smart he was, preferring not to show just how much arithmetic computed through his head. He was married twice and loved fast cars, fine food, and bad jokes, and was an engaging and enigmatic figure. He studied measure theory and broke dimension theory into algebraic operators. He studied topological groups, operator algebra, spectral theory, functional analysis, and abstract Hilbert space. Geometry and lattice theory. As with other great thinkers, some of his work has stood the test of time and some has had gaps filled with other theories. And then came the Manhattan Project. Here, he helped develop explosive lenses - a key component of the nuclear bomb. Along the way he worked on economics and fluid mechanics. And of course, he theorized and worked out the engineering principles for really big explosions. He was a commissioner of the Atomic Energy Commission, and at the height of the Cold War, after working out game theory, developed the concept of mutually assured destruction - giving the world hydrogen bombs and ICBMs and reducing the missile gap. Hard to imagine, but at the time the Soviets actually had a technical lead over the US, which was proven true when they launched Sputnik. As with the other Martians, he fought Communism and Fascism until his death - which won him a Medal of Freedom from then-president Eisenhower. 
His friend Stanislaw Ulam developed the modern Markov Chain Monte Carlo method, and von Neumann got involved in computing to work out those calculations. This, combined with where his research lay, landed him as an early power user of ENIAC. He actually heard about the machine at a station while waiting for a train. He’d just gotten home from England, and while we will never know if he knew of the work Turing was doing on Colossus at Bletchley Park, we do know that he had offered Turing a job at the Institute for Advanced Study that he was running in Princeton before World War II, and had read Turing’s papers, including “On Computable Numbers,” and understood the basic concepts of stored programs - and breaking down the logic into zeros and ones. He discussed using ENIAC to compute over 333 calculations per second. He could do a lot in his head, but he wasn’t that good of a computer. His input was taken, and when Eckert and Mauchly went from ENIAC to EDVAC, or the Electronic Discrete Variable Automatic Computer, the findings were published in a paper called “First Draft of a Report on the EDVAC” - a foundational paper in computing for a number of reasons. One is that Mauchly and Eckert had an entrepreneurial spirit and felt not only that their names should have been on the paper but that it was probably premature, and so they quickly filed a patent in 1945 - even though some of what they had told him, which went into the paper, helped to invalidate the patent later. They considered these trade secrets and didn’t share in von Neumann’s idea that information must be set free. In the paper lies an important contribution: von Neumann broke down the parts of a modern computer. He set the information for how these would work free. He broke down the logical blocks of how a computer works into the modern era - how, once we strip away the electromechanical parts, a fully digital machine works. 
Inputs go into a Central Processing Unit, which has an instruction register, a clock to keep operations and data flow in sync, and a program counter - and it does the math. It then uses quick-access memory, which we’d call Random Access Memory, or RAM, today, to make processing data instructions faster. And it would use long-term memory for operations that didn’t need to be as highly available to the CPU. This should sound like a pretty familiar way to architect devices at this point. The result would be sent to an output device. Think of a modern Swift app for an iPhone - the whole of what the computer did could be moved into a single wafer once humanity worked out how first transistors and then multiple transistors on a single chip worked. Yet another outcome of the paper was to inspire Turing and others to work on computers after the war. Turing named his ACE, or Automatic Computing Engine, out of respect to Charles Babbage. That led to the addition of storage to computers. After all, punched tape was used for Colossus during the war, and punched cards and tape had been around for awhile. It’s ironic that we think of memory as ephemeral data storage and storage as more long-term storage. But that’s likely more to do with the order these scientific papers came out than anything - and homage to the impact each had. He’d write The Computer and the Brain, Mathematical Foundations of Quantum Mechanics, The Theory of Games and Economic Behavior, Continuous Geometry, and other books. He also studied DNA and cognition and weather systems, positing we could predict the results of climate change and possibly even turn back global warming - which by 1950, when he was working on it, was already acknowledged by scientists. As with many of the early researchers in nuclear physics, he died of cancer - invoking Pascal’s wager on his deathbed. He died in 1957 - just a few years too early to get a Nobel Prize in one of any number of fields. 
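The loop behind that description of the architecture - fetch an instruction into the register, advance the counter, execute against a single shared memory - can be shown with a toy stored-program machine. This is a sketch for illustration only; the instruction names are invented, not any historical instruction set:

```python
# A toy stored-program machine. Instructions and data share one memory,
# which is the defining trait of the von Neumann design.

def run(memory):
    acc = 0       # accumulator: the part that "does the math"
    counter = 0   # program counter: address of the next instruction
    while True:
        op, arg = memory[counter]   # fetch into the instruction register
        counter += 1                # advance the counter
        if op == "LOAD":            # acc = memory[arg]
            acc = memory[arg]
        elif op == "ADD":           # acc += memory[arg]
            acc += memory[arg]
        elif op == "STORE":         # memory[arg] = acc
            memory[arg] = acc
        elif op == "HALT":
            return acc

# Program (addresses 0-3) and data (addresses 5-7) side by side in memory.
program = [
    ("LOAD", 5),   # load the value at address 5
    ("ADD", 6),    # add the value at address 6
    ("STORE", 7),  # store the result at address 7
    ("HALT", 0),
    None,          # address 4: unused
    2, 3, 0,       # addresses 5-7: data
]
result = run(program)  # result is 5, and memory address 7 now holds 5
```

Because the program lives in the same memory as the data, it can itself be loaded, stored, and modified - the property that made stored-program machines so much more flexible than the hard-wired calculators before them.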
One of my favorite aspects of von Neumann was that he was a lifelong lover of history. He was a hacker - bouncing around between subjects. And he believed in human freedom. So much so that this wealthy and charismatic pseudo-aristocrat would dedicate his life to the study of knowledge and public service. So thank you for the Von Neumann Architecture, and for breaking computing down into parts that couldn’t be wholesale patented before they could gain wide adoption. And thank you for helping keep mutually assured destruction from happening and for inspiring generations of scientists in so many fields. I’m stoked to be alive and not some pile of nuclear dust. And to be gainfully employed in computing. He had a considerable impact on both.
11/12/2021 • 12 minutes, 24 seconds
Getting Fit With Fitbit
Fitbit was founded in 2007, originally as Healthy Metrics Research, Inc., by James Park and Eric Friedman, with a goal of bringing fitness trackers to market. They didn’t invent the pedometer - that prize goes to Abraham-Louis Perrelet of Switzerland in 1780, or possibly back to da Vinci - and in fact they wanted to go much further. And there are stories of armies calculating the distance they moved using mechanisms that counted steps or the turns of wagon wheels. The era of wearables arguably began in 1953, when the transistor radio showed up and Akio Morita and Masaru Ibuka started Sony. People started to get accustomed to carrying around technology. In 1961, Claude Shannon and Edward Thorp built a small computer to time when balls would land in roulette. Which they put in a shoe. Meanwhile, sensors that could detect motion and the other chips needed to essentially create a small computer in a watch-sized package were coming down in price. Apple had released the Nike+iPod Sports Kit the year before Fitbit was founded, with a little sensor that went in my running shoes. And Fitbit capitalized on an exploding market for tracking fitness. Apple effectively proved the concept was ready for higher-end customers. But remember that while the iPod was incredibly popular at the time, what about everyone else? Park and Friedman raised $400,000 on the idea in a pre-seed round and built a prototype. No, it wasn’t actually a wearable - it was a bunch of sensors in a wooden box. That enabled them to shop around for more investors to actually finish a marketable device. By 2008 they were ready to take the idea to TechCrunch 50, and Tim O’Reilly and other panelists from TechCrunch loved it. And they picked up a whopping 2,000 pre-release orders. The only problem was they weren’t exactly ready to take that kind of volume. So they toured suppliers around Asia for months and worked overtime in hotel rooms fixing design and architecture issues.
And in 2009 they were finally ready, taking 25,000 orders and shipping about one fifth of them. That device was called the Fitbit Tracker, and it adopted the goal of 10,000 steps a day that had become popular in Japan in the 1960s. It’s a little money-clip-sized device with just one button that shows the status towards that 10,000-step goal. And once synchronized, we could see tons of information about how many calories we burned and other statistics. Those first orders were sold directly through the web site. The next batch would be much different, going through Best Buy. The margins selling directly were much better, so they needed to tune those production lines. They went to four stores, then ten times that, then 15 times that. They announced the Fitbit Ultra in 2011. Here we got a screen that showed a clock, and it also came with a stopwatch. That would evolve into the Fitbit One in 2012. Bluetooth now allowed us to sync with our phones. That original device would over time evolve into the Zip and then the Inspire Clip. They grew fast in those first few years and enjoyed a large swathe of the market initially, but any time one vendor proves a market, others are quick to fast-follow. The Nike Fuelband came along in 2012. There were also dozens of cheap $15 knock-offs in stores like Fry’s. But those didn’t have nearly as awesome an experience. A simpler experience came with the Fitbit Flex, released in 2013. The Fitbit could now be worn on the wrist. It looked more like the original tracker but a little smaller, so it could slide in and out of a wristband. It could vibrate, so it could wake us up and remind us to get up and move. And the Fitbit Force came out that year, which could scroll through information on the screen, like our current step count. But that got some bad press for the nickel used on the device, so the Charge came out the next year, doing much of the same stuff.
And here we see the price slowly going up from below a hundred dollars to $130 as new models with better accelerometers came along. In 2014 they released a mobile app for all the major mobile platforms that allowed us to sync devices over Bluetooth and opened up a ton of options to show other people our information. Chuck Schumer was concerned about privacy, but the options for fitness tracking were about to explode in the other direction, becoming even less private. That’s the same year the LG G Watch came out, sporting a Qualcomm Snapdragon chip. The ocean was getting redder, and devices were becoming more like miniature computers that happened to do tracking as well. After Android Wear was released in 2014 - now called Wear OS - the ocean was bound to get much, much redder. And yet, they continued to grow and thrive. They did an IPO, or Initial Public Offering, in 2015 on the back of selling over 21 million devices. They were ready to reach a larger market. Devices were now in stores like Walmart and Target, and the app had badges. It was an era of gamification, and they were one of the best in the market at that. Walk enough steps to have circumnavigated the sun? There’s a badge for that. Walk the distance of the Nile? There’s a badge for that. Do a round trip to the moon and back? Yup, there’s a badge for that as well. And we could add friends in the app. Now we could compete to see who got more steps in a day. And of course some people cheated. Once I was wearing a Fitbit on my wrist, I got 60,000 steps one day as I painted the kitchen. So we sometimes didn’t even mean to cheat. And an ecosystem had sprung up around Fitbit. Like Fitstar, a personal training coach, which got acquired by Fitbit and rebranded as Fitbit Coach. 2015 was also when the Apple Watch was released. The Apple Watch added many of the same features, like badges and similar statistics.
By then there were models of the Fitbit that could show who was calling our phone or display a text message we got. And that was certainly part of Wear OS for Android. But those other devices were more expensive, and Fitbit was still able to own the less expensive part of the market and spend on R&D to still compete at the higher end. They were flush with cash by 2016, so while selling 22 million more devices, they bought Coin and Pebble that year, taking in technology developed through crowdfunding sources and helping mass-market it. That’s the same year we got the Fitbit Alta, effectively merging the Charge and the Flex, and we got HR models of some devices - which stands for Heart Rate. Yup, they could now track that too. They bought Vector Watch SRL in 2017, the same year they released the Ionic smartwatch, based somewhat on the technology acquired from Pebble. But the stock took a nosedive, and the market capitalization was cut in half. They added weather to the Ionic and merged that tech with that from the Blaze, released the year before. Here, we see technology changing quickly - Pebble was merged with Blaze, but Wear OS from Google and watchOS from Apple were forcing changes all the faster. The apps on other platforms were a clear gap, as were the sensors baked into so many different integrated circuit packages. But Fitbit could still compete. In 2018 they released a cheaper version of the smartwatch called the Versa. They also released an API that allowed for a considerable amount of third-party development, as well as Fitbit OS 3. They also bought Twine Health in 2018, partnered with Adidas that year on the Ionic, partnered with Blue Cross Blue Shield to reduce insurance rates, and released the Charge 3 with oxygen saturation sensors and a 40% larger screen than the Charge 2. From there the products got even more difficult to keep track of, as they poked at every different corner of the market.
The Inspire, Inspire HR, Versa 2, Versa Lite, Charge 4, Versa 3, Sense, Inspire 2, Luxe. I wasn’t sure if they were going to figure out the killer device or not when Fitbit was acquired by Google in 2021. And that’s where their story ends and the story of the ubiquitous ecosystem of Google begins. Maybe they continue with their own kernels, or maybe they move all of their devices to Wear OS. Maybe Google figures out how to pull together all of their home automation and personal tracking devices into one compelling offer. Now they get to compete with Amazon, which has the Halo to attack the bottom of the market. Or maybe Google leaves the Fitbit team alone to do what they do. Fitbit has sold over 100 million devices and sports well over 25 million active users. The Apple Watch surpassed that number and blew right past it. Wear OS lives in a much more distributed environment, where companies like Asus, Samsung, and LG sell products, but it appears to have a similar installation base. And it’s a market still growing and likely looking for a leader, as it’s easy to imagine a day when most people have a smart watch. But the world has certainly changed since Mark Weiser was the Chief Technologist at the famed Xerox Palo Alto Research Center, or Xerox PARC, in 1988, when he coined the term “ubiquitous computing.” Technology hadn’t entered every aspect of our lives at the time like it has now. The team at Fitbit didn’t invent wearables. George Atwood invented them in 1783. That was mostly pulleys and mechanics. Per V. Brüel first commercialized the piezoelectric accelerometer in 1943. It certainly took a long time to get packaged into an integrated circuit, and from there it took plenty of time to end up on my belt loop. But from there it took only a few years to go on my wrist, and then, once there were apps for all the things, true innovation came way faster.
Because it turns out that once we open up a bunch of APIs, we have no idea what amazing things people will do with them - and what were devices become platforms. But none of that would have happened had Fitbit not helped prove the market was ready for Weiser’s ubiquitous computing. And now we get to wrestle with the fallout while innovation is moving even faster. Because telemetry is the opposite of privacy. And if we forget to protect just one of those API endpoints - by not implementing rate throttling, or messing up the permissions, or leaving a micro-service open to all the things - we can certainly end up telling the world all about ourselves. Because the world is watching, whether we think we’re important enough to watch or not.
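For anyone wondering what “rate throttling” looks like in practice, here’s a minimal token-bucket limiter sketch - the class and parameter names are mine, and real services use battle-tested middleware rather than a toy like this:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: tokens refill at `rate` per
    second up to `capacity`; a request is refused when the bucket is empty."""
    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        # Refill based on elapsed time, then try to spend one token.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# With a fake clock we can watch it throttle: a 5-token burst, then refusals.
t = [0.0]
bucket = TokenBucket(rate=1.0, capacity=5.0, clock=lambda: t[0])
print([bucket.allow() for _ in range(7)])  # [True]*5 + [False, False]
```

The idea is simply that bursts are allowed up to the bucket’s capacity, and sustained traffic is capped at the refill rate - which is exactly the protection an unthrottled endpoint lacks.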
11/5/2021 • 16 minutes, 18 seconds
Our Friend, The Commodore Amiga
Jay Miner was born in 1932 in Arizona. He got his Bachelor of Science at the University of California at Berkeley and helped design calculators that used the fancy new MOS chips, where he cut his teeth doing microprocessor design - which put him working on the MOS 6500 series chips. Atari decided to use those in the VCS gaming console, and so he ended up going to work for Atari. Things were fine under Bushnell, but once he was off to do Chuck E. Cheese and Warner was running Atari, things started to change. There he worked on chip designs that would go into the Atari 400 and 800 computers, which were finally released in 1979. But by then, Miner was gone, having been unable to get in step with the direction Atari was taking. So he floated around for a hot minute doing chip design for other companies until Larry Kaplan called. Kaplan had been at Atari and founded Activision in 1979. He had half a dozen games under his belt by then, but was ready for something different by 1982. He and Doug Neubauer saw that Nintendo was still using the MOS 6502 core, although now as a Ricoh 2A03. They knew they could do better. Miner’s employer didn’t want in on it, so they struck out on their own. Together they started a company called Hi-Toro, which they quickly renamed to Amiga. They originally wanted to build a new game console based on the Motorola 68000 chips, which were falling in price. They’d seen what Apple could do with the MOS 6502 chips and what Tandy did with the Z-80. These new chips were faster and had more options. Everyone knew Apple was working on the Lisa using the chips, and they were slowly coming down in price. They pulled in $6 million in funding and started to build a game console, codenamed Lorraine. But to get cash flow, they worked on joysticks and various input devices for other gaming platforms. But development was expensive, and they were burning through cash.
So they went to Atari and signed a contract giving Atari exclusive access to the chips they were creating. And of course, then came the video game crash of 1983. Amazing timing. That created a shakeup around the industry. Jack Tramiel was out at Commodore, the company he had built into a calculator maker at the dawn of MOS chip technology. And Tramiel bought Atari from Warner. The console Amiga was supposed to give Atari wasn’t done yet. Meanwhile, Tramiel had cut most of the Atari team and was bringing in his trusted people from Commodore, so, seeing they’d have to contend with a titan like Tramiel, the team at Amiga went looking for investors. That’s when Commodore bought Amiga to become their new technical team, and next thing you know, Tramiel sued Commodore, and that dragged on from 1983 to 1987. Meanwhile, the nerds worked away. And by CES of 1984 they were able to show off the power of the graphics with a complex animation of a ball spinning and bouncing, with shadows rendered on the ball. Even if the OS wasn’t quite done yet, there was a buzz. By 1985, they announced The Amiga from Commodore - what we now know as the Amiga 1000. The computer was prone to crash and they had very little marketing behind them, but they were getting sales into the high thousands per month. Not only was Amiga competing with the rest of the computer industry, but they were competing with the PET and VIC-20, which Commodore was still selling. So they finally killed off those lines and created a strategy where they would produce a high-end machine and a low-end machine. These would become the Amiga 2000 and 500. Then the Amiga 3000 and 500 Plus, and finally the 4000 and 1200 lines. The original chips evolved into the ECS and then AGA chipsets, but after selling nearly 5,000,000 machines, they just couldn’t keep up with missteps from Commodore after Irving Gould ousted yet another CEO. But those Amiga machines.
They were powerful and some of the first machines that could truly crunch graphics and audio. And those higher-end markets responded with tooling built specifically for the Amiga. Artists like Andy Warhol flocked to the platform. We got LightWave, used on shows like Max Headroom. I can still remember that Money For Nothing video from Dire Straits. And who could forget Dev. The graphics might not have aged well, but they were cutting edge at the time. When I toured colleges in that era, nearly every art department had a lab of Amigas doing amazing things. And while artists like Calvin Harris might have started out on an Amiga, many slowly moved to the Mac over the ensuing years. Commodore had emerged from a race to the bottom in price and bought themselves a few years in the wake of Jack Tramiel’s exit. But the platform wars were raging, with Microsoft DOS and then Windows rising out of the ashes of the IBM PC, and IBM-compatible clone makers were standardizing. Yet Amiga stuck with the Motorola chips, even as Apple was first in line to buy them from the assembly line. Amiga had designed many of their own chips and couldn’t compete with the clone makers at the lower end of the market or the Mac at the higher end. Nor the specialty systems running variants of Unix that were also on the rise. And while the platform had promised to sell a lot of games, the sales were a fourth or less of those on the other platforms, and so game makers slowly stopped porting to the Amiga. They even tried to build early set-top machines, like the CDTV model, which they thought would help them merge the coming set-top television control market and the game market using CD-based games. They saw MPEG coming but just couldn’t cash in on the market. We were entering an era of computing where it was becoming clear that the platform that could attract the most software titles would be the most popular, despite the great chipsets. The operating system had started slow.
Amiga had a preemptive multitasking kernel, and the first version looked like a DOS windowing screen when it showed up in 1985. Unlike the Mac or Windows 1, it had a blue background with orange interspersed. It wasn’t awesome, but it did the trick for a bit. Then Workbench 2 was released for the Amiga 3000. They didn’t have a lot of APIs, so developers were often having to write their own tools where other operating systems gave them APIs. It was far more object-oriented than many of its competitors at the time, though, and even gave support for multiple languages and hypertext schemes and browsers. Workbench 3 came in 1992, along with the A4000. There were some spiffy updates, but by then there were fewer and fewer people working on the project. And the tech debt piled up. Like a lack of memory protection in the Exec kernel, which meant any old task could crash the operating system. By then, Miner was long gone. He had again clashed with management at the company he founded, which had been purchased. Without the technical geniuses around - as happens with many companies when the founders move on - they seemed almost listless. They famously only built features people asked for. Unlike Apple, who guided the industry. Miner passed away in 1994, the same year Commodore went bankrupt. The Amiga brand was bought and sold to a number of organizations, but nothing more ever became of them. Having defeated Amiga, the Tramiel family sold off Atari in 1996 as well. The age of game consoles by American firms would be over until Microsoft released the Xbox in 2001. IBM had pivoted out of computers, and the web, which had been created in 1989, was on the way in full force by then. The era of hacking computers together was officially over.
10/28/2021 • 13 minutes, 32 seconds
All About Amdahl
Gene Amdahl grew up in South Dakota and, as with many during the early days of computing, went into the Navy during World War II. He got his degree from South Dakota State in 1948 and went on to the University of Wisconsin-Madison for his PhD, where he got the bug for computers in 1952, joining the ranks of IBM that year. At IBM he worked on the iconic 704 and then the 7030, but found it too bureaucratic. And yet he came back to become the Chief Architect of the IBM S/360 project. They pushed the boundaries of what was possible with transistorized computing, and along the way, Amdahl gave us Amdahl’s Law, an important aspect of parallel computing: the speedup from splitting a task across multiple CPUs is capped by the portion of the task that has to run serially. Think of it like the law of diminishing returns applied to processing. Contrast this with Fred Brooks’ Brooks’ Law - which says that adding engineers to a project doesn’t make it go faster by the same increment, and can even cause the project to take more time. As with Seymour Cray, Amdahl had ideas for supercomputers and left IBM again in 1970 when they didn’t want to pursue them - ironically just a few years after Thomas Watson Jr. admitted that just 34 people at CDC had kicked IBM out of their leadership position in the market. First he needed to be able to build a computer, then move into supercomputers. Fully transistorized computing had somewhat cleared the playing field. So he developed the Amdahl 470V/6 - more reliable, plug-compatible, and cheaper than the IBM S/370. He also used virtual machine technology so customers could simulate a 370 and run existing workloads cheaper. The first went to NASA and the second to the University of Michigan. During the rise of transistorized computing they just kept selling more and more machines. The company grew fast, taking nearly a quarter of the market. As we saw in the CDC episode, the IBM antitrust case was again giving a boon to other companies.
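That diminishing-returns point is easy to state as a formula: if a fraction p of a task can run in parallel on n processors and the rest stays serial, the speedup is 1 / ((1 - p) + p / n). A quick sketch (the function name is mine):

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's Law: overall speedup when a fraction p of a task is
    parallelized across n processors and (1 - p) remains serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# Even with 95% of the work parallelizable, the serial 5% caps the gain:
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
# The speedup approaches 1 / 0.05 = 20x no matter how many CPUs we add.
```

Which is why Amdahl, like Cray, kept pushing for one fast CPU per workload rather than ever-wider arrays of slower ones.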
Amdahl was able to leverage the fact that IBM software was getting unbundled from the hardware as a big growth hack. As with Cray at the time, Amdahl wanted to keep to one CPU per workload and developed chips and electronics with Fujitsu to enable doing so. By the end of the 70s they had grown to 6,000 employees on the back of a billion dollars in sales. And having built a bureaucratic organization like the one he had just left, he left his namesake company much as Seymour Cray had left CDC after helping build it (and would later leave Cray to start yet another Cray). Next came Trilogy Systems, which failed shortly after an IPO. I guess we can’t always bet on the name. Then Andor International. Then Commercial Data Servers, now a part of Xbridge Systems. Meanwhile, the 1980s weren’t kind to the company with his name on the masthead. The rise of Unix and first minicomputers, then standard servers, meant people were building all kinds of new devices. Amdahl started selling servers, given the new smaller and pluggable form factors. They sold storage. They sold software to make software, like IDEs. The rapid proliferation of networking and open standards let them sell networking products. Fujitsu ended up growing faster, and with Gene Amdahl gone, in the face of mounting competition with IBM, Amdahl tried to merge with Storage Technology Corporation, or StorageTek as it came to be known. CDC had pushed some of its technology to StorageTek during its demise, and StorageTek, in the face of this new competition, ended up filing Chapter 11 and getting picked up by Sun for just over $4 billion. But Amdahl was hemorrhaging money as we moved into the 90s. They sold off half the shares to Fujitsu, laid off over a third of their now 10,000-plus workforce, and by the year 2000 had been lapped by IBM on the high-end market. They sold off their software division, and Fujitsu acquired the rest of the shares.
Many of the customers then moved to the then-new IBM Z series servers that were coming out with 64-bit G3 and G4 chips - as opposed to the 31-bit chips in the Amdahl machines, which Fujitsu sold under the GlobalServer mainframe brand. Amdahl came out of the blue, or Big Blue. On the back of Gene Amdahl’s name and a good strategy to attack the S/360 market, they took 8% of the mainframe market from IBM at one point. But they sold to big customers and eventually disappeared as the market shifted to smaller machines and a more standardized lineup of chips. They were able to last for a while on the revenues they’d put together, but ultimately, without someone at the top with a vision for the future of the industry, they just couldn’t make it as a standalone company. The High Performance Computing server revenues steadily continue to rise at Fujitsu, though - hitting $1.3 billion in 2020. In fact, in a sign of the times, the 20 million Euro PRIMEHPC FX700 that’s going to the Minho Advanced Computing Centre in Portugal is a petascale computer built on an ARM plus x86 architecture. My how the times have changed. But as components get smaller, more precise, faster, and more mass-producible, we see the same types of issues with companies being too large to pivot quickly from the PC to the post-PC era. Although at this point, it’s doubtful they’ll have a generation’s worth of runway from a patron like Fujitsu to be able to continue in business. Or maybe a patron who sees the benefits downmarket from the new technology that emerges from projects like this and takes on what amounts to nation-building to pivot a company like that. Only time will tell.
10/24/2021 • 8 minutes, 47 seconds
The Dartmouth Time Sharing System and Time Sharing
DTSS, or the Dartmouth Time Sharing System, began at Dartmouth College in 1963. That was the same year Project MAC started at MIT, which is where we got Multics, which inspired Unix. Both contributed in their own way to the rise of the time sharing movement, an era in computing when people logged into computers over teletype devices and ran computing tasks - treating the large mainframes of the era like a utility. The notion had been kicking around since 1959, when John McCarthy at MIT started a project on an IBM 704 mainframe. And PLATO was doing something similar over at the University of Illinois, Champaign-Urbana. 1959 is also when John Kemeny and Thomas Kurtz at Dartmouth College bought a Librascope General Purpose computer, then being made in a partnership between the Royal Typewriter Company and Librascope - which would later be sold off to Lockheed Martin. Librascope had Stan Frankel - who had worked on both the Manhattan Project and the ENIAC. And he architected the LGP-30 in 1956, which ended up at Dartmouth. At this point, the computer looked like a desk with a built-in typewriter. Kurtz had four students who were trying to program in ALGOL 58. And they ended up writing a language called DOPE in the early 60s. But they wanted everyone on campus to have access to computing - and John McCarthy said, why not try this new time sharing concept. So they went to the National Science Foundation and got funding for a new computer, which, to the chagrin of the local IBM salesman, ended up being a GE-225. This baby was transistorized. It sported 10,000 transistors and double that number of diodes. It could do floating-point arithmetic, used a 20-bit word, and came with 186,000 magnetic cores for memory. It was so space-aged that one of the developers, Arnold Spielberg, was the father of one of the greatest film directors of all time. Likely straight out of those diodes. Dartmouth also picked up a front-end processor called a DATANET-30 from GE.
This only had an 18-bit word size but could handle 4k to 16k words and supported hooking up 128 terminals that could transfer data to and from the system at 300 bits a second using the Bell 103 modem. Security wasn’t a thing yet, so these terminals had direct memory access to the 225 - which was a 235 by the time they received the computer. They got to work in 1963, installing the equipment and writing the code. The DATANET-30 received commands from the terminals and routed them to the mainframe. It scanned the terminals 110 times per second, and when the return key was pressed on a terminal, it queued the command up to run, taking into account routine tasks the computer might be doing in the background. Keep in mind, the actual CPU was only doing one task at a time, but it seemed like it was multi-tasking! Another aspect of democratizing computing across campus was writing a language that was more approachable than a language like ALGOL. And so they released BASIC in 1964, picking up where DOPE left off, and picking up a more marketable name. Here we saw a dozen undergraduates develop a language that was as approachable as the name implies. Some of the students went to Phoenix, where the GE computers were built. And the powers at GE saw the future. After seeing what Dartmouth had done, GE ended up packaging the DATANET-30 and GE-235 as one machine, which they marketed as the GE-265 the next year. And here we got the first commercially viable time-sharing system, which started a movement. One so successful that GE decided to get out of making computers and focus instead on selling access to time sharing systems. By 1968 they had actually shot up to 40% of that market. Dartmouth picked up a GE Mark II in 1966 and got to work on DTSS version 2. Here, they added some of the concepts coming out of the Multics project that was part of Project MAC at MIT and built on previous experiences.
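That multi-tasking illusion - one CPU, many terminals, each seeing steady progress - can be sketched in miniature. This is a hypothetical round-robin illustration, not DTSS code; the job and step names are invented:

```python
from collections import deque

# Each "job" is a list of steps; the scheduler runs one step per turn,
# round-robin, so every terminal sees progress even though the single
# CPU only ever does one thing at a time.
def time_share(jobs: dict) -> list:
    log = []
    queue = deque(jobs.items())
    while queue:
        name, steps = queue.popleft()
        log.append((name, steps.pop(0)))   # run one time slice
        if steps:
            queue.append((name, steps))    # not finished: back of the line
    return log

trace = time_share({
    "terminal-1": ["compile", "link"],
    "terminal-2": ["print"],
    "terminal-3": ["sort", "merge", "save"],
})
print(trace)
```

Because the slices are short relative to human reaction time, every user at a teletype felt like they had the machine to themselves - which was the whole pitch of time sharing.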
They added pipes and communication files to promote inter-process communication - thus getting closer to the multi-user conferencing like what was being done on PLATO with Notes. Things got more efficient, and they could handle more and more concurrent sessions. This is when they went from just wanting to offer computing as a basic right on campus to opening up to schools in the area. Nearby Hanover High School started first, and by 1967 they had over a dozen schools. Using further grants from the NSF, they added another dozen schools to what by then they were calling the Kiewit Network. Then they added other smaller colleges, and by 1971 they supported a whopping 30,000 users. And by 1973 they supported leased-line connections all the way to Ohio, Michigan, New York, and even Montreal. The system continued on in one form or another, allowing students to code in FORTRAN, COBOL, LISP, and yes… BASIC. It became less of a thing as personal computers started to show up here and there. But BASIC didn’t. Every computer needed a BASIC. But people still liked to connect on the system and share information. At least, until the project was finally shut down in 1999. Turns out we didn’t need time sharing once the Internet came along. Following the early work done by pioneers, companies like Tymshare and CompuServe were born. Tymshare was started by two of the GE team, Thomas O’Rourke and David Schmidt. They ran on SDS hardware and by 1970 had over 100 people, focused on time sharing with their Tymnet system and spreading into Europe by the mid-70s, selling time on their systems until the cost of personal computing caught up and they were acquired by McDonnell Douglas in 1984. CompuServe began on a PDP-10 and started similarly, but by the time they were acquired by H&R Block they had successfully pivoted into a dial-up online services company, and over time focused on selling access to the Internet.
And they survived through to an era when they migrated their own proprietary tooling to HTML in the late 90s - although they were eventually merged into AOL and are now a part of Verizon Media. So the pivot bought them an extra decade or so. Time sharing and BASIC proliferated across the country and then the world from Dartmouth. Much of this - and a lot of personal stories from the people involved - can be found in Dr. Joy Rankin’s “A People’s History of Computing in the United States.” Published in 2018, it’s a fantastic read that digs in deep on the ways that many of these systems evolved. There are other works, but she does a phenomenal job tying events into one another. One consistent point across her book is around societal impact. These pioneers democratized access to computing. Many of those who built businesses around time sharing missed the rapidly falling price of chips and the ready access to personal computers that were coming. They also missed that BASIC would be monetized by companies like Microsoft. But they brought computing to high schools in the area, established blueprints for teaching that are used through to this day, and - as Grace Hopper did a generation before - made us think of even more ways to make programming accessible to a new generation with BASIC. One other author of note here is John Kemeny. His book “Man and the Computer” is a must-read. He couldn’t have known personal computing was coming - but he was far more prophetic than not about cloud operations, as we get back to a time sharing-esque model of computing. And we do owe him, Kurtz, and everyone else involved a huge debt for their work. Many others pushed the boundaries of what was possible with computers. They pushed the boundaries of what was possible with accessibility. And now we have ubiquity. So when we see something complicated - something that doesn’t seem all that approachable - maybe we should just wonder if, by some stretch, we can make it a bit more BASIC.
Like they did.
10/14/2021 • 12 minutes, 1 second
eBay, Pez, and Immigration
We talk about a lot of immigrants in this podcast. There's the Hungarian mathematicians and scientists that helped usher in the nuclear age and were pivotal in the early days of computing. There are the Germans who found a safe haven in the US following World War II. There are a number of Jewish immigrants who fled persecution, like Jack Tramiel - a Holocaust survivor who founded Commodore and later took the helm at Atari. An Wang immigrated from China to attend Harvard and stayed. And the list goes on and on. Georges Doriot, the father of venture capital, came to the US from France in 1921, also to go to Harvard. We could even go back further and look at great thinkers like Nikola Tesla, who emigrated from the Austro-Hungarian Empire. And then there's the fact that many Americans, and most of the greats in computer science, are immigrants if we go a generation or four back. Pierre Omidyar's parents were Iranian. They moved to Paris so his mom could get a doctorate in linguistics at the famous Sorbonne. While in Paris, his dad became a surgeon, and they had a son. They didn't move to the US to flee oppression but found opportunity in the new land, with his dad becoming a urologist at Johns Hopkins. Omidyar learned to program in high school and got paid to do it at a whopping 6 bucks an hour. He would go on to Tufts, where he wrote shareware to manage memory on a Mac, and then the University of California, Berkeley before going to work on the MacDraw team at Apple. He started a pen-computing company, then a little e-commerce company called eShop, which Microsoft bought. And then he ended up at General Magic in 1994. We did a dedicated episode on them - but supporting developers at a day job let him have a little side hustle building these newish web page things. In 1995, his girlfriend, who would become his wife, wanted to auction off (and buy) Pez dispensers online. So Omidyar, who'd been experimenting with e-commerce since eShop, built a little auction site.
He called it AuctionWeb. But that was a little boring. The site ran under the domain of his consulting group, Echo Bay Technology Group - and since echobay.com was already taken, he'd registered the shorter eBay.com. The first sale was a broken laser printer he had laying around that he originally posted for a dollar and, after a week, went for $14.83. The site was hosted out of his house, and when people started using it he needed to upgrade his hosting plan. It was gonna cost 8 times the original $30. So he started to charge a nominal fee to those running auctions. More people continued to sell things and he had to hire his first employee, Chris Agarpao. Within just a year they were doing millions of dollars of business. And this is when they hired Jeffrey Skoll to be the president of the company. By the end of 1997 they'd already done 2 million auctions and took $6.7 million in venture capital from Benchmark Capital. More people, more weird stuff. But no guns, drugs, booze, Nazi paraphernalia, or legal documents. And nothing that was against the law. They were growing fast, and in 1998 brought in veteran executive Meg Whitman to be the CEO. She had been a VP of strategy at Disney, then the CEO of FTD, then a GM for Hasbro's Playskool division. By then, eBay was making $4.7 million a year with 30 employees. Then came Beanie Babies. And excellent management. They perfected the online auction model, with new vendors coming into their space all the time but never managing to unseat the giant. Over the years they made onboarding fast and secure. It took minutes to be able to sell, and the sellers are where the money is made, with a transaction fee charged per sale in addition to a nominal percentage of the transaction. Executives flowed in from Disney, Pepsi, GM, and anywhere else they were looking to expand.
Under Whitman's tenure they weathered the storm of the dot com bubble bursting, grew from 30 to 15,000 employees, took the company to an IPO, bought PayPal, bought StubHub, and scaled the company up to handle over $8 billion in revenue. The IPO made Omidyar a billionaire. John Donahoe replaced Whitman in 2008 when she decided to make a run at politics, working on Romney's and then McCain's campaigns. She then ran for governor of California and lost, and came back to the corporate world, taking on the CEO position at Hewlett-Packard. Under Donahoe they bought Skype, then sold it off. They bought part of Craigslist, then tried to develop a competing product. And finally they spun off PayPal, which is now a public entity in its own right. Over the years since, revenues have gone up and down, sometimes due to selling off companies like they did with PayPal and later with StubHub in 2019. They now sit at nearly $11 billion in revenues, over 13,000 employees, and are a mature business. There are still over 300,000 listings for Beanie Babies. And as a nod to the original inspiration, over 50,000 listings for the word Pez. Omidyar has done well, growing his fortune to what Forbes estimated to be just over $13 billion. Much of that he's pledged to give away during his lifetime, having joined Bill Gates and Warren Buffett's Giving Pledge. So far, he's given away well over a billion with a focus on education, governance, and citizen engagement. Oh, and this will come as no surprise: helping fund consumer and mobile access to the Internet. Much of this giving is funneled through the Omidyar Network. The US just evacuated over 65,000 Afghans following the collapse of that government. Many an oppressive government runs off the educated, those who are sometimes capable of the most impactful dissent. When some of the best and most highly skilled of an entire society leave, it creates a vacuum that furthers the collapse.
And yet they find a home in societies known for inclusion and opportunity, surrounded by inspiring stories of other immigrants who made a home and took advantage of opportunity - or whose children could. Those melting pots in the history of science are where diversity of people and disciplines combines to make society better for everyone. Even in the places they left behind. Anyone who's been to Hungary or Poland or Germany - places where people once fled - can see it in the street every time people touch a mobile device and are allowed to be whomever they want to be. Thank you to the immigrants, past and future, for joining us to create a better world. I look forward to welcoming the next wave with open arms.
10/7/2021 • 9 minutes, 46 seconds
Ross Perot For President
Ross Perot built two powerhouse companies and changed the way politicians communicate with their constituents. Perot was an Eagle Scout who went on to join the US Naval Academy in 1949, and served in the Navy until the late 1950s. He then joined the IBM sales organization and one year ended up meeting his quota in the second week of the year. He had all kinds of ideas for new things to do and sell, but no one was interested. So he left and formed a new company called Electronic Data Systems, or EDS, in 1962. You see, these IBM mainframes weren't being used for time sharing, so most of the time they were just sitting idle. He could sell the unused time from one company to another. Perot learned from the best. As with IBM, he maintained a strict dress code. Suits, no facial hair, and a high and tight crew cut, as you'd find him still sporting years after his Navy days. And over time they figured out many of these companies didn't have anyone capable of running these machines in the first place, so they could also step in and become a technology outsourcer, doing maintenance and servicing machines. Not only that, but they were perfectly situated to help process all the data from the new Medicare and Medicaid programs that were just starting up. States had a lot of new paperwork to process and that meant computers. In 1966 he hired Morton Meyerson out of Bell Helicopter; Meyerson would become the president of EDS and effectively created the outsourcing concept in computing, before leaving to take a series of executive roles at other organizations, including CTO at General Motors in the 1980s, before retiring. EDS went public in 1968. Perot had taken $1,000 in seed money from his wife Margot to start the company, and his stake was now worth $350 million, which would rise sharply in the ensuing years as the company grew. By the 1970s they were practically printing cash.
They were the biggest insurance data provider and added credit unions, then financial markets, and were perfectly positioned to help build the data networks that ATMs and point of sale systems would use. By the start of 1980 they were sitting on a quarter billion dollars in revenues and 8,000 employees. They continued to expand into new industries with more transactional needs, adding airlines and travel. Perot sold EDS in 1984 to General Motors for $2.5 billion and got $700 million personally. Meyerson stayed on to run the company, and by 1990 their revenues topped $5 billion and they neared 50,000 employees. Perot just couldn't be done with business. He was good at it. So in 1988 he started another firm, Perot Systems. The company grew quickly. Perot knew how to sell, how to build sales teams, and how to listen to customers and build services products they wanted. Perot again looked for an effective leader and tapped Meyerson yet again, who became the CEO of Perot Systems from 1992 to 1998, when Perot's son, Ross Jr., took over the company. In 2008, EDS and their 170,000 employees were sold to Hewlett-Packard for $13.9 billion, and in 2009 Perot Systems was sold to Dell for $3.9 billion. Keep in mind that Morton Meyerson was a mentor to Michael Dell. When they were sold, Perot Systems had 23,000 employees and $2.8 billion in revenues. That's roughly a 1.4x multiple of revenues, which isn't as good as the roughly 2x multiple Perot got off EDS - but none too shabby given that by then multiples were down for outsourcers. Based on his work and that of others, they'd built two companies worth nearly $20 billion before 2010, employing nearly 200,000 people. Along the way, Perot had some interesting impacts other than just building so many jobs for so many humans. He passed on an opportunity to invest in a little company called Microsoft.
So when Steve Jobs left Apple and looked for investors, he jumped on board, pumping $20 million into NeXT Computer and getting a nice exit when the company went to Apple for nearly half a billion. Perot was philanthropic. He helped a lot of people coming home from various armed services in his lifetime. He was good to those he loved. He gave $10 million to have his friend Morton Meyerson's name put on the Dallas Symphony Orchestra's Symphony Center. And he was interested in no-BS politics. Yet politics had been increasingly polarized since Nixon. So Perot ran for president of the US in 1992, against George Bush and Bill Clinton. He didn't win, but he flooded the airwaves with common sense arguments about government inefficiency and a declining market for doing business. He showed computer graphics with all the charts and graphs you can imagine. And while he didn't get even one vote in the electoral college, he did manage to get 19 percent of the popular vote. His message was one of populism. Take the country back, stop deficit spending, run the government just like he ran his companies - a message that persists with various wings of especially the Republican Party to this day. Especially in Perot's home state of Texas. He didn't win, but he effectively helped define the Contract with America that Newt Gingrich and the 90s era of oversized-suit-jacket Republicans used as a strategy. He argued for things to help the common people - not politicians. Ironically, those that took much of his content actually did just the opposite, slowing down the political machine by polarizing the public. And allowing deficit spending to increase on their watch. He ran again in 1996 but this time got far fewer votes and didn't end up running for office again. He had a similar impact on IBM. Around 30 years after leaving the company, his success in services was one of the many inspirations for IBM pivoting into services as well.
By then the services industry was big enough for plenty of companies to thrive and while sales could be competitive they all did well as personal computing put devices on desks across the world and those devices needed support. Perot died in 2019, one of the couple hundred richest people in the US. Navy Lieutenant. Founder. Philanthropist. Texan. Father. Husband. His impact on the technology industry was primarily around seeing waste. Wasted computing time. Wasted staffing where more efficient outsourcing paradigms were possible. He inspired massive shifts in the industry that persist to this day.
9/30/2021 • 10 minutes, 56 seconds
The Osborne Effect
The Osborne Effect isn't an episode about Spider-Man that covers turning green or orange and throwing bombs off little hoverboards. Instead it's about the impact of the Osborne 1 computer on the history of computers. Although many might find discussing the Green Goblin or Hobgoblin much more interesting. The Osborne 1 has an important place in the history of computing because when it was released in 1981, it was the first portable computer that found commercial success. Before the Osborne, there were portable teletype machines for sure, but computers were just starting to get small enough that a fully functional machine could be taken on an airplane. It ran version 2.2 of the CP/M operating system and came with a pretty substantial bundle of software. Keep in mind, there weren't internal hard drives in machines like this yet; instead CP/M came on a set of floppies. It came with MBASIC from Microsoft, dBASE II from Ashton-Tate, the WordStar word processor, SuperCalc for spreadsheets, the Grammatik grammar checker, the Adventure game, early ledger tools from Peachtree Software, and tons of other software. By bundling so many titles, they created a climate where other vendors did the same thing, like Kaypro. After all, nothing breeds competitors like the commercial success of a given vendor. The Osborne predated flat panel displays, so it had a built-in CRT screen. This, along with the power supply and the heavy case, meant it weighed almost 25 pounds and came in at just shy of $1,800. Imagine two disk drives with a 5 inch screen in the middle. The keyboard, complete with a full 10-key pad, was built into a cover that could be pulled off and used to interface with the computer. The whole thing could fit under a seat on an airplane. Airplane seats were quite a bit larger back then than they are today! We think of this as a luggable rather than a portable because of that and because computers didn't have batteries yet. Instead it pulled up to 37 watts of power.
All that in a 20 inch wide case that stood 9 inches tall. The two people most commonly associated with the Osborne are Adam Osborne and Lee Felsenstein. Osborne got his PhD from the University of Delaware in 1968 and went to work in chemicals before he moved to the Bay Area, started writing about computers, and founded a company called Osborne and Associates to publish computer books. He sold that to McGraw-Hill in 1979. By then he'd been hanging around the Homebrew Computer Club for a few years, and there were some pretty wild ideas floating around. He saw Jobs and Wozniak demo the Apple I and watched their rise. Founders and engineers from Cromemco, IMSAI, Tiny BASIC, and Atari were also involved there - mostly before any of those products were built. So with the money from McGraw-Hill and sales of some of his books like An Introduction To Microcomputers, he set about thinking through what he could build. Lee Felsenstein was another guy from that group who'd gotten his degree in Computer Science at Berkeley before co-creating Community Memory, a project to build an early bulletin board system on top of an SDS 940 timesharing mainframe with links to terminals like a Teletype Model 33 sitting at Leopold's Records in Berkeley. That had started up back in 1973 when Doug Engelbart donated his machine from The Mother of All Demos, and eventually moved to minicomputers as those became more available. Having seen the world go from a mainframe the size of a few refrigerators to minicomputers and then to early microcomputers like the Altair, when a hardware hacker like Felsenstein paired up with someone with a little seed money like Osborne, magic was bound to happen. The design was similar to the NoteTaker that Alan Kay had built at Xerox in the 70s - but hacked together from parts they could find. Like 5.25 inch Fujitsu floppy drives.
They made 10 prototypes with metal cases and quickly moved to injection molded plastic cases, taking them to the 1981 West Coast Computer Faire and getting a ton of interest immediately. Some thought the screen was a bit too small, but at the time the price was justified by the software bundle alone. By the end of 1981 they'd had months where they did a million dollars in sales, and they fired up the assembly line. People bought modems to hook to the RS-232 compatible serial port and printers to hook to the parallel port. Even external displays. Sales were great. They were selling over 10,000 computers a month and Osborne was lining up more software vendors, offering stock in the Osborne Computer Corporation. By 1983 they were preparing to go public and developing a new line of computers, one of which was the Osborne Executive. That machine would come with more memory, a slightly larger screen, an expansion slot, and of course more software using sweetheart licensing deals that accompanied stock in the company to keep the per-unit cost down. He also announced the Vixen - same chipset but lighter and cheaper. This created a problem, though, one we now call the Osborne Effect. People didn't want the Osborne 1 any more. Seeing something new was on the way, people cancelled their orders to wait for the Executive. Sales disappeared almost overnight. At the time, computer dealers pushed a lot of hardware, and the dealers didn't want to be stuck with stock of an outdated model. Revenue disappeared and this came at a terrible time. The market was changing. IBM showed up with a PC, Apple had the Lisa and was starting to talk about the Mac. Kaypro had come along as a fierce competitor. Other companies had clued in on the software bundling idea. The Compaq portable wasn't far away. The company ended up cancelling the IPO and instead filing for bankruptcy.
They tried to raise money to build a luggable or portable IBM clone - and if they had done so, maybe they'd be what Compaq is today: a part of HP. The Osborne 1 was cannibalized by the Osborne Executive that never actually shipped. Other companies would learn the same lesson as the Osborne Effect throughout history. And yet the Osborne opened our minds to this weird idea of having machines we could take with us on airplanes. Even if they were a bit heavy and had pretty small screens. And while the timing of announcements is only one aspect of the downfall of the company, the Osborne Effect is a good reminder to be deliberate about how we talk about future products. That's especially true for hardware, but in software we also have to be careful not to sell features that don't exist yet.
9/26/2021 • 9 minutes, 43 seconds
Chess Throughout The History Of Computers
Chess is a game that came out of 7th century India, originally called chaturanga. Its rules were refined over time, and it spread to the Persians from there. It then followed the Moorish conquerors from Northern Africa to Spain and from there spread through Europe. It also spread up into Russia and across the Silk Road to China. It's had many rule variations over the centuries but few since computers learned to play the game. Thus, computers learning chess is a pivotal time in the history of the game. Part of chess is thinking through every possible move on the board and planning a strategy. Based on the move of each player, we can review the board, compare the moves to known strategies, and base our next move on either blocking the strategy of our opponent or carrying out a strategy of our own to get a king into checkmate. An important moment in the history of computers is when computers got to the point that they could beat a chess grandmaster. That story goes back to an inspiration from the 1760s, when Wolfgang von Kempelen built a machine called The Turk to impress Austrian Empress Maria Theresa. The Turk was a mechanical chess playing robot with a Turkish head in Ottoman robes that moved pieces. Inside, The Turk was a maze of cogs and wheels that moved the pieces during play. It travelled through Europe and later the young United States, besting the great Napoleon Bonaparte and Benjamin Franklin along the way. It had many owners and they all kept the secret of the Turk. Countless thinkers wrote theories about how it worked, including Edgar Allan Poe. But eventually it was consumed by fire and the last owner told the secret. There had been a person in the box moving the pieces the whole time. All those moving parts were an illusion. Still, in 1868 a knockoff of a knockoff called Ajeeb was built by a cabinet maker named Charles Hooper. Again, people like Theodore Roosevelt and Harry Houdini were bested, along with thousands of onlookers.
Charles Gumpel built another in 1876 - this time going from a person hiding in a box to using a remote control. These machines inspired people to think about what was possible. And one of those people was Leonardo Torres y Quevedo, who built a board that had electromagnets move pieces and light bulbs to let you know when the king was in check or mate. Like all good computer games it also had sound. He started the project in 1910, and by 1914 it could play a king and rook endgame - a game where there are two kings and a rook, and the party with the rook tries to get the other king into checkmate. At the time even a simplified set of instructions was revolutionary, and he showed his invention off in Paris at a conference attended by other notable thinkers, including Norbert Wiener, who later described how minimax search could be used to play chess in his book Cybernetics. Quevedo had also built an analytical machine based on Babbage's works in 1920, adding electromagnets for memory, and would continue building mechanical and analog calculating machines throughout his career. Mikhail Botvinnik was 9 at that point, and the Russian revolution wound down as the Soviet Union was founded following the fall of the Romanovs. He would become the first Russian Grandmaster in 1950, in the early days of the Cold War. That was the same year Claude Shannon wrote his seminal work, "Programming a Computer for Playing Chess." The next year Alan Turing wrote a chess program, though he sadly never got to see it run on the Ferranti Mark I before his death. The prize for actually playing a game would go to Paul Stein and Mark Wells in 1956, working on the MANIAC. Due to the capacity of computers at the time, the board was smaller, but the computer beat an actual human. And the Russians were really into chess in the years that followed the crowning of their first grandmaster. In fact it became a sign of the superior Communist politic.
Botvinnik also happened to be interested in electronics, and went to school in Leningrad University's Mathematics Department. He wanted to teach computers to play a full game of chess. He focused on selective searches, which never got too far, as the Soviet machines of the era weren't that powerful. Still, a program on the BESM managed to play a full game by 1957. Meanwhile John McCarthy at MIT introduced the idea of the alpha-beta search algorithm to minimize the number of nodes to be traversed in a search, and he and Alan Kotok shipped A Chess Playing Program for the IBM 7090 Computer, which would be updated by Richard Greenblatt when moving from the IBM mainframes to a DEC PDP-6 in 1965, as a side project for his work on Project MAC while at MIT. Here we see two things happening. One, we were building better and better search algorithms to allow computers to think more moves ahead in smarter ways. The other was that computers themselves were getting better. Faster, certainly, but also with more space to work with in memory, and with the move to a PDP, truly interactive rather than batch processed. Mac Hack VI, as Greenblatt's program would eventually be called, added transposition tables - caching positions that had already been evaluated so they didn't need to be searched again. He tuned the algorithms, doing what we would call machine learning today, and in 1967 it became the first computer program to defeat a person at the tournament level and get a chess rating. For his work, Greenblatt would become an honorary member of the US Chess Federation. By 1970 there were enough computers playing chess to have the North American Computer Chess Championships, and colleges around the world started holding competitions. By 1971 Ken Thompson of Bell Labs, in a sign of the times, wrote a computer chess game for Unix. And within just 5 years we got the first chess game for the personal computer, called Microchess. From there computers got incrementally better at playing chess.
Computer games that played chess shipped to regular humans: dedicated physical games, cheap little electronic knockoffs. By the 80s regular old computers could evaluate thousands of moves. Ken Thompson kept at it, developing Belle from 1972 through 1983. He and others added move generators, special circuits, and dedicated memory for the transposition table, and refined the alpha-beta algorithm McCarthy had started, getting to the point where it could evaluate nearly 200,000 moves a second. He even got the computer to the rank of master, but the gains became much more incremental. And then IBM came to the party. Deep Blue began with researcher Feng-hsiung Hsu, as a project called ChipTest at Carnegie Mellon University. IBM Research asked Hsu and Thomas Anantharaman to complete a project they had started to build a computer program that could take out a world champion. He started with Thompson's Belle. But with IBM's backing he had all the memory and CPU power he could ask for. Arthur Hoane and Murray Campbell joined, and Jerry Brody from IBM led the team in a sprint towards taking their device, Deep Thought, to a match where reigning World Champion Garry Kasparov beat the machine in 1989. They went back to work and built Deep Blue, which beat Kasparov on their third attempt in 1997. Deep Blue was composed of 32 RS/6000s running 200 MHz chips, split across two racks, and running IBM AIX - with a whopping 11.38 gigaflops of speed. And chess is pretty much unbeatable today on an M1 MacBook Air, which comes pretty darn close to running at a teraflop. Chess gives us an unobstructed view of the emergence of computing in an almost linear fashion.
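Those raw speed numbers translate into search depth in a way that's easy to sketch. Assuming a branching factor of roughly 35 legal moves per chess position and about three minutes of thinking per tournament move - round illustrative figures, not measurements from Belle - a back-of-the-envelope calculation looks like this:

```python
import math

# Illustrative assumptions, not measured figures.
branching_factor = 35           # typical legal moves per chess position
positions_per_second = 200_000  # roughly Belle's evaluation rate
seconds_per_move = 180          # ~3 minutes of tournament thinking time

# Total positions we can afford to examine for one move: 36 million.
nodes = positions_per_second * seconds_per_move

# Depth reachable by plain minimax: solve branching_factor ** d = nodes.
plain_depth = math.log(nodes, branching_factor)

# Well-ordered alpha-beta examines roughly the square root of the full
# tree, which works out to about twice the depth for the same budget.
pruned_depth = 2 * plain_depth

print(f"~{plain_depth:.1f} plies plain, ~{pruned_depth:.1f} with alpha-beta")
```

Under these assumptions, exhaustive minimax only reaches about five plies, while alpha-beta roughly doubles that - one way to see why the pruning refinements mattered as much as the hardware.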
From the human-powered codification of the electromechanical foundations of the industry, to the emergence of computational thinking with Shannon and cybernetics, to IBM mainframes at MIT when Artificial Intelligence was young, to Project MAC with Greenblatt, to Bell Labs with a front seat view of Unix, to college competitions, to racks of IBM servers. It even has little misdirections, like Konrad Zuse writing chess algorithms as he developed his early computers. And the mechanical Turk concept even lives on with Amazon's Mechanical Turk service, where we can hire people to do things that are still easier for humans than machines.
9/16/2021 • 12 minutes, 58 seconds
Sage: The Semi-Automatic Ground Environment Air Defense
The Soviet Union detonated their first nuclear bomb in 1949, releasing 20 kilotons worth of an explosion and sparking the nuclear arms race. A weather reconnaissance mission confirmed that the Soviets had done so, and Klaus Fuchs was arrested for espionage after passing blueprints for the Fat Man bomb that had been dropped on Japan. A common name in the podcast is Vannevar Bush. At this point he was the president of the Carnegie Institution and put together a panel to verify the findings. The Soviets were catching up to American science. Not only did they have a bomb but they also had new aircraft that were capable of dropping one. People built bomb shelters, schools ran drills to teach students how to survive a nuclear blast, and within a few years we'd moved on to the hydrogen bomb. And so the world lived in fear of nuclear fallout. Radar had come along during World War II and we'd developed Ground Controlled Intercept, an early radar network. But that wouldn't be enough to protect against this new threat. If one of these Soviet bombers, like the Tupolev Tu-16 "Badger," were to come into American airspace, the prevailing thought was that we needed to shoot it down before the payload could be delivered. The Department of Defense started simulating what a nuclear war would look like. And they asked the Air Force to develop an air defense system. Given the great work done at MIT, much under the careful eye of Vannevar Bush, they reached out to George Valley, a professor in the Physics Department who had studied nuclear weapons. He also sat on the Air Force Scientific Advisory Board, toured some of the existing sites, and took a survey of US assets. He sent his findings and they eventually made their way to General Vandenberg, who assigned General Fairchild to assemble a committee which would become the Valley Committee - or, more officially, the Air Defense Systems Engineering Committee, or ADSEC.
ADSEC dug in deeper and decided that we needed a large number of radar stations with a computer that could aggregate and then analyze data to detect enemy aircraft in real time. John Harrington had worked out how to convert radar readings into code and send them over telephone lines. They just needed a computer that could crunch the data as it was received. And yet none of the computer companies at the time were able to do this kind of real time operation. We were still in a batch processing mainframe world. Jay Forrester at MIT was working on the idea of real-time computing. Just one problem: the Servomechanisms Lab, where he was working on Project Whirlwind for the Navy for flight simulation, was over budget, and while they'd developed plenty of ground-breaking technology, they needed more funding. So Forrester was added to ADSEC and added the ability to process the digital radar information. By the end of 1950, the team was able to complete successful tests of sending radar information to Whirlwind over the phone lines. Now it was time to get funding, which was proposed at $2 million a year to fund a lab. Given that Valley and Forrester were both at MIT, they decided it should be at MIT. Here, they saw a way to help push the electronics industry forward, and the Air Force's Chief Scientist Louis Ridenour knew that wherever that lab was built would become the next scientific hotspot. The president of MIT at the time, James Killian, wasn't exactly jumping on the idea of MIT becoming an arm of the Department of Defense, so he put together 28 scientists to review the plans from ADSEC. That review became Project Charles, and it threw its support behind forming the new lab. They had measured twice and were ready to cut. There were already projects being run by the military during the arms buildup named after other places surrounding MIT, so they named this one Project Lincoln.
They appointed F Wheeler Loomis as the director with a mission to design a defense system. As with all big projects, they broke it up into five small projects, or divisions: things like digital computers, aircraft control and warning, and communications. A sixth did the business administration for the five technical divisions and another delivered technical services as needed. They grew to over 300 people by the end of 1951 and over 1,300 in 1952. They moved offsite and built a new campus - thus establishing Lincoln Lab. By the end of 1953 they had written a memo called A Proposal for Air Defense System Evolution: The Technical Phase. This called for a net of radars to be set up that would track the trajectory of all aircraft in US airspace and beyond. And to build communications to deploy the weapons that could destroy those aircraft. The Manhattan Project had brought in the nuclear age, but this project grew to be larger, as now we had to protect ourselves from the potential devastation we wrought. We were firmly in the Cold War, with America testing the hydrogen bomb in '52 and the Soviets doing so in '55. That was the same year the prototype of the AN/FSQ-7, built to replace Whirlwind, arrived. To protect the nation from these bombs they would need hundreds of radars, 24 centers to receive data, and 3 combat centers. They planned for direction centers to have a pair of AN/FSQ-7 computers - Whirlwind, evolved. That meant half a million lines of code, by far the most ambitious software ever written. Forrester had developed magnetic-core memory for Whirlwind. That doubled the speed of the computer. They hired IBM to build the AN/FSQ-7 computers, and from there we started to see commercial applications as well when IBM added core memory to the 704 mainframe in 1955. Stalin was running labor camps and purges. An estimated nine million people died in Gulags or from hunger.
Chairman Mao visited Moscow in 1957, sparking the Great Leap Forward policy that saw 45 million people die. All in the name of building a utopian paradise. Americans were scared. And Stalin had been distrustful of computers for any applications beyond scientific computing for the arms race. By contrast, people like Ken Olsen from Lincoln Lab left to found Digital Equipment Corporation and sell modular mini-computers on the mass market, with DEC eventually rising to be the number two computing company in the world. The project also needed software, and so that was farmed out to RAND, who would have over 500 programmers work on it. And a special display to watch planes as they were flying, which began as a Stromberg-Carlson Charactron cathode ray tube. IBM got to work building the 24 FSQ-7s, with each coming in at a whopping 250 tons and nearly 50,000 vacuum tubes - and of course that magnetic-core memory. All this wasn’t just theoretical. Given the proximity, they deployed the first net of around a dozen radars around Cape Cod as a prototype. They ran dedicated phone lines from Cambridge and built the first direction center, equipping it with an interactive display console that showed an x for each object being tracked and added labels. Then Robert Everett came up with the idea of a light gun that could be used as a pointing device, along with a keyboard, to control the computers from a terminal. They tested the Cape Cod installation in 1953 and added long range radars in Maine and New York by the end of 1954, working out bugs as they went. The Suffolk County Airfield on Long Island was added so Strategic Air Command could start running exercises for response teams. By the end of 1955 they put the system to the test and it passed all requirements from the Air Force. The radars detected the aircraft and were able to then control manned antiaircraft operations. 
By 1957 they were adding logic and capacity to the system, having fine-tuned over a number of test runs until they got to a 100 percent interception rate. They were ready to build out the direction centers. The research and development phase was done - now it was time to produce an operational system. Western Electric built a network of radar and communication systems across Northern Canada that became known as the DEW Line, short for Distant Early Warning. They added increasingly complicated radar and layers of protection - like Buckminster Fuller joining for a bit to develop a fiberglass geodesic dome to protect the radars. They added radar to what looked like oil rigs around Texas, and experimented with radar on planes and ships and with how to connect those back to the main system. By the end of 1957 the system was ready to move into production and to integrate live weapons into the code and connections. This is where MIT called it done for their part of the program. The only problem was that when the Air Force looked around for companies willing to take on such a large project, no one could. So the MITRE Corporation was spun out of Lincoln Labs, pulling in people from a variety of other government contractors, and it continues on to this day working on national security, GPS, election integrity, and health care. They took the McChord airfield online as DC-12 in 1957, then Syracuse, New York in 1958, and started phasing in automated response. Andrews, Dobbins, Geiger Field, the Los Angeles Air Defense Sector, and others went online over the course of the next few years. The DEW Line went operational in 1962, extending from Iceland to the Aleutians. By 1963, NORAD had a Combined Operations Center where the war room became reality. Burroughs eventually won a contract to deploy new D825 computers to form a system called BUIC II, and with the rapid release of new solid state technology those got replaced with the Hughes AN/TSQ-51. 
With the rise of Airborne Warning and Control Systems (AWACS), the ground systems started to slowly get dismantled in 1980, being phased out completely in 1984, the year after WarGames was released. In WarGames, Matthew Broderick plays David Lightman, a young hacker who happens upon a game. One John von Neumann himself might have written, as he applied game theory to the nuclear threat. Lightman almost starts World War III when he tries to play Global Thermonuclear War. He raises the level of DEFCON and so inspires a generation of hackers who founded conferences like DEFCON and to this day war dial, or war drive, or war whatever. The US spent countless tax dollars on advancing technology in the buildup for World War II and the years after. The Manhattan Project, Project Whirlwind, SAGE, and countless others saw increasing expenditures. Kennedy continued the trend in 1961 when he started the process of putting humans on the moon. And the unpopularity of the Vietnam War, which US soldiers had been dying in since 1959, caused a rollback of spending. The legacy of these massive projects was huge spending to advance the sciences required to produce each. The need for the computers in SAGE and other critical infrastructure to withstand a nuclear war led to ARPANET, which over time evolved into the Internet. The subsequent privatization of these projects, the rapid advancement in making chips, and the drop in costs alongside frequent doublings of speed, as findings from each discipline found their way into others, then gave us personal computing and the modern era of PCs, then mobile devices. But it all goes back to projects like ENIAC, Whirlwind, and SAGE. Here, we can see generations of computing evolve with each project. I’m frequently asked what’s next in our field. It’s impossible to know exactly. But we can look to mega projects, many of which are transportation related - and we can look at grants from the NSF, DARPA, and many major universities. 
Many of these produce new standards so we can also watch for new RFCs from the IETF. But the coolest tech is probably classified, so ask again in a few years! And we can look to what inspires - sometimes that’s a perceived need, like thwarting nuclear war. Sometimes mapping human genomes isn’t a need until we need to rapidly develop a vaccine. And sometimes, well… sometimes it’s just returning to some sense of normalcy. Because we’re all about ready for that. That might mean not being afraid of nuclear war as a society any longer. Or not being afraid to leave our homes. Or whatever the world throws at us next.
9/9/2021 • 18 minutes, 10 seconds
IBM Pivots To Services In The 90s
IBM is the company with nine lives. They began out of the era of mechanical and electro-mechanical punch card computing. They helped bring the mainframe era to the commercial market. They played their part during World War II. They helped make the transistorized computer mainstream with the S/360. They helped bring the PC into the home. We’ve covered a number of lost decades - and moving into the 90s, IBM was in one. One that was largely created by an influx of revenues from the personal computer business. That revenue gave IBM a shot in the arm. But one that was temporary. By the early 90s the computer business was under assault by the clone makers. They had been out-maneuvered by Microsoft, and the writing was on the wall that Big Blue was in trouble. The CEO who presided during the fall of the hardware empire was John Akers. At the time, IBM had their fingers in every cookie jar. They were involved with instigating the Internet. They made mainframes. They made PCs. They made CPUs. They made printers. They provided services. How could they be in financial trouble? Because their core business, making computers, was becoming a commodity and quickly becoming obsolete. IBM loves to own an industry. But they didn’t own PCs any more. They never owned PCs in the home after the PC Jr flopped. And mainframes were quickly going out of style. John Akers had been a lifer at IBM and by then there were generations of mature culture and its byproduct, bureaucracy, to contend with. Akers simply couldn’t move the company fast enough. The answer was to get rid of John Akers and bring in a visionary. But the visionaries in the computing field didn’t want IBM. CEOs like John Sculley at Apple and Bill Gates at Microsoft turned them down. That’s when the name of someone at a big customer came up: Louis Gerstner. He had been the CEO of American Express and Nabisco. He had connections to IBM, with his brother having run the PC division for a time. 
And he was the first person brought in from the outside to run the company, by then over 80 years old. And the first of a wave of CEOs paid big money - commonplace today. Starting in 1993, he moved from an IBM incapable of making decisions because of competing visions to one where execution and simplification were key. He made few changes in the beginning. At the time, competitor CDC was being split up into smaller companies, and lines of business were being spun down as they faced huge financial losses. John Akers had let each division run itself - Gerstner saw the need for services given all this off-the-shelf tech being deployed in the 90s. The industry was standardizing, making it ripe for re-usable code that could run on standardized hardware but then be sold with a lot of services to customize it for each customer. In other words, it was time for IBM to become an integrator. One that could deliver a full stack of solutions. This meant keeping the company as one powerhouse rather than breaking it up. You see, buy IBM kit, have IBM supply a service, and then IBM could use that as a wedge to sell more and more automation services into the companies. Each aspect on its own wasn’t hugely profitable, but combined - much larger deal sizes. And given IBM’s piece of the Internet, it was time for e-commerce. Let that Gates kid have the operating system market and the clone makers have the personal computing market in their races to the bottom. He’d take the enterprise - where IBM was known and trusted and in many sectors loved. And he’d take what he called e-business, which we’d call eCommerce today. He brought in Irving Wladawsky-Berger and they spent six years pivoting one of the biggest companies in the world into this new strategy. The strategy also meant streamlining various operations. Each division previously had the autonomy to pick their own ad agency. He centralized with Ogilvy & Mather. One brand. One message. 
Unlike Akers, he didn’t have much loyalty to the old ways. Yes, OS/2 was made at IBM, but by the time Windows 3.11 shipped, IBM was outmaneuvered, and so one of his first moves was to stop development of OS/2 in 1994. They didn’t own the operating system market so they let it go. Cutting divisions meant there were a lot of people who didn’t fit in with the new IBM any longer. IBM had always hired people for life. Not any more. Over the course of his tenure over 100,000 people were laid off. According to Gerstner they’d grown lazy because performance didn’t really matter. And the high performers complained about the complacency. So those first two years came as a shock. But he managed to stop the hemorrhaging of cash and start the company back on a growth track. Let’s put this in perspective. His 9 years saw the company’s market cap nearly quintuple. This in a company founded in 1911, so by then more than 80 years old. Microsoft, Dell, and so many others grew as well. But a rising tide lifts all boats. Gerstner brought IBM back. But withdrew from categories that would take over the Internet. He was paid hundreds of millions of dollars for his work. There were innovative new products in his tenure. The Simon Personal Communicator in 1994. This was one of the earliest mobile devices. Batteries and cellular technology weren’t where they needed to be just yet, but it certainly represented a harbinger of things to come. IBM had introduced the PC Jr all the way back in 1983 and killed it off within two years. But they’d been selling into retail the whole time. So he killed that off, and by 2005 IBM pulled out of PCs entirely, selling the division off to Lenovo. A point I don’t think I’ve ever seen made is that Akers inherited a company embroiled in an anti-trust case. The Justice Department filed the case in 1969 and it ran until 1982, eating up thousands of hours of testimony across nearly a thousand witnesses. 
Akers took over in 1985, and by then IBM was putting clauses in every contract that allowed companies like Microsoft, Sierra Online, and everyone else involved with PCs to sell their software, services, and hardware to other vendors. This opened the door for the clone makers to take the market away after IBM had effectively built the ecosystem and standardized the hardware and form factors that would be used for decades. Unlike Akers, Gerstner inherited an IBM in turmoil - and yet with some of the brightest minds in the world. They had their fingers in everything from the emerging public Internet to mobile devices to mainframes to personal computers. He gave management bonuses when they did well and wasn’t afraid to cut divisions, which in his book he says only an outsider could do. This formalized into three “personal business commitments” that contributed to IBM strategies. He represented a shift not only at IBM but across the industry. The computer business didn’t require PhD CEOs as the previous generations had. Companies could manage the market and change cultures. Companies could focus on doing less and sell assets (like lines of business) off to raise cash to focus. Companies didn’t have to break up, as CDC had done - but instead could re-orient around a full stack of solutions for a unified enterprise. An enterprise that has been good to IBM and others who understand what they need ever since. The IBM turnaround out of yet another lost decade showed us options for large monolithic organizations that maybe previously thought different divisions had to run with more independence. Some should - not all. Most importantly though, the turnaround showed us that a culture can change. It’s one of the hardest things to do. Part of that was getting rid of the dress code and anti-alcohol policy. Part of that was performance-based comp. Part of that was to show leaders that consensus was slow and decisions needed to be made. 
Leaders couldn’t be perfect, but a fast decision was better than one that held up business. As with the turnaround after Apple’s lost decade, the turnaround was largely attributable to one powerful personality. Gerstner often shied away from the media. Yet he wrote a book about his experiences called Who Says Elephants Can’t Dance? Following his time at IBM he became the chairman of the private equity firm The Carlyle Group, where he helped grow them into a powerhouse in leveraged buyouts, bringing in Hertz, Kinder Morgan, Freescale Semiconductor, Nielsen Corporation, and so many others. One of the only personal tidbits you get about him in his book is that he really hates to lose. We’re all lucky he turned the company around, as since he got there IBM has filed more patents than any other company for 28 consecutive years. These help push the collective consciousness forward, from 2,300 AI patents to 3,000 cloud patents to 1,400 security patents to laser eye surgery to quantum computing and beyond. 150,000 patents in the storied history of the company. That’s a lot of work to bring computing into companies and increase productivity at scale. Not at the hardware level, with the constant downward pricing pressures - but at the software + services layer. That is the enduring legacy of the changes Gerstner made at IBM.
9/6/2021 • 13 minutes, 39 seconds
Spam Spam Spam!
Today's episode on spam is read by the illustrious Joel Rennich. Spam is irrelevant or inappropriate and unsolicited messages, usually sent to a large number of recipients through electronic means. And while we probably think of spam as something new today, it’s worth noting that the first documented piece of spam was sent in 1864 - through the telegraph. With the advent of new technologies like the fax machine and telephone, messages and unsolicited calls were quick to show up. Ray Tomlinson is widely accepted as the inventor of email, developing the first mail application in 1971 for the ARPANET. It took longer than one might expect to get abused, likely because it was mostly researchers and people from the military industrial research community. Then in 1978, Gary Thuerk at Digital Equipment Corporation decided to send out a message about the new VAX computer being released by Digital. At the time, there were 2,600 email accounts on ARPANET and his message found its way to 400 of them. That’s a little over 15% of the Internet at the time. Can you imagine sending a message to 15% of the Internet today? That would be nearly 600 million people. But it worked. Supposedly he closed $12 million in deals despite rampant complaints back to the Defense Department. But it was too late; the damage was done. He proved that unsolicited junk mail would be a way to sell products. Others caught on. Like Dave Rhodes, who popularized MAKE MONEY FAST chains in 1988. Maybe not a real name, but pyramid schemes probably go back to the pyramids so we might as well have them on the Internets. By 1993 unsolicited email was enough of an issue that we started calling it spam. That came from the Monty Python sketch where Vikings sit in a cafe and Spam is in everything on the menu. 
That Spam was in reference to canned meat made of pork, sugar, water, salt, potato starch, and sodium nitrate that was originally developed by Jay Hormel in 1937 and, due to how cheap and easy it was, found itself part of a cultural shift in America. Spam came out of Austin, Minnesota. Jay’s dad George incorporated Hormel in 1901 to process hogs and beef, and developed canned lunchmeat that evolved into what we think of as Spam today. It was spiced ham, thus Spam. During World War II, Spam would find its way to GIs fighting the war, and to England and the countries the war was being fought in. It was durable and could sit on a shelf for months. From there it ended up in school lunches, and after fishing sanctions on Japanese-Americans in Hawaii restricted the foods they could haul in, Spam found its way there, and some countries grew to rely on it due to displaced residents following the war. And yet, it remains a point of scorn in some cases. As the Monty Python sketch showed, Spam was ubiquitous, unavoidable, and repetitive. Same with spam in our email. We rely on email. We need it. Email was the first real killer app for the Internet. We communicate through it constantly. And yet sometimes we get gelatinous meat when the chime of a new message had us expecting that big deal. It’s just unavoidable. That’s why a repetitive poster on a list had his messages called spam, and the use just grew from there. Spam isn’t exclusive to email. Laurence Canter and Martha Siegel sent the first commercial Usenet spam, the “Green Card” posting, just after the NSF allowed commercial activities on the Internet. It was a simple Perl script to sell people on the idea of paying a fee to be enrolled in the green card lottery. They made over $100,000 and even went so far as to publish a book on guerrilla marketing on the Internet. Canter got disbarred for illegal advertising in 1997. 
Over the years new ways have come about to try and combat spam. RBLs, or Realtime Blackhole Lists - DNS blacklists that mark hosts as spam senders so mail from them can be blocked - emerged in 1996 from the Mail Abuse Prevention System, or MAPS. Developed by Dave Rand and Paul Vixie, the list of IP addresses helped for a bit. That is, until spammers realized they could just send from a different IP. Vixie also mentioned the idea of matching a sender claim to the mail server a message came from as a means of limiting spam, a concept that would later come up again and evolve into the Sender Policy Framework, or SPF for short. That’s around the same time Steve Linford founded Spamhaus to block anyone that knowingly spams or provides services to spammers. If you have a cable modem and try to set up an email server on it, you’ve probably had to first get them to unblock your address from their Don’t Route Or Peer list. The next year Mark Jeftovic created a tool called filter.plx to help filter out spam, and that project got picked up by Justin Mason, who uploaded his new filter to SourceForge in 2001. A filter he called SpamAssassin. Because ninjas are cooler than pirates. Paul Graham, the co-creator of Y Combinator (and author of a LISP-like programming language), wrote a paper he called “A Plan for Spam” in 2002. He proposed using a Bayesian filter, similar to what antivirus software vendors used, to combat spam. That would be embraced and is one of the more common methods still used to block spam. In the paper he goes into detail on how the scoring of various words would work, and the probability, compared against the rest of one’s email, that a message would get flagged as spam. That Bayesian filter would be added to SpamAssassin and others the next year. Dana Valerie Reese independently came up with the idea of matching sender claims, and she and Vixie sparked a conversation and the creation of the Anti-Spam Research Group in the IETF. 
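The word-scoring idea from “A Plan for Spam” can be sketched in a few lines. This is a toy illustration of the Bayesian approach, not Graham’s actual code; the tiny corpora, the function names, and the 0.01-0.99 clamp bounds are all assumptions for the example.

```python
from collections import Counter

# Made-up training corpora: each string is one message.
spam_corpus = ["free money now", "free viagra offer", "money back offer"]
ham_corpus = ["meeting notes attached", "lunch at noon", "project notes"]

def word_probs(spam, ham):
    """Estimate P(spam | word) for each word from tiny corpora."""
    spam_counts = Counter(w for msg in spam for w in msg.split())
    ham_counts = Counter(w for msg in ham for w in msg.split())
    probs = {}
    for w in set(spam_counts) | set(ham_counts):
        s = spam_counts[w] / len(spam)   # fraction of spam messages with w
        h = ham_counts[w] / len(ham)     # fraction of ham messages with w
        # Clamp so no single word is ever absolute proof either way.
        probs[w] = min(0.99, max(0.01, s / (s + h)))
    return probs

def spam_score(message, probs):
    """Naively combine per-word probabilities (independence assumption)."""
    words = [w for w in message.split() if w in probs]
    if not words:
        return 0.5  # no evidence either way
    p_spam, p_ham = 1.0, 1.0
    for w in words:
        p_spam *= probs[w]
        p_ham *= 1 - probs[w]
    return p_spam / (p_spam + p_ham)
```

With these corpora, a message like "free money offer" scores near 1 and "meeting notes" scores near 0 - the same intuition SpamAssassin adopted when it added Bayesian scoring.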
The European Parliament released the Directive on Privacy and Electronic Communications in 2002, criminalizing spam in the EU. Australia and Canada followed suit. 2003 also saw the first laws in the US regarding spam. The CAN-SPAM Act of 2003 was signed by President George W. Bush and allowed the FTC to regulate unsolicited commercial emails. Here we got requirements like honoring opt-outs from commercial messages, and it didn’t take long before the new law was used to prosecute spammers, with Nicholas Tombros getting the dubious honor of being the first spammer convicted. What was his spam selling? Porn. He got a $10,000 fine and six months of house arrest. Fighting spam with laws turned international. Christopher Pierson was charged with malicious communication after he sent hoax emails. And even though spammers were getting fined and put in jail all the time, the amount of spam continued to increase. We had pattern filters, Bayesian filters, and even the threat of legal action. The IETF Anti-Spam Research Group specifications were merged by Meng Weng Wong, and by 2006 W. Schlitt had joined the paper to form a new Internet standard called the Sender Policy Framework, which lives on in RFC 7208. There are a lot of moving parts, but at the heart of it, Simple Mail Transfer Protocol, or SMTP, allows sending mail from any connection over port 25 (or others if it’s SSL-enabled), and a message can pass with very little information - although the sender claim is a requirement. A common troubleshooting technique used to be simply telnetting into port 25 and sending a message from an address to a mailbox on a mail server. Theoretically one could take the MX record - the DNS record that lists the mail server that receives mail bound for a domain - and force all outgoing mail to match that. 
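That telnet session amounts to typing a handful of SMTP verbs by hand. Here’s a sketch that just builds the client-side commands of a minimal session; the hostnames and addresses are made up, and in a real session the server would interleave numbered replies (220, 250, 354, and so on) between these lines.

```python
def smtp_dialog(helo_host, sender, recipient, body):
    """Return the client commands for a minimal SMTP session."""
    return [
        f"HELO {helo_host}",        # identify the connecting host
        f"MAIL FROM:<{sender}>",    # the sender claim - trivially forged
        f"RCPT TO:<{recipient}>",   # the destination mailbox
        "DATA",                     # everything until "." is the message
        body,
        ".",                        # end of message
        "QUIT",
    ]

commands = smtp_dialog("example.test", "alice@example.test",
                       "bob@example.test", "Hello from the telnet era")
```

Note that nothing in the protocol itself verifies the MAIL FROM claim - which is exactly the gap SPF was designed to close.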
However, due to so much spam, some companies have dedicated outbound mail servers that are different from their MX record, and they block other outgoing mail, like people might send if they’re using personal mail at work. In order not to disrupt a lot of valid use cases for mail, SPF had administrators create TXT records in DNS that listed which servers could send mail on their behalf. Now a filter could check the header for the SMTP server of a given message and know when it didn’t match a server that was allowed to send mail. And so a large chunk of spam was blocked. Yet people still get spam for a variety of reasons. One is that new servers go up all the time just to send junk mail. Another is that email accounts get compromised and used to send mail. Another is that mail servers get compromised. We have filters and even Bayesian and more advanced forms of machine learning. Heck, sometimes we even sign up for a list by giving our email out when buying something from a reputable site or retail vendor. Spam accounts for over 90% of the total email traffic on the Internet. This is despite blacklists, SPF, and filters. And despite the laws and threats, spam continues. And it pays well. We mentioned Canter & Siegel. Shane Atkinson was sending 100 million emails per day in 2003. That doesn’t happen for free. Nathan Blecharczyk, a co-founder of Airbnb, paid his way through Harvard on the back of spam. Some spam sells legitimate products in illegitimate ways, as we saw with early IoT standard X10. Some is used to spread hate and disinformation, going back to Serdar Argic, known for denying the Armenian genocide through newsgroups in 1994. Long before Infowars existed. Peter Francis-Macrae sent spam to solicit buying domains he didn’t own. He was convicted after resorting to blackmail and threats. Jody Michael Smith sold replica watches and served almost a year in prison after he got caught. 
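A published SPF policy is just a TXT record along the lines of "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com -all". Here’s a minimal sketch of splitting one into its qualifier/mechanism pairs; the record and domain names are invented, and a real checker per RFC 7208 would also resolve the include and evaluate the connecting IP against each mechanism.

```python
def parse_spf(txt_record):
    """Split a v=spf1 TXT record into (qualifier, mechanism) pairs."""
    parts = txt_record.split()
    if parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    mechanisms = []
    for term in parts[1:]:
        qualifier = "+"              # "+" (pass) is the default qualifier
        if term[0] in "+-~?":        # -fail, ~softfail, ?neutral
            qualifier, term = term[0], term[1:]
        mechanisms.append((qualifier, term))
    return mechanisms
```

So the "-all" at the end of the example record reads as "fail anything not matched above" - the part that lets a filter reject mail from servers the domain never authorized.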
Some spam is sent to get hosts loaded with malware so they can be controlled, as happened with Peter Levashov, the Russian czar of the Kelihos botnet. Oleg Nikolaenko was arrested by the FBI in 2010 for spamming to get hosts into his Mega-D botnet. The Russians are good at this; they even registered the Russian Business Network as a website in 2006 to promote running an ISP for phishing, spam, and the Storm botnet. Maybe Flyman is connected to the Russian oligarchs and so continues to be allowed to operate under the radar. They remain one of the more prolific spammers. Much is sent by a small number of spammers. Khan C. Smith sent a quarter of the spam in the world until he got caught in 2001 and fined $25 million. Again, spam isn’t limited to just email. It showed up on Usenet in the early days. And AOL sued Chris “Rizler” Smith for over $5M for his spam on their network. Adam Guerbuez was fined over $800 million for spamming Facebook. And LinkedIn allows people to send me unsolicited messages if they pay extra - probably why Microsoft paid $26 billion for the social network. Spam has been with us since the telegraph; it isn’t going anywhere. But we can’t allow it to run unchecked. The legitimate organizations that use unsolicited messages to drive business help obfuscate the illegitimate acts where people are looking to steal identities or worse. Gary Thuerk opened a Pandora’s box that would have been opened even if he hadn’t done so. The rise of the commercial Internet and the co-opting of the emerging cyberspace as a place where privacy, and so anonymity, trump verification hit a global audience of people who are not equal. Inequality breeds crime. And so we continually have to rethink the answers to the question of sovereignty versus the common good. 
Think about that next time an IRS agent with a thick foreign accent calls asking for your social security number - and remember (if you’re old enough) that we used to show our social security cards to grocery store clerks when we wrote checks. Can you imagine?!?!
8/26/2021 • 11 minutes, 42 seconds
Do You Yahoo!?
The simple story of Yahoo! is that they were an Internet search company that came out of Stanford during the early days of the web. They weren’t the first nor the last. But they represent a defining moment in the rise of the web as we know it today, when there was enough content out there that there needed to be an easily searchable catalog of content. And that’s what Stanford PhD students David Filo and Jerry Yang built. As with many of those early companies, it began as a side project called “Jerry and David's Guide to the World Wide Web.” And it grew into a company that at one time rivaled any in the world. At the time there were other search engines, and they all started adding portal aspects to their sites, growing fast until the dot-com bubble burst. They slowly faded until being merged with another 90s giant, AOL, in 2017 to form Oath, which got renamed to Verizon Media in 2019 and then effectively sold to investment management firm Apollo Global Management in 2021. Those early years were wild. Yang moved to San Jose in the 70s from Taiwan, and earned a bachelors then a masters at Stanford - where he met David Filo in 1989. Filo is a Wisconsin kid who moved to Stanford and got his masters in 1990. The two went to Japan in 1992 on an exchange program and came home to work on their PhDs. That’s when they started surfing the web. Within two years, in 1994, they started their Internet directory. As it grew they hosted the database on Yang’s student computer called akebono and the search engine on konishiki, which was Filo’s. They renamed it to Yahoo, short for Yet Another Hierarchical Officious Oracle - after all, they maybe considered themselves yahoos at the time. And so Yahoo began life as akebono.stanford.edu/~yahoo. Word spread fast and they’d already had a million hits by the end of 1994. It was time to move out of Stanford. Marc Andreessen offered to let them move into Netscape. 
They bought a domain in 1995 and incorporated the company, raising $3,000,000 from Sequoia Capital. They tinkered with selling ads on the site to fund buying more servers, but there was a lot of businessing. They decided that they would bring in Tim Koogle (which ironically rhymes with Google) to be CEO, who brought in Jeff Mallett from Novell’s consumer division to be the COO. They were the suits and got revenues up to a million dollars. The idea of the college kids striking gold fueled the rise of other companies, and Yang and Filo became poster children. Applications from all over the world from others looking to make their mark started streaming in to Stanford - a trend that continues today. Yet another generation was about to flow into Silicon Valley. First the chip makers, then the PC hobbyists turned businesses, and now the web revolution. But at the core of the business were Koogle and Mallett, bringing in advertisers and investors. And the next year, needing more and more servers and employees to fuel further expansion, they went public, selling over two and a half million shares at $13 to raise nearly $34 million. That’s just one year after a gangbuster IPO from Netscape. The Internet was here. Revenues shot up to $20 million. A concept we repeatedly look at is the technological determinism that industries go through. At this point it’s easy to look in the rear view mirror and see change coming at us. First we document information - like Jerry and David building a directory. Then we move it to a database so we can connect that data. Thus, a search engine. Given that Yahoo! was a search engine, they were already on the Internet. But the next step in the deterministic application of modern technology is to replace human effort with increasingly sophisticated automation. You know, like applying basic natural language processing, classification, and polarity scoring algorithms to enrich the human experience. Yahoo! hired “surfers” to do these tasks. 
They curated the web. Yes, they added feeds for news, sports, finance, and created content. But their primary business model was to sell banner ads. And they pioneered the field. Banner ads mean people need to be on the site to see them. So adding weather, maps, shopping, classifieds, personal ads, and even celebrity chats were natural adjacencies given that mental model. Search itself was almost a competitor, sending people to other parts of the web where they weren’t making money off eyeballs. And they were pushing traffic to over 65 million pages worth of data a day. They weren’t the only ones. This was the portal era of search, and companies like Lycos, Excite, and InfoSeek were following the same model. They created local directories, and people and companies could customize the look and feel. Their first designer, David Shen, takes us through the user experience journey in his book Takeover! The Inside Story of the Yahoo Ad Revolution. They didn’t invent pay-per-click advertising but did help to make it common practice and proved that money could be made on this whole new weird Internet thing everyone was talking about. The first ad they sold was for MCI, and from there they were practically printing money. Every company wanted in on the action - and sales just kept going up. Bill Clinton gave them a spot in the Internet Village during his 1997 inauguration and they were for a time seemingly synonymous with the Internet. The Internet was growing fast. Cataloging the Internet and creating content for the Internet became a larger and larger manual task. As did selling ads, which was a manual transaction requiring a larger and larger sales force. As with other rising Internet properties, people dressed how they wanted, they’d stay up late building code or content and crash at the desk. They ran funny, cheeky ads with that yodel - becoming a brand that people knew and many equated with the Internet. We can thank San Francisco’s Black Rocket ad agency for that. They grew fast. 
The founders made several strategic acquisitions and gobbled up nearly every category of the Internet, each of which has since grown to billions of dollars. They bought Four11 for $95 million in what was probably their first and best acquisition, and used them to create Yahoo! Mail in 1997 and a calendar in 1998. They had over 12 million Yahoo! Mail users by the end of the year, inching their way toward the number of users AOL had out there. There were other tools, like Yahoo Briefcase, to upload files to the web - now common with cloud storage providers like Dropbox, Box, Google Drive, and even Office 365. And contacts and Messenger - a service that would run until 2018. Think of all the messaging apps that have come with their own spin on the service since. 1998 also saw the acquisition of Viaweb, founded by the team that would later create Y Combinator. It was just shy of a $50M acquisition that brought the Yahoo! Store - similar to the Shopify of today. They got a $250 million investment from Softbank, bought Yoyodyne, and launched AT&T’s WorldNet service to move in on AOL’s dialup business. By the end of the year they were closing in on 100 million page views a day. That’s a lot of banners shown to visitors. But Microsoft was out there, with their MSN portal at the height of the browser wars. Yahoo! bought Broadcast.com in 1999, saddling the world with Mark Cuban. They dropped $5.7 billion for 300 employees and little more than an ISDN line. Here, they paid over a 100x multiple of annual revenues and failed to transition the sellers into their culture. Sales cures all. In his book We Were Yahoo!, Jeremy Ring lays much of the blame for the failure to capitalize on the acquisition on not understanding the different selling motion. I don’t remember him outright saying it was hubris, but he certainly indicates that it should have worked out and that Broadcast.com could have been what YouTube became. Another market lost in a failed attempt at Yahoo TV. 
And yet many of these were trends started by AOL. They also bought GeoCities in 1999 for $3.7 billion. Others have tried to allow for fast and easy site development - the no-code, WYSIWYG web. GeoCities lasted until 2009 - a year after Google launched Google Sites. And we have Wix, Squarespace, WordPress, and so many others offering similar services today. As they grew, some of the other 130+ search engines at the time folded. The new products continued. The Yahoo Notebook came before Evernote. Imagine your notes accessible to any device you could log into. The more banners shown, the more clicks. Advertisers could experiment in ways they’d never been able to before. They also inked distribution deals, pushing traffic to other sites that did things they didn’t. The growth of the Internet had been fast, with nearly 100 million people armed with Internet access - and that number was expected to triple in just the next three years. And even still, many felt a bubble was forming. Some, like Google, had conserved cash - others, like Yahoo!, had spent big on acquisitions they couldn’t monetize into truly adjacent cash-flow-generating opportunities. And meanwhile they were alienating web properties by leaning into every space that kept eyeballs on the site. By 2000 their stock traded at $118.75 and they were the most valuable internet company at $125 billion. Then, as customers folded when the dot-com bubble burst, the stock fell to $8.11 the next year. One concept we talk about in this podcast is a lost decade. Arguably they’d entered into theirs around the time the dot-com bubble burst. They decided to lean into being a media company even further. Again, showing banners to eyeballs was the central product they sold. They brought in Terry Semel in 2001, using over $100 million in stock options to entice him. And the culture problems came fast. Semel flew in a fancy jet, launched television shows on Yahoo! 
and alienated programmers, effectively creating an us-vs-them culture and devaluing the work done on the portal and search. Work that could have made them competitive with Google AdWords, which, while only a year old, was already starting to eat away at their profits. But media. They bought a company called LaunchCast in 2001, charging a monthly fee to listen to music. Yahoo Music came before Spotify, Pandora, and Apple Music, and even though it was the same year the iPod was released, they let us listen to up to 1,000 songs for free or pony up a few bucks a month to get rid of ads and allow for skips. A model that has been copied by many over the years. By then they knew that paid search was becoming a money-maker over at Google. Overture had actually been first to that market, and so Yahoo! bought them for $1.6 billion in 2003. But again, they didn’t integrate the team and, in a classic “not invented here” moment, started Project Panama, where they’d spend three years building their own search advertising platform. By the time that shipped, the search war was over and executives and great programmers were flowing into other companies all over the world. And by then they were all over the world. 2005 saw them invest $1 billion in a little company called Alibaba. An investment that would accelerate Alibaba to become the crown jewel in Yahoo’s empire and, as they dwindled away, a key aspect of what led to their final demise. They bought Flickr in 2005 for $25M. User generated content was a thing. And Flickr was almost what Instagram is today. Instead we’d have to wait until 2010 for Instagram, because Flickr ended up yet another of the failed acquisitions. And here’s something wild to think about: Stewart Butterfield and Cal Henderson started another company after they sold Flickr. Slack sold to Salesforce for over $27 billion. Not only is that a great team who could have turned Flickr into something truly special, but if they’d been retained and allowed to flourish at Yahoo! 
they could have continued building cooler stuff. Yikes. Additionally, Flickr was planning a pivot into social networking, right before a time when Facebook would take over that market. In fact, they tried to buy Facebook for just over a billion dollars in 2006. But Zuckerberg walked away when the price went down after the stock fell. They almost bought YouTube and considered buying Apple, which is wild to think about today. Missed opportunities. And Semel was the first of many CEOs who lacked vision and the capacity to listen to the technologists - in a technology company. These years saw Comcast bring us weather.com, the rise of ESPN online taking eyeballs away from Yahoo! Sports, and Gmail and other mail services reducing reliance on Yahoo! Mail. Facebook, LinkedIn, and other web properties rose to take ad placements away. Even though Yahoo! Finance is still a great portal, sites like Bloomberg took eyeballs away from them too. And then there was the rise of user generated content - a blog for pretty much everything. Jerry Yang came back to run the show in 2007, then Carol Bartz from 2009 to 2011, then Scott Thompson in 2012. None managed to turn things around after so much lost inertia - and make no mistake, inertia is the one thing that can’t be bought in this world. Wisconsin’s Marissa Mayer joined Yahoo! in 2012. She was Google’s 20th employee, who’d risen through the ranks from writing code to leading teams to product manager to running web products, managing not only the layout of that famous homepage but also helping deliver Google AdWords and then Maps. She had the pedigree and managerial experience - and had been involved in M&A. There was an immediate buzz that Yahoo! was back after years of steady decline due to incoherent strategies and mismanaged acquisitions. She pivoted the business more into mobile technology. She brought remote employees back into the office. 
She implemented a bell curve employee ranking system like Microsoft did during their lost decade. They bought Tumblr in 2013 for $1.1 billion. But key executives continued to leave, Tumblr’s value dropped, and the stock continued to drop. Profits were up, revenues were down. Investing in the rapidly growing China market became all the rage. The Alibaba investment was now worth more than Yahoo! itself. Half the shares had been sold back to Alibaba in 2012 to fund Yahoo!’s pursuit of the Mayer initiatives. And then there was Yahoo Japan, which continued to do well. After years of attempts, activist investors finally got Yahoo! to spin off their holdings. They moved most of the shares to a holding company which would end up getting sold back to Alibaba for tens of billions of dollars. More missed opportunities for Yahoo! And so in the end, they would get merged with AOL - the two combined companies worth nearly half a trillion dollars at one point - to become Oath in 2017. Mayer stepped down and the two sold for less than $5 billion. A roller coaster that went up really fast and down really slow. An empire that crumbled and fragmented. Arguably, the end began in 1998 when another couple of grad students at Stanford approached Yahoo! to sell them Google for $1M. Not only did Filo tell them to try it alone, but he also introduced them to Michael Moritz of Sequoia - the same guy who’d initially funded Yahoo!. That wasn’t where things really got screwed up, though. It was early in a big change in how search would be monetized. But they got a second chance to buy Google in 2002. By then I’d switched to using Google and never looked back. The CEO at the time, Terry Semel, was willing to put in $3B to buy Google - who decided to hold out for $5B. They are around a $1.8T company today. Again, the core product was selling advertising. And Microsoft tried to buy Yahoo! in 2008 for over $44 billion, before going on to build Bing instead. 
Down from the $125 billion height of their market cap during the dot-com bubble. And yet they eventually sold for less than four and a half billion in 2016 and went down in value from there. Growth stocks trade at high multiples, but when revenues go down the crash is hard and fast. Yahoo! lost track of the core business - just as the model was changing. And yet they never iterated on it, because it just made too much money. They were too big to pivot from banners when Google showed up with a smaller, more bite-sized advertising model that companies could grow into. Along the way, they tried to do too much. They invested over and over in acquisitions that didn’t work, because they ran off the innovative founders in an increasingly corporate company that was trying to pretend not to be. We have to own who we are and become. And we have to understand that we don’t know anything about the customers of acquired companies and actually listen - and I mean really listen - when we’re being told what those customers want. After all, that’s why we paid for the company in the first place. We also have to avoid allowing the market to dictate a perceived growth mentality. Sure, a growth stock needs to hit a certain rate of revenue increase to stay considered a growth stock and thus enjoy those kinds of multiples on market capitalization. But that can drive short-term decisions that keep us from investing in areas that don’t immediately move the stock. Decisions like trying to keep eyeballs on pages with our own content rather than investing in the user generated content that drove the Web 2.0 revolution. The Internet can be a powerful medium to find information, allow humans to do more with less, and have more meaningful experiences in this life. But just as Yahoo! was engineering ways to keep eyeballs on their pages, the modern Web 2.0 era has engineered ways to keep eyeballs on our devices. 
And yet what people really want is those meaningful experiences, which happen more when we aren’t staring at our screens than when we are. As I look around at all the alerts on my phone and watch, I can’t help but wonder if another wave of technology is coming that disrupts that model. Some apps are engineered to help us lead healthier lifestyles and take a short digital detoxification break. Bush’s Memex in “As We May Think” was arguably an apple taken from the tree of knowledge. If we aren’t careful, rather than the dream of computers helping humanity do more and freeing our minds to think more deeply, we are simply left with less and less capacity to think and less and less meaning. The Memex came, and Yahoo! helped connect us to any content we might want in the world. And yet, like so many others, they stalled at their phase in that deterministic structure that technologies follow. Too slow to augment human labor with machine learning like Google did - and too quick to try to do everything for everyone, with no real vision beyond being everything to everyone. And so the cuts went on slowly for a long time, leaving employees constantly in fear of losing their jobs. As you listen to this, if I were to leave a single parting thought, it would be that companies should always be willing to cannibalize their own businesses. And yet we have to have a vision that our teams rally behind for how that revenue gets replaced. We can’t fracture a company and just sprawl to become everything for everyone, but instead need to be targeted and more precise. And to continue to innovate each product beyond basic machine learning and into deep learning and beyond. And when we see those who lack that focus, don’t get annoyed but instead get stoked - that’s called a disruptive opportunity. 
And if there’s someone with 1,000 developers in a space, remember that Nicholas Carlson, in his book “Marissa Mayer and the Fight to Save Yahoo!”, points out that one great developer is worth a thousand average ones. And even the best organizations can easily turn great developers into average ones for a variety of reasons. Again, we can call these opportunities. Yahoo! helped legitimize the Internet. For that we owe them a huge thanks. And we can fast-follow their adjacent expansions to find a slew of great and innovative ideas that increased the productivity of humankind. We owe them a huge thanks for that as well. Now what opportunities do we see out there to propel us further yet again?
8/20/2021 • 28 minutes, 15 seconds
The Innovations Of Bell Labs
What is the nature of innovation? Is it overhearing a conversation, as with Morse and the telegraph? Working with the deaf, as with Bell? Divine inspiration? Necessity? Science fiction? Or, given that the answer to all of these is yes, is it really more the intersectionality between them and multiple basic and applied sciences, with deeper understandings in each domain? Or is it being given the freedom to research? Or being directed to research? Few have as storied a history of innovation as Bell Labs, and few have had anything close to the impact. Bell Labs gave us 9 Nobel Prizes and 5 Turing Awards. Their alumni have even more, but those were the ones earned while at Bell. And along the way they gave us 26,000 patents. They researched, automated, and built systems that connected practically every human around the world - moving us all into an era of instant communication. It’s a rich history that goes back in time from the 2018 Ashkin Nobel for applied optical tweezers and the 2018 Turing Award for deep learning to an almost steampunk era of top hats and the dawn of the electrification of the world. Those late 1800s saw a flurry of applied and basic research. One reason was that governments were starting to fund that research. Alessandro Volta had come along and given us the battery, and it was starting to change the world. So Napoleon’s nephew, Napoleon III, during the Second French Empire, gave us the Volta Prize in 1852. One of the great researchers to receive the Volta Prize was Alexander Graham Bell. He invented the telephone in 1876 and was awarded the Volta Prize, getting 50,000 francs. He used the money to establish the Volta Laboratory, a precursor to the research lab that would come to be called Bell Labs. He also formed the Bell Patent Association in 1876. They would research sound. Recording, transmission, and analysis - so science. There was a flurry of business happening in preparation to put a phone in every home in the world. 
We got the Bell System, The Bell Telephone Company, American Bell Telephone Company, patent disputes with Elisha Gray over the telephone (and so the acquisition of Western Electric), and finally American Telephone and Telegraph, or AT&T. Think of all this as Ma Bell. Not Pa Bell, mind you - as Graham Bell gave all of his shares except 10 to his new wife when they were married in 1877. And her dad ended up helping build the company and later creating National Geographic, even going international with the International Bell Telephone Company. Bell’s assistant Thomas Watson sold his shares off to become a millionaire in the 1800s and embarked on a life as a Shakespearean actor. But Bell wasn’t done contributing. He still wanted to research all the things. Hackers gotta hack. And the company needed him to - keep in mind, they were a cutting-edge technology company (then as now). That thirst for research would infuse AT&T - with Bell Labs paying homage to the founder’s contribution to the modern day. Over the years they’d be on West Street in New York and expand to have locations around the US. Think about this: it was becoming clear that automation would be able to replace human efforts where electricity is concerned. The next few decades gave us the vacuum tube, flip-flop circuits, and the mass deployment of radio. The world was becoming ever so slightly interconnected. And Bell Labs was researching all of it, from physics to the applied sciences. By the 1920s, they were doing sound synchronized with motion, shooting that over long distances, and calculating the noise loss. They were researching encryption. Because people wanted their calls to be private. That began with things like one-time pad ciphers but would evolve into speech synthesizers and even SIGSALY, the first encrypted (or scrambled) speech transmission, which led to the invention of the first computer modem. They had engineers like Harry Nyquist, whose name is on dozens of theories, frequencies, even noise. 
He arrived in 1917 and stayed until he retired in 1954. One of his most important contributions was to move beyond the printing telegraph to paper tape and to help transmit pictures over electricity - and Herbert Ives from there sent color photos, thus the fax was born (although it would be Xerox who commercialized the modern fax machine in the 1960s). Nyquist and others like Ralph Hartley worked on making audio better, able to transmit over longer lines, reducing feedback, or noise. While there, Hartley gave us the oscillator, developed radio receivers and parametric amplifiers, and then got into servomechanisms before retiring from Bell Labs in 1950. The scientists who’d been in their prime between the two world wars were titans and left behind commercializable products, even if they didn’t necessarily always mean to. By the 40s a new generation was there, building on the shoulders of these giants. Nyquist’s work was extended by Claude Shannon, who we devoted an entire episode to. He did a lot of mathematical analysis, like writing “A Mathematical Theory of Communication” to birth information theory as a science. They were researching radio because secretly I think they all knew those leased lines would someday become 5G. But also because the tech giants of the era included radio, and many could see a day coming when radio, telephony, and computing would converge. They were researching how electrons diffracted, leading to George Paget Thomson receiving the Nobel Prize and beginning the race for solid state storage. Much of the work being done was statistical in nature. And they had William Edwards Deming there, whose work on statistical analysis when he was in Japan following World War II inspired a global quality movement that continues to this day in the form of frameworks like Six Sigma and TQM. Imagine a time when Japanese manufacturing was of such low quality that he couldn’t stay on a phone call for a few minutes or use a product for long. 
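Back to Nyquist, Hartley, and Shannon for a second: their results combine into what we now call the Shannon-Hartley theorem, which bounds the capacity C, in bits per second, of a channel of bandwidth B with signal-to-noise ratio S/N:

```latex
C = B \log_2\left(1 + \frac{S}{N}\right)
```

It’s why a noisier line or a narrower band means fewer bits - the hard limit that every modem Bell Labs and its successors shipped, from the Bell 101 to 56k, was pushing up against.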
His work in Japan’s reconstruction, paired with dedicated founders like Akio Morita, who co-founded Sony, led to one of the greatest productivity increases, without sacrificing quality, of any time in the world. Deming would change the way Ford worked, giving us the “quality culture.” Bell’s scientists had built mechanical calculators going back to the 30s (Shannon had built a differential analyzer while still at MIT) - first for calculating the numbers they needed to science better, then for ballistic trajectories, then, with the Model V in 1946, general computing. But these were slow; electromechanical at best. Mary Torrey was another statistician of the era who, along with Harold Dodge, gave us the theory of acceptance sampling and thus quality control for electronics. And basic electronics research to do flip-flop circuits fast enough to establish a call across a number of different relays was where much of this was leading. We couldn’t use mechanical computers for that, and tubes were too slow. And so in 1947 John Bardeen, Walter Brattain, and William Shockley invented the transistor at Bell Labs, which would be paired with Shannon’s work to give us the early era of computers as we began to weave Boolean logic in ways that allowed us to skip moving parts and move to a purely transistorized world of computing. In fact, they all knew one day soon, everything that monster ENIAC and its bastard stepchild UNIVAC was doing would be done on a single wafer of silicon. But there was more basic research to get there. The types of wires we could use, the Karnaugh map from Maurice Karnaugh, zone melting so we could control doping levels. And by 1959 Mohamed Atalla and Dawon Kahng gave us metal-oxide semiconductor field-effect transistors, or MOSFETs - a step on the way to large-scale integration, or LSI chips. Oh, and they’d started selling those computer modems as the Bell 101 after perfecting the tech for the SAGE air-defense system. 
And the research to get there gave us the basic science for the solar cell, electronic music, and lasers - just in the 1950s. The 1960s saw further work on microphones and communication satellites like Telstar, which saw Bell Labs outsource launching satellites to NASA. Those transistors were coming in handy, as were the solar panels. The 14 watts produced certainly couldn’t have moved a mechanical computer wheel. Blaise Pascal would be proud of the research his country’s funds inspired, and Volta would have been perfectly happy to have his name still on the lab, I’m sure. Again, shoulders and giants. Telstar relayed its first television signal in 1962. The era of satellites was born later that year when Cronkite televised coverage of Kennedy manipulating world markets on this new medium for the first time and IBM 1401 computers encrypted and decrypted messages, ushering in an era of encrypted satellite communications. Sputnik may have beaten the US into orbit, but the Telstar program has been an enduring system through to the Telstar 19V launched in 2018 - now outsourced to a Falcon 9 rocket from SpaceX. It might seem like Bell Labs had done enough for the world. But they still had a lot of the basic wireless research to bring us into the cellular age. In fact, they’d plotted out what the cellular age would look like all the way back in 1947! The increasing use of computers to do all the acoustics and physics meant they were working closely with research universities during the rise of computing. They were involved in a failed experiment to create an operating system in the late 60s. Multics influenced so much but wasn’t what we might consider a commercial success. It was the result of yet another of DARPA’s J.C.R. Licklider’s wild ideas in the form of Project MAC, which had Marvin Minsky and John McCarthy. 
Big names in the scientific community collided in cooperation with GE and Bell Labs, and Multics would end up inspiring many a feature of a modern operating system. The crew at Bell Labs knew they could do better, and so set out to take the best of Multics and implement a lighter, easier operating system. So they got to work on the Uniplexed Information and Computing Service, or Unics, which was a pun on Multics. Ken Thompson, Dennis Ritchie, Doug McIlroy, Joe Ossanna, Brian Kernighan, and many others wrote Unix originally in assembly and then rewrote it in C once Dennis Ritchie wrote that language to replace B. Along the way, Alfred Aho, Peter Weinberger, and Kernighan gave us AWK. And with all this code they needed a way to keep the source under control, so Marc Rochkind gave us SCCS, the Source Code Control System, first written for an IBM System/370 and then ported to C - which would be how most environments maintained source code until CVS came along in 1986. And Robert Fourer, David Gay, and Brian Kernighan wrote A Mathematical Programming Language, or AMPL, while there. Unix began as a bit of a shadow project but would eventually go to market as Research Unix when Don Gillies left Bell to go to the University of Illinois at Urbana-Champaign. From there it spread, and after it fragmented, System V led to the rise of IBM’s AIX, HP-UX, and SunOS/Solaris, alongside BSD and many other variants - including those that have evolved into macOS through Darwin, and Android through Linux. But Unix wasn’t all they worked on - it was a tool to enable other projects. They gave us the charge-coupled device, which resulted in yet another Nobel Prize. That is an image sensor built on MOS technology. While fiber optics goes back to the 1800s, they gave us low-attenuation fiber and thus could stretch cables to only need repeaters every few dozen miles - again reducing the cost to run the ever-growing phone company. 
All of this electronics allowed them to finally start reducing their reliance on electromechanical and human-based relays in favor of transistor-to-transistor logic, and less mechanical meant less energy, less labor to repair, and faster service. Decades of innovation gave way to decades of profit - in part because of automation. The 5ESS was a switching system that went online in 1982, and some of what it did, its descendants still do today. Long distance billing, switching modules, digital line trunk units, line cards - the grid could run with less infrastructure because the computer managed distributed switching. The world was ready for packet switching. 5ESS was 100 million lines of code, mostly written in C. All that source was managed with SCCS. Bell continued with innovations. They produced that modem up into the 70s but allowed Hayes, Rockwell, and others to take it to a larger market - coming back in from time to time to help improve things, like when Bell Labs, branded as Lucent after the breakup of AT&T, helped bring the 56k modem to market. The presidents of Bell Labs were as integral to the success and innovation as the researchers. Frank Baldwin Jewett from 1925 to 1940, Oliver Buckley from ’40 to ’51, the great Mervin Kelly from ’51 to ’59, James Fisk from ’59 to ’73, William Oliver Baker from ’73 to ’79, and a few others since gave people like Bishnu Atal the space to develop speech processing algorithms and predictive coding, and thus codecs. They let Bjarne Stroustrup create C++, and employed Eric Schmidt, who would go on to become CEO of Google - and the list goes on. Nearly every aspect of technology today is touched by the work they did. All of this research. Jon Gertner wrote a book called The Idea Factory: Bell Labs and the Great Age of American Innovation. He chronicles the journey of multiple generations of adventurers from Germany, Ohio, Iowa, Japan, and all over the world to the Bell campuses. 
The growth and contraction of the basic and applied research, and the amazing minds that walked the halls. It’s a great book, and a short episode like this couldn’t touch the aspects he covers. He doesn’t end the book as hopeful as I remain about the future of technology, though. But since he wrote the book, plenty has happened. After the hangover from the breakup of Ma Bell, they’re now back to being called Nokia Bell Labs - following a $16.6 billion acquisition by Nokia. I sometimes wonder if the world has the stomach for the same level of basic research. And then Alfred Aho and Jeffrey Ullman from Bell ended up sharing the Turing Award for their work on compilers. And other researchers hit terabit-a-second speeds. A storied history that will be a challenge for Marcus Weldon’s successor. He was there as a post-doc in 1995 and rose to lead the labs and become the CTO of Nokia - he said the next leader would come in after him like the next regeneration of a Doctor Who doctor. We hope they are as good stewards as those who came before them. The world is looking around after these decades of getting used to the technology they helped give us. We’re used to constant change. We’re accustomed to speed increases from 110 bits a second to now terabits. The nature of innovation isn’t likely to be something their scientists can uncover. My guess is Prometheus is guarding that secret - if only to keep others from suffering the same fate after giving us the fire that sparked our imaginations. For more on that, maybe check out Hesiod’s Theogony. In the meantime, think about the places where various sciences and disciplines intersect, and think about the wellspring of each and the vast supporting casts that gave us our modern life. It’s pretty phenomenal when ya think about it.
8/15/2021 • 22 minutes, 18 seconds
VisiCalc, Excel, and The Rise Of The Spreadsheet
Once upon a time, people were computers. It’s probably hard to imagine teams of people spending their entire day toiling in large grids of paper, writing numbers and calculating numbers by hand or with mechanical calculators, then writing more numbers, and then repeating that. But that’s the way it was before 1979. The term spreadsheet comes from a spread, like a magazine spread, of ledger cells for bookkeeping. There’s a great scene in the Netflix show Halston where a new guy is brought in to run the company and he’s flying through an electro-mechanical calculator. Halston just shuts the door. Ugh. Imagine doing by hand what we do in a spreadsheet in minutes today. Even really large companies jump over into a spreadsheet to do financial projections today - and with trendlines, tweaking this small variable or that, and even having different algorithms to project the future contents of a cell, the computerized spreadsheet is one of the most valuable business tools ever built. It’s that instant change we see when we change one set of numbers and can see the impact down the line. Even with the advent of mainframe computers, accounting and finance teams had armies of people who calculated spreadsheets by hand, building complicated financial projections. If the formulas changed, it could take days or weeks to re-calculate and update every cell in a workbook. People didn’t experiment with formulas. Computers up to this point had been able to calculate changes and, provided all the formulas were accurate, could output results onto punch cards or printers. But the cost had been in the millions before Digital Equipment and the Data General Nova came along and dropped it into the tens or hundreds of thousands of dollars. The first computerized spreadsheets weren’t instant. Richard Mattessich developed an electronic, batch spreadsheet in 1961. 
He’d go on to write a book called “Simulation of the Firm Through a Budget Computer Program.” His work was more theoretical in nature, but IBM developed the Business Computer Language, or BCL, the next year. What IBM did got copied by their seven dwarves. Former GE employees Leroy Ellison, Harry Cantrell, and Russell Edwards developed AutoPlan/AutoTab, another scripting language for spreadsheets, run against delimited files of numbers. And in 1970 we got LANPAR, which opened up more than reading files in from sequential, delimited sources. But then everything began to change. Dan Bricklin graduated from MIT and went to work for Digital Equipment Corporation on an early word processor called WPS-8. We were now in the age of interactive computing on minicomputers. He then went to work for FasFax in 1976 for a year, getting exposure to calculating numbers. And then he went off to Harvard in 1977 to get his MBA. While he was at Harvard he started working on one of the timesharing programs to help do spreadsheet analysis and wrote his own tool that could do five columns and 20 rows. Then he met Bob Frankston, and they added Dan Fylstra, who thought it should be able to run on an Apple. Frankston got the programming bug while sitting in on a class during junior high. He then got his undergrad and Masters at MIT, where he spent 9 years in school and working on a number of projects with CSAIL, including Multics. He’d been consulting and working at various companies for awhile in the Boston area, which at the time was probably the major hub. Frankston and Bricklin would build a visible calculator using 16k of space that could fit on a floppy. They used a time sharing system and, because they were paying for time, they worked at nights when time was cheaper, to save money. They founded a company called Software Arts and named their visible calculator VisiCalc. Along comes the Apple II. 
And computers were affordable. They ported the software to the platform and it was an instant success. It grew fast. Competitors sprung up. SuperCalc in 1980, bundled with the Osborne. The IBM PC came in 1981 and the spreadsheet appeared in Fortune for the first time. Then the cover of Inc Magazine in 1982. Publicity is great for sales and inspiring competitors. Lotus 1-2-3 came in 1983 and even Boeing Computer Services got in the game with Boeing Calc in 1985. They extended the ledger metaphor to add sheets to the spreadsheet, which we think of as tabs today. Quattro Pro from Borland copied that feature and, despite having their offices effectively destroyed during an earthquake just before release, came to market in 1989. Ironically they got the idea after someone falsely claimed they were making a spreadsheet a few years earlier. And so other companies were building visible calculators and adding new features to improve on the spreadsheet concept. Microsoft was one who really didn’t make a dent in sales at first. They released an early spreadsheet tool called Multiplan in 1982. But Lotus 1-2-3 was the first killer application for the PC. It was more user friendly and didn’t have all the bugs that had come up in VisiCalc as it was ported to run on platform after platform. Lotus was started by Mitch Kapor, who brought Jonathan Sachs in to develop the spreadsheet software. Kapor’s marketing prowess would effectively obsolete VisiCalc in a number of environments. They made TV commercials, so you know they were big time! And it was written natively in x86 assembly, so it was fast. They added the ability to make bar charts, pie charts, and line charts. They added color and printing. One could even spread their sheet across multiple monitors like in a magazine. It was 1 - spreadsheets, 2 - charts and graphs, and 3 - basic database functions. Heck, one could even change the size of cells and use it as a text editor. 
Oh, and macros would become a standard in spreadsheets after Lotus. And because VisiCalc had been around so long, Lotus of course was immediately capable of reading a VisiCalc file when released in 1983. As could Microsoft Excel, when it came along in 1985. And even Boeing Calc could read Lotus 1-2-3 files. After all, the concept went back to those mainframe delimited files, and to this day we can import and export to tab or comma delimited files. VisiCalc had sold about a million copies, but production ceased the same year Excel was released; the final release had come in 1983. Lotus had eaten their lunch in the market, and Borland had watched. Microsoft was about to eat both of theirs. Why? Visi was about to build a windowing system called Visi On. And Steve Jobs needed a different vendor to turn to. He looked to Lotus, who built a tool called Jazz that was too basic. But Microsoft had gone public in 1985 and raised plenty of money, some of which they used to complete Excel for the Mac that year. And so Excel began on the Mac, and that first version was the first graphical spreadsheet. The other developers didn’t think that a GUI was gonna’ be much of a thing. Maybe graphical interfaces were a novelty! Version two was released for the PC in 1987 along with Windows 2.0. Sales were slow at first. But then came Windows 3. Add Microsoft Word to form Microsoft Office, and by the time Windows 95 was released Microsoft had become the de facto market leader in documents and spreadsheets. That’s the same year IBM bought Lotus, and they continued to sell the product until 2013, with sales steadily declining. And so without a lot of competition for Microsoft Excel, spreadsheets kinda’ sat for a hot minute. Computers became ubiquitous. Microsoft released new versions for Mac and Windows but went into that infamous lost decade until… competition. 
And there were always competitors, but real competition brings something new to the mix. Google bought a company called 2Web Technologies in 2006, who made a web-based spreadsheet called XL2WEB. That would become Google Sheets. Google bought DocVerse in 2010 and we could suddenly have multiple people editing a sheet concurrently - and the files were compatible with Excel. By 2015 there were a couple million users of Google Workspace, growing to over 5 million in 2019 and another million in 2020. In the years since, Microsoft released Office 365, starting to move many of their offerings onto the web. That involved 60 million people in 2015 and has since grown to over 250 million. The statistics can be funny here, because it’s hard to nail down how many free vs paid Google and Microsoft users there are. Statista lists Google as having a nearly 60% market share, but Microsoft is clearly making more from their products. And there are smaller competitors all over the place taking on lots of niche areas. There are a few interesting tidbits here. One is that there’s a clean line of evolution in features. Each new tool worked better, added features, and they all worked with previous file formats to ease the transition into their product. Another is how much we’ve all matured in our understanding of data structures. I mean, we have rows and columns. And sometimes multiple sheets - kinda’ like multiple tables in a database. Our financial modeling and even scientific modeling has grown in acumen by leaps and bounds. Many still used those electro-mechanical calculators in the 70s, when you could buy calculator kits and build your own calculator. The personal computers that flowed out in the next few years gave every business the chance to track basic inventory and calculate simple information, like how much we might expect in revenue from inventory in stock. Now thousands of pre-built formulas are supported across most spreadsheet tooling. 
Despite expensive tools and apps to do specific business functions, the spreadsheet is still one of the most enduring and useful tools we have. Even for programmers, where we’re often just getting our data in a format we can dump into other tools! So think about this. What tools out there have common file types that new tools can sit on top of? Which of those haven’t been innovated on in a hot minute? And of course, what is that next bold evolution? Is it moving the spreadsheet from a book to a batch process? Or from a batch process to real-time? Or from real-time to relational with new tabs? Or adding a GUI? Or adding online collaboration? Or, like some big data companies, using machine learning to analyze large data sets and look for patterns automatically? Not only does the spreadsheet help us do the maths - it also helps us map the technological determinism we see repeated through nearly every single tool for any vertical or horizontal market. Those stuck need disruptive competitors, if only to push them off the laurels they’ve been resting on.
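That instant ripple through dependent cells that makes the spreadsheet so valuable can be sketched in a few lines. This is a hypothetical toy, not any real product’s engine: cells hold either numbers or formulas, and reading a cell evaluates its formula on demand.

```python
# A toy "visible calculator": cells hold either a number or a formula
# (a function of the sheet), and reading a cell evaluates its formula
# on demand - so changing one input ripples through every dependent cell.
class Sheet:
    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        self.cells[name] = value  # a number, or a callable formula

    def get(self, name):
        v = self.cells[name]
        return v(self) if callable(v) else v

sheet = Sheet()
sheet.set("A1", 100)                                  # units sold
sheet.set("A2", 10)                                   # unit price
sheet.set("B1", lambda s: s.get("A1") * s.get("A2"))  # revenue
sheet.set("B2", lambda s: s.get("B1") * 0.30)         # projected margin

print(sheet.get("B2"))  # 300.0
sheet.set("A1", 1000)   # tweak one variable...
print(sheet.get("B2"))  # 3000.0 - the projection updates down the line
```

A real engine tracks a dependency graph and caches results rather than re-evaluating on every read, but the user-facing effect is the same: change one cell and every downstream projection updates.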
8/8/2021 • 17 minutes, 2 seconds
Microsoft's Lost Decade
Microsoft went from a fledgling purveyor of a BASIC for the Altair to a force to be reckoned with. The biggest growth hack was when they teamed up with IBM to usher in the rise of the personal computer. They released apps and an operating system, and by licensing DOS to anyone (not just IBM) and then becoming the dominant OS, they allowed clone makers to rise and thus broke the hold IBM had on the computing industry since the days the big 8 mainframe companies were referred to as “Snow White and the Seven Dwarfs.” They were young and bold and grew fast. They were aggressive, taking on industry leaders in different segments, effectively putting CP/M out of business, taking out Lotus, VisiCalc, Novell, Netscape, and many, many other companies. Windows 95 and Microsoft Office helped the personal computer become ubiquitous in homes and offices. The team knew about the technical debt they were accruing in order to grow fast. So they began work on projects that would become Windows NT, and that kernel would evolve into Windows 2000, phasing out the legacy operating systems. They released Windows Server, Microsoft Exchange, Flight Simulator, maps, and seemed for a time to be poised to take over the world. They even seemed to be about to conquer the weird new smart phone world. And then something strange happened. They entered into what we can now call a lost decade. Actually there’s nothing strange about it. This happens to nearly every company. Innovation dropped off. Releases of Windows got buggy. The market share of their mobile operating system fell away. Apple and Android basically took that market away entirely. They let Google take the search market, and after they failed to buy Yahoo! they released an uninspired Bing. The MSN subscriptions that once competed with AOL fell away. Google Docs came along and was a breath of fresh air. Windows Servers started moving into cloud solutions, where Box or Dropbox were replacing filers and SharePoint became a difficult story to tell. 
They copied features from other companies. But they were followers - not leaders. And the stock barely moved for a decade, while Apple more than doubled the market cap of Microsoft for a time. What exactly happened here? Some have blamed Steve Ballmer, who replaced Bill Gates, who had led the company since 1975 - and if we want to include Traf-O-Data, since 1972. They grew fast, and by Y2K there were memes about how rich Bill Gates was. Then a lot changed over the next decade. Windows XP was released in 2001, the same year the first Xbox was released. They launched the Windows Mobile operating system in 2003, planning to continue the whole “rule the operating system” approach. Vista comes along in 2007. Bill Gates retires in 2008. Later that year, Google launches Chrome - which would eat market share away from Microsoft over time. Windows 7 launches in 2009. Microsoft releases Bing in 2009 and Azure in 2010. The Windows Phone comes in 2010 as well, and they would buy Skype for $8.5 billion the next year. The Microsoft Surface tablet came in 2012, two years after the iPad was released. And yet, there were market forces working against what Microsoft was doing. Google had come roaring out of the dot com bubble bursting and proved how money could be made with search. Yahoo! was slow to respond. As Google’s aspirations became clear by 2008, Ballmer moved to buy Yahoo! for $20 billion, eventually growing the bid to nearly $45 billion - a move that was thwarted but helped take the Yahoo! team’s attention away from the idea of making money. That was the same year Android and Chrome were released. Meanwhile, Apple had released the iPhone in 2007 and was shipping the 3G in 2008, taking the mobile market by storm. By 2010, slow sales of the Windows Phone were already spelling the end for Ballmer. Microsoft had launched Windows CE in 1996 and held the smaller Handheld PC market for a time. 
They took over and owned the operating system market for personal computers and productivity software. They were able to take advantage of a weakened and lumbering IBM to do so. And yet they turned into that lumbering juggernaut of a company. With all those products and all the revenues being generated, Microsoft looked unstoppable by the end of the millennium. Then they got big. Like really big. And organizations can be big and stay lean - but they weren’t. Leaders fought leaders, programmers fled, and the fiefdoms caused them to be slow to jump into new opportunities. Bill Gates had been an intense leader - but the Department of Justice filed an anti-trust case against Microsoft, and between that and just managing hyper-growth they lost their focus on customers and instead focused inward. And so by all accounts, the lost decade began in 2001. Vista was supposed to ship in 2003 but was pushed all the way back to 2007. Bing was a dud, losing billions out of the gate. By 2011 Google had released Chrome OS - an operating system that was basically a web browser bootstrapped on Linux, and effectively what Netscape founder Marc Andreessen foreshadowed in a Time piece in the early days of the browser wars. Kurt Eichenwald of Vanity Fair wrote an article called “Microsoft’s Lost Decade” in 2012, looking at what led to it. He pointed out the arrogance, and the products that, even though they were initially developed at Microsoft, would be launched by others first. It was Bill Gates who turned down releasing the ebook, which would evolve into the tablet. The article explained that moving timelines around pushed new products down the list of priorities. The Windows and Office divisions were making so much money for the company that they had all the power to make the decisions - even when the industry was moving in another direction. The original employees got rich when the company went public, and much of the spunk left with them. 
The focus shifted to pushing up the stock price. Ballmer is infamously not a product guy; he became the president of the company in 1998 and moved to CEO in 2000, while Gates stayed on in product. As we see with companies when their stock price starts to fall, the finger pointing begins. Cost cutting begins. The more talented developers can work anywhere - and so companies like Amazon, Google, and Apple were able to fill their ranks with great developers. When groups within a larger organization argue, new bureaucracies get formed. Those slow things down by replacing common sense with process. That is good to a point. Like really good to a point. Measure twice, cut once. Maybe even measure three times and cut once. But software doesn’t get built by committees, it gets built by humans. The closer engineers are to humans, the more empathy will go into the code. We can almost feel it when we use tools that developers don’t fully understand. And further, developers write less code when they’re in more meetings. Some meetings are good, but when there are tiers of supervisors and managers and directors and VPs, and Jr and Sr of each, their need to justify their existence leads to more meetings. The Vanity Fair piece also points out that times changed. Eichenwald called the earlier employees “young hotshots from the 1980s” who by then were later career professionals, and as personal computers became pervasive the way people used them changed. A generation of people who grew up with computers now interacted with them differently. People were increasingly always online. Managers who don’t understand their users need to release control of products to those who do. Microsoft made the Zune five years after the iPod was released and had lit a fire under Apple. Less than two months later, Apple announced the iPhone and the Zune was dead in the water, never eclipsing 5 percent of the market and finally being discontinued in 2012. 
Ballmer had predicted that all of these Apple products would fail, and in a quote from a source in the Vanity Fair article, a former manager at Microsoft said “he is hopelessly out of touch with reality or not listening to the tech staff around him.” One aspect the article doesn’t go into is the sheer number of products Microsoft was producing. They were competing with practically every big name in technology, from Apple to Oracle to Google to Facebook to Amazon to Salesforce. They’d gobbled up so many companies to compete in so many spaces that it was hard to say what Microsoft really was - and yet the Windows and Office divisions made the lion’s share of the money. They thought they needed to own every part of the ecosystem, while Apple went a different route and opened a store to entice developers to go direct to market, making more margin with no acquisition cost and building a great ecosystem. The Vanity Fair piece ends with a cue from the Steve Jobs biography; to sum it up, Jobs said that Microsoft ended up being run by sales people because they moved the revenue needle - just as he watched happen with Sculley at Apple. Jobs went on to say Microsoft would continue the course as long as Ballmer was at the helm. Back when they couldn’t ship Vista they were a 60,000 person company. By 2011, when the Steve Jobs biography was published, they were at 90,000 and had just rebounded from layoffs. By the end of 2012, the iPhone had overtaken Microsoft in sales. Steve Ballmer left as the CEO of Microsoft in 2014 and Satya Nadella replaced him. Under his leadership, half the company would be moved into research later that year. Nadella wrote a book about his experience turning things around called Hit Refresh. Just as the book Microsoft Rebooted told the story of how Ballmer was going to turn things around in 2004 - except Hit Refresh was actually a pretty good book. And things seemed to work. 
The stock price had risen a little in 2014, but since then it’s shot up six times what it was. And the pivot to a more cloud-oriented company and many other moves seem to have been started under Ballmer’s regime, just as the bloated company they became started under the Gates regime. Each arguably did what was needed at the time. Let’s not forget the dot com bubble burst at the beginning of the Ballmer era, and he had the 2008 financial crisis too. There be dragons that are macro-economic forces outside anyone’s control. But Nadella had run R&D and cloud offerings. He emphasized research - which means innovation. He changed the mission statement to “empower every person and every organization on the planet to achieve more.” He laid out a few strategies: to reinvent productivity and collaboration, power those with Microsoft’s cloud platform, and expand on Windows and gaming. And all of those things have been gangbusters ever since. They bought Mojang in 2014 and so are now the makers of Minecraft. They bought LinkedIn. They finally got Skype better integrated with the company so Teams could compete more effectively with Slack. Here’s the thing. I knew a lot of people who worked, and many who still work, at Microsoft during that lost decade. And I think every one of them is really just top-notch. Looking at things as they’re unfolding, you just see a weekly “patch Tuesday” increment. Everyone wanted to innovate - wanted to be their best self. And across every company we look at in this podcast, nearly every one has had to go through a phase of a lost few years or a lost decade. The ones who don’t pull through can never turn the tide on culture and innovation. The two are linked. A bloated company with more layers of management inspires a sense of controlling managers who stifle innovation. At face value, the micro-aggressions seem plausible, especially to those younger in their career. 
We hear phrases like “we need to justify or analyze the market for each expense/initiative” and that’s true, or you become a Xerox PARC or Digital Research, where so many innovations never get to market effectively. We hear phrases like “we’re too big to do things like that any more” and yup, people running amok can be dangerous - turns out move fast and break things doesn’t always work out. We hear “that requires approval” or “I’m their boss’s boss’s boss” or “you need to be a team player and run this by other leaders” or “we need more process” or “we need a center of excellence for that because too many teams are doing their own thing” or “we need to have routine meetings about this” or “how does that connect to the corporate strategy” or “we’re a public company now so no” or “we don’t have the resources to think about moon shots” or “we need a new committee for that” or “who said you could do that” - and all of these taken as isolated comments would be fine here or there. But the aggregate of so many micro-aggressions comes from a place of control, often stemming from fear of change or being left behind, and they come at the cost of innovation. Charles Simonyi didn’t leave Xerox PARC and go to Microsoft to write Microsoft Word to become a cog in a wheel that’s focused on revenue and not changing the world. Microsoft simply got out-innovated, crushed under the weight of too many layers of management overly exerting control over those capable of building cool stuff. I’ve watched those who stayed be allowed to speak publicly again, engage with communities, take feedback, be humble, admit mistakes, and humanize the company. It’s a privilege to get to work with them, and I’ve seen results like a change to a Graph API endpoint one night when I needed a new piece of data. They aren’t running amok. They are precise, targeted, and allowed to do what needs to be done. 
And it’s amazing how a chief molds the way a senior leadership team acts, and they mold the way directors direct, and they mold the way managers manage, and on down the line. One aspect of culture is a mission - another is values - and another is behaviors. And these days I gotta’ say I’m glad to have witnessed a turnaround like they’ve had, and every time I talk to a leader or an individual contributor at Microsoft I’m glad to feel their culture coming through. So here’s where I’d like to leave this. We can all help shape a great culture. Leaders aren’t the only ones who have an impact. We can all innovate. An innovative company isn’t one that builds a great innovative product (although that helps) but instead one that becomes an unstoppable force due to lots of small innovations at every level of the organization. Where are we allowing politics or a need for control and over-centralization to stifle others? Let’s change that.
8/4/2021 • 21 minutes, 38 seconds
Babbage to Bush: An Unbroken Line Of Computing
The amount published in scientific journals has exploded over the past few hundred years. This helps in putting together a history of how various sciences evolved. And sometimes it helps us revisit areas for improvement - or predict what’s on the horizon. The story of the rise of computers often begins with Babbage. As we’ve covered, a lot came before him, and those of the era were often looking to automate the calculation of increasingly complex mathematical tables. Charles Babbage was a true Victorian era polymath. A lot was happening as the world awoke to a more scientific era and scientific publications grew in number and size. Born in London, Babbage loved math from an early age and went away to Trinity College in Cambridge in 1810. There he helped form the Analytical Society with John Herschel - a pioneer of early photography, a chemist, and the inventor of the blueprint - and George Peacock, who established the British arm of symbolic algebra, which when picked up by George Boole would go on to form part of Boolean algebra, ushering in the idea that everything can be reduced to a zero or a one. Babbage graduated from Cambridge, went on to become a Fellow of the Royal Society, and helped found the Royal Astronomical Society. He published works with Herschel on electrodynamics that went on to be used by Michael Faraday later, and even dabbled in actuarial tables - possibly to create a data-driven insurance company. His father passed away in 1827, leaving him a sizable estate. And after applying multiple times he finally became a professor at Cambridge in 1828. He and the others from the Analytical Society were tinkering with things like generalized polynomials and what we think of today as a formal power series, all of which can be incredibly tedious and time consuming, because it’s iterative. Pascal and Leibniz had pushed math forward and had worked on the engineering to automate various tasks, applying some of their science. 
This gave us Pascal’s calculator and Leibniz’s work on information theory; his calculus ratiocinator added a stepped reckoner, now called the Leibniz wheel, with which he was able to perform all four basic arithmetic operations. Meanwhile, Babbage continued to bounce around between society, politics, science, and mathematics, and even wrote a book on manufacturing where he looked at rational design and profit sharing. He also looked at how tasks were handled and made observations about the skill level of each task and the human capital involved in carrying them out. Marx even picked up where Babbage left off and looked further into profitability as a motivator. He also invented the pilot for trains and was involved with lots of learned people of the day. Yet Babbage is best known for being the old, crusty gramps of the computer. Or more specifically the difference engine, which is different from a differential analyzer. A difference engine was a mechanical calculator that could tabulate polynomial functions. A differential analyzer, on the other hand, solves differential equations using wheels and disks. Babbage expanded on the ideas of Pascal and Leibniz and added to mechanical computing, making the difference engine the inspiration of many a steampunk work of fiction. Babbage started work on the difference engine in 1819. Multiple engineers built different components for the engine, and it was powered by a crank that spun a series of wheels, not unlike various clockworks available at the time. The project was paid for by the British Government, who hoped it could save time calculating complex tables. Imagine doing all the work in spreadsheets manually. Each cell could take a fair amount of time and any mistake could be disastrous. But it was just a little before its time. The plans have since been built and they work, and while he did produce a prototype capable of raising numbers to the third power and performing some quadratic equations, the project was abandoned in 1833. 
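The trick the difference engine mechanized - the method of finite differences - is worth a quick sketch. For a polynomial of degree n, the nth differences between successive table values are constant, so once a few initial values are computed, every further entry needs nothing but addition, which gears and wheels can do. A minimal sketch, with each difference column standing in for a column of wheels:

```python
# The method of finite differences, which the difference engine mechanized.
# For p(x) = 2x^2 + 3x + 1 tabulated at x = 0, 1, 2, ...:
#   values:     1   6   15   28 ...
#   1st diffs:    5    9    13 ...
#   2nd diffs:      4     4 ...    <- constant for a degree-2 polynomial
def tabulate(initial_values, count):
    """Extend a polynomial's table by `count` entries using only
    addition, given enough initial values to reach the constant
    highest-order difference."""
    # Build the columns of differences from the initial values.
    diffs = [list(initial_values)]
    while len(diffs[-1]) > 1:
        row = diffs[-1]
        diffs.append([b - a for a, b in zip(row, row[1:])])
    # Crank the machine: each turn adds each column into the one above it.
    wheels = [row[-1] for row in diffs]  # current state of each column
    values = list(initial_values)
    for _ in range(count):
        for i in reversed(range(len(wheels) - 1)):
            wheels[i] += wheels[i + 1]
        values.append(wheels[0])
    return values

print(tabulate([1, 6, 15], 4))  # [1, 6, 15, 28, 45, 66, 91]
```

Crank it enough times and an entire table falls out, with no multiplication anywhere after setup - which is exactly why a machine made of gears could do it.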
We’ll talk about precision in a future episode. Again, the math involved in solving differential equations at the time was considerable and the time-intensive nature was holding back progress. So Babbage wasn’t the only one working on such ideas. Gaspard-Gustave de Coriolis, known for the Coriolis effect, was studying the collisions of spheres and became a professor of mechanics in Paris. To aid in his works, he designed the first mechanical device to integrate differential equations in 1836. After Babbage scrapped his first engine, he moved on to the analytical engine, adding conditional branching, loops, and memory - and further complicating the machine. The engine borrowed the punchcard tech from the Jacquard loom and applied that same logic, along with the work of Leibniz, to math. The inputs would be formulas, much as Turing later described when concocting some of what we now call Artificial Intelligence. Essentially all problems could be solved given a formula, and the output device would be a printer. The analytical engine had 1,000 numbers worth of memory and a logic processor or arithmetic unit that he called a mill, which we’d call a CPU today. He even planned on a programming language, which we might think of as assembly today. All of this brings us to the fact that, while never built, it would have been Turing-complete in that the simulation of those formulas was a Turing machine. Ada Lovelace contributed an algorithm for generating Bernoulli numbers, giving us a glimpse into what an open source collaboration might some day look like. She was in many ways the first programmer - and the daughter of Lord Byron and Anne Milbanke, a math whiz. She became fascinated with the engine and ended up becoming an expert at creating a set of instructions to punch on cards, thus the first programmer of the analytical engine and far before her time. In fact, there would be no programmer for 100 years with her depth of understanding. 
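Lovelace’s famous Note G walked through how the engine would generate Bernoulli numbers. The recurrence behind that computation can be sketched in a few lines today - a modern rendering, not a transcription of her operation cards:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0..B_n via the classic recurrence
    sum_{j=0}^{m} C(m+1, j) * B_j = 0, which yields each B_m
    from the ones before it (using the B_1 = -1/2 convention)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))
    return B

# B_0..B_6 = 1, -1/2, 1/6, 0, -1/30, 0, 1/42
for i, b in enumerate(bernoulli(6)):
    print(f"B_{i} = {b}")
```

Each number depends on all the ones before it - loops feeding earlier results back in, which is precisely why the analytical engine’s conditional branching and memory mattered.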
Not to make you feel inadequate, but she was 27 in 1843. Luigi Menabrea took the idea to France. And yet Babbage died in 1871 without a working model. During those years, Per Georg Scheutz built a number of difference engines based on Babbage’s published works - also funded by the government - which would evolve to become the first calculator that could print. Martin Wiberg picked up from there and was able to move to 20 digit processing. George Grant at Harvard developed calculating machines and published his designs by 1876, starting a number of companies to fabricate gears along the way. James Thomson built a differential analyzer in 1876 to predict tides. And that’s when his work on fluid dynamics and other technology became the connection between these machines and the military. Thomson’s work would be added to work done by Arthur Pollen, and we got our first automated fire-control systems. Percy Ludgate and Leonardo Torres y Quevedo wrote about Babbage’s work in the early years of the 1900s, and other branches of math needed other types of mechanical computing. Burroughs built a difference engine in 1912 and another in 1929. The differential analyzer was picked up by a number of scientists in those early years. But Vannevar Bush was perhaps one of the most important. He, with Harold Locke Hazen, built one at MIT and published an article on it in 1931. Here’s where everything changes. The information was out there in academic journals. Bush published another in 1936 connecting his work to Babbage’s. Bush’s designs got used by a number of universities and picked up by the Ballistic Research Lab in the US. One of those installations was in the same basement ENIAC would be built in. Bush did more than inspire other mathematicians. Sometimes he paid them. His research assistant was Claude Shannon, who built the General Purpose Analog Computer in 1941 and went on to found the whole concept of information theory, down to the bits and bytes. 
Shannon’s computer was important as it came shortly after Alan Turing’s work on Turing machines, and so has been seen as a means to get to this concept of general, programmable computing - basically revisiting the Babbage concept of a thinking, or analytical, machine. And Howard Aiken went a step further than mechanical computing and into electromechanical computing with the Mark I, where he referenced Babbage’s work as well. Then we got the Atanasoff-Berry Computer in 1942. By then, our friend Bush had gone on to chair the National Defense Research Committee, where he would serve under Roosevelt and Truman and help develop radar and the Manhattan Project as an administrator, helping coordinate over 5,000 research scientists. Some helped with ENIAC, which was completed in 1945, thus beginning the era of programmable, digital, general purpose computers. Seeing how computers helped break Enigma machine encryption, blow up targets better, and solve problems that held science back was one thing - but unleashing such massive and instantaneous violence as the nuclear bomb caused Bush to write an article for The Atlantic called “As We May Think” that inspired generations of computer scientists. Here he laid out the concept of a Memex, or a general purpose computer that every knowledge worker could have. And thus began the era of computing. What we wanted to look at in this episode is how Babbage wasn’t an anomaly. Just as Konrad Zuse wasn’t. People published works, added to the works they read about, cited works, pulled in concepts from other fields - and so we have unbroken chains in our understanding of how science evolves. Some, like Konrad Zuse, might have been operating outside of this peer review process, but he eventually got around to publishing as well.
7/29/2021 • 14 minutes, 28 seconds
How Venture Capital Funded The Computing Industry
Investors have pumped capital into emerging markets since the beginning of civilization. Egyptians explored basic mathematics and used their findings to build larger structures and even granaries to allow merchants to store food and serve larger and larger cities. Greek philosophers expanded on those learnings and applied math to learn the orbits of planets, the size of the moon, and the size of the earth. Their merchants used the astrolabe to expand trade routes. They studied engineering and so learned how to leverage the six simple machines to automate human effort, developing mills and cranes to construct even larger buildings. The Romans developed modern plumbing and aqueducts and gave us concrete and arches and radiant heating and bound books and the postal system. Some of these discoveries were state sponsored; others came from wealthy financiers. Many an early investment went into trade routes, which fueled humanity’s ability to understand the world beyond their little piece of it, improve the flow of knowledge, and mix knowledge from culture to culture. As we covered in the episode on clockworks and the series on science through the ages, many a scientific breakthrough was funded by religion as a means of wowing the people. And then by autocrats and families who’d made their wealth from those trade routes. Over the centuries of civilizations we got institutions who could help finance industry. Banks loan money using an interest rate that matches the risk of their investment. It’s illegal, going back to the Bible, to overcharge on interest. That’s called usury, something the Romans realized during their own cycles of too many goods driving down costs and too few fueling inflation. And yet innovation is an engine of economic growth - and so needs to be nurtured. The rise of capitalism meant more and more research was done privately and so needed to be funded. So too with the rise of intellectual property as a good. Yet banks have never embraced startups. 
The early days of the British Royal Academy were filled with researchers from the elite. They could self-fund their research, and the more people doing research, the more discoveries we made as a society. Early American inventors tinkered in their spare time as well. But the pace of innovation has advanced because of financiers as much as the hard work and long hours. Companies like DuPont helped fuel the rise of plastics with dedicated research teams. Railroads were built by raising funds. Trade grew. Markets grew. And people like JP Morgan knew those markets when they invested in new fields and were able to grow wealth and inspire new generations of investors. And emerging industries ended up dominating the places that merchants once held in the public financial markets. Going back to the Venetians, public markets have required regulation. As banking became more of a necessity for scalable societies it too required regulation - especially after the Great Depression. And yet we needed new companies willing to take risks to keep innovation moving ahead, as we still do today. And so the emergence of the modern venture capital market came in those years, with a few people willing to take on the risk of investing in the future. John Hay “Jock” Whitney was an old money type who also started a firm. We might think of it more as a family office these days, but he had acquired 15% of Technicolor and then went on to get more professional and invest. Jock’s partner in the adventure was a fellow Delta Kappa Epsilon member from the University of Texas chapter, Benno Schmidt. Schmidt coined the term venture capital, and they helped pivot Spencer Chemicals from a munitions plant to fertilizer - they’re both nitrates, right? They helped bring us Minute Maid, and more recently have been in and out of Herbalife, Joe’s Crab Shack, Igloo coolers, and many others. 
But again, it was mostly Whitney money, while we tend to think of venture capital funds as pooling money from more than one investor to fund new and enterprising companies. One of the early venture capitalists stands out above the rest. Georges Doriot moved to the United States from France to get his MBA from Harvard. He became a professor at Harvard, and his shrewd business mind led to him being tapped as the Director of the Military Planning Division for the Quartermaster General. He would be promoted to brigadier general following a number of massive successes in research and development as part of the pre-World War II military-industrial-academic buildup. After the war Doriot created the American Research and Development Corporation, or ARDC, with the former president of MIT, Karl Compton, and engineer-turned-senator Ralph Flanders - all of them wrote books about finance, banking, and innovation. They proved that the R&D for innovation could be capitalized to great return. The best example of their success was Digital Equipment Corporation, in which they invested $70,000 in 1957 and turned into over $350 million in 1968 when DEC went public, netting over 100% a year in returns. Unlike Whitney, ARDC took outside money, and so Doriot became known as the first true venture capitalist. Those post-war years led to a level of patriotism we arguably haven’t seen since. John D. Rockefeller Jr. had inherited a fortune from his father, who built Standard Oil. To oversimplify, that company was broken up into a variety of companies including what we now think of as Exxon, Mobil, Amoco, and Chevron. But the family was one of the wealthiest in the world, and the five brothers who survived John Jr. built an investment firm they called the Rockefeller Brothers Fund. We might think of the fund as a social good investment fund these days. 
Following the war, in 1951, John D. Rockefeller Jr. endowed the fund with $58 million, and in 1956, deep in the Cold War, fund president Nelson Rockefeller financed a study and hired Henry Kissinger to dig into the challenges facing the United States. And then came Sputnik in 1957 and a failed run for the presidency of the United States by Nelson in 1960. Meanwhile, the fund was doing a lot of good but also helping to research companies Venrock would capitalize. The family had been investing since the 30s, but Laurance Rockefeller had set up Venrock, a mashup of venture and Rockefeller. In Venrock, the five brothers, their sister, MIT’s Ted Walkowicz, and Harper Woodward banded together to sprinkle funding into what is now over 400 companies that include Apple, Intel, PGP, CheckPoint, 3Com, DoubleClick - and the list goes on. Over 125 public companies have come out of the fund, with an unimaginable amount of progress pushing the world forward. The government was still doing a lot of basic research in those post-war years that led to standards and patents and pushed innovation forward in private industry. ARDC caught the attention of a number of other people who had money they needed to put to work. Some were family offices increasingly willing to make aggressive investments. Some were started by ARDC alumni, such as Charlie Waite and Bill Elfers, who with Dan Gregory founded Greylock Partners. Greylock has invested in everyone from Red Hat to Staples to LinkedIn to Workday to Palo Alto Networks to Drobo to Facebook to Zipcar to Nextdoor to OpenDNS to Redfin to ServiceNow to Airbnb to Groupon to Tumblr to Zenprise to Dropbox to IFTTT to Instagram to Firebase to Wandera to Sumo Logic to Okta to Arista to Wealthfront to Domo to Lookout to SmartThings to Docker to Medium to GoFundMe to Discord to Houseparty to Roblox to Figma. Going on 800 investments just since the 90s, they are arguably one of the greatest venture capital firms of all time. 
Other firms came out of pure security analyst work. Hayden, Stone & Co. was co-founded by another MIT grad, Charles Hayden, who made his name mining copper to help wire up what he expected to be an increasingly electrified world. Stone was a Wall Street tycoon, and the two of them founded a firm that employed Joe Kennedy, the Kennedy family patriarch, and Frank Zarb, a chairman of the NASDAQ - and gave us one of the great venture capitalists to fund technology companies, Arthur Rock. Rock has often been portrayed as the bad guy in Steve Jobs movies, but he was the one who helped the “Traitorous 8” leave Shockley Semiconductor: after Eugene Kleiner’s father (who had an account at Hayden Stone) mentioned they needed funding, Rock got serial entrepreneur Sherman Fairchild to fund Fairchild Semiconductor. Fairchild had developed tech for the Apollo missions, camera flashes, and spy satellite photography - and that semiconductor business grew to 12,000 people and was a bedrock of forming what we now call Silicon Valley. Rock ended up moving to the area and investing, parlaying success in the Fairchild investment into an investment in Intel when Moore and Noyce left Fairchild to co-found it. Venture capital firms raise money from institutional investors that we call limited partners and invest that money. After moving to San Francisco, Rock set up Davis and Rock, got some limited partners, including friends from his time at Harvard, and invested in 15 companies, including Teledyne and Scientific Data Systems, which was acquired by Xerox, taking their $257,000 investment to a $4.6 million valuation in 1970 and getting him on the board of Xerox. He dialed for dollars for Intel and raised another $2.5 million in a couple of hours, and became the first chair of their board. He made all of his LPs a lot of money. One of those Intel employees who became a millionaire retired young. 
That was Mike Markkula, who put some of his money into Apple - Rock put in $57,000, growing it to $14 million - and Markkula went on to launch or invest in more companies and make billions of dollars in the process. Another firm that came out of the Fairchild Semiconductor days was Kleiner Perkins. It started in 1972, founded by Eugene Kleiner and Tom Perkins, who were later joined by name partners Frank Caufield and Brook Byers. Kleiner was one of those Traitorous 8 who left William Shockley and founded Fairchild Semiconductor. He later hooked up with the former head of Research and Development at HP and yet another MIT and Harvard grad, Tom Perkins. Perkins would help Corning, Philips, Compaq, and Genentech - serving on boards and helping them grow. Caufield came out of West Point and got his MBA from Harvard as well. He’d go on to work with Quantum, AOL, Wyse, Verifone, Time Warner, and others. Byers came to the firm shortly after getting his MBA from Stanford and started four biotech companies that were incubated at Kleiner Perkins - netting the firm over $8 billion. And they taught future generations of venture capitalists. People like John Doerr - a great salesman at Intel who by 1980 had graduated into venture capital, bringing in deals with Sun, Netscape, Amazon, Intuit, Macromedia, and one of the best gambles of all time - Google. His reward is a net worth of over $11 billion. But more importantly, he helped drive innovation and shape the world we live in today. Kleiner Perkins was the first to move onto Sand Hill Road. From there, they’ve invested in nearly a thousand companies that include pretty much every household name in technology. From there, we got the rise of the dot-coms and sky-high rent, on par with Manhattan. Why? Because dozens of venture capital firms opened offices on that road, including Lightspeed, Highland, Blackstone, Accel-KKR, Silver Lake, Redpoint, Sequoia, and Andreessen Horowitz. 
Sequoia also started in the 70s, founded by Don Valentine, with Doug Leone and Michael Moritz taking over leadership in the 90s. Valentine did sales for Raytheon before joining National Semiconductor, which had been founded by a few Sperry Rand traitors and brought in some execs from Fairchild. They were venture backed, and his background in sales helped propel some of Sequoia’s earlier investments in Apple, Atari, Electronic Arts, LSI, Cisco, and Oracle to success. And that allowed them to invest in a thousand other companies including Yahoo!, PayPal, GitHub, Nvidia, Instagram, Google, YouTube, Zoom, and many others. So far, most of the firms mentioned have been in the US. But venture capital is a global trend. Masayoshi Son founded SoftBank in 1981 to sell software, then published some magazines and grew the circulation to the point that they were Japan’s largest technology publisher by the end of the 80s, and went public in 1994. They bought Ziff Davis publishing and COMDEX, and seeing so much technology and the money in technology, Son inked a deal with Yahoo! to create Yahoo! Japan. They pumped $20 million into Alibaba in 2000, and by 2014 that investment was worth $60 billion. In that time they became more aggressive with where they put their money to work. They bought Vodafone Japan, took over competitors, and then the big one - they bought Sprint, which they merged with T-Mobile and now own a quarter of the combined company. An important aspect of venture capital and private equity is multiple expansion - the market capitalization of Sprint more than doubled, with shares at one point shooting up over 10%. They bought Arm Limited, the semiconductor company that designs the chips in many a modern phone, IoT device, tablet, and even computer now. As with other financial firms, not all investments go great. SoftBank pumped nearly $5 billion into WeWork. Wag failed. 2020 saw staff reductions. They had to sell tens of billions in assets to weather the pandemic. 
And yet, with some high profile losses, they agreed to sell ARM for a huge profit, Coupang went public, and investors in their Vision Funds are seeing phenomenal returns across the over 200 companies in the portfolios. Most of the venture capitalists we’ve mentioned so far invested as early as possible and stuck with the company until an exit - be it an IPO, an acquisition, or even a move into private equity. Most got a seat on the board in exchange for not only their seed capital, or the money to take products to market, but also their advice. In many a company the advice was worth more than the funding. For example, Randy Komisar, now at Kleiner Perkins, famously recommended TiVo sell monthly subscriptions - the growth hack they needed to get profitable. As the venture capital industry grew and more and more money was pumped into fueling innovation, different accredited and institutional investors emerged with different tolerances for risk and different skills to bring to the table. Someone who built an enterprise SaaS company and sold it within three years might be best served to invest in and advise another company doing the same thing, just as someone who had spent 20 years running later-stage companies and taking them to IPO was better at advising later stage startups that maybe weren’t startups any more. Here’s a fairly common startup story. After finishing a book on Lisp, Paul Graham decided to found a company with Robert Morris. That was Viaweb in 1995, one of the earliest SaaS startups, which hosted online stores - similar to a Shopify today. Viaweb had an investor named Julian Weber, who invested $10,000 in exchange for 10% of the company. Weber gave them invaluable advice, and they were acquired by Yahoo! for about $50 million in stock in 1998, becoming the Yahoo Store. Here’s where the story gets different. In 2005, Graham decided to start doing seed funding for startups, following the model that Weber had established with Viaweb. 
He and Viaweb co-founders Robert Morris (the guy who wrote the Morris worm) and Trevor Blackwell started Y Combinator, along with Jessica Livingston. They put in $200,000 to invest in companies, and with successful investments grew to funding a few dozen companies a year. They’re different because they pick a lot of technical founders (like themselves) and help the founders find product-market fit, finish their solutions, and launch. Doing so helped them bring us Airbnb, Doordash, Reddit, Stripe, Dropbox, and countless others. Notice that many of these firms have funded the same companies. This is because multiple funds investing in the same company helps distribute risk. But it’s also because, in an era where we’ve put everything from cars to education to healthcare to innovation on an assembly line, we have an assembly line in companies. We have thousands of angel investors, or humans who put capital to work by investing in companies they find through friends, family, and now portals that connect angels with companies. We also have incubators, a trend that began in the late 50s in New York when Joe Mancuso bought a warehouse to help the town of Batavia and opened it up to small tenants. The Batavia Industrial Center provided office supplies, equipment, secretaries, a line of credit, and, most importantly, advice on building a business. The Mancusos had made plenty of money on chicken coops and thought that maybe helping companies start was a lot like incubating chickens - and so incubators were born. Others started incubating. The concept expanded from local entrepreneurs helping other entrepreneurs, and now cities, think tanks, companies, and even universities offer incubation within their walls. Keep in mind many a university owns a lot of patents developed there, and plenty of companies have sprung up to commercialize the intellectual property incubated there. 
Seeing that, and how technology companies needed to move faster, we got accelerators like Techstars, founded by David Cohen, Brad Feld, David Brown, and Jared Polis in 2006 out of Boulder, Colorado. They have worked with over 2,500 companies and run a couple of dozen programs. Some of the companies fail by the end of their cohort, and yet many, like Outreach and Sendgrid, grow and become great organizations or get acquired. The line between incubator and accelerator can be pretty slim today. Many of the earlier companies mentioned are now the more mature venture capital firms, and many have moved to a focus on later stage companies, with YC and Techstars investing earlier. They attend the demos of companies being accelerated and invest. And because founding companies and innovating is now on an assembly line, the companies that invest in an A round of funding, which might come after an accelerator, will look to exit in a B round, C round, etc. Or they may elect to ride their risk all the way to an acquisition or IPO. And we have a bevy of investing companies focusing on the much later stages. We have private equity firms and family offices that look to outright own, expand, and either harvest dividends from or sell an asset, or company. We have traditional institutional lenders who provide capital but also invest in companies. We have hedge funds that trade puts, calls, and other derivatives on a variety of asset classes. Each has their sweet spot, even if most will opportunistically invest in diverse assets. Think of the investments made as horizons. The angel investor might have their shares acquired in order to clean up the cap table - who owns which parts of a company - in later rounds. This simplifies the shareholder structure as the company takes on larger institutional investors to sprint towards an IPO or an acquisition. 
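The cap table math behind those successive rounds is simple compounding. Here's a minimal sketch in Python - the round names and percentages sold are hypothetical, purely for illustration:

```python
# Hypothetical dilution math: each priced round sells a slice of the
# company, shrinking earlier holders' percentages even as the dollar
# value of their stake (hopefully) grows.

def dilute(ownership: float, fraction_sold: float) -> float:
    """Ownership fraction after a round that sells fraction_sold of the company."""
    return ownership * (1 - fraction_sold)

founders = 1.0  # founders start with 100%
for label, sold in [("Seed", 0.10), ("Series A", 0.20), ("Series B", 0.15)]:
    founders = dilute(founders, sold)
    print(f"After {label}: founders own {founders:.1%}")
```

With these made-up numbers the founders end at roughly 61%, which is one reason later rounds often buy out the angels - fewer small holders on the cap table as the company sprints toward an exit.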
People like Arthur Rock, Tommy Davis, Tom Perkins, Eugene Kleiner, John Doerr, Masayoshi Son, and so many others have proven that they could pick winners. Or did they prove they could help build winners? Let’s remember that investing knowledge and operating experience were as valuable as their capital - especially when the investments were adjacent to other successes they’d found. Venture capitalists invested more than $10 billion in 1997. $600 million of that found its way to early-stage startups, but most went to preparing a startup with a product to take it to the mass market. Today we pump more money than ever into R&D - and our tax systems support doing so more than ever. And so more than ever, venture money plays a critical role in the life cycle of innovation. Or does venture money play a critical role in the commercialization of innovation? Seed accelerators, startup studios, venture builders, public incubators, venture capital firms, hedge funds, banks - they’d all have a different answer. And they should. Few would stick with an investment like Digital Equipment for as long as ARDC did. And yet few provide over 100% annualized returns like ARDC did. As we said in the beginning of this episode, wealthy patrons, from pharaohs to governments to industrialists to now venture capitalists, have long helped to propel innovation, technology, trade, and intellectual property. We often focus on the technology itself in computing - but without the money, the innovation either wouldn’t have been developed or, if developed, wouldn’t have made it to the mass market and so wouldn’t have had an impact on our productivity or quality of life. The knowledge that comes along with those who provide the money is sometimes viewed with irreverence. Taking an innovation to market means market-ing. And sales. Most generations see the previous generations as almost comedic, as we can see in the HBO show Silicon Valley when the cookie-cutter, industrialized approach goes too far. 
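That 100%-plus annualized figure for ARDC's DEC investment checks out with a quick compound annual growth rate calculation, using the $70,000 (1957) and roughly $350 million (1968) figures from earlier in the episode:

```python
# CAGR: the constant yearly growth rate that turns start_value into
# end_value over the given number of years.

def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate as a fraction (1.0 == 100% per year)."""
    return (end_value / start_value) ** (1 / years) - 1

# ARDC's $70,000 into DEC in 1957, worth ~$350 million at the 1968 IPO
dec_return = cagr(70_000, 350_000_000, 1968 - 1957)
print(f"DEC CAGR: {dec_return:.0%}")  # roughly 117% per year
```

A 5,000x return over eleven years works out to better than doubling every single year - the benchmark against which every venture fund since has been measured.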
We can also end up with founders who learn to sell to investors rather than raising capital the best way possible: by selling to paying customers. But there’s wisdom from previous generations when it’s offered and taken appropriately. A coachable founder with a vision that matches the coaching, and a great product that can scale, is the best investment that can be made. Because that’s where innovation can change the world.
7/24/2021 • 30 minutes, 14 seconds
Albert Cory Talks About His New Book, Inventing The Future
Author Albert Cory joins the podcast in this episode to talk about his new book, Inventing the Future. Inventing the Future was a breath of fresh air from an inspirational time and person. Other books have told the story of how the big names in computing were able to commercialize many of the innovations that came out of Xerox PARC. But Inventing the Future adds a really personal layer that ties in the culture of the day (music, food, geography, and even interpersonal relationships) to what was happening in computing - that within a couple of decades would wildly change how we live our lives. We’re lucky he made the time to discuss his take on a big evolution in modern technology through the lens of historical fiction. I would absolutely recommend the book to academics and geeks and just anyone looking to expand their minds. And we look forward to having him on again!
7/16/2021 • 45 minutes, 45 seconds
Where Fast Food Meets Point of Sale, Automation, and Computing
Roy Allen opened his first root beer stand in 1919, in Lodi, California. He’d bought a recipe for root beer and boy, it sure was a hit. He brought in people to help. One was Frank Wright, who would become a partner in the endeavor, and they’d change the name to A&W Root Beer, for their names, and open a restaurant in 1923 in Sacramento, California. Allen bought Wright out in 1925, but kept the name. Having paid for the root beer license himself, he decided to franchise out its use - but let’s not call that the first fast food chain just yet. After all, it was just a license to make root beer, much like the recipe he’d bought all those years ago. Allen sold the company in 1950 to retire. The franchise agreements moved from a cash payment to royalties. But after Allen, the ownership of the company bounced around until it landed with United Fruit, which would become United Brands, who took A&W to the masses. The root beer company was eventually split from the restaurant chain, with the chain passing through owners including Yum! Brands, and now at nearly 1,000 locations and over $300M in revenues.
White Castle
As A&W franchised, some experimented with other franchising options or with not going that route at all. Around the same time Allen opened his first stand, Walt Anderson was running a few food stands around Wichita. He met up with Billy Ingram, and in 1921 they opened the first White Castle, putting in $700 of their own money. By 1927 they expanded out to Indianapolis. As is often the case, the original cook with the concept sold out his part of the business in 1933, when they moved their headquarters to Columbus, Ohio, and the Ingram family expanded all over the United States. Many a fast food chain is franchised, but White Castle has stayed family owned and operates profitably without taking on debt to grow.
Kentucky Fried Chicken
KFC is fried chicken. They sell some other stuff, I guess. 
They were started by Harland Sanders in 1930, but as we see with a lot of these, they didn’t start franchising until after the war. His big hack was to realize he needed to cook chicken faster to serve more customers, and so he converted a pressure cooker into a pressure fryer, completely revolutionizing how food is fried. He perfected his original recipe in 1940 and by 1952 was able to parlay his early success into franchising out what is now the second largest fast food chain in the world. But the largest is McDonald’s.
McDonald’s
1940 comes around and Richard and Maurice McDonald open a little restaurant called McDonald’s. It was a drive-up barbecue joint in San Bernardino. But drive-in restaurants were getting competitive, and while looking back at the business, they realized that four fifths of the sales were hamburgers. So they shut down for a bit, got rid of the carhops that were popular at the time, simplified the menu, and trimmed out everything they could - getting down to fewer than 10 items on the menu. They were able to get prices down to 15-cent hamburgers using something they called the Speedee Service System. That was an assembly line of food preparation that became the standard in the fast food industry over the next few decades. They also looked at industrial equipment and used that to add french fries and shakes, which finally unlocked an explosion of sales, and profits doubled. But then a milkshake mixer salesman paid a visit to them in San Bernardino to see why the brothers needed 8 of his mixers, and was amazed to find they were, in fact, cranking out 48 shakes at a time with them. The assembly line opened his eyes and he bought the rights to franchise the McDonald’s concept, opening his first location in Des Plaines, Illinois. One of the best growth hacks for any company is just to have an amazing sales and marketing arm. OK, so not a hack, but just good business. That salesman was Ray Kroc, and he will go down as one of the greatest. 
From those humble beginnings selling milkshake mixers, Kroc moved from licensing to buying the company outright for $2.7 million in 1961. Another growth hack was to realize, thanks to a former VP at Tastee-Freez, that owning the real estate brought yet another revenue stream. A low deposit and a 20% or higher increase in the monthly spend would grow into a nearly $38 billion revenue stream. The highway system was paying dividends to the economy. People were moving out to the suburbs. Cars were shipping in the highest volumes ever. They added the Filet-O-Fish and were exploding internationally in the 60s and 70s, and McDonald’s is now sitting on over 39,000 stores with about a $175 billion market cap and over $5 billion in revenue.
Diners, Drive-ins, and Dives
Those post-war years were good to fast food. Anyone who’s been to a 50s-themed restaurant can see the car culture on display, and drive-ins were certainly a part of that. People were living their lives at a new pace to match the speed of those cars, and it was a golden age of growth in the United States. The computer industry was growing right along with those diners, drive-ins, and dives. One company that started before World War II and grew fast was Dairy Queen, started in 1940 by John Fremont McCullough. He’d invented soft-serve ice cream in 1938 and opened the first Dairy Queen in Joliet, Illinois with his friend Sherb Noble, who’d been selling the soft-serve ice cream out of his shop for a couple of years. During those explosive post-war 1950s they introduced the Dilly Bar, and they have now expanded to 6,800 locations around the world. William Rosenberg opened a little coffee shop in Quincy, Massachusetts. As with the others in this story, he parlayed quick successes and started to sell franchises in 1955, and Dunkin’ Donuts grew to 12,400 locations. 
In-N-Out Burger started in 1948 as well, founded by Harry and Esther Snyder, and while they’ve only expanded around the west coast of the US, they’ve grown to around 350 locations and stay family owned. Pizza Hut was started in 1958 in Wichita, Kansas. While it was more of a restaurant for a long time, it’s now owned by Yum! Brands and operates well over 18,000 locations. Yum! also owns KFC and Taco Bell. Glen Bell served as a cook in World War II and moved to San Bernardino to open a drive-in hot dog stand in 1948. He sold it and started a taco stand, selling tacos for 19 cents apiece and expanding to three locations by 1955, then went serial entrepreneur - selling those locations and opening four new ones he called El Tacos down in Long Beach. He sold that to his partner in 1962 and started his first Taco Bell, finally ready to start selling franchises in 1964, and grew it to 100 restaurants by 1967. They took Taco Bell public in 1970, when they had 325 locations. And Pepsi bought the 868-location chain in 1978 for $125 million in stock, eventually spinning the food business off into what is now called Yum! Brands and co-branding with cousin restaurants in that portfolio - Pizza Hut and Long John Silver’s. I haven’t been to a Long John Silver’s since I was a kid, but they still have over a thousand locations and date back to a hamburger stand started in 1929 that pivoted to a roast beef sandwich shop and kept pivoting until landing on the fish and chips concept in 1969.
The Impact of Computing
It’s hard to imagine that any of these companies could have grown the way they did without more than an assembly line of human automation. Mechanical cash registers had been around since the years after the Civil War in the United States, with James Ritty filing early patents in 1883. (Charles Kettering would later add an electric motor to the cash register at NCR.) 
Arguably the abacus and counting frame go back way further, but the Ritty Model I patent sparked the interest of Jacob Eckert, who bought the patent, added some features, and took on $10,000 in debt to take the cash register to market, forming National Manufacturing Company. That became National Cash Register, still a more than $6 billion market cap company. But the growth of IBM and other computing companies, the rise of semiconductors, and the miniaturization and dropping costs of printed circuit boards helped lead to the advent of electronic cash registers. After all, those are just purpose-built computers. IBM introduced the first point of sale system in 1973, bringing the cash register into the digital age. Suddenly a cash register could sit in the front as a simplified terminal sending printouts or information to a screen in the back. Those IBM 3650s represented an early use of peer-to-peer client-server technology and ended up in Dillard’s in 1974. That same year McDonald’s had William Brobeck and Associates develop a microprocessor-based terminal. It was based on the Intel 8008 chip and used a simple push-button device to allow cashiers to enter orders. This gave us a queue of orders being sent by terminals in the front. And we got touchscreen registers in 1986, running on the Atari 520ST, with IBM later introducing a 486-based system running on FlexOS.
Credit Cards
As we moved into the 90s, fast food chains were spreading fast, and the way we paid for goods was starting to change. All these electronic registers could suddenly send the amount owed over an electronic link to a credit card processing machine. John Biggins launched the Charg-it card in 1946, and the idea spread to Franklin National Bank a few years later. Diners Club picked up on the trend and launched the Diners Club Card in 1950, growing to 20,000 cardholders in 1951. American Express came along in 1958 with their card, and in just five years grew to a million cards. 
Bank of America released their BankAmericard in 1958, which became the first general-purpose credit card. They started in California and went national within ten years. That would evolve into Visa by 1976, and in 1966 we got the interbank association that became MasterCard as well. That’s also the year the Barclaycard brought credit cards outside the US for the first time, showing up first in England. Then came Carte Bleue in ’67 in France, and the Eurocard as a collaboration between the Wallenberg family and Interbank in 1968 to serve the rest of Europe. Those spread, and by the 90s enough people were using them to reach a critical mass where fast food needed to take them as well. Whataburger and Carl’s Jr added the option in 1989, Arby’s in 1990, and, while slower to adopt taking cards, McDonald’s finally did so in 2002. We were well on our way to becoming a cashless society. And the rise of the PC led to POS systems moving a little down-market, with systems from vendors like Aloha, designed in 1998 (now owned by NCR), plus lots of other brands of devices as well as home-brewed tooling from large vendors. And computers helped revolutionize the entire organization. Chains could automate supply lines to stores with computerized supply chain management. Desktop computers also led to management functions being computerized in the back office, like scheduling and time clocks, so fewer managers were needed. That was happening all over post-war America by the 90s.
Post-War America
In that era after World War II, people were fascinated with having the same experiences over and over - and having them be identical. Think about it: before the war, life was slower and every meal required work. After, it was fast, and the food always came out hot and felt like suburban life, wherever you were. Even as white flight was destroying city centers and that homogeneity led to further centralized organizations dividing communities. People flocked to open these restaurants. 
They could make money, it was easier to get a loan to open a store with a known brand, there were high profit margins, and in a lot of cases there was a higher chance of success than in many other industries. This led to even more homogeneity. That rang true for other types of franchising on the rise as well. Fast food became a harbinger of things to come and indicative of other business trends. These days we think of high fructose corn syrup, fried food, and GMOs when we think of fast food - and an appetite for exactly that certainly fed the rise. Following the first wave of fast food we got other brands rising as well. Arby's was founded in 1964, Subway in 1965, Wendy's in 1969, Jack in the Box in 1961, Chick-fil-A in 1946, just a few miles from where I was born. And newer chains like Quiznos in 1981, Jimmy John's in 1983, and Chipotle in 1993. These touch other areas of the market, focusing on hotter, faster, or spicier. From the burger craze to the drive-in craze to just plain fast, fast food has been with us since long before anyone listening to this episode was born and is likely to continue on long after we're gone. Love it or hate it, it's a common go-to when we're working on systems - especially far from home. And the industry continues to evolve. A barrier to opening any type of retail chain was once the point of sale system. Another was finding a way to accept credit cards. Stripe emerged to help with the credit cards, and a cadre of app-based solutions for the iPhone, Android, and tablets emerged to make taking credit cards simple for new businesses. A lot of development was once put into upmarket solutions, but these days downmarket is so much more approachable. And various fraud-prevention machine learning algorithms and chip-and-PIN technologies make taking a credit card for a simple transaction safer than ever.
The Future
Fast food, and retail in general, continues to evolve. 
The next evolution seems to be self-service. This is well underway, with a number of companies looking at kiosks to take orders, and all those cashiers might find RFID tags to be another threat to their jobs. If a machine can see what's in a cart on the way out of a store, there's no need for cashiers. Here we see digitization as one wave of technology, but given the inexpensive cost of labor we are just now seeing the cost of the technology come down to where it's cheaper - much as the falling cost of clockworks and then industrialization caused first the displacement of Roman slave labor and then of workers in factories. Been to a parking ramp recently? That's a controlled enough environment that the attendants were some of the first to be replaced by simple computers, which processed first magnetic stripes and now license plates using simple character recognition technology. Another revolution that has already begun is how we get the food. Grubhub launched in 2004, we got Postmates in 2011, and DoorDash came in 2013 to make it so we don't even have to leave the house to get our burger fix. We can just open an app, use our fingerprint to check out, and have items show up at our homes, often in less time than if we'd gone to pick them up. And given that they have a lot of drivers and know exactly where they are, Uber attempted to merge with DoorDash in 2019 - but that's fine, because they'd already launched Uber Eats in 2014. DoorDash has about half that market at $2.9 billion in revenues for 2020, and that's with just 18 million users - still less than 10% of US households. I guess that's why DoorDash enjoys a nearly $60 billion market cap. We are in an era of technology empires. And yet McDonald's is only worth about three times what DoorDash is worth - and guess which one is growing faster. Empires come and go. 
The inability to manage an empire that scales larger than its technology and communications capabilities allow was the downfall of many an empire - from Rome to Poland to the Russian Czarist empire. Each was profoundly changed: by splitting up the empire, as with Rome; by becoming a pawn between neighboring empires; or by the development of an entirely new system of governance, as with Russia. Fast food employs four and a half million people in the US today, with almost another 10 million people employed globally. About half of those are adults. It's an industry that has grown from revenues of just $6 billion to half a trillion dollars since just 1970. And those employees often make minimum wage. Think about this: that's over twice the number of slaves as there were in the Roman Empire. Many of whom rose up to conquer the empire. And the name of the game is automation. It has been since that McDonald's Speedee Service System that enthralled Ray Kroc. But the human labor will some day soon be drastically cut, just as the McDonald brothers cut carhops from their roster all those years ago. And that domino will knock down others in every establishment we walk into to pay for goods. Probably not in the next 5 years, but certainly in my lifetime. Job displacement due to technology is nothing new. It goes back past the Romans. But it is accelerating faster than at other points in history. And you have to wonder what kinds of social, political, and economic repercussions we'll see. Add in other changes around the world and the next few decades will be interesting to watch.
7/16/2021 • 24 minutes, 59 seconds
A broad overview of how the Internet happened
The Internet is not a simple story to tell. In fact, every sentence here is worthy of an episode if not a few. Many would claim the Internet began back in 1969, when the first node of the ARPAnet went online. That was the year we got the first color pictures of Earth from Apollo 10 and the year Nixon announced the US was leaving Vietnam. It was also the year of Stonewall, the moon landing, the Manson murders, and Woodstock. A lot was about to change. But maybe the story of the Internet starts before that, when the basic research to network computers began as a means of networking nuclear missile sites with fault-tolerant connections in the event of, well, nuclear war. Or the Internet began when a T3 backbone was built to host all the data. Or the Internet began with the telegraph, when the first data was sent over electric current. Or maybe the Internet began when the Chinese used fires to send messages across the Great Wall of China. Or maybe the Internet began when drums sent messages over long distances in ancient Africa, like early forms of packets flowing over Wi-Fi-esque sound waves. We need to make complex stories simpler in order to teach them, so if the first node of the ARPAnet in 1969 is where this journey should end, feel free to stop here. To dig in a little deeper, though, that ARPAnet was just one of many networks that would merge into an interconnected network of networks. We had dial-up providers like CompuServe, America Online, and even The WELL. We had regional time-sharing networks like DTSS out of Dartmouth College and PLATO out of the University of Illinois, Urbana-Champaign. We had corporate time-sharing networks and systems. Each competed or coexisted or took time from others or pushed more people to others through their evolutions. Many used their own custom protocols for connectivity. But most were walled gardens, unable to communicate with the others. 
So if the story is more complicated than that the ARPAnet was the ancestor of the Internet, why is that the story we hear? Let's start that journey with a memo we did an episode on, called "Memorandum For Members and Affiliates of the Intergalactic Computer Network," sent by JCR Licklider in 1963, which can be considered the allspark that lit the bonfire called the ARPAnet. Which isn't exactly the Internet, but isn't not. In that memo, Lick proposed a network of computers available to the research scientists of the early 60s - scientists from computing centers that would evolve into supercomputing centers and then into a network open to the world, even our phones, televisions, and watches. It took a few years, but eventually ARPA brought in Larry Roberts, and by late 1968 ARPA awarded the contract to build the network to a company called Bolt Beranek and Newman (BBN), who would build the Interface Message Processors, or IMPs. The IMPs were computers that connected a number of sites and routed traffic. The first IMP, which might be thought of more as a network interface card today, went online at UCLA in 1969, with additional sites coming on frequently over the next few years. That system would become the ARPAnet. The first node went online at the University of California, Los Angeles (UCLA for short). It grew as leased lines and more IMPs became available. As the network grew, the early computer scientists realized that each site had different computers running various and random stacks of applications and different operating systems, so certain aspects of connectivity between different computers needed to be standardized. Given that UCLA was the first site to come online, Steve Crocker from there began organizing notes about protocols and how systems connected with one another in what they called RFCs, or Requests for Comments. 
That series of notes was then managed by a team that included Elizabeth (Jake) Feinler from Stanford once Doug Engelbart's project on the "Augmentation of Human Intellect" at Stanford Research Institute (SRI) became the second node to go online. SRI developed a Network Information Center, where Feinler maintained a list of host names (which evolved into the hosts file) and a list of address mappings that would later evolve into the functions of InterNIC, which would be turned over to the US Department of Commerce when the number of devices connected to the Internet exploded. Feinler and Jon Postel from UCLA maintained those, Postel until his death in 1998, and the RFCs grew to include everything from opening terminal connections into machines to file sharing to addressing - now anywhere networking needs a standard. The development of many of those early protocols that made computers useful over a network was also funded by ARPA. They funded a number of projects to build tools that enabled the sharing of data, like file sharing, and some advancements were loosely connected by people just doing things to make them useful, and so by 1971 we also had email. But all those protocols needed to flow over a common form of connectivity that was scalable. Leonard Kleinrock, Paul Baran, and Donald Davies were independently investigating packet switching, and Roberts brought Kleinrock into the project, as he was at UCLA. Bob Kahn entered the picture in 1972. He would team up with Vint Cerf from Stanford, who came up with encapsulation, and the two would define the protocol that underlies the Internet, TCP/IP. By 1974 Vint Cerf and Bob Kahn wrote RFC 675, where they coined the term internet as shorthand for internetwork. The number of RFCs was exploding, as was the number of nodes. The University of California, Santa Barbara came online, then the University of Utah to connect Ivan Sutherland's work. The network was national when BBN connected to it in 1970. 
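That NIC host list lives on in spirit as the hosts file still shipped with most operating systems. As a minimal sketch (the format is the familiar hosts-file one, but the entries and parsing here are illustrative, not the NIC's actual HOSTS.TXT tooling), a name-to-address table like the one Feinler maintained can be modeled as:

```python
# A toy hosts-file style name-to-address table, in the spirit of the
# NIC's host list. The entries below are invented for illustration.
def parse_hosts(text):
    """Map host names (and aliases) to addresses, ignoring comments."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        addr, *names = line.split()  # first field is the address
        for name in names:           # remaining fields are names/aliases
            table[name.lower()] = addr
    return table

example = """
# address   canonical-name  aliases
10.0.0.1    sri-nic         nic
10.1.0.2    ucla-test
"""
hosts = parse_hosts(example)
print(hosts["nic"])  # -> 10.0.0.1
```

Every lookup was a flat scan of one centrally maintained file - which is exactly why this scheme stopped scaling as the number of hosts exploded.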
Now there were 13 IMPs; by 1971, 18, then 29 in 72 and 40 in 73. Once the need arose, Kleinrock would go on to work with Farouk Kamoun to develop hierarchical routing theories in the late 70s. By 1976, ARPA became DARPA. The network grew to 213 hosts in 1981; by 1982, TCP/IP became the standard for the US DOD, and in 1983, ARPAnet moved fully over to TCP/IP. And so TCP/IP, or Transmission Control Protocol/Internet Protocol, is the most dominant networking protocol on the planet. It was written to help improve performance on the ARPAnet with the ingenious idea of encapsulating traffic. But in the 80s it was still just for researchers. That is, until NSFNET was launched by the National Science Foundation in 1986. ARPAnet had gone international too, with University College London connecting in 1971, which would go on to inspire a British research network called JANET that built its own set of protocols called the Coloured Book protocols. And the Norwegian Seismic Array connected over satellite in 1973. So networks were forming all over the place, often just time-sharing networks where people dialed into a single computer. Another networking project going on at the time that was also getting funding from ARPA, as well as the Air Force, was PLATO. Out of the University of Illinois, it was meant for teaching and began on a mainframe in 1960. But by the time ARPAnet was growing, PLATO was on version IV and running on a CDC Cyber. The time-sharing system hosted a number of courses, as they referred to programs. These included actual courseware, games, content with audio and video, message boards, instant messaging, custom touch-screen plasma displays, and the ability to dial into the system over phone lines, making the system another early network. In fact, there were multiple CDC Cybers that could communicate with one another. And many on ARPAnet also used PLATO, cross-pollinating the defense-backed research world with a number of academic institutions. 
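The encapsulation idea Cerf and Kahn built TCP/IP around - each layer wrapping the data from the layer above in its own header - can be sketched in a few lines. The header layouts below are invented for illustration and are not the real TCP or IP formats:

```python
import struct

# A toy illustration of protocol encapsulation. A "transport" layer
# wraps the payload in a header with ports and a length, then a
# "network" layer wraps that segment in a header with addresses.
# These layouts are made up for the sketch, not real TCP/IP headers.
def wrap_transport(payload, src_port, dst_port):
    header = struct.pack("!HHH", src_port, dst_port, len(payload))
    return header + payload

def wrap_network(segment, src, dst):
    header = struct.pack("!4s4s", src, dst)
    return header + segment

packet = wrap_network(
    wrap_transport(b"hello", src_port=1024, dst_port=80),
    src=bytes([10, 0, 0, 1]), dst=bytes([10, 0, 0, 2]))

# The receiver peels the headers off in reverse order.
src, dst = struct.unpack("!4s4s", packet[:8])
sport, dport, length = struct.unpack("!HHH", packet[8:14])
print(packet[14:14 + length])  # -> b'hello'
```

The point of the layering is that the network layer never needs to understand what's inside the segment it carries - which is what let so many different networks interconnect.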
The defense backing couldn't last forever. The Mansfield Amendment in 1973 banned general research by defense agencies. This meant that ARPA funding started to dry up and the scientists working on those projects needed a new place to fund their playtime. Bob Taylor split to go work at Xerox, where he was able to pick the best of the scientists he'd helped fund at ARPA. He helped bring in people from Stanford Research Institute, where they had been working on the oN-Line System, or NLS, and people like Bob Metcalfe, who brought us Ethernet and better collision detection. Metcalfe would go on to found 3Com, a great switch and network interface company during the rise of the Internet. But there were plenty of people who could see the productivity gains from ARPAnet and didn't want it to disappear. And the National Science Foundation (NSF) was flush with cash. And the ARPA crew was increasingly aware of non-defense-oriented use of the system. So the NSF started up a little project called CSNET in 1981 so the growing number of supercomputers could be shared between all the research universities. It was free for universities that could get connected, and from 1985 to 1993 NSFNET surged from 2,000 users to 2,000,000 users. Paul Mockapetris made the Internet easier to use than when it was an academic-only network by developing the Domain Name System, or DNS, in 1983. That's how we can call up remote computers by names rather than IP addresses. And of course DNS was yet another of the protocols on Postel's list of protocol standards at UCLA, which by 1986, after the selection of TCP/IP for NSFNET, would be maintained by the standardization body known as the IETF, or Internet Engineering Task Force for short. Maintaining a set of protocols that all vendors needed to work with was one of the best growth hacks ever. No vendor could have kept up with demand with a 1,000x growth in such a small number of years. 
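The hierarchy DNS introduced - a root that knows the top-level domains, which know the domains, which know the hosts - can be sketched as a toy resolver. The zone data and names here are invented for illustration; a real resolver speaks a binary protocol to actual name servers:

```python
# A toy sketch of hierarchical name resolution in the spirit of DNS.
# Each "server" only knows about the level below it. All names and
# addresses here are invented for illustration.
ZONES = {
    ".":            {"edu.": "ns.edu."},             # root knows the TLD server
    "ns.edu.":      {"ucla.edu.": "ns.ucla.edu."},   # TLD knows the domain server
    "ns.ucla.edu.": {"host.ucla.edu.": "10.1.0.2"},  # authoritative answer
}

def resolve(name, server="."):
    """Walk from the root down the hierarchy until a server answers."""
    zone = ZONES[server]
    # Ask this server for the most specific thing it knows about the name.
    for suffix, target in sorted(zone.items(), key=lambda kv: -len(kv[0])):
        if name.endswith(suffix):
            # If the target is another server, follow the referral;
            # otherwise it's the address we wanted.
            return resolve(name, target) if target in ZONES else target
    raise LookupError(name)

print(resolve("host.ucla.edu."))  # -> 10.1.0.2
```

Because each level only delegates to the one below it, no single party has to maintain the whole map - the property that let DNS scale where the central hosts file could not.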
NSFNET started with six nodes in 1985, connected by LSI-11 Fuzzball routers, and quickly outgrew that backbone. They put it out to bid and Merit Network won out, in a partnership between MCI, the State of Michigan, and IBM. Merit had begun before the first ARPAnet connections went online, as a collaborative effort by Michigan State University, Wayne State University, and the University of Michigan. They'd been connecting their own machines since 1971 and had implemented TCP/IP and bridged to ARPAnet. The money was getting bigger: they got $39 million from NSF to build what would emerge as the commercial Internet. They launched in 1987 with 13 sites over 14 lines. By 1988 they'd gone nationwide, going from a 56k backbone to a T1 and then 14 T1s. But the growth was too fast for even that. They re-engineered, and by 1990 planned to add T3 lines running in parallel with the T1s for a time. By 1991 there were 16 backbones, with traffic and users growing by an astounding 20% per month. Vint Cerf ended up at MCI, where he helped lobby for the privatization of the Internet and helped found the Internet Society in 1992. The lobbying worked and led to the Scientific and Advanced-Technology Act in 1992. Before that, use of NSFNET was supposed to be for research, and now it could expand to non-research and education uses. This allowed NSF to bring on even more nodes. And so by 1993 it was clear that this was growing beyond what a governmental institution whose charge was science could justify as "research" any longer. By 1994, Vint Cerf was designing the architecture and building the teams that would build the commercial Internet backbone at MCI. And so NSFNET began the process of unloading the backbone and helped the world develop the commercial Internet by sprinkling a little money and know-how throughout the telecommunications industry, which was about to explode. 
NSFNET went offline in 1995, but by then there were networks in England, South Korea, Japan, and Africa, and CERN was connected to NSFNET over TCP/IP. And Cisco was selling routers that would fuel an explosion internationally. There was a war of standards, and yet over time we settled on TCP/IP as THE standard. And those were just some of the nets. The Internet is really not just NSFNET or ARPAnet but a combination of a lot of nets. At the time there were a lot of time-sharing computers that people could dial into, and following the release of the Altair there was a rapidly growing personal computer market, with modems becoming more and more approachable towards the end of the 1970s. You see, we talked about these larger networks but not the hardware. The first modulator-demodulator, or modem, was the Bell 101 dataset, which had been invented all the way back in 1958, loosely based on a previous model developed to manage SAGE computers. But the transfer rate, measured in baud, had stopped being improved upon at 300 for almost 20 years, and not much had changed. That is, until Hayes Microcomputer Products released a modem designed to run on the Altair 8800 S-100 bus in 1978. Personal computers could talk to one another. One of those Altair owners, Ward Christensen, met Randy Suess at the Chicago Area Computer Hobbyists' Exchange, and the two of them had this weird idea: have one of their computers host a bulletin board. People could dial into it and discuss their Altair computers when it snowed too much to meet in person for their club. They started writing a little code and before you know it we had a tool they called Computerized Bulletin Board System software, or CBBS. The software, and more importantly the idea of a BBS, spread like wildfire, right along with the Atari, TRS-80, Commodore, and Apple computers that were igniting the personal computing revolution. 
The number of nodes grew, and as people started playing games, the speed of those modems jumped up, with the v.32 standard hitting 9600 baud in 1984 and over 25k in the early 90s. By the early 1980s we got FidoNet, a network of Bulletin Board Systems, and by the early 90s we had 25,000 BBSs. And other nets had been on the rise. And these were commercial ventures. The largest of those dial-up providers was America Online, or AOL. AOL began in 1985 and, like most of the other dial-up providers of the day, was there to connect people to a computer they hosted, like a time-sharing system, and give access to fun things: games, news, stocks, movie reviews, chatting with your friends, and so on. There was also CompuServe, The WELL, PSINet, Netcom, Usenet, AlterNet, and many others. Some started to communicate with one another with the rise of the Metropolitan Area Exchanges, who got an NSF grant to establish switched Ethernet exchanges, and the Commercial Internet Exchange in 1991, established by PSINet, UUNET, and CERFnet out of California. Those slowly moved over to the Internet, and even AOL got connected to the Internet in 1989, and thus the dial-up providers went from effectively being time-sharing systems to Internet Service Providers as more and more people expanded their horizons away from the walled garden of the time-sharing world and towards the Internet. The number of BBS systems started to wind down. All these IP addresses couldn't be managed easily, and so address management evolved from contracts with research universities to DARPA, then to IANA as a part of ICANN, and eventually to the development of Regional Internet Registries: AFRINIC to serve Africa; ARIN to serve Antarctica, Canada, the Caribbean, and the US; APNIC to serve South, East, and Southeast Asia as well as Oceania; LACNIC to serve Latin America; and RIPE NCC to serve Europe, Central Asia, and West Asia. 
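To put those modem speeds in perspective, a bit of back-of-the-envelope arithmetic helps, assuming roughly ten bits on the wire per byte (eight data bits plus start and stop bits, a common rule of thumb for async serial links):

```python
# Rough transfer-time arithmetic for the modem eras mentioned above.
# Assumes ~10 bits on the wire per byte of data, a common rule of
# thumb for asynchronous serial links.
def transfer_seconds(size_bytes, bits_per_second):
    return size_bytes * 10 / bits_per_second

one_megabyte = 1_000_000
for rate in (300, 9600, 28_800):
    hours = transfer_seconds(one_megabyte, rate) / 3600
    print(f"{rate:>6} bps: {hours:.2f} hours per megabyte")
```

At 300 bps a single megabyte took the better part of a working day to move; at 9600 bps it was well under half an hour, which is why the jump in modem speeds changed what people were willing to do over a phone line.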
By the 90s the Cold War was winding down (temporarily at least), so they even added Russia to RIPE NCC. And so, using tools like Winsock, any old person could get on the Internet by dialing up. Dial-up modems transitioned to DSL and cable modems. We got the emergence of fiber, with regional centers and even national FiOS connections. And because of all the hard work of all of these people, and the money dumped into it by the various governments and research agencies, life is pretty darn good. When we think of the Internet today we think of this interconnected web of endpoints and content that is all available. Much of that was made possible by the development of the World Wide Web by Tim Berners-Lee in 1991 at CERN. Mosaic came out of the National Center for Supercomputing Applications, or NCSA, at the University of Illinois, quickly becoming the browser everyone wanted to use until Marc Andreessen left to form Netscape. Netscape's IPO is probably one of the most pivotal moments, when investors from around the world realized that all of this research and tech was built on standards, and while there were some patents, the standards were freely usable by anyone. Those standards led to an explosion of companies like Yahoo!, from a couple of Stanford grad students, and Amazon, started by a young hedge fund vice president named Jeff Bezos who noticed all the money pouring into these companies and went off to do his own thing in 1994. The race among companies to create and commercialize content and ideas and bring every industry online was ferocious. And there were the researchers still writing the standards, with even commercial interests helping with that. And there were open source contributors who helped make some of those standards easier to implement by regular old humans. And tools for those who build tools. And from there the Internet became what we think of today. 
Quicker and quicker connections, more and more productivity gains, a better quality of life, better telemetry into all aspects of our lives, and, with the miniaturization of devices to support wearables, it even extends to our bodies. Yet it still sits on the same fundamental building blocks as before. The IANA functions that manage IP addressing have moved to the private sector, as have many an on-ramp to the Internet, especially as Internet access has become more ubiquitous and we enter the era of 5G connectivity. And it continues to evolve as we pivot due to new needs and the threats a globally connected world represents: IPv6, various secure DNS options, countermeasures for spam and phishing, and dealing with the equality gaps surfaced by our new online world. We have disinformation, so sometimes we might wonder what's real and what isn't. After all, any old person can create a web site that looks legit and put whatever they want on it. Who's to say what reality is, other than what we want it to be? This was pretty much what Morpheus was offering with his choice of pills in The Matrix. But underneath it all, there's history. And it's a history as complicated as unraveling the meaning of an increasingly digital world. And it is wonderful and frightening and lovely and dangerous and true and false and destroying the world and saving the world, all at the same time. This episode is pretty simplistic, and many of the aspects we cover have entire episodes of the podcast dedicated to them - from the history of Amazon to Bob Taylor to AOL to the IETF to DNS and even Network Time Protocol. It's a story that necessarily leaves people out; otherwise scope creep would go all the way back to include Volta and the constant electrical current humanity received with the battery. But hey, we also have an episode on that! 
And many an advance has plenty of books and scholarly works dedicated to it - all the way back to the first known computer (in the form of clockwork), the Antikythera device out of ancient Greece. Heck, even Lou Gerstner deserves a mention for selling IBM's stake in all this to focus on the things that kept the company going, not moonshots. But I'd like to dedicate this episode to everyone not mentioned while trying to tell a story of emergent networks. Just because the networks were growing fast and our modern infrastructure was becoming more and more deterministic doesn't mean there aren't so, so, so many people who are a part of this story - whether they were writing a text editor, helping fund things, pushing paper, writing specs, selling network services, or getting zapped while trying to figure out how to move current. Each with their own story to be told. As we round the corner into the third season of the podcast we'll start having more guests. If you have a story and would like to join us, use the email button on thehistoryofcomputing.net to drop us a line. We'd love to chat!
7/12/2021 • 29 minutes, 45 seconds
The History of Plastics in Computing
Nearly everything is fine in moderation. Plastics exploded as an industry in the post-World War II boom of the 50s and on - but the story goes back far further. A plastic is a category of materials called a polymer. These are materials composed of long chains of molecules, which can easily be found in nature because cellulose, the material of the cellular walls of plants, comes in many forms. But while the word plastics comes from easily pliable materials, we don't usually think of plant-based products as plastics. Instead, we think of the synthetic polymers. Documented uses go back thousands of years, though, especially with early uses of natural rubbers, milk proteins, gums, and shellacs. But as we rounded the corner into the mid-1800s with the rise of chemistry, things picked up steam. That's when Charles Goodyear wanted to keep rubber goods from melting and cracking and so discovered vulcanization as a means to treat rubber. Vulcanization is when rubber is heated and mixed with other chemicals like sulphur. Then in 1869 John Wesley Hyatt looked for an alternative to natural ivory for things like billiard balls. He found that cotton fibers could be treated with camphor, which came from the waxy wood of camphor laurels, to produce a substance that could be shaped, dried, and finished like most anything nature produced. When Hyatt innovated plastics, most camphor was extracted from trees, but today most camphor is synthetically produced from petroleum-based products, further freeing humans from needing natural materials to produce goods. Not only could we skip killing elephants, but we could avoid chopping down forests to meet our needs for goods. Leo Baekeland gave us Bakelite in 1907. By then we were using other materials, and the hunt was on for all kinds of new ones. Shellac had been used as a moisture sealant for centuries and came from the female lac bugs in trees around India, but could also be used to insulate electrical components. 
Baekeland created a phenol and formaldehyde solution he called Novolak and, as with the advent of steel, realized that by changing the temperature and the amount of pressure applied to the solution he could make it harder and more moldable - thus Bakelite became the first fully synthetic polymer. Hermann Staudinger started doing more of the academic research to explain why these reactions were happening. In 1920, he wrote a paper that looked at rubber, starch, and other polymers, explaining how their long chains of molecular units were linked by covalent bonds - thus their high molecular weights. He would go on to collaborate with his wife Magda Woit, who was a botanist, and see his polymer theories proven. And so plastics went from experimentation to science. Scientists and experimenters alike continued to investigate uses, and by 1925 there was even a magazine called Plastics. Manufacturers could add filler to Bakelite and create colored plastics for all kinds of uses, and started molding jewelry, gears, and other trinkets. They could heat it to 300 degrees and then inject it into molds. And so plastic manufacturing was born. As with many of the things we interact with in our modern world, use grew through the decades, and other industries started to merge, evolve, and diverge. Éleuthère Irénée du Pont had worked with gunpowder in France and his family immigrated to the United States after the French Revolution. He'd worked with chemist Antoine Lavoisier while a student and started producing gunpowder in the early 1800s. That company, which evolved into the modern DuPont, always excelled in various materials sciences, and through the 1920s also focused on a number of polymers. One of their employees, Wallace Carothers, invented neoprene and so gave us our first superpolymer in 1928. He would go on to invent nylon as a synthetic form of silk in 1935. DuPont also brought us Teflon in 1938, as well as insecticides. 
Acrylic acid went back to the mid-1800s, but as people were experimenting with combining chemicals around the same time, we saw the British chemists John Crawford and Rowland Hill, and independently the German Otto Röhm, develop products based on polymethyl methacrylate. Here, they were creating clear, hard plastic to be used like glass. The Brits called theirs Perspex and the Germans called theirs Plexiglas when they went to market, with our friends back at DuPont creating yet another called Lucite. The period between World War I and World War II saw advancements in nearly every science - from mechanical computing to early electrical switching and, of course, plastics. The Great Depression saw a slowdown in the advancements, but World War II and some of the basic research happening around the world caused an explosion as governments dumped money into build-ups. That's when DuPont cranked out parachutes and tires and even got involved in building the Hanford plutonium plant as a part of the Manhattan Project. This took them away from things like nylon, which led to riots. We were clearly in the era of synthetics used in clothing. Leading up to the war and beyond, every supply chain of natural goods got constrained. And so synthetic replacements for these were being heavily researched and new uses were being discovered all over the place. Add in assembly lines and we were pumping out things to bring joy or improve lives at a constant clip. BASF had been making dyes since the 1860s, but chemicals are chemicals, and they had developed polystyrene in the 1930s and continued to grow and benefit from both licensing and developing other materials like Styropor insulating foam. Dow Chemical had been founded in the 1800s by Herbert Henry Dow, but became an important part of the supply chain for the growing synthetics businesses, working with Corning to produce silicones and producing styrene and magnesium for light parts for aircraft. 
They too would help in nuclear developments, managing the Rocky Flats plutonium triggers plant, and then came napalm, Agent Orange, breast implants, plastic bottles, and anything else we could mix chemicals with. Expanded polystyrene led to plastics in cups, packaging, and anything else. By the 60s we were fully in a synthetic world. A great quote from 1967's "The Graduate" was "I want to say one word to you. Just one word. Are you listening? Plastics." The future was here. And much of that future involved injection molding machines, now more and more common. Many a mainframe was encased in metal, but with hard plastics we could build the faceplates from plastic instead. The IBM mainframes had lots of blinking lights recessed into holes in plastic, with metal switches sticking out. Turns out people get shocked less when the whole thing isn't metal. The minicomputers were smaller, but by the time of the PDP-11 there were plastic toggles and a plastic front on the chassis. The Altair 8800 ended up looking a lot like that, but brought the technology to the hobbyist. By the time the personal computer started to go mainstream, the full case was made by injection molding. The things that went inside computers were increasingly plastic as well. Going back to the early days of mechanical computing, gears were made out of metal. But tubes were often mounted on circuits screwed to wooden boards. Albert Hanson had worked on foil conductors laminated to insulating boards going back to 1903, but Charles Ducas patented electroplating circuit patterns in 1927 and the Austrian Paul Eisler invented printed circuits for radio sets in the mid-1930s. John Sargrove then figured out he could spray metal onto plastic boards made of Bakelite in the late 1930s, and uses expanded to proximity fuzes in World War II, and then Motorola helped bring printed circuits into broader consumer electronics in the early 1950s. 
Printed circuit boards then moved to screen printing metallic paint onto various surfaces and Harry Rubinstein patented printing components, which helped pave the way for integrated circuits. Board lamination and etching were added to the process, and the conductive materials used in the creation might be etched copper, plated substrates or even silver inks as are used in RFID tags. We’ve learned over time to make things easier, and with more precise machinery we were able to build smaller and smaller boards, chips, and eventually 3d printed electronics - even the Circuit Scribe to draw circuits. Doug Engelbart’s first mouse was wood, but by the time Steve Jobs insisted they be mass producible, they’d gone plastic for Engelbart and then the Alto. Computer keyboards had evolved out of the Flexowriter and so became plastic as well. Even the springs that caused keys to bounce back up were eventually replaced with plastic and rubberized materials in different configurations. Plastics are great for insulating electronics: they are poor conductors of heat, they’re light, they’re easy to mold, they’re hardy, synthetics require less than 5% of the oil we use, and they’re recyclable. Silicone, another polymer, is a term coined by the English chemist F.S. Kipping in 1901. His academic work while at University College, Nottingham would kickstart the synthetic rubber and silicone lubricant industries. But that’s not silicon. That’s an element, and a tetravalent metalloid at that. Silicon was first identified in 1787 by Antoine Lavoisier. Yup, the same guy that taught Du Pont. While William Shockley started off with germanium and silicon when he was inventing the transistor, it was Jack Kilby and Robert Noyce who realized how well silicon worked as a semiconductor, and it ended up in what we now think of as the microchip. But again, that’s not a plastic… Plastic of course has its drawbacks. Especially since we don’t consume plastics in moderation. 
It takes 400 to a thousand years to decompose many plastics. The rampant use in every aspect of our lives has led to animals dying after eating plastic, or getting caught in islands of it, as plastic is all over the oceans and other waterways around the world. That’s 5 and a quarter trillion pieces of plastic in the ocean that weigh a combined 270,000 tons, with another 8 million pieces flowing in each and every day. In short, the overuse of plastics is hurting our environment. Or at least our inability to control our rampant consumerism is leading to their overuse. They do melt at low temperatures, which can be a good or a bad thing. When they do, they can release hazardous fumes like PCBs and dioxins. Many of the chemical compounds they rely on come from fossil fuels, so they are derived from non-renewable resources. But they’re affordable and represent a trillion dollar industry. And we can all do better at recycling - which of course requires energy, and those bonds break down over time so we can’t recycle forever. Oh, and the byproducts from the creation of products are downright toxic. We could argue that plastic is one of the most important discoveries in the history of humanity. That guy from The Graduate certainly would. We could argue it’s one of the worst. But we also just have to realize that our modern lives, and especially all those devices we carry around, wouldn’t be possible without plastics and other synthetic polymers. There’s a future where instead of running out to the store for certain items, we just 3d print them. Maybe we even make filament from printed materials we no longer need. The move to recyclable materials for packaging helps reduce the negative impacts of plastics. But so does just consuming less. Except devices. We obviously need the latest and greatest of each of those all the time! Here’s the thing: half of plastics are single-purpose. Much of it is packaging like containers and wrappers. 
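Those ocean figures invite a quick bit of arithmetic. Here's a back-of-the-envelope check, with the caveat that the episode doesn't specify metric versus short tons, so metric is an assumption:

```python
# Back-of-the-envelope check on the ocean plastic figures above.
# Assumes metric tons; the source doesn't say metric or short tons.
pieces = 5.25e12               # 5 and a quarter trillion pieces
tons = 270_000                 # combined weight in tons
grams = tons * 1_000_000       # one metric ton is a million grams

avg_grams_per_piece = grams / pieces
print(f"{avg_grams_per_piece:.3f} g per piece")
```

That works out to roughly a twentieth of a gram per piece, which fits the picture of oceans full of tiny fragments rather than intact bottles and wrappers.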
But can you imagine life without the 380 million tons of plastics the world produces a year? Just look around right now. I couldn’t tell you how many parts of this microphone, computer, and all the cables and adapters are made of it, or how many couldn’t be made of anything else. There was a world without plastics for thousands of years of human civilization. We’ll look at one of those single-purpose plastic-heavy industries, fast food, in an episode soon. But it’s not the plastics that are such a problem. It’s the wasteful rampant consumerism. When I take out my recycling I can’t help but think that what goes in the recycling versus compost versus garbage is as much a symbol of who I want to be as what I actually end up eating and relying on to live. And yet, I remain hopeful that these discoveries can actually end up bringing us back into harmony with the world around us without our turning Luddite and walking back all of these amazing developments like we see in science fiction dystopian futures.
7/5/2021 • 19 minutes, 21 seconds
The Laws And Court Cases That Shaped The Software Industry
The largest global power during the rise of intellectual property was England, so the world adopted her philosophies. The US had the same impact on software law. Most case law that shaped the software industry is based on copyright law. Our first real software laws appeared in the 1970s and we now have 50 years of jurisprudence to help guide us. This episode looks at the laws, Supreme Court cases, and some circuit appeals cases that shaped the software industry. -------- In our previous episode we went through a brief review of how the modern intellectual property laws came to be. Patent laws flowed from inventors in Venice in the 1400s, royals gave privileges to own a monopoly to inventors throughout the rest of Europe over the next couple of centuries, those transferred to panels and academies during and after the Age of Revolutions, and they slowly matured for each industry as technology progressed. Copyright laws formed similarly, although they were a little behind patent laws due to the fact that they weren’t really necessary until we got the printing press. But when it came to data on a device, we had a case in 1908, which we covered in the previous episode, that led Congress to enact the 1909 Copyright Act. Mechanical music boxes evolved into mechanical forms of data storage, and computing evolved from mechanical to digital. Following World War II there was an explosion in new technologies, with those in computing funded heavily by the US government. Or at least, until we got ourselves tangled up in a very unpopular asymmetrical war in Vietnam. The Mansfield Amendment of 1969 was a small bill in the 1970 Military Authorization Act that barred the US military from funding research that didn’t have a direct relationship to a specific military function. Money could still flow from ARPA into a program like the ARPAnet because we wanted to keep those missiles flying in case of nuclear war. 
But over time the impact was that a lot of those dollars the military had pumped into computing to help develop the underlying basic sciences behind things like radar and digital computing were about to dry up. This is a turning point: it was time to take the computing industry commercial. And that means lawyers. And so we got the first laws pertaining to software shortly after the software industry emerged from more and more custom requirements for these mainframes and then minicomputers and the growing collection of computer programmers. The Copyright Act of 1976 was the first major overhaul to the copyright laws since the 1909 Copyright Act. Since then, the US had become a true world power, and much as the rest of the world followed the British laws from the Statute of Anne in 1710 as a template for copyright protections, the world looked on as the US developed their laws. Many nations had joined the Berne Convention for international copyright protections, but the publishing industry had exploded. We had magazines, so many newspapers, so many book publishers. And we had this whole weird new thing to deal with: software. Congress didn’t explicitly protect software in the Copyright Act of 1976. But it did add punched cards and tape as mediums, and Congress knew this was an exploding new thing that would work itself out in the courts if they didn’t step in. And of course executives from the new software industry were asking their representatives to get in front of things rather than have the unpredictable courts adjudicate a weird copyright mess in places where technology meets copy protection. So in section 117, Congress appointed the National Commission on New Technological Uses of Copyrighted Works (CONTU) to provide a report about software and added a placeholder in the act that empaneled them. CONTU held hearings. They went beyond just software as there was another newish technology changing the world: photocopying. 
They presented their findings in 1978 and recommended we define a computer program as a set of statements or instructions to be used directly or indirectly in a computer in order to bring about a certain result. They also recommended that copies be allowed if required to use the program and that those be destroyed when the user no longer has rights to the software. This is important because this is an era where we could write software into memory or start installing compiled code onto a computer and then hand the media used to install it off to someone else. At the time the hobbyist industry was just about to evolve into the PC industry, but hard disks were years out for most of those machines. It was all about floppies. But up-market there was all kinds of storage and the writing was on the wall about what was about to come. Install software onto a computer, copy and sell the disk, move on. People would of course do that, but not legally. Companies could still sign away their copyright protections as part of a sales agreement but the right to copy was under the creator’s control. But things like End User License Agreements were still far away. Imagine how ludicrous the idea would have seemed in the 1970s that a piece of software going bad could put a company out of business. That would come as we needed to protect against liability and not just restrict the right to copy to those who, well, had the right to do so. Further, we hadn’t yet standardized on computer languages. And yet companies were building complicated logic to automate business and needed to be able to adapt works for other computers, and so Congress looked to provide that right at the direction of CONTU as well, if only to the company doing the customizations and without allowing the software to then be resold. These were all hashed out and put into law in 1980. And that’s an important moment, as suddenly the party who owned a copy was the rightful owner of a piece of software. 
Many of the provisions read as though we were dealing with booksellers selling a copy of a book, not dealing with the intricate details of the technology, but with technology those details can change so quickly, and those who make laws aren’t exactly technologists, so that’s to be expected. Source code versus compiled code also got tested. In 1982 Williams Electronics v Artic International explored a video game that was in a ROM (which is how games were distributed before disks and cassette tapes). Here, the Third Circuit weighed in on whether, given that the ROM was built into the machine, it could be copied because it was utilitarian and therefore not covered under copyright. The source code was protected, but what about what amounts to compiled code sitting on the ROM? They of course found that it was indeed protected. They again weighed in on Apple v Franklin in 1983. Here, Franklin Computer was cloning Apple computers and claimed it couldn’t clone the computer without copying what was in the ROMs, which at the time was a rudimentary version of what we think of as an operating system today. Franklin claimed the OS was in fact a process or method of operation and Apple claimed it was novel. Part of what was at issue was Applesoft, which shipped as object code rather than source, but it was still a program and thus still protected. One and two years later respectively, we got Mac OS 1 and Windows 1. 1986 saw Whelan Associates v Jaslow. Here, Elaine Whelan created a management system for a dental lab on the IBM Series/1, in EDL. That was a minicomputer, and when the personal computer came along she sued Jaslow because he took a BASIC version to market for the PC. He argued it was a different language and the set of commands was therefore different. But the programs looked structurally similar. 
She won, as while some literal elements were the same, “the copyrights of computer programs can be infringed even absent copying of the literal elements of the program.” It’s simple to identify literal copying of software code when it’s done verbatim, but difficult to identify non-literal copyright infringement. But this was all professional software. What about those silly video games all the kids wanted? Well, Atari applied for a copyright for one of their games, Breakout. Here, the Register of Copyrights, Ralph Oman, chose not to register the copyright. And so Atari sued, winning on appeal. But the court found that “copyrights do not protect ideas – only expressions of ideas.” Many found fault with the Whelan decision - there had certainly been other dental management packages on the market at the time - and the Second Circuit heard Computer Associates v Altai in 1992. Here, the court applied a three-step test of Abstraction-Filtration-Comparison to determine how similar products were and held that Altai's rewritten code did not meet the necessary requirements for copyright infringement. There were other types of litigation surrounding the emerging digital sphere at the time as well. The Computer Fraud and Abuse Act came along in 1986 and would be amended in 1989, 1994, 1996, and 2001. Here, a number of criminal offenses were defined - not copyright, but they have come up to criminalize activities that should have otherwise been copyright cases. And as a rental market came to be (much as happened with VHS tapes), the Copyright Act of 1976 along with the CONTU findings were amended, with Congress establishing provisions to cover software rental in 1990. Keep in mind that time sharing was just ending by then, but we could rent video games over dial-up and of course VHS rentals were huge at the time. 
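The Abstraction-Filtration-Comparison test is, at heart, a three-step procedure, so it can be sketched loosely as code. To be clear, this is an illustrative analogy rather than anything from the opinions themselves: the function names, the abstraction levels, and the toy data are all invented for the example.

```python
# Loose sketch of the Abstraction-Filtration-Comparison test from
# Computer Associates v Altai. Illustrative analogy only: the
# function names, abstraction levels, and toy data are invented.

def abstraction(program):
    # Step 1: break the program into levels of abstraction,
    # from its general purpose down to the literal code.
    return {
        "purpose": program["purpose"],
        "architecture": program["architecture"],
        "data_structures": program["data_structures"],
        "literal_code": program["literal_code"],
    }

def filtration(levels):
    # Step 2: filter out unprotectable elements - here, just the
    # idea itself, since "copyrights do not protect ideas".
    unprotectable = {"purpose"}
    return {k: v for k, v in levels.items() if k not in unprotectable}

def comparison(protected, accused):
    # Step 3: compare what remains against the accused work.
    return [k for k, v in protected.items() if accused.get(k) == v]

original = {
    "purpose": "dental lab management",
    "architecture": "modules A, B, C in sequence",
    "data_structures": "patient record layout",
    "literal_code": "10 PRINT ...",
}
# A rewrite with the same structure but no literal copying:
rewrite = dict(original, literal_code="completely rewritten")

overlap = comparison(filtration(abstraction(original)), rewrite)
print(overlap)  # the non-literal elements that still match
```

The point of the sketch is the ordering: the idea gets filtered out before any comparison happens, so only protectable expression can count toward infringement.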
Here’s a fun one: Atari infringed on Nintendo’s copyright by claiming they were a defendant in a case and applying to the Copyright Office to get a copy of the 10NES program so they could, well, actually infringe on the copyright. They tried to claim they couldn’t infringe because they couldn’t make games unless they reverse engineered the systems. Atari lost that one. But Accolade won a similar one against Sega soon thereafter, because reverse engineering so that more games could play on a Sega was fair use. Sony tried to sue Connectix in a similar case, where the PlayStation console was emulated using a BIOS Connectix had reverse engineered. And again, that was reverse engineering for the sake of fair use of a PlayStation people paid for. Kinda’ like jailbreaking an iPhone, right? Yup, apps that help jailbreak, like Cydia, are legal on an iPhone. But Apple moves the cheese so much in terms of what’s required to make it work that it’s often a bigger pain to jailbreak than it’s worth. Much better than suing everyone. Laws are created and then refined in the courts. MAI Systems Corp. v. Peak Computer made it to the Ninth Circuit Court of Appeals in 1993. This involved Eric Francis leaving MAI and joining Peak. He then loaded MAI’s diagnostics tools onto computers. MAI thought they should have a license per computer, yet Peak used the same disk in multiple computers. The crucial change here was that the copy made, while ephemeral, was decided to be a copy of the software and so violated the copyright. We said we’d bring up that EULA though. In 1996, the Seventh Circuit found in ProCD v Zeidenberg that the license was not preempted by copyright law, thus allowing companies to use either copyright law or a license when seeking damages and giving lawyers yet another reason to answer any and all questions with “it depends.” One thing was certain, the digital world was coming fast in those Clinton years. I mean, the White House would have a Gopher page and Yahoo! would be on display at his second inauguration. 
So in 1998 we got the Digital Millennium Copyright Act (DMCA). Here, Congress added to Section 117 to allow for software copies if the software was required for maintenance of a computer. And yet software was still just a set of statements, like instructions in a book, that led the computer to a given result. The DMCA also had provisions providing safe harbor treatment to content hosts and e-commerce providers. It implemented two international treaties and provided remedies against circumvention of copy-prevention systems, since by then cracking was becoming a bigger thing. There was more packed in here. We got MAI Systems v Peak Computer reversed by law, refinement to how the Copyright Office works, modernizing of audio and movie rights, and provisions to facilitate distance education. And of course the DMCA protected boat hull designs because, you know, might as well cram some stuff into a digital copyright act. In addition to the cases we covered earlier, we had Mazer v Stein, Dymow v Bolton, and even Computer Associates v Altai, which cemented the AFC method as the means by which most courts determine copyright protection as it extends to non-literal components such as dialogue and images. Time and time again, courts have weighed in on what fair use is because the boundaries are constantly shifting, in part due to technology, but also in part due to shifting business models. One of those shifting business models was ripping songs and movies. RealDVD got sued by the MPAA for allowing people to rip DVDs. YouTube would later get sued by Viacom, but courts found no punitive damages could be awarded. Still, many online portals started to scan for and filter out works they knew were copy protected, especially given the rise of machine learning to aid in the process. But those were big, major companies at the time. IO Group, Inc sued Veoh for uploaded video content and the judge found Veoh was protected by safe harbor. 
Safe harbor mostly refers to the Online Copyright Infringement Liability Limitation Act, or OCILLA for short, which shields online portals and internet service providers from copyright infringement claims. This is separate from Section 230, which protects those same organizations from being sued for third-party content uploaded to their sites. That’s the law Trump wanted overturned during his final year in office, but given that the EU has Directive 2000/31/EC, Australia has the Defamation Act of 2005, Italy has the Electronic Commerce Directive 2000, and lots of other countries like England and Germany have had courts find similarly, it is now part of being an Internet company. Although the future of “big tech” cases (and the damage many claim is being done to democracy) may find it refined or limited. In 2016, Cisco sued Arista for allegedly copying the command line interfaces to manage switches. Cisco lost, but had claimed more than $300 million in damages. Here, the existing Cisco command structure allowed Arista to recruit seasoned Cisco administrators to the cause. Cisco had done the mental modeling to evolve those commands for decades and it seemed like those commands would have been their intellectual property. But Arista hadn’t copied the code. Then in 2017, in ZeniMax vs Oculus, ZeniMax won a half billion dollar case against Oculus for copying their software architecture. And we continue to struggle with what copyright means as far as code goes. Just in 2021, the Supreme Court ruled in Google v Oracle America that using application programming interfaces (APIs), including representative source code, can be transformative and fall within fair use, though it did not rule on whether such APIs are copyrightable. I’m sure the CP/M team, who once practically owned the operating system market, would have something to say about that after Microsoft swooped in and recreated much of the work they had done. But that’s for another episode. And traditional media cases continue. 
ABS Entertainment vs CBS looked at whether digitally remastering works extended copyright. BMG vs Cox Communications challenged peer-to-peer file-sharing in safe harbor cases (not to mention the whole Napster testifying before Congress thing). You certainly can’t resell mp3 files the way you could drop off a few dozen CDs at Tower Records, right? Capitol Records vs ReDigi said nope. Perfect 10 v Amazon, Goldman v Breitbart, and so many more cases continued to narrow down who and how audio, images, text, and other works could have the right to copy restricted by creators. But sometimes it’s confusing. Dr. Seuss vs ComicMix asked whether merging Star Trek and “Oh, the Places You’ll Go!” was transformative enough to qualify as fair use - the Ninth Circuit ultimately found it wasn’t. Sometimes I find conflicting lines in opinions. Speaking of conflict… Is the government immune from copyright? Allen v Cooper, Governor of North Carolina made it to the Supreme Court. Now, this was a shipwreck case but extended to digital works, and the Supreme Court seemed to begrudgingly find for the state on sovereign immunity grounds, looking to a law as remedy rather than awarding damages. In other words, the “digital Blackbeards” of a state could pirate software at will. Guess I won’t be writing any software for the state of North Carolina any time soon! But what about content created by a state? Well, the state of Georgia makes various works available behind a paywall. That paywall might be run by a third party in exchange for a cut of the proceeds. So Public.Resource goes after anything where the edict of a government isn’t public domain. In other words, court decisions, laws, and statutes should be free to all who wish to access them. The “government edicts doctrine” won in the end, and so access to the laws of the nation continues to be free. What about algorithms? That’s more patent territory; they’re only rarely protectable by copyright. Gottschalk v. 
Benson was denied a patent for a new way to convert binary-coded decimals to binary numerals, while Diamond v Diehr saw that a process using an algorithm to run a rubber molding machine was patentable. And companies like Intel and Broadcom hold thousands of patents for microcode for chips. What about the emergence of open source software and the laws surrounding social coding? We’ll get to the emergence of open source and the consequences in future episodes! One final note: most have never heard of the names in early cases. Most have heard of the organizations listed in later cases. Settling issues in the courts has gotten really, really expensive. And it doesn’t always go the way we want. So these days, whether it’s Apple v Samsung or other tech giants, the law seems to be reserved for those who can pay for it. Sure, there are the Erin Brockovich cases of the world. And lady justice is still blind. We can still represent ourselves, and cases and notes are free. But money can win cases by having attorneys with deep knowledge (which doesn’t come cheap). And these cases drag on for years, and given that the startup assembly line often halts with pending legal actions, not many can withstand the latency incurred. This isn’t a “big tech is evil” comment as much as an “I see it and don’t know a better rubric but it’s still a thing” kinda’ comment. Here’s something we’d love to have a listener take away from this episode. Technology is always changing. Laws usually lag behind technology change as (like us) they’re reactive to innovation. When those changes come, there is opportunity. Not only has the technological advancement gotten substantial enough to warrant lawmaker time, but the changes often create new gaps in markets that new entrants can leverage. Either leaders in markets adapt quickly or see those upstarts swoop in, having no technical debt and being able to pivot faster than those who previously might have enjoyed a first mover advantage. 
What laws are out there being hashed out, just waiting to disrupt some part of the software market today?
6/13/2021 • 28 minutes, 56 seconds
Origins of the Modern Patent And Copyright Systems
Once upon a time, the right to copy text wasn’t really necessary. If one had a book, one could copy the contents by hiring scribes to labor away at the process, and books were expensive. Then came the printing press. Now, the printer of a work could put a book out and another printer could set their press up to reproduce the same text. More people learned to read and information flowed from the presses at the fastest pace in history. The printing press spread from Gutenberg’s workshop in the 1440s throughout Germany and then to the rest of Europe, appearing in England when William Caxton built the first press there in 1476. It was a time of great change, causing England to retreat into protectionism, and Henry VIII tried to restrict what could be printed in the 1500s. But Parliament needed to legislate further. England was first to establish copyright when Parliament passed the Licensing of the Press Act in 1662, which regulated what could be printed. This was more to prevent printing scandalous materials and basically gave a monopoly to The Stationers’ Company to register, print, copy, and publish books. They could enter another printer’s shop and destroy their presses. That went on for a few decades until the act was allowed to lapse in 1694, but it began the 350-year journey of refining what copyright and censorship mean to a modern society. The next big step came in England when the Statute of Anne was passed in 1710. It was named for the last reigning Queen of the House of Stuart. While previously a publisher could appeal to have a work censored by others because the publisher had created it, this statute took a page out of the patent laws and granted a right of protection against copying a work for 14 years. Reading through the law and further amendments it is clear that lawmakers were thinking far more deeply about the balance between protecting the license holder of a work and how to get more books to more people. 
They’d clearly become less protectionist and more concerned about a literate society. There are examples in history of granting exclusive rights to an invention from the Greeks to the Romans to Papal Bulls. These granted land titles, various rights, or a status to people. Edward the Confessor started the process of establishing the Close Rolls in England in the 1050s, where a central copy of all those grants was kept. But they could also be used to grant a monopoly, with the first that’s been found being granted by Edward III to John Kempe of Flanders as a means of helping the cloth industry in England to flourish. Still, this wasn’t exactly an exclusive right but instead a right to immigrate. And the letters were personal, and so letters patent evolved into royal grants, which Queen Elizabeth was providing in the late 1500s. That emerged out of the need for patent laws proven by the Venetians in the late 1400s, when they started granting exclusive rights by law to inventions for 10 years. King Henry II of France established a royal patent system in France, and over time the French Academy of Sciences was put in charge of patent right review. English law evolved, and perpetual patents granted by monarchs were stifling progress. Monarchs might grant patents to raise money and so allow a specific industry to turn into a monopoly to raise funds for the royal family. James I was forced to revoke the previous patents, but a system was needed. And so the patent system was more formalized, and those for inventions got limited to 14 years when the Statute of Monopolies was passed in England in 1624. The evolution over the next few decades is when we started seeing drawings added to patent requests and sometimes even required. We saw forks in industries, and so the addition of medical patents, and an explosion in various types of patents requested. They weren’t just in England. The mid-1600s saw the British Colonies issuing their own patents. 
Patent law was evolving outside of England as well. The French system was becoming larger with more discoveries. By 1729 there were digests of patents being printed in Paris, and we still keep open listings of patents so they’re easily proven in court. Given the maturation of the Age of Enlightenment, that clashed with the financial protectionism of patent laws, and intellectual property as a concept emerged but borrowed from the patent institutions, bringing us right back to the Statute of Anne, which established the modern copyright system. That and the Statute of Monopolies are where the British Empire established the modern copyright and patent systems respectively, which we use globally today. Apparently they were worth keeping throughout the Age of Revolution, mostly probably because they’d long been removed from monarchal control and handed to various public institutions. The American Revolution came and went. The French Revolution came and went. The Latin American wars of independence, revolutions throughout the 1820s, the end of feudalism, Napoleon. But the wars settled down and a world order of sorts came during the late 1800s. One aspect of that world order was the Berne Convention, which was signed in 1886. This established the bilateral recognition of copyrights among sovereign nations that signed onto the treaty, rather than having various nations enter into pacts between one another. Now, the right to copy works was automatically in force at creation, so authors no longer had to register their mark in Berne Convention countries. Following the Age of Revolutions, there was also an explosion of inventions around the world. Some ended up putting copyrighted materials onto reproducible forms. Early data storage. Previously we could copyright sheet music, but the introduction of the player piano led to the need to determine the copyrightability of piano rolls in White-Smith Music v. Apollo in 1908. 
Here we saw the US Supreme Court find that these were not copies as interpreted in the US Copyright Act because only a machine could read them, and they basically told Congress to change the law. So Congress did. The Copyright Act of 1909 then specified that even if only a machine can use information that’s protected by copyright, the copyright protection remains. And so things sat for a hot minute as we got first mechanical computing, which was patentable under the old rules, and then electronic computing, which was also patentable. Jacquard patented his punch cards in 1801. But by the time Babbage and Lovelace used them in Babbage’s engines, that patent had expired. And the first digital computer to get a patent was the Eckert-Mauchly ENIAC, whose patent was filed in 1947, granted in 1964, and, because there was prior unpatented work, overturned in 1973. Dynamic RAM was patented in 1968. But these were for physical inventions. Software took a little longer to become a legitimate legal quandary. The time it took to reproduce punch cards and the lack of really mass produced software meant it didn’t become an issue until after the advent of transistorized computers with Whirlwind, the DEC PDP, and the IBM S/360. Inventions didn’t need a lot of protections when they were complicated and it took years to build one. I doubt the inventor of the Antikythera Device in Ancient Greece thought to protect their intellectual property; they’d have likely been delighted if anyone else in the world had thought to, or been capable of, creating what they created. Over time, the capabilities of others rise and our intellectual property becomes more valuable because progress moves faster with each generation. Those Venetians saw how technology and automation were changing the world and allowed the protection of inventions to provide a financial incentive to invent. 
Licensing the commercialization of inventions then allows us to begin the slow process of putting ideas on a commercialization assembly line. Books didn’t need copyright until they could be mass produced and became commercially viable. A writer writes, or creates intellectual property, and a publisher prints and distributes. Thus we put the commercialization of literature and thoughts and ideas on an assembly line, and we began doing so far before the Industrial Revolution. Once there were more inventions, and some became capable of mass producing the registered intellectual property of others, we saw a clash between copyrights and patents. And so we got the Copyright Act of 1909. But with digital computers we suddenly had software emerging as an entire industry. IBM had customized software for customers for decades, but computer languages like FORTRAN and mass storage devices that could be moved between machines allowed software to move between computers - and sometimes entire segments of business logic moved between companies based on that software. By the 1960s, companies were marketing computer programs as a cottage industry. The first computer program was deposited at the US Copyright Office in 1961. It was a simple thing: a tape with a computer program that had been filed by North American Aviation. Imagine the examiners looking at it with their heads cocked to the side a bit. “What do we do with this?” They hadn’t even figured it out when they got three more from General Dynamics, and two more programs showed up from a student at Columbia Law. A punched tape held a bunch of punched cards. A magnetic tape just held more punched tape that went faster. This was pretty much what those piano rolls from the 1909 law had on them. Registration was added for all of them in 1964. And thus software copyright was born. But of course it wasn’t just a metallic roll that had impressions for when a player piano struck a hammer. 
If someone found a roll on the ground, they could put it into another piano and hit play. But the likelihood that they could reproduce the piano roll was low. The ability to reproduce punch cards had been there. But while it likely didn’t take the same amount of time it took to reproduce a copy of Plato’s Republic before the advent of the printing press, the occurrences weren’t frequent enough to create a real need for adjudication. That changed with high speed punch devices and then the ability to copy magnetic tape. Contracts (which we might think of as EULAs today, in a way) provided a license for a company to use software, but new questions were starting to form around who was bound to the contract and how protection extended based on a number of factors. Thus the LA, or License Agreement, part of EULA - rather than just a contract when buying a piece of software. And this brings us to the forming of the modern software legal system. That’s almost a longer story than the written history we have of early intellectual property law, so we’ll pick that up in the next episode of the podcast!
6/7/2021 • 17 minutes, 3 seconds
A History Of Text Messages In A Few More Than 160 Characters
Texts are sent and received using SMS, or Short Message Service. Due to the limited bandwidth available on second generation networks, messages were initially limited to 160 characters. You know the 140 character max from Twitter? We are so glad you chose to join us on this journey, where we weave our way from the topmasts of the 1800s to the skinny jeans of San Francisco with Twitter. What we want you to think about through this episode is the fact that this technology has changed our lives. Before texting we had answering machines, we wrote letters, we sent emails - but we didn’t have an expectation of immediate response. Maybe someone got back to us the next day, maybe not. But now we rely on texting to coordinate gatherings, pick up the kids, get a pin on a map, provide technical support, send links, send memes, and convey feelings in ways that we didn’t when writing letters. I mean, including an animated gif in a letter meant melty peanut butter. Wait, that’s Jif. Sorry. And few technologies have sprung into our everyday use so quickly in the history of technology. It took generations, if not 1,500 years, for bronze working to migrate out of the Vinča culture and bring an end to the Stone Age. It took a few generations, if not a couple of hundred years, for electricity to spread throughout the world. The rise of computing took a few generations to spread from first mechanical, then digital, then personal, and now ubiquitous computing. And we’re still struggling to come to terms with job displacement and the productivity gains that have shifted humanity more rapidly than at any other time, including the collapse of the Bronze Age. But the rise of cellular phones, and then the digitization of them, combined with globalization, has put instantaneous communication in the hands of everyday people around the world. We’ve decreased our reliance on paper and on transporting paper, and moved more rapidly into a digital, even post-PC era. 
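That 160-character cap isn’t arbitrary, by the way: an SMS payload is limited to 140 bytes, and the GSM 7-bit default alphabet packs each character into 7 bits instead of 8. The arithmetic works out like this:

```python
# An SMS payload is capped at 140 octets (bytes).
MAX_OCTETS = 140

# The GSM 7-bit default alphabet spends 7 bits per character, so
# 140 bytes * 8 bits = 1120 bits, and 1120 / 7 = 160 characters.
gsm7_chars = MAX_OCTETS * 8 // 7

# Scripts outside that alphabet fall back to UCS-2 at 16 bits each,
# which is why non-Latin texts are capped at 70 characters instead.
ucs2_chars = MAX_OCTETS * 8 // 16

print(gsm7_chars, ucs2_chars)  # 160 70
```

Same payload, two alphabets - which is why a single emoji can silently cut your message limit by more than half.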
And we’re still struggling to figure out what some of this means. But did it happen as quickly as we think? Let’s look at how we got here. Bell Telephone introduced the push button phone in 1963 to replace the rotary dial telephone that had been invented in 1891 and become a standard. And it was only a matter of time before we’d find a way to associate letters with it. Once we could send bits over devices instead of just opening up a voice channel, it was only a matter of time before we’d start sending data as well. Some of those early bits we sent were things like typing our social security number or some other identifier for early forms of call routing. Heck, the fax machine was invented all the way back in 1843 by a Scottish inventor called Alexander Bain. So given that we were sending different types of data over permanent and leased lines, it was only a matter of time before we started doing so over cell phones. The first cellular networks were analog, in what we now think of as first generation, or 1G. GSM, or Global System for Mobile Communications, is a standard that came out of the European Telecommunications Standards Institute and started getting deployed in 1991. That became what we now think of as 2G and paved the way for new types of technologies to get rolled out. The first text message simply said “Merry Christmas” and was sent on December 3rd, 1992. It was sent to Richard Jarvis at Vodafone by Neil Papworth. As with a lot of technology, it was actually thought up eight years earlier, by Bernard Ghillebaert and Friedhelm Hillebrand. From there, the use cases moved to simply alerting devices of various statuses, like when there was a voicemail. These days we mostly use push notification services for that. To support using SMS for that, carriers started building out SMS gateways, and by 1993 Nokia was the first cell phone maker to actually support end users sending text messages. Texting was expensive at first, but adoption slowly increased. 
We could text in the US by 1995, but cell phone subscribers were sending fewer than 6 texts a year on average. But as networks grew and costs came down, adoption increased to a little over one a day by the year 2000. Another reason adoption was slow was that using multi-tap to send a message sucked. Multi-tap was where we had to use the 10-key pad on a device to type out messages. You know, ABC are on the 2 key, so the first time you tap 2 it’s the number, the next time it’s an A, the next a B, the next a C. And the 3 key is D, E, and F. The 4 is G, H, and I and the 5 is J, K, and L. The 6 is M, N, and O and the 7 is P, Q, R, and S. The 8 is T, U, and V and the 9 is W, X, Y, and Z. This layout goes back to old Bell phones that had those letters printed under the numbers. That way, if we needed to call 1-800-PODCAST, we could map which letters went to what. A small company called Research in Motion introduced an Inter@ctive Pager in 1996 to do two-way paging. Paging services went back decades. My first was a SkyTel, which has its roots in Mississippi, where John N Palmer bought a 300-person paging company using an old-school radio paging service. The FCC license he picked up grew through more acquisitions across Alabama, Louisiana, and New York, going national with 30,000 subscribers in 1989 and over 200,000 less than four years later. A market validated, RIM introduced the BlackBerry on the DataTAC network in 2002, expanding from just text to email, mobile phone services, faxing, and now web browsing. We got the Treo the same year. But that now-iconic BlackBerry keyboard wasn’t an entirely new idea: Nokia was the first cellular device maker to put a full keyboard on a device, with the Nokia 9000i Communicator in 1997. By now, more and more people were thinking about what the future of mobility would look like. The 3rd Generation Partnership Project, or 3GPP, was formed in 1998 to dig into next generation networks. 
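That keypad layout is easy to sketch in code. Here’s a toy version (the function names are ours, purely for illustration) that spells a word as multi-tap key presses and maps a vanity number like 1-800-PODCAST to its digits:

```python
# The classic phone keypad letter layout described above.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}

def multitap(text: str) -> str:
    """Spell a word as multi-tap key presses: one press per
    letter position, so A -> '2', B -> '22', C -> '222'."""
    presses = []
    for ch in text.upper():
        for key, letters in KEYPAD.items():
            if ch in letters:
                presses.append(key * (letters.index(ch) + 1))
    return " ".join(presses)

def vanity_to_digits(number: str) -> str:
    """Map a vanity number's letters to their keypad digits;
    digits and dashes pass through unchanged."""
    out = []
    for ch in number.upper():
        for key, letters in KEYPAD.items():
            if ch in letters:
                out.append(key)
                break
        else:
            out.append(ch)
    return "".join(out)

print(vanity_to_digits("1-800-PODCAST"))  # 1-800-7632278
print(multitap("CAB"))                    # 222 2 22
```

Eight taps to spell “CAB” - you can see why predictive text (T9) and full keyboards were such a relief.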
They began as an initiative at Nortel and AT&T but grew to include NTT DoCoMo, British Telecom, BellSouth, Ericsson, Telenor, Telecom Italia, and France Telecom - a truly global footprint. With a standards body in place, we could move faster, and they began planning the roadmap for 3G and beyond (at this point we’re on 5G). Faster data transfer rates let us do more. We weren’t just sending texts any more. MMS, or Multimedia Messaging Service, was then introduced, and use grew to hundreds of millions and then billions of photos, sent encoded using technology like what we do with MIME for multimedia content on websites. At this point, people were paying a fee for every x number of messages and every MMS. Phones had cameras now, so in a pre-Instagram world this was how we shared photos. Granted, they were blurry by modern standards, but progress. Devices became more and more connected as data plans expanded, eventually often becoming unlimited. But SMS was still slow to evolve in a number of ways. For example, group chat was not really much of a thing. That is, until 2006, when a little company called Twitter came along to make it easy for people to post a message to their friends. Initially it worked over text message, until they moved to an app. And texting was used by some apps to let users know there was data waiting for them. Until it wasn’t. Twilio was founded in 2008 to make it easy for developers to add texting to their software. Now every possible form of text integration was as simple as importing a framework. Apple introduced the Apple Push Notification service, or APNs, in 2009. By then, devices were always connected to the Internet, and the send and receive polling for email and other apps that was fine on desktops was destroying battery life. APNs allowed developers to build apps that only established a communication channel when they had data. 
Initially push notifications were limited to 256 bytes, but due to their popularity and different implementation needs, notifications could grow to 2 kilobytes in 2014, and APNs moved to an HTTP/2 interface with a 4-kilobyte payload in 2015. This is important because it paved the way for iChat, now called iMessage or just Messages - and then other similar services for various platforms - that moved instant messaging off SMS and over to the vendor who builds the device, rather than using SMS or MMS messaging. Facebook Messenger came along in 2011, and now the kids use Instagram messaging, Snapchat, Signal, or any number of other messaging apps. Or they just text. It’s one of a billion communications tools that also include Discord, Slack, Teams, LinkedIn, or even the in-game options in many a game. Kinda’ makes restricting communications - and spam - a bit of a challenge at this point. My kid finishes track practice early. She can just text me. My dad can’t make it to dinner. He can just text me. And of course I can get spam through texts. And everyone can message me on one of about 10 other apps on my phone. And email. On any given day I receive upwards of 300 messages, so sometimes it seems like I could just sit and respond to messages all day, every day, and still never be caught up. And get this - we’re better for it all. We’re more productive, we’re more well connected, and we’re more organized. Sure, we need to get better at having more meaningful interactions when we’re together in person. We need to figure out what a smaller, closer knit group of friends is like and how to be better at being there for them, rather than just sending a sad face in a thread where they’re indicating their pain. But there’s always a transition where we figure out how to embrace these advances in technology. There are always opportunities in the advancements, and there are always new evolutions built atop previous evolutions. The rate of change is increasing. The reach of change is increasing. 
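The payload APNs carries is just JSON: the `aps` dictionary holds the keys the service itself understands, and the size caps above apply to the whole encoded payload. A minimal sketch (the custom key and the alert text are made up for illustration):

```python
import json

# A minimal APNs-style notification payload. The "aps" dictionary is
# what the push service interprets; other top-level keys are custom
# data passed through to the receiving app.
payload = {
    "aps": {
        "alert": {"title": "Voicemail", "body": "You have 1 new message"},
        "badge": 1,
        "sound": "default",
    },
    "conversation-id": "demo-123",  # hypothetical custom key
}

encoded = json.dumps(payload).encode("utf-8")

# The original limit was 256 bytes; the HTTP/2 API allows up to 4 KB.
print(len(encoded), "bytes")
```

Even a payload with a title, body, badge, and sound fits comfortably inside the original 256-byte cap - it was richer notifications and custom app data that pushed the limits up.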
And the speed at which changes propagate is unparalleled today. Some will rebel against changes, seeking solace in older ways. It’s always been like that - the Amish can often be seen on a buggy pulled by a horse, so a television or a phone capable of texting would certainly be out of the question. Others embrace technology faster than some of us are ready for. Like when I realized some people had moved away from talking on phones and were pretty exclusively texting. Spectrums. I can still remember picking up the phone and hearing a neighbor on with a friend. Party lines were still a thing in Dahlonega, Georgia when I was a kid. I can remember the first dedicated line and getting in trouble for running up a big long distance bill. I can remember getting our first answering machine and changing the messages on it to be funny. Most of that was technology that moved down market but had been around for a long time. The rise of messaging on the cell phone and then the smartphone, though - that was a turning point that started going to market in 1993 and within 20 years truly revolutionized human communication. How can we get messages faster than instant? Who knows, but I look forward to finding out.
5/16/2021 • 16 minutes, 9 seconds
Project Xanadu
Java, Ruby, PHP, Go. These are languages behind web applications that dynamically generate content, which is then interpreted as a file by a web browser. That file is rarely static these days, and the power of the web is that an app or browser can reach out and obtain some data, get back some XML or JSON or YAML, and provide an experience to a computer, mobile device, or even embedded system. The web is arguably the most powerful, transformational technology in the history of technology. But the story of the web begins in philosophies that far predate its inception. It goes back to a file, which we can think of as a document, on a computer that another computer reaches out to and interprets. A file comprised of hypertext. Ted Nelson coined the term hypertext. Plenty of others put the concept of linking objects into the mainstream of computing. But he coined the term that he’s barely connected to in the minds of many. Why is that? Tim Berners-Lee invented the World Wide Web in 1989. Elizabeth Feinler developed a registry of names that would evolve into DNS, so we could find computers online and access web sites without typing in impossible-to-remember numbers. Vint Cerf and Bob Kahn were instrumental in the Internet Protocol, which allowed all those computers to be connected together, providing the schemes for those numbers. Some will know these names; most will not. But a name that probably doesn’t come up enough is Ted Nelson. His tale is one of brilliance in the early days of computing, the spread of BASIC, and an urge to do more. It’s a tale of the hacker ethic. And yet, it’s also a tale of irreverence - to be used as a warning for those with aspirations to be remembered for something great. Or is it? Steve Jobs famously said “real artists ship.” Ted Nelson did ship. Until he didn’t. Let’s go all the way back to 1960, when he started Project Xanadu. Actually, let’s go a little further back first. 
Nelson was born to TV director Ralph Nelson and Celeste Holm, who won an Academy Award for her role in Gentleman’s Agreement in 1947, took home another pair of nominations over her career, and was the original Ado Annie in Oklahoma! His dad worked on The Twilight Zone - so of course he majored in philosophy at Swarthmore College and then went off to the University of Chicago and then Harvard for graduate school, taking a stab at film after he graduated. But he was meant for an industry that didn’t exist yet and would some day eclipse the film industry: software. While in school he got exposed to computers and started to think about this idea of a repository of all the world’s knowledge. And it’s easy to imagine a group of computing aficionados sitting in a drum circle, smoking whatever they were smoking, and having their minds blown by that very concept. And yet, it’s hard to imagine anyone in that context doing much more. And yet he did. Nelson created Project Xanadu in 1960. As we’ll cover, he did a lot of projects during the remainder of his career. The journey is what is so important, even if we never get to the destination. Because sometimes we influence the people who get there. And the history of technology is as much about failed or incomplete evolutions as it is about those that become ubiquitous. It began with a project while he was enrolled in Harvard grad school. Other word processors were at the dawn of their existence. But he began thinking through and influencing how they would handle information storage and retrieval. Xanadu was supposed to be a computer network that connected humans to one another. It was supposed to be simple and a scheme for world-wide electronic publishing. Unlike the web, which would come nearly three decades later, its links were supposed to be bidirectional, with broken links self-repairing, much as nodes on the ARPAnet did. His initial proposal was a program in machine language that could store and display documents. 
Being before the advent of Markdown, ePub, XML, PDF, RTF, or any of the other common open formats we use today, it was rudimentary and would evolve over time. Keep in mind, it was for documents - and as Nelson would say later, the web, which began as a document tool, was a fork of the project. The term Xanadu was borrowed from Samuel Taylor Coleridge’s Kubla Khan, itself written after some opium fueled dreams about a garden in Kublai Khan’s Shangdu, or Xanadu. In his biography, Coleridge explained the rivers in the poem supply “a natural connection to the parts and unity to the whole” and he described a “stream, traced from its source in the hills among the yellow-red moss and conical glass-shaped tufts of bent, to the first break or fall, where its drops become audible, and it begins to form a channel.” Connecting all the things was the goal, and so Xanadu was the name. He gave a talk and presented a paper called “A File Structure for the Complex, the Changing and the Indeterminate” at the Association for Computing Machinery in 1965 that laid out his vision. This was the dawn of interactivity in computing. Digital Equipment had launched just a few years earlier and brought the PDP-8 to market that same year. The smell of change was in the air and Nelson was right there. After that, he started to see all these developments around the world. He worked on a project at Brown University to develop a word processor with many of his ideas in it. But the output of that project, as with most word processors since, was to get things printed. He believed content was meant to be created and to live its entire lifecycle in digital form. This would provide perfect forward and reverse citations, text enrichment, and change management. And maybe, if we all stand on the shoulders of giants, it would allow us to avoid rewriting or paraphrasing the works of others to include them in our own writings. We could do more without that tedious regurgitation. 
He furthered his counter-culture credentials by going to Woodstock in 1969. Probably not for that reason, but it happened nonetheless. And he traveled and worked with more and more people and companies, learning, engaging, and enriching his ideas. And then he shared them. Computer Lib/Dream Machines was a paperback book. Or two. It had a cover on each side. Originally published in 1974, it was one of the most important texts of the computer revolution. Steven Levy called it an epic. It’s rare to find it for less than a hundred bucks on eBay at this point, because of how influential it was and what an amazing snapshot in time it represents. Xanadu was to be a hypertext publishing system in the form of Xanadocs, or files that could be linked to from other files. A Xanadoc used Xanalinks to embed content from other documents into a given document. These spans of text, called transclusions, would change in the document that included the content whenever they changed in the live source document. The iterations towards working code were slow and the years ticked by. That talk in 1965 gave way to the 1970s, then the 80s. Some thought him brilliant. Others didn’t know what to make of it all. But many knew of his ideas for hypertext, and once known, they took on a life of their own. Byte Magazine published many of his thoughts in a 1988 article called “Managing Immense Storage”, and by then the personal computer revolution had come in full force. Tim Berners-Lee put the first node of the World Wide Web online the next year, using a protocol they called Hypertext Transfer Protocol, or http. Yes, the hypertext philosophy was almost a means of paying homage to the hard work and deep thinking Nelson had put in over the decades. But not everyone saw it as though Nelson had made great contributions to computing. “The Curse of Xanadu” was an article published in Wired Magazine in 1995. 
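Transclusion is the key idea here: a document embeds a live reference to a span of another document rather than a copy, so an edit at the source shows up everywhere the span is included. A toy sketch of the concept (the class and names are our illustration, not Nelson’s actual design):

```python
# A tiny store of source documents, keyed by id.
documents = {
    "quotes": "Real artists ship. The journey matters.",
}

class Transclusion:
    """A live reference to a span of another document - no copy is made."""
    def __init__(self, doc_id: str, start: int, end: int):
        self.doc_id, self.start, self.end = doc_id, start, end

    def render(self) -> str:
        # Always read from the current source, so edits propagate.
        return documents[self.doc_id][self.start:self.end]

# A document is a mix of literal text and transcluded spans.
essay = ["As Jobs said: ", Transclusion("quotes", 0, 18), " Indeed."]

def render(parts) -> str:
    return "".join(p.render() if isinstance(p, Transclusion) else p
                   for p in parts)

print(render(essay))  # the span reads "Real artists ship."
documents["quotes"] = "REAL ARTISTS SHIP. The journey matters."
print(render(essay))  # same essay, but the transcluded text changed too
```

Contrast that with cut and paste, where the second render would still show the stale copy - which is exactly Nelson’s objection to paper-derived metaphors.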
In the article, the author points out that the web had come along using many of the ideas Nelson and his teams had worked on over the years - but actually shipped, whereas Nelson hadn’t. Once shipped, the web rose in popularity, becoming the ubiquitous technology it is today. The article painted Xanadu as vaporware. But there is a deeper, much more important meaning to Xanadu in the history of computing. Perhaps inspired by the Wired article, the group released an incomplete version of Xanadu in 1998. But by then other formats - including PDF, which was invented in 1993, and .doc for Microsoft Word - were the primary mechanisms by which we stored documents, and first gopher and then the web were spreading to interconnect humans with content. https://www.youtube.com/watch?v=72M5kcnAL-4 The Xanadu story isn’t a tragedy. Would we have had hypertext as a part of Douglas Engelbart’s oNLine System without it? Would we have object-oriented programming or, later, the World Wide Web without it? The very word hypertext is almost an homage, even if they don’t know it, to Nelson’s work. And the look and feel of his work lives on in places like GitHub, whether directly influenced or not, where we can see changes in code side-by-side with actual production code - changes that are stored and can perhaps be rolled back forever. Larry Tesler coined the term Cut and Paste. While Nelson calls him a friend in Werner Herzog’s Lo and Behold, Reveries of the Connected World, he also points out that Tesler’s term is flawed. And I think this is where we as technologists have to sometimes trim down our expectations of how fast evolutions occur. We take tiny steps because as humans we can’t keep pace with the rapid rate of technological change. We can look back and see a two-steps-forward, one-step-back approach since the dawn of written history. Nelson still doesn’t think the metaphors that harken back to paper have any place in the online written word. 
Here’s another important trend in the history of computing. As we’ve transitioned to more and more content living online exclusively, the content has become diluted. One publisher I wrote online pieces for asked that they all be +/- 700 words, that paragraphs be no more than 4 sentences long (preferably 3), and that the sentences be written at about a 5th or 6th grade level. Maybe Nelson would claim that this de-evolution of writing is due to search engine optimization gamifying the entirety of human knowledge, and that a tool like Xanadu would have been the fix. After all, if we could borrow the great works of others, we wouldn’t have to paraphrase them. But I think, as with most things, it’s much more nuanced than that. Our always online, always connected brains can only accept smaller snippets. So that’s what we gravitate towards. Actually, we have plenty of capacity for whatever we actually choose to immerse ourselves in. But we have more options than ever before, and we of course immerse ourselves in video games or other less literary pursuits. Or are they more literary? Some generations thought books to be dangerous. As do all oppressors. So who am I to judge where people choose to acquire knowledge or what kind they indulge themselves in? Knowledge is power and I’m just happy they have it. And they have it in part because others were willing to water down the concepts to ship a product. Because the history of technology is about evolutions, not revolutions. And those often take generations. And Nelson is responsible for some of the evolutions that brought us the ht in http or html. And for that we are truly grateful! As with the great journey from Lord of the Rings, rarely is greatness found alone. 
The Xanadu adventuring party included Cal Daniels, Roger Gregory, Mark Miller, Stuart Greene, Dean Tribble, and Ravi Pandya. The project became a part of Autodesk in the 80s, got rewritten in Smalltalk, was considered a rival to the web, but really is more of an evolutionary step on that journey. If anything it’s a divergence from, then a convergence back to, Vannevar Bush’s Memex. So let me ask this as a parting thought: are the places where you are not willing to sacrifice any of your core designs or beliefs worth the price being paid? Are they worth someone else ending up with a place in the history books, where (like with this podcast) we oversimplify complex topics to make them digestible? Sometimes it’s worth it. In no way am I in a place to judge the choices of others. Only history can really do that - but when it happens it’s usually an oversimplification anyway. So the building blocks of the web lie in irreverence - in hypertext. And while some grew out of irreverence and diluted their vision after an event like Woodstock, others like Nelson and his friend Douglas Engelbart forged on. And their visions didn’t come with commercial success. But as integral building blocks of the modern connected world, they represent as great minds as practically any in computing.
5/13/2021 • 19 minutes
An Abridged History Of Instagram
This was a hard episode to do. Because telling the story of Instagram is different than explaining the meaning behind it. You see, on the face of it, Instagram is an app to share photos. But underneath that it’s much more. It’s a window into the soul of the Internet-powered culture of the world. Middle schoolers have always been stressed about what their friends think. It’s amplified on Instagram. People have always been obsessed with and copied celebrities - going back to the ages of kings. That too is on Instagram. We love dogs and cute little weird animals. So does Instagram. Before Instagram, we had photo sharing apps, like Hipstamatic. Before Instagram, we had social networks, like Twitter and Facebook. How could Instagram do something different and yet so similar? How could it offer that window into the world when the photos are snapped as though through rose colored glasses? Do they show us reality, or what we want reality to be? Could it be that the food we throw away or the clothes we donate tell us more about us as humans than what we eat or keep? Is the illusion worth billions of dollars a year in advertising revenue while the reality represents our repressed shame? Think about that as we go through this story. If you build it, they will come. Everyone who builds an app just kinda’ automatically assumes that throngs of people will flock to the App Store, download the app, and they will be loved and adored and maybe even become rich. OK, not everyone thinks such things - and with the number of apps on the stores these days, the chances are probably getting closer to those of a high school quarterback playing in the NFL. But in today’s story, that is exactly what happened. And Kevin Systrom had already seen it happen. He was offered a job as one of the first employees at Facebook while still going to Stanford. That’ll never be a thing. Then, while on an internship, he was asked to be one of the first Twitter employees. 
That’ll never be a thing either. But they were things, obviously! So in 2010, Systrom started working on an app he called Burbn, and within two years sold the company, by then called Instagram, for one billion dollars. In doing so, he and his co-founder Mike Krieger helped forever change the deal landscape for mergers and acquisitions of apps - and, more profoundly, gave humanity lenses with which to see a world we want to see, if not reality. Systrom didn’t have a degree in computer science. In fact, he taught himself to code after working hours, then during working hours, and by osmosis through working with some well-known founders. Burbn was an app to check in and post plans and photos. It was written in HTML5, and in a Cinderella story, he was able to raise half a million dollars in funding from Baseline Ventures and Andreessen Horowitz, bringing in Mike Krieger as a co-founder. At the time, Hipstamatic was the top photo manipulation and filtering app. Given that the iPhone came with a camera on par with (if not better than) most digital point and shoots at the time, the pair re-evaluated the concept and instead leaned further into photo sharing, while still maintaining the location tagging. The original idea was to swipe right and left, as we do in apps like Tinder. But instead they chose to show photos in chronological order and used a now iconic 1:1 aspect ratio - the photos were square - so there was room on the screen to show metadata and a taste of the next photo, to keep us scrolling. The camera was simple, like the Holga camera Systrom had been given while studying abroad when at Stanford. That camera made pictures a little blurry, and in an almost filtered way made them look almost artistic. After Systrom graduated from Stanford in 2006, he worked at Google, then NextStop, and then got the bug to make his own app. And boy did he. One thing though: even his wife Nicole didn’t think she could take good photos, having seen those from a friend of Systrom’s. 
He said the photos were so good because of the filters. And so we got the first filter, X-Pro II, so she could take great photos on the iPhone 3G. Krieger shared the first post on Instagram on July 16, 2010, and Systrom followed up within a few hours with a picture of a dog. The first of probably a billion dog photos (including a few of my own). And they officially published Instagram on the App Store in October of 2010. After adding more and more filters, Systrom and Krieger closed in on one of the greatest growth hacks of any app: they integrated with Facebook, Twitter, and Foursquare, so you could take the photo in Instagram and shoot it out to one of those apps - or all three. At the time, Facebook was more of a browser tool. Few people used the mobile app. And for those that did try to post photos on Facebook, doing so was laborious, using a mobile camera roll in the app and taking more steps than needed. Instagram became the perfect glue to stitch other apps together. And rather than always needing to come up with something witty to say, like on Twitter, we could just point the camera on our phone at something and hit a button. The posts had links back to the photo on Instagram. They hit 100,000 users in the first week and a million users by the end of the year. Their next growth hack was to borrow the hashtag concept from Twitter and other apps, which they added in January of 2011. Remember how Systrom interned at Odeo and turned down the offer to go straight to Twitter after college? Twitter didn’t have photo sharing at the time, but Twitter co-founder Jack Dorsey had shown Systrom plenty of programming techniques and the two stayed in touch. He became an angel investor in a $7 million Series A, and the first real influencer on the platform, sending that link to every photo to all of his Twitter followers every time he posted. The growth continued. In June 2011 they hit 5 million users, and doubled to 10 million by September of 2011. 
I was one of those users, posting the first photo to @krypted in the fall - being a nerd, it was of the iOS 5.0.1 update screen, and according to the lone comment on the photo my buddy @acidprime apparently took the same photo. They spent the next few months just trying to keep the servers up and running and released an Android version of the app in April of 2012, just a couple of days before taking on $50 million in venture capital. But that didn’t need to last long - they sold the company to Facebook for a billion dollars a few days later, effectively doubling the money of each investor in that last round of funding and shooting up to 50 million users by the end of the month. At 13 employees, that’s nearly $77 million per employee. Granted, much of that went to Systrom and the investors. The Facebook acquisition seemed great at first. Instagram got access to bigger resources than even a few more rounds of funding would have provided. Facebook helped them scale up to 100 million users within a year and, following Facebook TV and the brief but impactful release of Vine at Twitter, Instagram added video sharing, photo tagging, and the ability to add links in 2013. Looking at a history of their feature releases, they’re slow and steady and probably the most user-centered releases I’ve seen. And in 2013, they grew to 150 million users, proving the kinds of rewards that come from building that way. With that kind of growth it might seem that it can’t last forever - and yet on the back of new editing tools, a growing team, and advertising tools, they managed to hit a staggering 300 million users in 2014. While they had sold thoughtful, direct, human-sold advertising before, they opened up the ability to buy ads to all advertisers, piggybacking on the Facebook ad selling platform in 2015. That’s the same year they introduced Boomerang, which looped bursts of photos forward and in reverse. It was cute for a hot minute. 
2016 saw the introduction of analytics that included demographics, impressions, likes, reach, and other tools for businesses to track the performance not only of ads, but of posts. As with many tools, it was built for the famous influencers that had the ear of the founders and management team - and made available to anyone. They also introduced Instagram Stories, which was a huge development effort, and they owned that they copied it from Snapchat - a surprising and truly authentic move for a Silicon Valley startup. And we could barely call them a startup any longer, shooting past half a billion users by the middle of the year and 600 million by the end of the year. That year, they also brought us live video, a Windows client, and one of my favorite features: with a lot of people posting in different languages, they could automatically translate posts. But something else happened in 2016. Donald Trump was elected to the White House. This is not a podcast about politics, but it’s safe to say that it was one of the most divisive elections in recent US history. And one of the first where social media is reported to have potentially changed the outcome. Disinformation campaigns from foreign actors, data illegally obtained via Cambridge Analytica on the Facebook network, increasingly insular personal networks, and machine learning-driven doubling down on only showing us things that appealed to our world view all combined so that many could point at networks like Facebook and Twitter as having been party to whatever they thought the “other side” in an election had done wrong. Yet Instagram was just a photo sharing site. They put the users at the center of their decisions. They promoted the good things in life. While Zuckerberg claimed that Facebook couldn’t have helped change any outcomes and that Facebook was just an innocent platform that amplified human thoughts - Systrom openly backed Hillary Clinton. 
And yet, even with disinformation spreading on Instagram, they seemed immune from accusations and from having to go to Capitol Hill to be grilled following the election. Being good to users apparently has its benefits. However, some regulation needed to happen. In 2017, the Federal Trade Commission stepped in to force influencers to be transparent about their relationship with advertisers - Instagram responded by giving us the ability to mark a post as sponsored. Still, Instagram revenue spiked past three and a half billion dollars in 2017, grew past 6 billion dollars in 2018 - the year Systrom and Krieger stepped away and the company went on a kind of autopilot - and shot past 9 billion dollars in 2019. In those years they released IGTV and tried to get more resources from Facebook, contributing far more to the bottom line than they took. 2020 saw Instagram ad revenue close in on 13.86 billion dollars, with projected 2021 revenues growing past 18 billion. In The Picture of Dorian Gray from 1890, Lord Henry describes the impact of influence as destroying our genuine and true identity, taking away our authentic motivations, and, as Shakespeare would have put it, making us servile to the influencer. Some are famous and so become influencers on the product naturally, like musicians, politicians, athletes, and even the Pope. Others become famous due to getting showcased by the @instagram feed or some other prominent person. These influencers often stage a beautiful life and, to be honest, sometimes we just need that as a little mind candy. But other times it can become too much, forcing us to constantly compare our skin to doctored skin, our lifestyle to those who staged their own, and our number of friends to those who might just have bought theirs. And seeing this obvious manipulation gives some of us even more independence than we might have felt before. We have a choice: to be or not to be. 
The Instagram story is one with depth. Those influencers are one of the more visible aspects, going back to the first sponsored photos, posted by Snoop Dogg. And when Mark Zuckerberg decided to buy the company for a billion dollars, many thought he was crazy. But once they turned on the ad revenue machine - which he insisted Systrom wait on until the company had enough users - it was easy to go from 3 to 6 to 9 to over 13 and now likely over 18 billion dollars. That’s a greater than 30:1 return on investment, helping to prove that such lofty acquisitions aren’t crazy. It’s also a story of monopoly, or at least of suspected monopolies. Twitter tried to buy Instagram, though Systrom claims to have never seen a term sheet with a legitimate offer. Then Facebook swooped in and helped fast-track regulatory approval of the acquisition. With the acquisition of WhatsApp, Facebook owns four of the top six social media sites - Facebook, WhatsApp, Facebook Messenger, and Instagram are all over a billion users, with YouTube arguably being more of a video site than a true social network. And they tried to buy Snapchat - only the 17th ranked network. More than 50 billion photos have been shared through Instagram. That’s about a thousand a second. Many are beautiful...
4/24/2021 • 21 minutes, 16 seconds
Before the iPhone Was Apple's Digital Hub Strategy
Steve Jobs returned to Apple in 1996. At the time, many people had a camera, like the Canon Elph that was released that year, maybe a video camera, and probably a computer - and about 16% of Americans had a cell phone. Some had a voice recorder or a Discman; some in the audio world had a four-track machine. Many had CD players and maybe even a laserdisc player. But all of this was changing. Small, cheap microprocessors were leading to more and more digital products. The MP3 was starting to trickle around after being patented in the US that year. Netflix would be founded the next year, as DVDs started to spring up around the world. Ricoh, Polaroid, Sony, and most other electronics makers released digital video cameras. There were early e-readers, personal digital assistants, and even research into digital video recorders that could record your favorite shows so you could watch them when you wanted. In other words, we were just waking up to a new, digital lifestyle. But the industries were fragmented. Jobs and the team continued the work begun under Gil Amelio to reduce the number of products from 350 down to about a dozen. They made products that were pretty and functional and revitalized Apple. But there was a strategy that had been coming together in their minds, and it centered around digital media and the digital lifestyle. We take this for granted today, but mostly because Apple made it ubiquitous. Apple saw the iMac as the centerpiece of a whole new strategy. But all this new type of media and the massive files that came with it needed a fast bus to carry all those bits. That bus had been created back in 1986 and slowly improved on over the next few years in the form of IEEE 1394, or FireWire. Apple started it - Toshiba, Sony, Panasonic, Hitachi, and others helped bring it to the devices they made. FireWire could connect 63 peripherals at 100 megabits, later increased to 200 and then 400, with the spec eventually reaching 3200. 
Plenty fast enough to transfer those videos, songs, and whatever else we wanted. iMovie was the first of the applications that fit into the digital hub strategy. It was originally released in 1999 for the iMac DV, the first iMac to come with built-in FireWire. I’d worked on Avid and SGI machines dedicated to video editing at the time, but this was the first time I felt like I was actually able to edit video. It was simple, could import video straight from the camera, and allowed me to drag clips into a timeline and then add some rudimentary effects. Simple, clean, and with a product that looked cool. And here’s the thing: within a year Apple made it free. One catch. You needed a Mac. This whole Digital Hub Strategy idea was coming together. Now, as Steve Jobs would point out in a presentation about the Digital Hub Strategy at Macworld 2001, up to that point personal computers had mainly been about productivity: automating first the tasks of scientists, then, with the advent of the spreadsheet and databases, moving into automating business and personal functions. A common theme in this podcast is that what drives computing is productivity, telemetry, and quality of life. The telemetry gains came with connecting humanity through the rise of the internet in the late 1990s. But these new digital devices were what was going to improve our quality of life. And anyone that could get their hands on an iMac was now doing so. But it still felt like a little bit of a closed ecosystem. Apple released a tool for making DVDs in 2001 for the Power Mac G4, which came with a SuperDrive, Apple’s version of an optical drive that could read and write CDs and DVDs. iDVD gave us the ability to add menus, slideshows (later easily imported as Keynote presentations when that was released in 2003), images as backgrounds, and more. Now we could take those videos we made and make DVDs that we could pop into our DVD player and watch. 
Families all over the world could make their vacation look a little less like a bunch of kids fighting and a lot more like bliss. And for anyone that needed more, Apple had DVD Studio Pro - which many a film studio used to make the menus for movies for years. They knew video was going to be a thing because, going back to the 90s, Jobs had tried to get Adobe to release Premiere for the iMac. But they’d turned him down, something he’d never forget. Instead, Jobs was able to sway Randy Ubillos to bring over a product called Key Grip - which a Macromedia board member had convinced him to work on - since renamed to Final Cut. Apple acquired the source code and development team and released it as Final Cut Pro in 1999. And iMovie for the consumer and Final Cut Pro for the professional turned out to be a home run. But another piece of the puzzle was coming together at about the same time. Jeff Robbin, Bill Kincaid, and Dave Heller built a tool called SoundJam in 1998. They had worked on the failed Copeland project to build a new OS at Apple, and afterwards Robbin made a great old tool (that we might need again with the way extensions are going) called Conflict Catcher, while Kincaid worked on the drivers for an MP3 player called the Diamond Rio. He saw these cool new MP3 things and tools like Winamp, which had been released in 1997, so he decided to meet back up with Robbin for a new tool, which they called SoundJam and sold for $50. Just so happens that I’ve never met anyone at Apple that didn’t love music, going back to Jobs and Wozniak. So of course they would want to do something in digital music. So in 2000, Apple acquired SoundJam and the team immediately got to work stripping out features that were unnecessary. They wanted a simple aesthetic: iMovie-esque, brushed metal, easy to use. That product was released in 2001 as iTunes. iTunes didn’t change the way we consumed music. That revolution was already underway. 
And that team didn’t just add brushed metal to the rest of the operating system. It had begun with QuickTime in 1991, but it was iTunes, through SoundJam, that had sparked brushed metal. SoundJam gave the Mac music visualizers as well - you know, those visuals on the screen generated by the sound waves of the music we were listening to. And while we didn’t know it yet, this would be the end of software coming in physical boxes. But something else big: there was another device coming in the digital hub strategy. iTunes became the de facto tool used to manage what songs would go on the iPod, released in 2001 as well. That’s worthy of its own episode, which we’ll do soon. You see, another aspect of SoundJam is that users could rip music off of CDs and into MP3s. The deep engineering work done to get the codec into the system survives here and there in the form of codecs accessible using APIs in the OS. And when combined with Spotlight to find music, it all became more powerful for building playlists, embedding metadata, and listening more insightfully to growing music libraries. But Apple didn’t want to just allow people to rip, find, sort, and listen to music. They also wanted to enable users to create music. So in 2002, Apple also acquired a company called Emagic. Emagic’s product would become Logic Pro, and Gerhard Lengeling would in 2004 release a much simpler audio engineering tool called GarageBand. Digital video and video cameras were one thing. But cheap digital point-and-shoot cameras were everywhere all of a sudden. iPhoto was the next tool in the strategy, dropping in 2002. Here, we got a tool that could import all those photos from our cameras into a single library. Now called Photos, Apple gave us a taste of the machine learning to come by automatically finding faces in photos so we could easily make albums. Special services popped up to print books of our favorite photos. At the time most cameras had their own software to manage photos, developed as an afterthought. 
iPhoto was easy, worked with most cameras, and was very much not an afterthought. Keynote came in 2003, making it easy to drop photos into a presentation and maybe even into iDVD. Anyone who has seen a Steve Jobs presentation understands why Keynote had to happen, and if you look at the difference between many a PowerPoint and Keynote presentation, it makes sense why Keynote was in a way a bridge between making work better and making life at home better. That was the same year that Apple released the iTunes Music Store. This seemed like the final step in a move to get songs onto devices. Here, Jobs worked with music company executives to be able to sell music through iTunes - a strategy that would evolve over time to include podcasts, a medium the move effectively created, news, and even apps, as explored in the episode on the App Store - ushering in an era of creative single-purpose apps that drove down costs and made so much functionality approachable for so many. iTunes, iPhoto, and iMovie were made to live together in a consumer ecosystem. So in 2003, Apple reached the point in the digital hub strategy where they were able to take our digital lives and wrap them up in a pretty bow. They called that product iLife - which was more a bundle of these apps, along with iDVD and GarageBand. Now these apps are free, but at the time the bundle would set you back a nice, easy, approachable $49. All this content creation, from the consumer to the prosumer to the professional workgroup, meant we needed more and more storage. Depending on the codec, we could be running at hundreds of megabytes per second of content. So Apple licensed the StorNext File System from a company called ADIC in 2004 and released a 64-bit clustered file system over Fibre Channel. Suddenly all that new high-end creative content could be shared in larger and larger environments. 
We could finally have someone cutting a movie in Final Cut hand it off to someone else to cut, without unplugging a FireWire drive to do it. Professional workflows in a pure-Apple ecosystem were a thing. Now you just needed a way to distribute all this content. So came iWeb in 2006, which allowed us to build websites quickly and bring all this creative content in. Sites could be hosted on MobileMe or the files uploaded to a web host via FTP. Apple had dabbled in web services since the 80s with AppleLink, then eWorld, then iTools, .Mac, and MobileMe - the culmination of the evolution of these services now referred to as iCloud. And iCloud now syncs documents and more. Pages came in 2005, Numbers came in 2007, and they were bundled with Keynote to become Apple iWork, a competitor of sorts to Microsoft Office, later made free and ported to iOS as well. iCloud is a half-hearted attempt at keeping these synchronized between all of our devices. Apple had been attacking the creative space from the bottom with the tools in iLife, but from the top as well. Competing with tools like Avid’s Media Composer, which had been around for the Mac going back to 1989, Apple bundled the professional video products into a single suite called Final Cut Studio. Here, Final Cut Pro, Motion, DVD Studio Pro, Soundtrack Pro, Color (obtained when Apple acquired Silicon Color and renamed it from Final Touch), Compressor, Cinema Tools, and Qmaster, for distributing the processing power for the above tools, came in one big old box. iMovie and GarageBand served the consumer market; Final Cut Studio and Logic, the prosumer to professional market. And suddenly I was running around the world deploying Xsans into video shops, corporate talking-head editing studios, and ad agencies. Another place where this happened was with photos. Aperture was released in 2005 and offered the professional photographer tools to manage their large collections of images. 
And that represented the final pieces of the strategy. It continued to evolve and get better over the years, but these were among the last aspects of the Digital Hub Strategy, because there was a new strategy underway: 2005 was the year Apple began development of the iPhone. And this represents a shift. With the iPhone released in 2007, then followed up with the first iPad in 2010, we saw a move from growing new products in the digital hub strategy to migrating them to the mobile platforms - making them stand-alone apps that could be sold on App Stores and integrated with iCloud, killing off those that appealed to more specific needs in higher-end creative environments, like Aperture, which was discontinued in 2014, and folding some into other products, like Color becoming part of Final Cut Pro. But the income from those products has now been eclipsed by mobile devices. Because when we see the returns from one strategy begin to crest - you know, like when the entire creative industry loves you - it’s time to move to another, bolder strategy. And that mobile strategy opened our eyes to always-online (or frequently online) synchronization and integration between products, like we get with Handoff and other technologies today. In 2009 Apple acquired a company called Lala, which would later be added to iCloud - but the impact on the Digital Hub Strategy was that it paved the way for iTunes Match, a cloud service that allowed for syncing music from a local library to other Apple devices. It was a subscription, and more of a stop-gap on the way to subscription-licensed music than a lasting stand-alone product. And other acquisitions would come over time and get woven in, such as Redmatica, Beats, and Swell. Steve Jobs said exactly what Apple was going to do in 2001. 
In one of the most impressive implementations of a strategy, Apple had, starting in the late 90s, slowly introduced quality products that tactically ushered in a digital lifestyle: iMovie, iPhoto, iTunes, iDVD, iLife, and, in a sign of the changing times, iPod, iPhone, and iCloud. Then, to signal the end of that era - because the digital lifestyle was by then ubiquitous - came the iPad. And the professional apps won over the creative industries. Until the strategy had been played out, and Apple began laying the groundwork for the next strategy in 2005. That mobile revolution was built in part on the creative influences of Apple. Tools that came after, like Instagram, made it even easier to take great photos and connect with friends in a way iWeb couldn’t - because we got to the point where “there’s an app for that”. And as the tools weren’t needed, Apple cancelled some one by one, or even let Adobe Premiere eclipse Final Cut in many ways. Because, you know, sales of the iMac DV had been enough to warrant building such products for the Apple platform, and eventually Adobe decided to do the same. Apple built many of these because there was a need and there weren’t great alternatives. Once there were great alternatives, Apple let those limited quantities of software engineers go work on other things they needed done - like building frameworks to enable a new generation of engineers to build amazing tools for the platform! I’ve always considered the release of the iPad to be the end of the era where Apple was introducing more and more software, from the increased services on the server platform to tools that did anything and everything. But 2010 is just when we could notice what Jobs was doing. In fact, looking at it, we can easily see that the strategy shifted about five years before that, because Apple was busy ushering in the next revolution in computing. So think about this. Take an Apple, a Microsoft, or a Google - the developers of nearly every operating system we use today. 
What changes did they put in place five years ago that are just coming to fruition today? While the product lifecycles are annual releases now, that doesn’t mean that when they have billions of devices out there the strategies don’t unfold much, much slower. You see, by peering into the evolutions of the past few years, we can see where they’re taking computing in the next few years. Who did they acquire? What products will they release? What gaps does that create? How can we take those gaps and build products that get in front of them? This is where magic happens. Not when we’re too early like General Magic was, but when we’re right on time. Unless we help set strategy upstream. Or is it all chaos and not in the least bit predictable? Feel free to send me your thoughts! And thank you…
3/29/2021 • 24 minutes, 15 seconds
The WELL, an Early Internet Community
The Whole Earth ‘Lectronic Link, or WELL, was started by Stewart Brand and Larry Brilliant in 1985, and is still available at well.com. We did an episode on Stewart Brand, Godfather of the Interwebs, and he was a larger-than-life presence amongst many of the 1980s former hippies who were shaping our digital age. From his assistance producing The Mother of All Demos, to the Whole Earth Catalog inspiring Steve Jobs and many others, to his work with Ted Nelson, there are probably only a few degrees separating him from anyone else in computing. Larry Brilliant is another counter-culture hero. He worked as a medical professional for the World Health Organization to eradicate smallpox and came home to teach at the University of Michigan. The University of Michigan had been working on networked conferencing since the 70s, when Bob Parnes wrote CONFER, which would be used at Wayne State, where Brilliant got his MD. But CONFER was a bit of a resource hog. PicoSpan was written by Marcus Watts in 1983. Pico is a small text editor in many a UNIX variant, and span is, well, span. Why small? Well, modems that dialed into bulletin boards were pretty slow back then. Marcus worked at NETI, who then bought the rights for PicoSpan to take it to market. Brilliant was the chairman of NETI at the time and approached Brand about starting up a bulletin-board system (BBS). Brilliant proposed that NETI would supply the gear and software and that Brand would use his, uh, brand - and Whole Earth following - to fill the ranks. Brand’s non-profit, The Point Foundation, would own half and NETI would own the other half. It became an early online community outside of academia, an important part of the rise of the splinter-nets, and a holdout from the Internet. For a time, at least. PicoSpan gave users conferences. These were similar to PLATO Notes files, where a user could create a conversation thread and people could respond. These were (and still are) linear and threaded conversations. 
Rather than call them Notes like PLATO did, PicoSpan referred to them as “conferences,” as “online conferencing” was a common term used to describe meeting online for discussions at the time. EIES had been around going back to the 1970s, so Brand had some ideas about what an online community could be, having used it. Given the sharp drop in the cost of storage, there was something new PicoSpan could give people: the posts could last forever. Keep in mind, the Mac still didn’t ship with a hard drive in 1984. But hard drives were on the rise. And those bits that were preserved were manifested in words. Brand brought a simple mantra: You Own Your Own Words. This kept the hands of the organization clean and devoid of liability for what was said on The WELL - but also harkened back to an almost libertarian bent that many in technology had at the time. Part of me feels like libertarianism meant something different in that era. But that’s a digression. Whole Earth Review editor Art Kleiner flew up to Michigan to get the specifics drawn up. NETI’s investment had about a quarter-million-dollar cash value. Brand stayed home and came up with a name: The Whole Earth ‘Lectronic Link, or WELL. The WELL was not the best technology, even at the time. The VAX it ran on was woefully underpowered for as many users as The WELL would grow to, and other services to dial into and have discussions on were springing up. But it was one of the most influential of the time. And not because they recreated the extremely influential Whole Earth Catalog in digital form like Brilliant wanted - which would probably have looked something like Amazon reviews do now. Instead, the draw was the people. The community was fostered first by Matthew McClure, the initial director, who was a former typesetter for the Whole Earth Catalog. He’d spent 12 years on a commune called The Farm and was just getting back to society. 
They worked out that they needed to charge $8 a month and another couple bucks an hour to make a minimal profit. So McClure worked with NETI to get the VAX up, and they created the first conference, General. Kevin Kelly from the Whole Earth Review and Brand would start discussions, and Brand mentioned The WELL in some of his writings. A few people joined, and then a few more. Others from The Farm would join him. Cliff Figallo, known as Fig, was user 19, and John Coate, who went by Tex, came in to run marketing. In those first few years they started to build up a base of users. It started with hackers and journalists, who got free accounts. And from there great thinkers joined up. People like Tom Mandel from Stanford Research Institute, or SRI, who would go on to become the editor of Time Online. His partner Nana. Howard Rheingold, who would go on to write a book called The Virtual Community. And they attracted more. Especially Dead Heads, who helped spread the word across the country during the heyday of the Grateful Dead. Plenty of UNIX hackers also joined. After all, the community was finding a nexus in the Bay Area at the time. They added email in 1987, and The WELL became one of those places you could get on at least one part of this whole new internet thing. And need help with your modem? There’s a conference for that. Need to talk about calling the birth mom you’ve never met because you were adopted? There’s a conference for that as well. Want to talk sexuality with a minister? Yup, there’s a conference for that. It was one of the first times that anyone could just reach out and talk to people. And the community that was forming also met in person from time to time at office parties, furthering the cohesion. We take Facebook groups, Slack channels, and message boards for granted today. We can be us or make up a whole new version of us. We can be anonymous and just there to stir up conflict like on 4chan, or we can network with people in our industry like on LinkedIn. 
We can chat in real time, similar to the Send option on The WELL, or we can post threaded responses to other comments. But the social norms and trends were proving as true then as now. Communities grow, they fragment, people create problems, people come, people go. And sometimes, as we grow, we inspire. Those early adopters of The WELL awakened Craig Newmark of Craigslist to the growing power of the Internet. And future developers at Apple. Hippies versus nerds - except not really versus, but coming to terms: moving from a “computers are part of the military-industrial complex keeping us down” philosophy to more of a free, libertarian information-superhighway ethos that persisted for decades. The thought that the computer would set us free and connect the world into a new nation, as John Perry Barlow would sum up perfectly in “A Declaration of the Independence of Cyberspace”. By 1990, someone like Barlow could make a post on The WELL from Wyoming and have Mitch Kapor, the founder of Lotus, makers of Lotus 1-2-3, show up at his house after reading the post - and they could join forces with John Gilmore, the fifth employee of Sun Microsystems, GNU Debugger hacker, and cypherpunk, to found the Electronic Frontier Foundation. And in a sign of the times, that’s the same year The WELL got fully connected to the Internet. By 1991 they had grown to 5,000 subscribers. That was the year Bruce Katz bought NETI’s half of The WELL for $175,000. Katz had pioneered the casual shoe market, changing the name of his family’s shoe business to Rockport and selling it to Reebok for over $118 million. The WELL had posted a profit a couple of times but by and large was growing slower than competitors. Although I’m not sure any of the members cared about that. It was a smaller community than many others, but its members could meet in person and they seemed to congeal in ways that other communities didn’t. But they would keep increasing in size over the next few years. 
In that time, Fig replaced himself with Maurice Weitman, or Mo - who had been the first person to sign up for the service. And Tex soon left as well. Tex would go on to become an early webmaster of The Gate, the community from the San Francisco Chronicle. Fig joined AOL’s GNN and then became director of community at Salon. But AOL. You see, AOL was founded in the same year as The WELL. And by 1994 AOL was up to 1.25 million subscribers, with over a million logging in every day. CompuServe, Prodigy, GEnie, and Delphi were on the rise as well. The WELL had thousands of posts a day by then but was losing money and not growing like the others. But I think the users of the service were just fine with that. The WELL was still growing slowly, and yet for many it was too big. Some of those left. Some stayed. Other communities, like The River, fragmented off. By then, The Point Foundation wanted out, so it sold its half of The WELL to Katz for $750,000 - leaving Katz as the first full owner of The WELL. I mean, they were an influential community because of some of the members, sure, but more because of the quality of the discussions. Academics, drugs, and deeply personal information. And they had always complained about Fig, Tex, or whomever was in charge - you know, the counter-culture is always mad at “The Management.” But Katz was not one of them. He honestly seems to have tried to improve things - but it seems like everything he tried blew up in his face. Katz further alienated the members, fired Mo, and brought on Maria Wilhelm, but they still weren’t hitting that hyper-growth, with membership getting up to around 10,000 - while AOL was jumping from 5,000,000 to 10,000,000. But again, I’ve not found anyone who felt like The WELL should have been going down that same path. The subscribers at The WELL were looking for an experience of a completely different sort. By 1995 Gail Williams allowed users to create their own topics, and the unruly bunch just kinda’ ruled themselves in a way. 
There was staff and drama and emotions and hurt feelings and outrage and love and kindness and, well, community. By the late 90s, the buzzword at many a company was building communities, and there were indeed plenty of communities growing. But none like The WELL. And given that some of the founders of Salon had been users of The WELL, Salon bought The WELL in 1999 and just kinda’ let it fly under the radar. The influence continued with various journalists as members. The web came. And the members of The WELL continued their community. Award winning, but a snapshot in time in a way. Living in an increasingly secluded corner of cyberspace - a term that first began life in the present tense on The WELL - if you got it, you got it. In 2012, after trying to sell The WELL to another company, Salon finally sold The WELL to a group of members who had put together enough money to buy it. And The WELL moved into its current, more modern form of existence. To quote the site: Welcome to a gathering that’s like no other. The WELL, launched back in 1985 as the Whole Earth ‘Lectronic Link, continues to provide a cherished watering hole for articulate and playful thinkers from all walks of life. For more about why conversation is so treasured on The WELL, and why members of the community banded together to buy the site in 2012, check out the story of The WELL. If you like what you see, join us! It sounds pretty inviting. And it’s member supported. Like National Public Radio, kinda’. In what seems like an antiquated business model, it’s $15 per month to access the community. And make no mistake, it’s a community. You Own Your Own Words. If you pay to access a community, you don’t sign the ownership of your words away in a EULA. You don’t sign away rights to sell your data to advertisers along with having ads shown to you in increasing numbers in a hunt for ever more revenue. You own more than your words; you own your experience. You are sovereign. 
This episode doesn’t really have a lot of depth to it. Just as most online forums lack the kind of depth that could be found on The WELL. I am a child of a different generation, I suppose. Through researching each episode of the podcast, I often read books, conduct interviews (a special thanks to Help A Reporter Out), lurk in conferences, and try to think about the connections, the evolution, and what the most important aspects of each are. There is a great little book from Katie Hafner called The Well: A Story of Love, Death, & Real Life. I recommend it. There’s also Howard Rheingold’s The Virtual Community and John Seabrook’s Deeper: Adventures on the Net. Oh, and From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism from Fred Turner and Cyberia by Douglas Rushkoff. At a minimum, I recommend reading Katie Hafner’s Wired article and then her most excellent book! Oh, and to hear about other ways the 60s counterculture helped to shape the burgeoning technology industry, check out What the Dormouse Said by John Markoff. And The WELL comes up in nearly every book as one of the early commercial digital communities. It’s been written about in Wired and The Atlantic, and makes appearances in books like Broad Band by Claire Evans and The Internet: A Historical Encyclopedia. The business models out there to build and run and grow a company have seemingly been reduced to a select few. Practically every online community has become free, with advertising and data being the currency we trade in exchange for a sense of engagement with others. As network effects set in and billionaires are created, others own our words. They think the lifestyle business is quaint - that if you aren’t outgrowing a market segment, you are shrinking. 
And a subscription site that charges a monthly access fee for CGI code, with a user experience that predates the UX field, might on the outside affirm that philosophy - especially since anyone can see your real name. But if we look deeper we see a far greater truth: that these barriers keep a small corner of cyberspace special - free from Russian troll farms and election stealing and spam bots. And without those distractions we find true engagement. We find real connections that go past the surface. We find depth. It’s not lost after all. Thank you for being part of this little community. We are so lucky to have you. Have a great day.
3/12/2021 • 19 minutes, 9 seconds
Tesla: From Startup To... Startup...
Most early stage startups need, and so seemingly have, heroic efforts from brilliant innovators working long hours to accomplish impossible goals. Tesla certainly had plenty of these as an early stage startup and continues to - as do the other Elon Musk startups. He seems to truly understand and embrace that early stage startup world and those around him seem to as well. As a company grows we have to trade those sprints of heroic output for steady streams of ideas and quality. We have to put development on an assembly line. Toyota famously put the ideas of Deming and other post-World War II process experts into their production lines and reaped big rewards - becoming the top car manufacturer in the process. Not since the Ford Model T birthed the assembly line had auto makers seen as large an increase in productivity. And make no mistake, technology innovation is about productivity increases. We forget this sometimes when young, innovative startups come along claiming to disrupt industries. Many of those do, backed by seemingly endless amounts of cash to get them to the next level in growth. And the story of Tesla is as much about productivity in production as it is about innovative and disruptive ideas. And the story is as much about a cult of personality as it is about massive valuations and quality manufacturing. The reason we’re covering Tesla in a podcast about the history of computers is that, at the heart of it, it’s a story about the startup culture clashing head-on with decades-old know-how in an established industry. This happens with nearly every new company: there are new ideas, an organization is formed to support the new ideas, and as the organization grows, the innovators are forced to come to terms with the fact that they have greatly oversimplified the world. Tesla realized this. Just as PayPal had realized it before. But it took a long time to get there. The journey began much further back. 
Rather than start with the discovery of the battery or the electric motor, let’s start with the GM Impact. It was initially shown off at the 1990 LA Auto Show. It’s important because Alan Cocconi was able to help take some of what GM learned from the 1987 World Solar Challenge race using the Sunraycer and start putting it into a car that they could roll off the assembly lines in the thousands. They needed to do this because the California Air Resources Board, or CARB, was about to require fleets to go 2% zero-emission, or powered by something other than fossil fuels, by 1998, with rates increasing every few years after that. And suddenly there was a rush to develop electric vehicles. GM may have decided that the Impact, later called the EV1, proved that the electric car just wasn’t ready for prime time, but the R&D was accelerating faster than it ever had before then. Then in 2000, NuvoMedia was purchased by Gemstar-TV Guide International for $187 million. They’d made the Rocket eBook e-reader. That’s important because the co-founders of that company were Martin Eberhard, a University of Illinois Urbana-Champaign grad, and Marc Tarpenning. Alan Cocconi was able to take what he’d learned and form a new company, called AC Propulsion. He was able to put together a talented group and they built a couple of different cars, including the tZero. Many of the ideas that went into the first Tesla car came from the tZero, and Eberhard and Tarpenning tried to get Tom Gage and Cocconi to take the tZero into production. The tZero was a sleek sportscar that began life powered by lead-acid batteries; it could get from zero to 60 in just over four seconds and run for 80-100 miles. They used regenerative braking similar to what can be found in the Prius (to oversimplify it) and the car took about an hour to charge. The cars were made by hand and cost about $80,000 each. They had other projects, so they couldn’t focus on trying to mass produce the car. 
As Tesla would learn later, that takes a long time, focus, and a quality manufacturing process. While we think of Elon Musk as synonymous with Tesla Motors, it didn’t start that way. Tesla Motors was started in 2003 by Eberhard, who would serve as Tesla’s first chief executive officer (CEO), and Tarpenning, who would become the first chief financial officer (CFO), when AC Propulsion declined to take the tZero to market. Funding for the company was obtained from Elon Musk and others, but they weren’t that involved at first, other than the instigation and support. It was a small shop, with a mission - to develop an electric car that could be mass produced. The good folks at AC Propulsion gave Eberhard and Tarpenning test drives in the tZero, and even agreed to license their EV Power System and reductive charging patents. And so Tesla would develop a motor and work on their own power train so as not to rely on the patents from AC Propulsion over time. But the opening Eberhard saw was in those batteries. The idea was to power a car with battery packs made of lithium ion cells, similar to those used in laptops and of course the Rocket eBooks that NuvoMedia had made before they sold the company. They would need funding though. So Gage was kind enough to put them in touch with a guy who’d just made a boatload of money and had also recommended commercializing the car - Elon Musk. This guy Musk, he’d started a space company in 2002. Not many people do that. And they’d been trying to buy ICBMs in Russia and recruiting rocket scientists. Wild. But hey, everyone used PayPal, where he’d made his money. So cool. Especially since Eberhard and Tarpenning had their own successful exit. Musk signed on to provide $6.5 million in the Tesla Series A and they brought in another $1 million to bring it to $7.5 million. Musk became the chairman of the board and they expanded to include Ian Wright during the fundraising and J.B. Straubel in 2004. 
Those five are considered the founding team of Tesla. They got to work building up a team to build a high-end electric sports car. Why? Because that’s one part of the Secret Tesla Motors Master Plan. That’s the title of a blog post Musk wrote in 2006. You see, they were going to build a high-end, hundred-thousand-dollar-plus car. But the goal was to develop mass market electric vehicles that anyone could afford. They unveiled the prototype in 2006, selling out the first hundred in three weeks. Meanwhile, Elon Musk’s cousins, Peter and Lyndon Rive, started a company called SolarCity in 2006, which Musk also funded. They merged with Tesla in 2016 to provide solar roofs and other solar options for Tesla cars and charging stations. SolarCity, like Tesla, was able to capitalize on government subsidies, growing to third in home solar installations with just a little over 6 percent of the market share. But we’re still in 2006. You see, they won a bunch of awards, got a lot of attention - now it was time to switch to general production. They worked with Lotus, a maker of beautiful cars whose status, beauty, and luxury make up for issues with production quality. They started with the Lotus Elise, increased the wheelbase, and bolstered the chassis so it could hold the weight of the batteries. And they used a carbon fiber composite for the body to bring the weight back down. The process was slower than it seems anyone thought it would be. Everyone was working long hours, and they were burning through cash. By 2007, Eberhard stepped down as CEO. Michael Marks came in to run the company and later that year Ze’ev Drori was made CEO - he has been given the credit by many for tightening things up so they could get to the point that they could ship the Roadster. Tarpenning left in 2008. As did others, but the brain drain didn’t seem all that bad as they were able to ship their first car in 2008, after ten engineering prototypes. 
The Roadster finally shipped in 2008, with the first car going to Musk. It could go 245 miles on a charge. 0 to 60 in less than 4 seconds. A sleek design language. But it was over $100,000. They were an inspiration and there was a buzz everywhere. The showmanship of Musk paired with the beautiful cars and the elites that bought them drew a lot of attention. As did the $1 million profit they earned in July of 2009, off 109 cars shipped. But again, they were burning through cash. They sold 10% of the company to Daimler AG and took a $465 million loan from the US Department of Energy. They were now almost too big to fail. They hit 1,000 cars sold in early 2010. They opened up to orders in Canada. They were growing. But they were still burning through cash. It was time to raise some serious capital. So Elon Musk took over as CEO, cut a quarter of the staff, and Tesla filed for an IPO in 2010, raising over $200 million. But there was something special in that S-1 (as there often is when a company opens the books to go public): They would cease production of the Roadster, making way for the next big product. Tesla cancelled the Roadster in 2012. By then they’d sold just shy of 2,500 Roadsters and been thinking through and developing the next thing, which they’d shown a prototype of in 2011. The Model S started at $76,000 and went into production in 2012. It could go 300 miles, was a beautiful car, and came with a flashy tablet-inspired 17-inch display screen on the inside to replace buttons. It was like driving an iPad. Every time I’ve seen another GPS since using the one in a Model S, I feel like I’ve gotten in a time machine and gone back a decade. But it had been announced in 2007 to ship in 2009. And then the ship date dropped back to 2011 and 2012. Let’s call that optimism and scope creep. But Tesla has always eventually gotten there. Even if the price goes up. Such is the lifecycle of all technology. More features, more cost. 
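To put that 0-to-60 figure in perspective, here’s a quick back-of-envelope calculation (a sketch of my own, not anything from Tesla’s spec sheets) of the average acceleration it implies:

```python
# Back-of-envelope: what does "0 to 60 mph in 4 seconds" feel like?
MPH_TO_MS = 0.44704   # metres per second in one mile per hour
G = 9.81              # standard gravity, m/s^2

top = 60 * MPH_TO_MS  # ~26.8 m/s
accel = top / 4.0     # average acceleration over the 4 second sprint
print(f"{accel:.1f} m/s^2, or about {accel / G:.2f} g")
```

That works out to roughly two-thirds of a g, sustained for the whole sprint, which is why the early Roadster reviews read the way they did.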
There are multiple embedded Ubuntu operating systems controlling various parts of the car, connected on a network in the car. It’s a modern marvel and Tesla was rewarded with tons of awards and, well, sales. Charging a car that runs on batteries is a thing. So Tesla released the Superchargers in 2012, shipping 7 that year and growing slowly until now shipping over 2,500 per quarter. Musk took some hits because it took longer than anticipated to ship them, then to increase production, then to add solar. But at this point, many are solar and I keep seeing panels popping up above the cars to provide shade and offset other forms of powering the chargers. The more ubiquitous chargers become, the more accepting people will be of the cars. Tesla needed to produce products faster. The Nevada Gigafactory was begun in 2013, to mass produce battery packs and components. Here’s one of the many reasons for the high-flying valuation Tesla enjoys: it would take dozens if not a hundred factories like this to transition to sustainable energy sources. But it started with a co-investment between Tesla and Panasonic, with the two dumping billions into building a truly modern factory that’s now pumping out close to the goal set back in 2014. As need increased, Gigafactories started to crop up, with Gigafactory 5 being built to supposedly go into production in 2021 to build the Semi, the Cybertruck (which should begin production in 2021), and the Model Y. Musk first mentioned the truck in 2012 and projected a 2018 or 2019 start time for production. Close enough. Another aspect of all that software is that they can get updates over the air. Tesla released Autopilot in 2014. Similar to other attempts to slowly push towards self-driving cars, Autopilot requires the driver to stay alert, but can take on a lot of the driving - staying within the lines on the freeway, parking itself, traffic-aware cruise control, and navigation. 
But it’s still the early days for self-driving cars, and while we may think that because the number of transistors on integrated circuits doubles every couple of years it paves the way to pretty much anything, no machine learning project I’ve ever seen has gone as fast as we want, because it takes years to build the appropriate algorithms and then rethink industries based on the impact of those. But Tesla, Google through Waymo, and many others have been working on it for a long time (hundreds of years in startup-land) and it continues to evolve. By 2015, Tesla had sold over 100,000 cars in the life of the company. They released the Model X that same year, 2015. This was their first chance to harness the power of the platform - which in the auto industry is when there are multiple cars of similar size and build. Franz von Holzhausen designed it and it is a beautiful car, with falcon-wing doors, up to a 370 mile range on the battery, and again with the Autopilot. But harnessing the power of the platform was a challenge. You see, with a platform of cars you want most of the parts to be shared - the differences are often mostly cosmetic. But the Model X shared only a little less than a third of its parts with the Model S. Still, it’s yet another technological marvel, with All Wheel Drive as an option, that beautiful screen, and check this out - a towing capacity of 5,000 pounds - for an electric automobile! By the end of 2016, they’d sold over 25,000. To a larger automaker that might seem like nothing, but they’d sell over 10,000 in every quarter after that. And it would also become the platform for a mini-bus. Because why not. So they’d gone lateral in the secret plan but it was time to get back at it. This is where the Model 3 comes in. The Model 3 was released in 2017 and is now the best-selling electric car in the history of the electric car. The Model 3 was first shown off in 2016 and within a week, Tesla had taken over 300,000 reservations. 
Everyone I talked to seemed to want in on an electric car that came in at $35,000. This was the secret plan. That $35,000 model wouldn’t be available until 2019 but they started cranking them out. Production was a challenge, with Musk famously claiming Tesla was in “Production Hell” and sleeping on an air mattress at the factory to oversee the many bottlenecks that came. Musk thought they could introduce more robotics than they could, and so they slowly increased production to first a few hundred per week, then a few thousand, until finally almost hitting that half a million mark in 2020. This required buying Grohmann Engineering in 2017, now called Tesla Advanced Automation Germany - pumping billions into production. But Tesla added the Model Y in 2020, launching a crossover on the Model 3 platform, producing over 450,000 of them. And then of course they decided to do the Tesla Semi, selling for between $150,000 and $200,000. And what’s better than a Supercharger to charge those things? A Megacharger. As is often the case with ambitious projects at Tesla, it didn’t ship in 2020 as projected but is now supposed to ship, um, later. Tesla also changed their name from Tesla Motors to Tesla, Inc. And if you check out their website today, solar roofs and solar panels share the top bar with the Models S, 3, X, and Y. SolarCity and batteries, right? Big money brings big attention. Some good. Some bad. Some warranted. Some not. Musk’s online and sometimes nerd-rockstar persona was one of the most valuable assets at Tesla - at least in the fundraising, stock pumping popularity contest that is the startup world. But on August 7, 2018, he tweeted “Am considering taking Tesla private at $420. Funding secured.” The SEC would sue him for that, causing him to step down as chairman for a time and limit his Twitter account. But hey, the stock jumped up for a bit. But Tesla kept keeping on, slowly improving things and finally hit about the half million cars per year mark in 2020. 
Producing cars has been about quality for a long time. And it needs to be, with people zipping around as fast as we drive - especially on modern freeways. Small batches of cars are fairly straightforward. Although I could never build one. The electric car is good for the environment, but the cost to offset carbon for Tesla is still far greater than, I don’t know, making a home more energy efficient. But the improvements in the technology continue to increase rapidly with all this money and focus being put on them. And the innovative designs that Tesla has deployed have inspired others, which often coincides with the rethinking of entire industries. But there are tons of other reasons to want electric cars. The average automobile manufactured these days has about 30,000 parts. Teslas have less than a third of that. One hopes that will some day be seen in faster and higher quality production. They managed to go from producing just over 18,000 cars in 2015 to over 26,000 in 2016 to over 50,000 in 2017 to the 190,000s in 2018 and 2019 to a whopping 293,000 in 2020. But they sold nearly 500,000 cars in 2020 and seem to be growing at a fantastic clip. Here’s the thing, though. Ford exceeded half a million cars in 1916. It took Henry Ford from 1901 to 1911 to get to producing 34,000 cars a year but only 5 more years to hit half a million. I read a lot of good and a lot of bad things about Tesla. Ford currently has a little over a 46 and a half billion dollar market cap. Tesla’s crested at nearly $850 billion and has since dropped to just shy of $600 billion. Around 64 million cars are sold each year. Volkswagen is the top, followed by Toyota. Combined, they are worth less than Tesla on paper despite selling over 20 times the number of cars. If Tesla was moving faster, that might make more sense. But here’s the thing. Tesla is about to get besieged by competitors at every side. 
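Those production figures imply a growth rate worth spelling out. Here’s a quick sketch, using the approximate numbers cited above:

```python
# Tesla annual production, using the approximate figures cited above.
production = {2015: 18_000, 2016: 26_000, 2017: 50_000,
              2018: 190_000, 2019: 190_000, 2020: 293_000}

def cagr(start, end, years):
    """Compound annual growth rate across `years` periods."""
    return (end / start) ** (1 / years) - 1

growth = cagr(production[2015], production[2020], 2020 - 2015)
print(f"{growth:.0%} per year")  # roughly 75% compounded annually
```

Ford’s 1911-to-1916 run, from 34,000 cars a year to half a million, works out to around 71% a year by the same measure - so the two production ramps are surprisingly comparable.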
Nearly every category of car has an electric alternative, with Audi, BMW, Volvo, and Mercedes releasing cars at the higher ends and on multiple platforms. Other manufacturers are releasing cars to compete with the upper and lower tiers of each model Tesla has made available. And miniature cars, scooters, bikes, air taxis, and other modes of transportation are causing us to rethink the car. And multi-tenancy of automobiles using ride sharing apps, and the potential impact of self driving cars on that, are causing us to rethink automobile ownership. All of this will lead some to rethink the valuation Tesla has enjoyed. But watching the moves Tesla makes, and scratching my head over some, certainly makes me think to never under- or overestimate Tesla or Musk. I don’t want anything to do with Tesla stock. Far too weird for me to grok. But I do wish them the best. I highly doubt the state of electric vehicles and the coming generational shifts in transportation in general would be where they are today if Tesla hadn’t done all the good and bad that they’ve done. They deserve a place in the history books when we start looking back at the massive shifts to come. In the meantime, I’ll just call this episode part 1 and wait to see if Tesla matches Ford production levels some day, crashes and burns, gets acquired by another company, or who knows, packs up and heads to Mars.
3/9/2021 • 29 minutes, 19 seconds
PayPal Was Just The Beginning
We can look around at distributed banking, crypto-currencies, Special Purpose Acquisition Companies, and so many other innovative business strategies as new and exciting and innovative. And they are. But paving the way for them was simplifying online payments into what I’ve heard Elon Musk call just some rows in a database. Peter Thiel, Max Levchin, and former Netscaper Luke Nosek had this idea in 1998. Levchin and Nosek had worked together on a startup called SponsorNet New Media while at the University of Illinois Urbana-Champaign, where PLATO and Mosaic had come out of. SponsorNet was supposed to sell online banner ads but would instead be one of four failed startups before they zeroed in on this new thing, where they would enable digital payments for businesses and make it simple for consumers to buy things online. They called the company Confinity and set up shop in beautiful Mountain View, California. It was an era when a number of organizations were taking payments online in ways that weren’t so great. Companies would cache credit card numbers on sites, many had weak security, and the rush to sell everything in the bubble forming around dot-coms fueled a knack for speed over security, privacy, or even reliability. Confinity would store the private information in its own banking vaults, keep it secure, and provide access to vendors - taking a small charge per transaction. Where large companies had been able to build systems to take online payments, now small businesses and emerging online stores could compete with the big boys. Thiel and Levchin had hit on something when they launched a service called PayPal, to provide a digital wallet and enable online transactions. They even accepted venture funding, taking $3 million from investors like Deutsche Bank - beamed over Palm Pilots. One of those funders was Nokia, investing in PayPal expanding into digital services for the growing mobile commerce market. And by 2000 they were up to 1,000,000 users. 
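That “rows in a database” quip holds up surprisingly well. A payment between two account holders really can be modeled as a handful of rows, so long as the updates happen atomically. Here’s a minimal sketch in Python with SQLite - a hypothetical toy schema for illustration, not anything resembling PayPal’s actual system:

```python
import sqlite3

# Toy ledger: each account is a row; a payment is just two row updates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, cents INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 10_000), ("bob", 0)])

def transfer(sender, receiver, cents):
    """Move money atomically; the transaction rolls back on any error."""
    with conn:  # one transaction: both updates commit, or neither does
        (balance,) = conn.execute(
            "SELECT cents FROM accounts WHERE name = ?", (sender,)).fetchone()
        if balance < cents:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET cents = cents - ? WHERE name = ?",
                     (cents, sender))
        conn.execute("UPDATE accounts SET cents = cents + ? WHERE name = ?",
                     (cents, receiver))

transfer("alice", "bob", 2_500)
```

The hard part, as the rest of this story shows, was never the rows - it was fraud, regulation, and scale.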
They saw an opening to make a purchase from a browser or app on a cell phone, using one of those new smartphone ideas. And they were all rewarded with over 10 million people using the site in just three short years, processing a whopping $3 billion in transactions. Now this was the heart of the dot-com bubble. In that time, Elon Musk managed to sell his early startup Zip2, which made city guides on the early internet, to Compaq for around $300 million, pocketing $22 million for himself. He parlayed that payday into X.com, another online payment company. X.com exploded to over 200,000 customers quickly and, as happens frequently with rapid acceleration, a young Musk found himself with a new boss - Bill Harris, the former CEO of Intuit. And they helped invent many of the ways we do business online at that time. One of my favorite of Levchin’s contributions to computing, the Gausebeck-Levchin test, is one of the earliest implementations of what we now call CAPTCHA - you know, when you’re shown a series of letters and asked to type them in to eliminate bots. Harris helped the investors de-risk by merging X.com with Confinity. Peter Thiel and Elon Musk are larger than life minds in Silicon Valley. The two were substantially different. Musk took on the CEO role, but Musk and Thiel butted heads. Thiel believed in a Linux ecosystem and Musk believed in a Windows ecosystem. Thiel wanted to focus on money transfers, similar to the PayPal of today. Given that those were just rows in a database, it was natural that that kind of business would become a red ocean, and indeed today there are dozens of organizations focused on it. But PayPal remains the largest. Musk, on the other hand, wanted to become a full online banking system - much more ambitious. Ultimately Thiel won and assumed the title of CEO. They remained a money transmitter and not a full bank. This means they keep funds that have been sent and not picked up in an interest-bearing account at a bank. 
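The idea behind the Gausebeck-Levchin test is simple enough to sketch. The real thing rendered distorted letters as an image that the OCR of the day couldn’t read; this toy version (hypothetical function names, and skipping the image distortion entirely) just shows the challenge-and-verify shape of it:

```python
import random
import string

def make_challenge(length=5, rng=random):
    """Pick a random string of letters. A real CAPTCHA would render this
    as a warped, noisy image - easy for a human, hard for a bot."""
    return "".join(rng.choice(string.ascii_uppercase) for _ in range(length))

def verify(challenge, response):
    """Accept the response if it matches, ignoring case and whitespace."""
    return challenge == response.strip().upper()

challenge = make_challenge()
print(challenge)  # shown to the user - as an image, in the real test
print(verify(challenge, challenge.lower()))  # a correct (if lazy) answer
```

The asymmetry is the whole trick: generating and checking the challenge is trivial for the server, while reading the distorted image is (or at least was, in 2000) expensive for a machine.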
They renamed the company to PayPal in 2001 and focused on taking the company public, with an IPO as PYPL in 2002. The stock shot up 50% in the first day of trading, closing at $20 per share. Yet another example of the survivors of the dot-com bubble increasing the magnitude of valuations. By then, most eBay transactions accepted PayPal and, seeing an opportunity, eBay acquired PayPal for $1.5 billion later in 2002. Suddenly PayPal was the default option for closed auctions and would continue their meteoric rise. Musk is widely reported to have made almost $200 million when eBay bought PayPal and Thiel is reported to have made over $50 million. Under eBay, PayPal would grow and, as with most companies that IPO, see a red ocean form in their space. But they brought in people like Ken Howery, who served as the VP of corporate development, would later cofound investment firm Founders Fund with Thiel, and then become the US Ambassador to Sweden under Trump. And he’s the first of what’s called the PayPal Mafia, a couple dozen extremely influential personalities in tech. By 2003, PayPal had become the largest payment processor for gambling websites. Yet they walked away from that business to avoid the complicated regulations until they could verify licenses for online gambling venues in the various countries. In 2006 they added security keys and moved to sending codes to phones for a second factor of security validation. In 2008 they bought Fraud Sciences, to gain access to better online risk management tools, and Bill Me Later. As the company grew, they set up a company in the UK and began doing business internationally. They moved their EU presence to Luxembourg in 2007. They’ve often found themselves embroiled in politics, blocking various political financing accounts, the Alex Jones show InfoWars, and, in one of the more challenging cases for them, WikiLeaks in 2010. 
This led to them being targeted by members of Anonymous with a series of denial-of-service attacks that brought the PayPal site down. OK, so that early CAPTCHA was just one way PayPal was keeping us secure. It turns out that moving money is complicated, even the $3 you paid for that special Golden Girls t-shirt you bought for a steal on eBay. For example, US states require reporting certain transactions, some countries require actual government approval to move money internationally, and some require a data center in the country, like Turkey. So on a case-by-case basis PayPal has had to decide if it’s worth it to increase the complexity of the code and spend precious development cycles to support a given country. In some cases, they can step in and, for example, connect the Baidu wallet to PayPal merchants in support of connecting China to PayPal. They were spun back out of eBay in 2014, acquired Xoom for $1 billion in 2015, and bought iZettle, who also does point-of-sale systems, for $2.2 billion. And surprisingly they bought online coupon aggregator Honey for $4B in 2019. But their best acquisition to many would be tiny app payment processor Venmo, which Braintree had bought for $26 million before PayPal acquired Braintree. I say this because a friend claimed they prefer that to PayPal because they like the “little guy.” Out of nowhere, just a little more than 20 years ago, the founders of PayPal and a number of their initial employees willed a now Fortune 500 company into existence. While they were growing, they had to learn about and understand so many capital markets and regulations. This sometimes showed them how they could better invest money. And many of those early employees went on to have substantial impacts in technology. That brain drain helped fuel the Web 2.0 companies that rose. One of the most substantial ways was with the investment activities. 
Thiel would go on to put $10 million of his money into Clarium Capital Management, a hedge fund, and Palantir, a big data AI company with a focus on the intelligence industry, which now has a $45 billion market cap. And he funded another organization that doesn’t at all use our big private data for anything, called Facebook. He put half a million into Facebook as an angel investor - an investment that has paid back billions. He’s also launched the Founders Fund and Valar Ventures, and is a partner at Y Combinator, in capacities where he’s funded everyone from LinkedIn and Airbnb to Stripe to Yelp to Spotify to SpaceX to Asana, and the list goes on and on and on. Musk has helped take so many industries online. Why not just apply that startup modality to space - so he launched SpaceX; to cars - so he helped launch (and backed financially) Tesla; to solar power - so he launched SolarCity; and to building tunnels - so he launched The Boring Company. He dabbles in Hyperloops (thus the need for tunnels) and OpenAI and, well, whatever he wants. He’s even done cameos in movies like Iron Man. He’s certainly a personality. Max Levchin would remain the CTO and then co-found and become the CEO of Affirm, a public fintech company. David Sacks was the COO at PayPal and founded Yammer. Roelof Botha is the former CFO at PayPal who became a partner at Sequoia Capital, one of the top venture capital firms. Yishan Wong was an engineering manager at PayPal who became the CEO of Reddit. Steve Chen left to join Facebook but hooked back up with Jawed Karim, who he’d studied computer science with at the University of Illinois Urbana-Champaign, for a new project. They were joined by Chad Hurley, who had created the original PayPal logo, to found YouTube. They sold it to Google for $1.65 billion in 2006. Hurley now owns part of the Golden State Warriors, the MLS Los Angeles team, and Leeds United. Reid Hoffman was another COO at PayPal, who Thiel termed the “firefighter-in-chief”, and he left to found LinkedIn. 
After selling LinkedIn to Microsoft for over $26 billion he became a partner at the venture capital firm Greylock Partners. Jeremy Stoppelman and Russel Simmons co-founded Yelp with $1 million in funding from Max Levchin, taking the company public in 2011. And the list goes on. PayPal paved the way for small transactions on the Internet. A playbook repeated in different parts of the sector by the likes of Square, Stripe, Dwolla, Due, and many others - including Apple Pay, Amazon Payments, and Google Wallet. We live in an era now where practically every industry has been taken online. Heck, even cars. In the next episode we’ll look at just that, exploring the next steps in Elon Musk’s career after leaving PayPal.
3/6/2021 • 17 minutes, 16 seconds
Playing Games and E-Learning on PLATO: 1960 to 2015
PLATO (Programmed Logic for Automatic Teaching Operations) was an educational computer system that began at the University of Illinois at Urbana-Champaign in 1960 and ran into the 2010s in various flavors. Wait, that’s an oversimplification. PLATO seemed to develop on an island in the corn fields of Champaign, Illinois, and sometimes precedes, sometimes symbolizes, and sometimes fast-follows what was happening in computing around the world in those decades. To put this in perspective - PLATO began on ILLIAC in 1960 - a large classic vacuum tube mainframe. Short for the Illinois Automatic Computer, ILLIAC was built in 1952, around 7 years after ENIAC was first put into production. As with many early mainframe projects, PLATO I began in response to a military need. We were looking for new ways to educate the masses of veterans using the GI Bill. We had to stretch the reach of college campuses beyond their existing infrastructures. Computerized testing started with mechanical computing, got automated with IBM’s test scoring machines in the 1930s, and a number of researchers were looking to improve the consistency of education and bring in new technology to help with quality teaching at scale. The post-World War II boom did this for industry as well. Problem is, following the launch of Sputnik by the USSR in 1957, many felt the US was lagging behind in education. So grant money to explore solutions flowed, and the project was able to capitalize on grants from the US Army, Navy, and Air Force. By 1959, physicists at Illinois began thinking of using that big ILLIAC machine they had access to. Daniel Alpert recruited Don Bitzer to run a project, after false starts with educators around the campus. Bitzer shipped the first instance of PLATO I in 1960. They used a television to show images, stored images in Raytheon tubes, and built a makeshift keyboard designed for PLATO so users could provide input in interactive menus and navigate.
They experimented with slide projectors when they realized the tubes weren’t all that reliable, and figured out how to do rudimentary time sharing, expanding to a second concurrent terminal with the release of PLATO II in 1961. Bitzer was a classic Midwestern tinkerer. He solicited help from local clubs, faculty, and high school students, and wherever he could cut a corner to build more cool stuff, he was happy to move money and resources to other important parts of the system. This was the age of hackers and they hacked away. He inspired, but also allowed people to follow their own passions. Innovation must be decentralized to succeed. They created an organization to support PLATO in 1966, as part of the Graduate College: the Computer-based Education Research Laboratory (CERL). Based on early successes, they got more and more funding at CERL. Now that we were beyond a 1:1 ratio of users to computers and officially into time sharing, it was time for PLATO III. There were a number of enhancements in PLATO III. For starters, the system was moved to a CDC 1604 that Control Data CEO William Norris donated to the cause - and expanded to allow for 20 terminals. But it was complicated to create new content, and the team realized that content would be what drove adoption. This was true with applications during the personal computer revolution and then with apps in the era of the App Store as well. One of many lessons learned first on PLATO. Content was in the form of applications that they referred to as lessons. It was a teaching environment, after all. They emulated the ILLIAC for existing content but needed more. People were compiling applications in a complicated language. Professors had day jobs and needed a simpler way to build content. So Paul Tenczar on the team came up with a language specifically tailored to creating lessons. Similar in some ways to BASIC, it was called TUTOR.
Tenczar released the manual for TUTOR in 1969, and with an easier way of getting content out there was an explosion in new lessons, and new features and ideas would flourish. We would see simulations, games, and courseware that would lead to a revolution in ideas. In a revolutionary time. The number of hours logged by students and course authors steadily increased. The team became ever more ambitious. And they met that ambition with lots of impressive achievements. Now that they were comfortable with the CDC 1604, they knew that the new content needed more firepower. CERL negotiated a contract with Control Data Corporation (CDC) in 1970 to provide equipment and financial support for PLATO. Here they ended up with a CDC Cyber 6400 mainframe, which became the foundation of the next iteration of PLATO, PLATO IV. PLATO IV was a huge leap forward on many levels. They had TUTOR, but with more resources could produce even more interactive content and capabilities. The terminals were expensive and not so scalable. So in preparation for potentially thousands of terminals in PLATO IV they decided to develop their own. This might seem a bit space age for the early 1970s, but what they developed was a flat-panel plasma display with a touch interface. It was 512x512 and rendered 60 lines per second at 1200 baud. The plasma had memory in it, which was made possible by the fact that they weren’t converting digital signals to analog, as is done on CRTs. Instead, it was a fully digital experience. The flat panel used infrared to see where a user was touching, allowing users some of their first exposure to touch screens. The touch grid was 16 by 16 rather than 512, but that was more than enough to take them over the next decade. The system could render basic bitmaps, but some lessons needed richer content - what we might today call multimedia. The Raytheon tubes used in previous systems proved to be more of a CRT technology and had plenty of drawbacks.
So for newer machines they also included a microfiche machine that projected images onto the back of the screen. The terminals were a leap forward. There were other programs going on at about the same time as the innovative bursts of PLATO, like the Dartmouth Time Sharing System, or DTSS, project that gave us BASIC instead of TUTOR. Other systems also had rudimentary forms of forums, such as EIES and the emerging BBS and Usenet cultures that followed later in the decade. But PLATO represented a unique look into the splintered networks of the time sharing age. Combined with the innovative lessons and newfound collaborative capabilities, the PLATO team was about to bring about something special. Or lots of somethings that culminated in more. One of those was Notes, created by David R. Woolley in 1973. Tenczar asked the 17-year-old Woolley to write a tool that would allow users to report bugs with the system. There had been a single notes file, which anyone could simply delete or overwrite. So Woolley made notes persistent and added the ability for a user to automatically get tagged in another file when updating. He expanded it to allow for 63 responses per note, and when opened, it showed the most recent notes. People came up with other features, and so a menu-driven interface emerged, providing access to System Announcements, Help Notes, and General Notes. But the notes were just the start. In 1973, seeing the need for even more ways to communicate with other people using the system, Doug Brown wrote a prototype for Talkomatic. Talkomatic was a chat program that showed when people were typing. Woolley helped Brown, and they added channels with up to five people per channel. Others could watch the chat as well. The concept would be expanded and officially supported as a tool called Term-Talk. That was entered by using the TERM key on a console, which allowed for a conversation between two people. You could TERM, or chat, a person, and then they could respond or mark themselves as busy.
Because the people writing this stuff were also the ones supporting users, they added another feature: the ability to monitor another user, or view their screen. And so programmers, or consultants, could respond to help requests and help get even more lessons going. And some at PLATO were using the ARPANET, so it was only a matter of time before word of Ray Tomlinson’s work on electronic mail leaked over, leading to the 1974 addition of personal notes, a way to send private mail engineered by Kim Mast. As PLATO grew, the amount of content exploded. They added categories to Notes in 1975, which led to Group Notes in 1976, and comments and linked notes and the ability to control access. But one of the most important innovations PLATO will be remembered for is games. Anyone who has played an educational game will note that school lessons and games aren’t always all that different. Since Rick Blomme had ported Spacewar! to PLATO in 1969 and added a two-player option, multi-player games had been on the rise. They made leader boards for games like Dogfight so players could get early forms of game rankings. Games like Airfight and Airace and Galactic Attack would follow. MUDs were another form of games that came to PLATO. Colossal Cave Adventure had come in 1975 for the PDP-10, so again these things were happening in a vacuum, but where there were influences and where innovations were deterministic and found in isolation is hard to say. But the crawlers exploded on PLATO. We got Moria, Oubliette by Jim Schwaiger, pedit5, Crypt, Dungeon, Avatar, and Drygulch. We saw the rise of intense storytelling and different game mechanics, mostly inspired by Dungeons & Dragons. As PLATO terminals found their way into high schools and other universities, the number of games and the amount of time spent on those games exploded, with estimates of 20% of time on PLATO being spent playing games. PLATO IV would grow to support thousands of terminals around the world in the 1970s.
It was a utility. Schools (and even some parents) leased lines back to Champaign-Urbana, and many in computing thought that these time sharing systems would become the basis for a utility model in computing, similar to the cloud model we have today. But we had to go through the era of the microcomputer to boomerang back to time sharing first. That microcomputer revolution would catch off guard many who didn’t see the correlation between Moore’s Law and the growing number of factories and standardization that would lead to microcomputers. Control Data had bet big on the mainframe market - and PLATO. CDC would sell mainframes to other schools to host their own PLATO instances. This is where it went from a time sharing system to a network of computers that did time sharing. Like a star topology. Control Data looked to PLATO as one form of what the future of the company would be. Norris saw this mainframe with thousands of connections as a way to lease time on the computers. CDC took PLATO to market as CDC PLATO. Here, schools and companies alike could benefit from distance education. And for awhile it seemed to be working. Financial companies and airlines bought systems, and the commercialization was on the rise, with over a hundred PLATO systems in use as we made our way to the middle of the 1980s. Even government agencies like the Department of Defense used them for training. But this just happened to coincide with the advent of the microcomputer. CDC made their own terminals that were often built with the same components that would be found in microcomputers, but failed to capitalize on that market. Corporations didn’t embrace the collaboration features and often had these turned off. Social computing would move to bulletin boards. And CDC would release versions of PLATO as Micro-PLATO for the TRS-80, Texas Instruments TI-99, and even Atari computers.
But the bureaucracy at CDC had slowed things down to the point that they couldn’t capitalize on the rapidly evolving PC industry. And prices were too high in a time when home computers were just moving from a hobbyist market to the mainstream. The University of Illinois spun PLATO out into its own organization called University Communications, Inc. (or UCI for short) and closed CERL in 1994. That was the same year Marc Andreessen co-founded Mosaic Communications Corporation, makers of Netscape, the successor to NCSA Mosaic. NCSA, the National Center for Supercomputing Applications, had also benefited from National Science Foundation grants when it was started in the mid-1980s. And all those students who flocked to the University of Illinois because of programs like PLATO had brought with them more expertise. UCI continued PLATO as NovaNet, which was acquired by National Computer Systems and then the Pearson corporation, finally getting shut down in 2015 - 55 years after those original days on ILLIAC. It evolved from a vacuum tube-driven mainframe in a research institute with one terminal, then two terminals, to a transistorized mainframe with hundreds and then over a thousand terminals connected from research and educational institutions around the world. It represented new ideas in programming and programming languages and inspired generations of innovations. That aftermath includes: The ideas. PLATO developers met with people from Xerox PARC starting in the 70s and inspired some of the work done at Xerox. Yes, they seemed isolated at times, but they were far from it. They also cross-pollinated ideas to Control Data. One way they did this was by trading some commercialization rights for more mainframe hardware. One of the easiest connections to draw from PLATO to the modern era is how the notes files evolved. Ray Ozzie graduated from Illinois in 1979 and went to work for Data General and then Software Arts, makers of VisiCalc.
The corporate world had nothing like the culture that had evolved out of the notes files in PLATO Notes. Today we take collaboration tools for granted, but when Ozzie was recruited by Lotus, the makers of 1-2-3, he agreed to join only if they would fund a project to bring over that collaborative spirit, which still seemed stuck in the splintered PLATO network. The Internet and networked computing in companies were growing, and he knew he could improve on the notes files in a way that companies could make use of. He started Iris Associates in 1984 and shipped a tool in 1989. That would evolve into what would be called Lotus Notes when the company was acquired by Lotus in 1994 and then, when Lotus was acquired by IBM, would evolve into Domino - surviving to today as HCL Domino. Ozzie would go on to become a CTO and then the Chief Software Architect at Microsoft, helping spearhead the Microsoft Azure project. Collaboration. Those notes files were also some of the earliest newsgroups. But they went further. Talkomatic introduced real-time text chats. The very concept of a digital community and its norms and boundaries were being tested, with challenges we still face, like discrimination, manifesting themselves even then. But it was inspiring, and between stints at Microsoft, Ray Ozzie founded Talko in 2012 based on what he learned in the 70s working with Talkomatic. That company was acquired by Microsoft and some of the features ported into Skype. Another way Microsoft benefited from the work done on PLATO was with Microsoft Flight Simulator. That was originally written by Bruce Artwick after leaving the university, based on the flight games he’d played on PLATO. Mordor: The Depths of Dejenol was cloned from Avatar. Silas Warner was connected to PLATO from terminals at Indiana University.
During and after school, he wrote software for companies, but wrote Robot War for PLATO and then co-founded Muse Software, where he wrote Escape!, a precursor for lots of other maze runners, and then Castle Wolfenstein. The name would get bought for $5,000 after his company went bankrupt and became one of the early blockbuster first-person shooters when released as Wolfenstein 3D. Then John Carmack and John Romero created Doom. But Warner would go on to work with some of the best in gaming, including Sid Meier. Paul Alfille built the game FreeCell for PLATO, and Control Data released it for all PLATO systems. Jim Horne played it from the PLATO terminals at the University of Alberta and eventually released it for DOS in 1988. Horne went to work for Microsoft, who included it in the Microsoft Entertainment Pack, making it one of the most popular software titles played on early versions of Windows. He got 10 shares of Microsoft stock in return, and it’s still part of Windows 10 in the Microsoft Solitaire Collection. Robert Woodhead and Andrew Greenberg got onto PLATO from their terminals at Cornell University, where they were able to play games like Oubliette and Empire. They would write a game called Wizardry that took some of the best that the dungeon crawl multi-players had to offer and brought it into a single-player computer, then console, game. I spent countless hours playing Wizardry on the Nintendo NES and have played many of the spin-offs, which came as late as 2014. Not only did the game inspire generations of developers to write dungeon games, but some of the mechanics inspired features in the Ultima series, Dragon Quest, Might and Magic, The Bard’s Tale, Dragon Warrior, and countless manga. Greenberg would go on to help with Q*bert and other games before going on to work with the IEEE. Woodhead would go on to work on other games like Star Maze.
I met Woodhead shortly after he wrote Virex, an early anti-virus program for the Mac that would later become McAfee VirusScan for the Mac. Paul Tenczar was in charge of the software developers for PLATO. After that he founded Computer Teaching Corporation and introduced EnCORE, which was renamed Tencore. They grew to 56 employees by 1990 and ran until 2000. He returned to the University of Illinois to put RFID tags on bees, contributing to computing for nearly 5 decades and counting. Michael Allen used PLATO at Ohio State University before looking to create a new language. He was hired at CDC, where he became a director in charge of research and development for education systems. There, he developed the ideas for a new computer language authoring system, which became Authorware, one of the most popular authoring packages for the Mac. That company would merge with MacroMind to become Macromedia, where bits and pieces got put into Dreamweaver and Shockwave as they released those. After Adobe acquired Macromedia, he would write a number of books and create even more e-learning software authoring tools. So PLATO gave us multi-player games, new programming languages, instant messaging, online and multiple-choice testing, collaboration forums, message boards, multiple-person chat rooms, early rudimentary remote screen sharing, their own brand of plasma display and all the research behind printing circuits on glass for that, and early research into touch-sensitive displays. And as we’ve shown in just a few of the many people that contributed to computing after, they helped inspire an early generation of programmers and innovators. If you like this episode, I strongly suggest checking out The Friendly Orange Glow by Brian Dear. It’s a lovely work with just the right mix of dry history and flourishes of prose. A short history like this can’t hold a candle to a detailed anthology like Dear’s book.
Another well-researched telling of the story can be found in a couple of chapters of A People’s History of Computing in the United States, by Joy Rankin. She does a great job drawing a parallel (and sometimes a direct line) from the Dartmouth Time Sharing System and others as early networks. And yes, terminals dialing into a mainframe and using resources over telephone and leased lines were certainly a form of bridging infrastructures and seemed like a network at the time. But no mainframe could have scaled to become a utility in the sense that all of humanity could access what was hosted on it. Instead, the ARPANET was put online and grew from 1969 to 1990, and working out the hard scientific and engineering principles behind networking protocols gave us TCP/IP. In her book, Rankin makes great points about the BASIC and TUTOR applications helping shape more of our modern world in how they inspired the future of how we used personal devices once connected to a network. The scientists behind the ARPANET, then NSFnet and the Internet, did the work to connect us. You see, those dial-up connections were expensive over long distances. By 1974 there were 47 computers connected to the ARPANET, and by 1983 we had TCP/IPv4. And much like Bitzer allowing games, they didn’t seem to care too much how people would use the technology but wanted to build the foundation - a playground for whatever people wanted to build on top of it. So the administrative and programming team at CERL deserve a lot of credit. The people who wrote the system, the generations who built features and code only to see it become obsolete, came and went - but the compounding impact of their contributions can be felt across the technology landscape today. Some of that is people rediscovering work done at CERL, some is directly inspired, and some has been lost, only to probably be rediscovered in the future.
One thing is for certain: their contributions to e-learning are unmatched by any other system out there. And their technical contributions, both those patented and those that were either unpatentable or that they didn’t think to patent, are immense. Bitzer and the first high schoolers and then graduate students across the world helped to shape the digital world we live in today. More from an almost sociological aspect than a technical one. And the deep thought applied to the system lives on today in so many aspects of our modern world. Sometimes that’s a straight line, and other times it’s dotted or curved. Looking around, most universities have licensing offices now, to capitalize on the research done. Check out a university near you and see what they have available for license. You might be surprised. As I’m sure many in Champaign were after all those years. Just because CDC couldn’t capitalize on some great research doesn’t mean we can’t.
3/2/2021 • 33 minutes, 37 seconds
So Long, Fry's Electronics
We’ve covered RadioShack, but there are a few other retail stores I’d like to cover as well. CompUSA, Circuit City, and Fry’s to name a few. Not only is there something to be learned from the move from brick and mortar electronics chains to e-commerce, but there’s plenty to be learned about how to treat people, how people perceived computers, and what we need and when, as well. You see, Fry’s was one of the few places you could walk in, pick a CPU, find a compatible motherboard, pick a sweet chassis to put it in, get a power supply, a video card, some memory, back then probably a network card, maybe some sweet fans, a cooling system for the CPU you were about to overclock, an SSD to boot a machine, a hard drive to store stuff, a DVD drive, a floppy just in case, pick up some velcro wrap to keep the cables at bay, get a TV, a cheap knockoff smart watch, a VR headset that would never work, maybe a safe since you already have a cart, a soundbar ‘cause you did just get a TV, some headphones for when you’ll keep everyone else up with the soundbar, a couple of resistors for that other project, a fixed-frequency video card for that one SGI in the basement, a couple of smart plugs, a solar backpack, and a CCNA book that you realize is actually 2 versions out of date when you go to take the test. Yup, that was a great trip. And ya’, there’s also a big bag of chips and a 32-ounce of some weird soda gonna’ go in the front seat with me. Sweet. Now let’s just toss the cheap flashlight we just bought into the glove box in case we ever break down, and we’re good to go home and figure out how to pay for all this junk on that new Fry’s credit card we just opened. But that was then and this is now. Fry’s announced it was closing all of its stores on February 24th, 2021. The week we’re recording this episode.
To quote the final message on their website: “After nearly 36 years in business as the one-stop-shop and online resource for high-tech professionals across nine states and 31 stores, Fry’s Electronics, Inc. (“Fry’s” or “Company”), has made the difficult decision to shut down its operations and close its business permanently as a result of changes in the retail industry and the challenges posed by the Covid-19 pandemic. The Company will implement the shut down through an orderly wind down process that it believes will be in the best interests of the Company, its creditors, and other stakeholders. The Company ceased regular operations and began the wind-down process on February 24, 2021. It is hoped that undertaking the wind-down through this orderly process will reduce costs, avoid additional liabilities, minimize the impact on our customers, vendors, landlords and associates, and maximize the value of the Company’s assets for its creditors and other stakeholders.” Wow. Just wow. I used to live a couple of miles from a Fry’s, and it was a major part of furthering my understanding of arcane, bizarre, sometimes emergent, and definitely dingy areas of computing. And if those adjectives don’t seem to have been included lovingly, they most certainly are. You see, every trip to Fry’s was strange. Donald Fry founded Fry’s Food and Drug in 1954. The store rose to prominence in the 50s and 60s until his brother Charles Fry sold it off in 1972. As a part of Kroger it still exists today, with 22,000 employees. But this isn’t the story of a supermarket chain. I guess I did initially think the two were linked because the logos look somewhat similar - but that’s where their connection ends. Instead, let’s cover what happened to the $14 million the family got from the sale of the chain. Charles Fry gave some to his sons John, Randy, and David. They added Kathryn Kolder and leased a location in Sunnyvale, California to open the first Fry’s Electronics store in 1985.
This was during the rise of the microcomputer. The computing industry had all these new players who were selling boards and printers and floppy drives. They put all this stuff in bins, kinda’ like you would in a grocery store, and became a one-stop shop for the hobbyist and the professional alike. Unlike groceries, the parts didn’t expire, so they were able to still have things selling 5 or 10 years later, albeit a bit dusty. 1985 was the era when many bought integrated circuits, motherboards, and soldering irons and built their own computers. They saw the rise of the microprocessor, the 80286 and other x86 chips. And as we moved into an era of predominantly x86 clones of the IBM PC, the buses and cards became standard. Provided a power supply had a Molex connector, it was probably good to light up most motherboards and hard drives. IDE became the standard, then later SATA. But parts were pretty interchangeable. Knowing groceries, they also sold those. Get some oranges and a microprocessor. They stopped selling produce but always sold snacks until the day they closed down. And services were always a thing at Fry’s, for those who didn’t want to spend hours putting spacers on a motherboard and putting a system together themselves. They also sold other electronics. Sometimes the selection seemed totally random. I bought my first MP3 player at a Fry’s - the Diamond Rio. And funny LED lights for computer fans before that really became a thing. Screwdriver kits, thermal grease, RAM chips, unsoldered boards, weird little toys, train sets, coloring books, certification books for that MCSE test I took in 2002, and whatever else I could think of. The stores were kitschy. Some had walls painted like circuit boards. Some had alien motifs. Others were decorated like the old west. It’s like they adorned the joint with whatever weird stuff they could find. People were increasingly going online. In 1997 they bought Frys.com. To help people get online, they started selling Internet access in 2000.
But by then there were so many vendors helping people get online that it wasn’t going to be successful. People were increasingly shopping online, so they bought Cyberian Outpost in 2001 and moved it to outpost.com - which later just pointed to Frys.com. The closing of a number of Radio Shack stores, and of Circuit City and CompUSA, seemed to give them a shot in the arm for a bit. But you could buy computers at Gateway Country or through Dell. Building your own computer was becoming more and more a niche pursuit for gamers and others who needed specific builds. They grew to 34 stores at their height. Northern California stores in Campbell, Concord, Fremont, Roseville, Sacramento, San Jose, and that original Sunnyvale location (now across the street from the old original Sunnyvale) and Southern California stores in Burbank, City of Industry, Fountain Valley, Manhattan Beach, Oxnard, San Diego, San Marcos, and the little one in Woodland Hills - it seemed like everyone in California knew to go to Fry’s when you needed some doodad. In fact, Fry’s even shows up in the documentary about General Magic, because the team was constantly going back and forth to Fry’s to get parts to build their device. But they did expand out of California, with 8 stores in Texas, two in Arizona, one in Illinois, one in Indiana, one in Nevada, one in Oregon, and another in Washington. In some ways it looked as though they were about to have a chain that could rival the supermarket chain their dad helped build. But it wasn’t meant to be. With the fall of Radio Shack, CompUSA, and Circuit City, I was always surprised Fry’s stayed around. Tandy started a similar concept called Incredible Universe, but that didn’t last too long. But I loved them. The customer service wasn’t great. The stores were always a little dirty. But I never left empty-handed. Even when I didn’t find what I was looking for. Generations of computer enthusiasts bought everything from scanners to printers at Fry’s.
They were sued over how they advertised, for sexual harassment, during divorce settlements, and over how they labeled equipment. They lost money to embezzlement, and as people increasingly turned to Amazon and other online vendors for the best price on that MSI motherboard or a screen for the iPhone, keeping such a massive inventory was putting them out of business. So in 2019, amidst rumors they were about to go out of business, they moved to stocking the stores via consignment. Not all vendors upstream could do that, leading to an increasingly strange selection, and finding what you needed got harder and harder. Then came COVID. They closed a few stores, and between the last-ditch effort of consignment and empty bins as hardware moved, they just couldn’t do it any more. Like the flashier Circuit City and CompUSA before them - less selection, but more complete systems - they finally closed their doors in 2021, after 36 years. And so we live in an era where many computers, tablets, and phones are no longer serviceable or have parts that can be swapped out. We live in an era where, when we can service a device with those parts, we often go online to source them. And we live in an era where, if we need instant gratification to replace components, there are plenty of retail chains like Target or Walmart that sell components and move far more volume than Fry’s did, so they’re more competitive on price. We live in an era where we don’t need to go into a retailer for software and books, both sold at high margins. There are stores on the Apple and Microsoft and Google platforms for that. And of course, 2020 was a year that many retail chains had to close their doors in order to keep their employees safe, losing millions in revenue. All of that eventually became too much for other computer stores as each slowly eroded the business. And now it’s become too much for Fry’s.
I will always remember the countless hours I strolled around the dingy store, palming this adapter and that cable and trying to figure out what components might fit together so I could get the equivalent of an Alienware computer for half the cost. And I’ll even fondly remember the usually sub-par customer service, because it forced me to learn more. And I’ll always be thankful that they had crap sitting around for a decade, because I always learned something new about the history of computers in their bins of arcane bits and bytes. And their closing reminds us, as the closings of former competitors and even other stores like Borders do, that an incredible opportunity lies ahead of us. These shifts in society also shift the supply chain. They used to get a 50% markup on software and a hefty markup on the books I wrote. Now I can publish software on the App Stores and pay less of my royalties to the retailers. Now I don’t need a box and manual for software. Now books don’t have to be printed and can even be self-published in those venues if I see fit to do so. And while Microsoft, Apple, and Google’s “Services” revenue, or revenue from Target, once belonged to stores like Fry’s, the opportunities have moved to linking and aggregating, adding machine learning, and looking to fields that haven’t yet been brought into a more digital age - or even harkening back to simpler times and providing a more small-town, white-glove approach to life. Just as the dot-com crash created a field where companies like Netflix and Google could become early unicorns, so every other rise and fall creates new, uncharted green fields and blue oceans. Thank you for your contributions - both past and future.
2/27/2021 • 16 minutes, 53 seconds
Apple 1997-2011: The Return Of Steve Jobs
Steve Jobs left Apple in 1985. He co-founded NeXT Computer and took Pixar public. He then returned to Apple as the interim CEO in 1997 at a salary of $1 per year. Some of the early accomplishments on his watch were started before he got there. But turning the company back around was squarely on him and his team. By the end of 1997, Apple moved to build-to-order manufacturing powered by an online store built on WebObjects, the NeXT application server. They killed off a number of models, simplifying the lineup of products, and also killed the clone deals, ending licensing of the operating system to other vendors who were at times building sub-par products. And they were busy. You could feel the frenetic pace. They were busy at work weaving the raw components from NeXT into an operating system that would be called Mac OS X. They announced a partnership that would see Microsoft invest $150 million in Apple to settle patent disputes; in exchange, Microsoft would get Internet Explorer bundled on the Mac and gave a commitment to release Office for the Mac again. By then, Apple had $1.2 billion in cash reserves again and a streamlined company that was ready to move forward - but 1998 was a bottoming out of sorts, with Apple doing just shy of $6 billion in revenue. To move forward, they took a little lesson from the past and released a new all-in-one computer. One that put the color back into that Apple logo. Or rather removed all the colors but Aqua blue from it. The return of Steve Jobs invigorated many, such as Jony Ive, who is reported to have had a resignation letter in his back pocket when he met Jobs. Their collaboration led to a number of innovations, at a furious pace, starting with the iMac. The first iMacs were shaped like gumdrops and the color of candy as well. The original Bondi blue had commercials showing all the cords in a typical PC setup and then the new iMac, “as unPC as you can get.” The iMac was supposed to be a simple way to get on the Internet.
But the ensuing upgrades allowed for far more than that. The iMac put style back into Apple, and even into computers. Subsequent releases came in candy colors like Lime, Strawberry, Blueberry, Grape, Tangerine, and later on Blue Dalmatian and Flower Power. The G3 chipset bled out into other, more professional products like a blue and white G3 tower, which featured a slightly faster processor than the beige tower G3, but a much cooler look - and very easy to get into compared to any other machine on the market at the time. And the Clamshell laptops used the same design language. Playful, colorful, but mostly as fast as their traditional PowerBook counterparts. But the team had their eye on a new strategy entirely. Yes, people wanted to get online - but these computers could do so much more. Apple wanted to make the Mac the Digital Hub for content. This centered around a technology co-developed by Apple, Sony, Panasonic, and others called IEEE 1394. But that was kinda boring, so we just called it FireWire. Begun at Apple in 1986, FireWire had become a port that was on most digital cameras at the time. USB wasn’t fast enough to load and unload a lot of newer content like audio and video from cameras to computers. But I can clearly remember that by the year 1999 we were all living, as Jobs put it, in a “new emerging digital lifestyle.” This led to a number of releases from Apple. One was iMovie. Apple included it with the new iMac DV model for free. That model dumped the fan (which Jobs never liked, even going back to the early days of Apple) and added FireWire and the ability to add an AirPort card. Oh, and they released an AirPort base station in 1999 to help people get online easily. It is still one of the simplest router and wi-fi devices I’ve ever used. And it was sleek, with the new Graphite design language that would carry Apple’s professional devices for years.
iMovie was a single place to load all those digital videos and turn them into something else. And there was another format on the rise, MP3. Most everyone I’ve ever known at Apple loves music. It’s in the DNA of the company, going back to Wozniak and Jobs and their love of musicians like Bob Dylan in the 1970s. The rise of the transistor radio and then the cassette and Walkman had opened our eyes to the democratization of what we could listen to as humans. But the MP3 format, which had been around since 1993, was on the rise. People were ripping and trading songs, and Apple looked at a tool called Audion and another called SoundJam and decided that rather than Sherlocking them (building the functionality into the OS), they would buy SoundJam in 2000. The new software, which they called iTunes, allowed users to rip and burn CDs easily. Apple then added iPhoto, iWeb, and iDVD. For photos, creating web sites, and making DVDs respectively. The digital hub was coming together. But there was another very important part of that whole digital hub strategy. Now that we had music on our computers, we needed something more portable to listen to that music on. There were MP3 players like the Diamond Rio out there - and there had been others, going back to the waning days of the Digital Equipment Research Lab - but they were clunky, or had poor design, or were just crappy and cheap. And mostly only held an album or two. I remember walking down that aisle at Fry’s about once every other month, waiting and hoping. But nothing good ever came. That is, until Jobs and the Apple hardware engineering lead Jon Rubinstein found Tony Fadell. He had been at General Magic, you know, the company that ushered in mobility as an industry. And he’d built Windows CE mobile devices for Philips in the Velo and Nino. But when we got him working with Jobs, Rubinstein, and Jony Ive on the industrial design front, we got one of the most iconic devices ever made: the iPod.
And the iPod wasn’t all that different on the inside from a Newton. Blasphemy, I know. It sported a pair of ARM chips, and Ive harkened back to simpler times when he based the design on a transistor radio. Attention to detail - and the lack thereof in the Sony Discman - propelled Apple to sell more than 400 million iPods to this day. By the time the iPod was released in 2001, Apple revenues had jumped to just shy of $8 billion, but dropped back down to $5.3 billion. But everything was about to change. And part of that was that the iPod design language was about to leak out to the rest of the products, with white iBooks, white Mac Minis, and other white devices as a design language of sorts. To sell all those iDevices, Apple embarked on a strategy that seemed crazy at the time. They opened retail stores. They hired Ron Johnson and opened two stores in 2001. They would grow to over 500 stores and hit a billion in sales within three years. Johnson had been the VP of merchandising at Target and, with the teams at Apple, came up with the idea of taking payment without cash registers (after all, you have an internet-connected device you want to sell people) and the Genius Bar. And generations of devices came that led people back into the stores. The G4 came along - as did faster RAM. And while Apple was updating the classic Mac operating system, they were also hard at work preparing NeXT to go across the full line of computers. They had been working the bugs out in Rhapsody and then Mac OS X Server, but the client OS, codenamed Kodiak, went into beta in 2000 and then was released as a dual-boot option in Cheetah, in 2001. And thus began a long line of big cats, going to Puma in 2001, Jaguar in 2002, Panther in 2003, Tiger in 2005, Leopard in 2007, Snow Leopard in 2009, Lion in 2011, and Mountain Lion in 2012, before moving to the new naming scheme that uses famous places in California. Mac OS X finally provided a ground-up, modern, object-oriented operating system.
They built the Aqua interface on top of it. Beautiful, modern, sleek. Even the backgrounds! The iMac would go from a gumdrop to a sleek flat panel on a metal stand, like a sunflower. Jobs and Ive are both named on the patents for this, as well as many of the other inventions that came along in support of the rapid device rollouts of the day. Jaguar, or 10.2, would turn out to be a big update. They added Address Book and iChat - now called Messages - and, after nearly two decades, replaced the 8-bit Happy Mac with a grey Apple logo in 2002. Yet another sign they were no longer just a computer company. Some of these needed a server and storage, so Apple released the Xserve in 2002 and the Xserve RAID in 2003. The pro devices also started to transition from the grey graphite look to brushed metal, which we still use today. Many wanted to step beyond just listening to music. There were expensive tools for creating music, like Pro Tools. And don’t get me wrong, you get what you pay for. It’s awesome. But democratizing the creation of media meant Apple wanted a piece of software to create digital audio - and they released GarageBand in 2004. For this they again turned to an acquisition, Emagic, which had a tool called Logic Audio. I still use Logic to cut my podcasts. But with GarageBand they stripped it down to the essentials and released a tool that proved wildly popular, providing an on-ramp for many into the audio engineering space. Not every project worked out. Apple had ups and downs in revenue and sales in the early part of the millennium. The G4 Cube was released in 2000, and while it is hailed by industrial designers as one of the greatest designs ever, it was discontinued in 2001 due to low sales. But Steve Jobs had been hard at work on something new. Those iPods that were becoming the cash cow at Apple and changing the world, turning people into white-earbud-clad zombies spinning those click wheels, were about to get an easier way to put media into iTunes and so onto the device.
The iTunes Store was released in 2003. Here, Jobs parlayed the success at Apple, along with his own brand, to twist the arms of executives from the big 5 record labels to finally allow digital music to be sold online. Each song was a dollar. Suddenly it was cheap enough that the music trading apps just couldn’t keep up. Today it seems like everyone just pays a streaming subscription, but for a time it gave a shot in the arm to music companies and gave us all this new-found expectation that we would always be able to have music that we wanted to hear on-demand. Apple revenue was back up to $8.25 billion in 2004. But Apple was just getting started. The next seven years would see that revenue climb to $13.9 billion in 2005, $19.3 in 2006, $24 billion in 2007, $32.4 in 2008, $42.9 in 2009, $65.2 in 2010, and a staggering $108.2 in 2011. After working with the PowerPC chipset, Apple transitioned new computers to Intel chips in 2005 and 2006. Keep in mind that most people used desktops at the time and just wanted fast. And it was the era where the Mac was really open source friendly, so the ability to load in the best software the Linux and Unix worlds had to offer, inside projects or on servers, was made all the easier. But Intel could produce chips faster and was moving faster. That Intel transition also helped with what we call the “App Gap,” where applications written for Windows could be virtualized for the Mac. This helped the Mac get much more adoption in businesses. Again, the pace was frenetic. People had been almost begging Apple to release a phone for years. The Windows Mobile devices, the BlackBerry, the flip phones, even the Palm Treo. They were all crap in Jobs’ mind. Even the ROKR that had iTunes on it was crap. So Apple released the iPhone in 2007 in a now-iconic Jobs presentation. The early version didn’t have apps, but it was instantly one of the more sought-after gadgets.
And in an era where people paid $100 to $200 for phones, it changed the way we thought of the devices. In fact, the push notifications and app culture and always-on connectivity fulfilled the General Magic dream that the Newton never could, and truly moved us all into an always-on Internet culture. The Apple TV was also released in 2007. I can still remember people talking about Apple releasing a television at the time. The same way they talk about Apple releasing a car. It wasn’t a television though, it was a small whitish box that resembled a Mac Mini - just with a different, media-browsing type of Finder. Now it’s effectively an app to bootstrap the media apps on a Mac. It had been a blistering 10 years. We didn’t even get into Pages or FaceTime. And they weren’t done just yet. The iPad was released in 2010. By then, Apple revenues exceeded those of Microsoft. The return and the comeback was truly complete. Similar technology used to build the Apple online store was also used to develop the iTunes Store and then the App Store in 2008. Here, rather than going to a site you might not trust and downloading an installer file with crazy levels of permissions, you could get software from a curated store. One place where it’s still a work in progress to this day was iTools, released in 2000, rebranded to .Mac in 2002 and MobileMe in 2008, and now called iCloud. Apple’s vision to sync all of our data between our myriad of devices wirelessly was a work in progress and never met the lofty goals set out. Some services, like Find My iPhone, work great. Others, not so much. Jobs famously fired the team lead at one point. And while it’s better than it was, it’s still not where it needs to be. Steve Jobs passed away in 2011 at 56 years old. His first act at Apple changed the world, ushering in first the personal computing revolution and then the graphical interface revolution. He left an Apple that meant something.
He returned to a demoralized Apple and brought digital media, portable music players, the iPhone, the iPad, the Apple TV, the iMac, the online music store, the online App Store, and so much more. The world had changed in that time, so he left, well, one more thing. You see, when they started, privacy and security weren’t much of a thing. Keep in mind, computers didn’t have hard drives. The early days of the Internet after his return were a fairly safe Internet world. But by the time he passed away, there were some troubling trends. The data on our phones and computers could weave together nearly every bit of our life for an outsider. Not only could this lead to identity theft, but with the growing advertising networks and machine learning capabilities, the consequences of privacy breaches on Apple products could be profound for society. He left an ethos behind: build great products, but not at the expense of those who buy them. One his successor Tim Cook has maintained. On the outside it may seem like the pace of that daunting 10-plus years of product releases has slowed. We still have the MacBook, the iMac, a tower, a mini, an iPhone, an iPad, an Apple TV. We now have HomeKit, a HomePod, new models of all those devices, Apple silicon, and some new headphones - but more importantly, they’ve had to retreat a bit internally and direct some of those product development cycles to privacy, protecting users, and shoring up the security model. Managing a vast portfolio of products in the largest company in the world means doing those things isn’t always altruistic. Big companies can mean big lawsuits when things go wrong. These will come up as we cover the history of the individual devices in greater detail. The history of computing is full of stories of great innovators. Very few got a second act. Few, if any, had as impactful a first act as Steve Jobs had. It wasn’t just him in any of these.
There are countless people, from software developers to support representatives to product marketing gurus to the people who write the documentation. It was all of them, working with inspiring leadership and world-class products, who helped as much as any other organization in the history of computing to shape the digital world we live in today.
2/21/2021 • 25 minutes, 31 seconds
From Moveable Type To The Keyboard
QWERTY. It’s a funny word. Or not a word. But also not an acronym per se. Those are the first six letters on the top letter row of a modern keyboard. Why? Because of how frequently they’re used: the layout lets the hammers on a traditional typewriter travel to and fro without jamming, and that lets us be more efficient with our time while typing. The concept of the keyboard goes back almost as far as moveable type - but it took hundreds of years to standardize where we are today. Johannes Gutenberg is credited with developing the printing press in the 1450s. Printing using wooden blocks was brought to the Western world from China, which led him to replace the wood or clay characters with metal, thus giving us what we now think of as moveable type. This meant we were now arranging blocks of characters to print words onto paper. From there it was only a matter of time before we would realize that pressing a key could stamp a character onto paper as we went, rather than composing a full page and then pressing ink to paper. The first to get credit for pressing letters onto paper using a machine was the Venetian Francesco Rampazzetto in 1575. But as with many innovations, this one needed to bounce around in the heads of inventors until the appropriate level of miniaturization and precision was ready. Henry Mill filed an English patent in 1714 for a machine that could type (or impress) letters progressively. By then, printed books were ubiquitous, but we weren’t generating pages of printed text on the fly just yet. Others would develop similar devices, but from 1801 to 1810, Pellegrino Turri in Italy developed carbon paper. Here, he coated one side of paper with carbon and the other side with wax. Why did he invent that, other than to give us an excuse to say carbon copy later (and thus the cc in an email)? Either he or Agostino Fantoni da Fivizzano invented a mechanical machine for pressing characters to paper for Countess Carolina Fantoni da Fivizzano, a blind friend of his.
She would go on to send him letters written on the device, some of which exist to this day. More inventors tinkered with the idea of mechanical writing devices, often working in isolation from one another. One was a surveyor, William Austin Burt. He found the handwritten documents of his field laborious, and so gave us the typographer in 1829. Each letter had to be moved into place manually, so it wasn’t all that much faster than handwriting, but the name would be hyphenated later to form type-writer. And with precision increasing and a lot of invention going on at the time, there were other devices. But his patent was signed by Andrew Jackson. John Pratt introduced his Pterotype in an article in Scientific American in 1867. It was a device that more closely resembles the keyboard layout we know today, with 4 rows of keys and a split in the middle for the hands. Others saw the article and continued their own innovative additions. Frank Hall had worked on the telegraph before the Civil War and used his knowledge there to develop a Braille writer, which functioned similarly to a keyboard. He would move to Wisconsin, where he came in contact with another team developing a keyboard. Christopher Latham Sholes saw the article in Scientific American and, along with Carlos Glidden and Samuel Soule out of Milwaukee, developed what became the QWERTY keyboard - the standard keyboard layout today - from 1867 to 1868. Around the same time, Danish pastor Rasmus Malling-Hansen introduced the writing ball in 1870. It could also type letters onto paper, but with a much more complicated keyboard layout. It was actually the first typewriter to go into mass production - but at this point new inventions were starting to follow the QWERTY layout. Because asdfjkl;.
Both, though, were looking to increase typing speed, with Malling-Hansen’s layout putting consonants on the right side and vowels on the left - while Sholes and Glidden mixed keys up to help reduce the strain on the hardware as it recoiled, splitting the common characters in words between the two sides. James Densmore encountered the Sholes work and jumped in to help. They had it relentlessly tested and iterated on the design, getting more and more productivity gains and making the device hardier. When the others left the project, it was Densmore and Sholes carrying on. But Sholes was also a politician and the editor of a newspaper, so he had a lot going on. He sold his share of the patent for their layout for $12,000, and Densmore decided to go with royalties instead. By the 1880s, the invention had been floating around long enough, and given a standardized keyboard, it was finally ready to be mass produced. This began with the Sholes & Glidden Type Writer, introduced in America in 1874. That was followed by the Caligraph. But it was Remington that would take the Sholes patent and create the Remington Typewriter, removing the hyphen from the word typewriter and going mainstream - netting Densmore a million and a half bucks in 1800s money for his royalties. And if you’ve seen anything typed on one, you’ll note that it supported one font: the monospaced sans serif Grotesque style. Characters had always been upper case. Remington added a shift key to give us the ability to do both upper and lower case in 1878 with the Remington Model 2. This was also where we got the ampersand, parenthesis, percent symbol, and question mark as shift characters for numbers. Remington also added tab and margins in 1897. Mark Twain was the first author to turn in a manuscript from a typewriter, using what else but the Remington Typewriter. By then, we were experimenting with the sizes of and spaces between characters, or kerning, to make typed content easier to read.
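One way to see the effect of splitting common letters between the two sides, as Sholes and Glidden did, is to check which hand types each letter of a word. This is just an illustrative sketch, not anything from the original patents - the hand assignments below are the usual touch-typing convention:

```python
# Which hand types each letter on a QWERTY layout (touch-typing convention).
LEFT = set("qwertasdfgzxcvb")
RIGHT = set("yuiophjklnm")

def hands(word):
    """Return the left/right hand sequence used to type a word."""
    return ["L" if ch in LEFT else "R" for ch in word.lower() if ch.isalpha()]

def alternation_rate(word):
    """Fraction of adjacent letter pairs typed by different hands.
    Higher alternation meant adjacent typebars rarely clashed."""
    seq = hands(word)
    pairs = list(zip(seq, seq[1:]))
    return sum(a != b for a, b in pairs) / len(pairs) if pairs else 0.0
```

For a common word like “the”, `hands("the")` gives `["L", "R", "L"]` - every adjacent pair alternates hands, which is exactly the property that kept the recoiling hardware from jamming.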
Some companies moved to slab serif or Pica fonts and typefaces. You could really tell a lot about a company by that Olivetti with its modern, almost anti-Latin fonts. The Remington Typewriter Company would later merge with the Rand Kardex company to form Remington Rand, making typewriters, guns, and then, in 1950, acquiring the Eckert-Mauchly Computer Corporation, who made ENIAC - arguably the first all-digital computer. Rand also acquired Engineering Research Associates (or ERA) and introduced the UNIVAC. Electronics maker Sperry acquired them in 1955, and then merged with Burroughs to form Unisys in 1986, still a thriving company. But what’s important is that they knew typewriters. And keyboards. But electronics had been improving in the same era that Remington took their typewriters mainstream, and before. Samuel Morse developed the recording telegraph in 1835 and David Hughes added the printing telegraph. Emile Baudot gave us a 5-bit code in the 1870s that enhanced that, but those still used keys similar to what you find on a piano. The typewriter hadn’t merged with digital communications just yet. Thomas Edison patented an electric typewriter in 1872 but didn’t produce a working model. And this was a great time of innovation. For example, Alexander Graham Bell was hard at work on patenting the telephone at the time. James Smathers then gave us the first electric typewriter in 1920, and by the 1930s an improved Baudot code - the source of the term baud - was combined with a QWERTY keyboard by Siemens and others to give us typing over the wire. The Teletype Corporation was founded in 1906 and would go from tape punches and readers to producing the teletypes that allowed users to dial into mainframes in the 1970s timesharing networks. But we’re getting ahead of ourselves. How did we eventually end up plugging a keyboard into a computer?
Herman Hollerith, the mind behind the original IBM punch cards for tabulating machines before his company got merged with others to form IBM, brought us text keypunches, which were later used to input data into early computers. The BINAC computer used a similar representation with 8 keys, and an electromechanical control was added to input data into the computer like a punch card might - for this, think of a modern 10-key pad. Given that we had electric typewriters for a couple of decades, it was only a matter of time before a full keyboard’s worth of text was needed on a computer. That came in 1954 with the pioneering work done at MIT. Here, Douglas Ross wanted to hook up a Flexowriter electric typewriter to a computer, which would be done the next year as yet another of the huge innovations coming out of the Whirlwind project at MIT. With the addition of core memory to computing, that was the first time a real keyboard (and being able to write characters into a computer) was really useful. Nearly 400 years after the first attempts to build a moveable type machine, and just shy of 100 years after the layout had been codified, the computer keyboard was born. The PLATO team at the University of Illinois Urbana-Champaign in the late 60s was one of many research teams that sought to develop cheaper input/output mechanisms for their computer, ILLIAC, and prior to moving to standard keyboards they built custom devices with fewer keys to help students select multiple choice answers. But eventually they used teletype-esque systems. Those early keyboards were mechanical. They still made a heavy, clanky sound when the keys were pressed. Not as much as when using a big mechanical typewriter, but not like the keyboards we use today. These used keys with springs inside them. Springs would be replaced with pressure pads in some machines, including the Sinclair ZX80 and ZX81. And the Timex Sinclair 1000. Given that there were fewer moving parts, they were cheap to make.
They used conductive traces with a gate between two membranes. When a key was pressed, electricity flowed through what amounted to a flip-flop; when the key was released, the electricity stopped flowing. I never liked them because they just didn’t have that feel. In fact, they’re still used in devices like microwaves to provide buttons under LED lights that you can press. By the late 1970s, keyboards were becoming more and more common. The next advancement was the Chiclet keyboard, common on the TRS-80 and the IBM PCjr. These were like membrane keyboards but used moulded rubber. Scissor-switch keyboards became the standard for laptops - these involve a couple of pieces of plastic under each key, arranged like a scissor. And more and more keyboards were produced. With an explosion in the amount of time we spent on computers, we eventually got about as many designs of ergonomic keyboards as you can think of. Here, doctors or engineers or just random people would attempt to raise or lower the hands, move the hands apart, or depress or raise various keys. But as we moved from desktops to laptops, or to typing directly on screens as we do with tablets and phones, those sell less and less. I wonder what Sholes would say if you showed him and the inventors he worked with what the QWERTY keyboard looks like on an iPhone today? I wonder how many people know that at least two of the steps in the story of the keyboard had to do with helping the blind communicate through the written word? I wonder how many know about the work Alexander Graham Bell did with the deaf and the impact that had on his understanding of the vibrations of sound, and the emergence of the phonautograph to record sound, and how that would become acoustic telegraphy and then the telephone, which could later stream baud? Well, we’re out of time for today, so that story will have to get tabled for a future episode. In the meantime, look around for places where there’s no standard.
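As an aside on how those conductive traces actually become keystrokes: whether membrane or mechanical, most keyboards arrange keys in a row-and-column matrix, and a controller energizes one row at a time and reads which columns conduct. This is a rough, hypothetical sketch of that scan loop, not the firmware of any particular keyboard:

```python
# Hypothetical keyboard matrix scan: a pressed key closes the circuit
# between its row trace and its column trace.
def scan_matrix(pressed, rows, cols):
    """pressed: set of (row, col) positions currently held down.
    Returns the list of detected key positions, in scan order."""
    detected = []
    for r in range(rows):          # energize one row trace at a time
        for c in range(cols):      # read each column trace
            if (r, c) in pressed:  # circuit closed -> current flows
                detected.append((r, c))
    return detected
```

Real controllers add debouncing and diodes to handle key ghosting, but the core idea is just this nested loop running thousands of times a second.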
Just like the keyboard layout took different inventors and iterations to find the right amount of productivity, any place where there’s not yet a standard just needs that same level of deep thinking, and sometimes generations, to get it perfected. But we still use the QWERTY layout today, and so sometimes, once we find the right mix, we’ve set in motion an innovation that can become a true game changer. And if it’s not ready, at least we’ve contributed to the evolutions that revolutionize the world. Even if we don’t use those inventions. Bell famously never had a phone installed in his office. Because distractions. Luckily I disabled notifications on my phone before recording this or it would never get out…
2/18/2021 • 17 minutes, 6 seconds
Apple and NeXT Computer
Steve Jobs had an infamous split with the board of directors of Apple and left the company shortly after the release of the original Mac. He was an innovator who at 21 years old had started Apple in the garage with Steve Wozniak, and at 30 years old, while already plenty wealthy, felt he still had more to give and do. We can say a lot of things about him, but he was arguably one of the best product managers ever. He told Apple he’d be taking some “low-level staffers” and ended up taking Rich Page, Bud Tribble, Dan'l Lewin, George Crow, and Susan Barnes to be the CFO. They also took Susan Kare and Joanna Hoffman. They had their eyes on a computer that specifically targeted higher education. They wanted to build computers for researchers and universities. Companies like CDC and Data General had done well in universities. The team knew there was a niche that could be carved out there. There were some gaps with the Mac that made it a hard sell in research environments. Computer scientists needed object-oriented programming and protected memory. Having seen the work at PARC on object-oriented languages, Jobs knew the power of that future-proof approach. Unix System V had branched a number of times, and it was a bit more of a red ocean than I think they realized. But Jobs put up $7 million of his own money to found NeXT Computer. He’d add another $5 million, and Ross Perot would add another $20 million. The pay bands were among the most straightforward of any startup ever founded. The senior staff made $75,000 and everyone else got $50,000. Simple. Ironically, so soon after the 1984 Super Bowl ad where Jobs bashed IBM, they hired the man who designed the IBM logo, Paul Rand, to design a logo for NeXT. They paid him $100,000 flat. Imagine the phone call when Jobs called IBM to get them to release Rand from a conflict of interest in working with them. They released the first computer in 1988. The NeXT Computer, as it was called, was expensive for the day, coming in at $6,500.
It sported a Motorola 68030 CPU and clocked in at a whopping 25 MHz. And it came with a special operating system called NeXTSTEP. NeXTSTEP was based on the Mach kernel with some of the source code coming from BSD. If we go back a little, Unix was started at Bell Labs in 1969 and by the late 70s had forked into BSD, Unix Version 7, and PWB - with each of those resulting in other forks that would eventually become OpenBSD, SunOS, NetBSD, Solaris, HP-UX, Linux, AIX, and countless others. Mach was developed at Carnegie Mellon University and is one of the earliest microkernels. For Mach, Richard Rashid (who would later found Microsoft Research) and Avie Tevanian were looking specifically at distributed computing. The Mach project was kicked off in 1985, the same year Jobs left Apple. Mach was backwards-compatible with BSD 4.2 and so could run a pretty wide variety of software. It allowed for threads, or units of execution, and tasks, the resource containers that held those threads. It provided support for messages - typed data objects that, as in object-oriented languages, fall outside the scope of any one task or thread - and a protected message queue to manage the messages between tasks along with rights of access. They stood it up on a DEC VAX and released it publicly in 1987. Here’s the thing: Unix licensing from Bell Labs was causing problems. So it was important to everyone that the license be open. And this would be important to NeXT as well. NeXT needed a next-generation operating system and so Avie Tevanian was recruited to join NeXT as the Vice President of Software Engineering. There, he designed NeXTSTEP with a handful of engineers. The computers had custom boards and were fast. And they were a sleek black like nothing I’d seen before. But Bill Gates was not impressed, claiming that “If you want black, I’ll get you a can of paint.” But some people loved the machines and especially some of the tools NeXT developed for programmers. 
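Mach's core ideas - tasks, threads, typed messages, and a protected queue guarded by access rights - can be sketched conceptually. The following is a toy model in Python, not the real Mach APIs; every name in it (Message, Port, Task) is invented for illustration. It just shows the shape of the design: threads are the units of execution, tasks contain them, and tasks talk to each other by sending typed messages through a queue that only holders of the right capability may use.

```python
# Conceptual sketch of Mach-style IPC (illustration only, not real Mach APIs).
import queue
import threading

class Message:
    """A typed data object, independent of any one task or thread."""
    def __init__(self, msg_type, payload):
        self.msg_type = msg_type
        self.payload = payload

class Port:
    """A protected message queue; only holders of a right may use it."""
    def __init__(self):
        self._queue = queue.Queue()
        self.send_right = object()     # capability tokens standing in for
        self.receive_right = object()  # Mach's send/receive rights

    def send(self, right, message):
        if right is not self.send_right:
            raise PermissionError("task lacks send right")
        self._queue.put(message)

    def receive(self, right):
        if right is not self.receive_right:
            raise PermissionError("task lacks receive right")
        return self._queue.get()  # blocks until a message arrives

class Task:
    """A resource container; its threads are the units of execution."""
    def __init__(self, name):
        self.name = name
        self.threads = []

    def spawn(self, target, *args):
        t = threading.Thread(target=target, args=args)
        self.threads.append(t)
        t.start()
        return t

# Two tasks communicating over one port.
port = Port()
server, client = Task("server"), Task("client")
results = []

def serve():
    msg = port.receive(port.receive_right)
    results.append((msg.msg_type, msg.payload))

server.spawn(serve)
client.spawn(port.send, port.send_right, Message("greeting", "hello"))
for task in (server, client):
    for t in task.threads:
        t.join()

print(results)  # [('greeting', 'hello')]
```

The capability tokens are the key design point: a task that never received the send right simply cannot put messages on the queue, which is how Mach kept message passing between tasks protected.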
They got a factory to produce the machines and it only needed to crank out 100 a month as opposed to the thousands it was built to produce. In other words, the price tag was keeping universities from buying the machines. So they pivoted a little. They went up-market with the NeXTcube in 1990, which ran NeXTSTEP, OPENSTEP, or NetBSD and came with the Motorola 68040 CPU. This machine came in at $8,000 to almost $16,000. It came with a hard drive. For the lower end of the market they also released the NeXTstation in 1990, which shipped for just shy of $5,000. The new models helped but by 1991 they had to lay off 5 percent of the company and another 280 people by 1993. That’s when the hardware side got sold to Canon so NeXT could focus exclusively on NeXTSTEP. That is, until they got acquired by Apple in 1997. By the end, they’d sold around 50,000 computers. Apple bought NeXT for $429 million and 1.5 million shares of Apple stock, which was trading at around $17 a share at the time, making the stock worth another $25 and a half million dollars. That makes the deal worth about $454 million, or $9,080 per machine NeXT had ever built. But it wasn’t about the computer business, which had already been spun down. It was about Jobs and getting a multi-tasking, object-oriented powerhouse of an operating system, the grandparent of OS X - and the derivative macOS, iOS, iPadOS, watchOS, and tvOS forks. The work done at NeXT has had a long-term impact on the computer industry as a whole. For one, the spinning pinwheel on a Mac. And the Dock. And the App Store. And Objective-C. But also Interface Builder as an IDE was revolutionary. Today we use Xcode. But many of the components go back all the way. And so much more. After the acquisition, NeXTSTEP became Mac OS X Server in 1999 and by 2001 was Mac OS X. The rest there is history. But the legacy of the platform is considerable. Just on NeXTSTEP we had a few pretty massive successes. 
Tim Berners-Lee developed the first web browser, WorldWideWeb, on NeXTSTEP on a NeXT machine. Other browsers for other platforms would come but his work became the web as we know it today. The machine he developed the web on is now on display at the National Museum of Science and Media in the UK. We also got games like Quake, Heretic, Strife, and Doom, built with the help of Interface Builder. And WebObjects. And the people. Tevanian came with NeXT to Apple as the Senior Vice President of Software Engineering. Jobs became an advisor, then CEO. Craig Federighi came with the acquisition as well - now Apple’s Senior Vice President of Software Engineering. And I know dozens of others who came in from NeXT and helped reshape the culture at Apple. Next.com still redirects to Apple.com. It took three years to ship that first computer at NeXT. It took 2 1/2 years to develop the iPhone. The Apple II, iPod, iPad, and first iMac took much less. The original Mac took nearly 5 years. Some things take a little more time to flesh out than others. Some need the price of components or new components to show up before you know it can be insanely great. Some need false starts. Steve Jobs famously said Apple wanted to create a computer in a book in 1983. That finally came out with the release of the iPad in 2010, 27 years later. And so the final component of the Apple acquisition of NeXT to mention is Steve Jobs himself. He didn’t initially come in. He’d just become a billionaire off Pixar and was doing pretty darn well. His arrival back at Apple signified the end of a long drought for the company, and all those products we mentioned plus the iTunes music store and the App Store (both initially built on WebObjects) would change the way we consume content forever. His impact was substantial. For one, after factoring in stock splits, the company might still be trading at the 22 cents a share it was worth back then. Instead they’re the most highly valued company in the world. 
But that pales in comparison to the way he and his teams and that relentless eye for product and design actually changed the world. And the way his perspectives on privacy help protect us today, long after he passed. The hero’s journey is a storytelling template that follows a hero from disgrace through a grand adventure, learning the mistakes of their past and reinventing themselves amidst a crisis, before returning home transformed. NeXT and Pixar represent part of that journey here. Which makes me wonder: what is my own monomyth? Where will I return to? What is or was my abyss? These can be large or small. And while very few people in the world will have one like Steve Jobs did, we should all reflect on ours and learn from them. And yes, that was plural, because life is not so simple that there is one. The past, and our understanding of it, predicts the future. Good luck on your journey.
2/15/2021 • 14 minutes, 12 seconds
Apple's Lost Decade
I often think of companies in relation to their contribution to the next evolution in the forking and merging of disciplines in computing that brought us to where we are today. Many companies have multiple contributions. Few have as many such contributions as Apple. But there was a time when they didn’t seem so innovative. This lost decade began about half way through the tenure of John Sculley and can be seen through the lens of the CEOs. There was Sculley, CEO from 1983 to 1993. Co-founders and spiritual centers of Apple, Steve Jobs and Steve Wozniak, left Apple in 1985 - Jobs to create NeXT and Wozniak to jump into a variety of companies, making universal remotes, wireless GPS trackers, and other adventures. This meant Sculley was finally in a position to be fully in charge of Apple. His era would see sales 10x from $800 million to $8 billion. Operationally, he was one of the more adept at cash management, putting $2 billion in the bank by 1993. Suddenly the vision of Steve Jobs was paying off. That original Mac started to sell and grow markets. But during this time, first the IBM PC and then the clones, all powered by the Microsoft operating system, completely took the operating system market for personal computers. Apple had high margins yet struggled for relevance. Under Sculley, Apple released HyperCard, funded a skunkworks team in General Magic - arguably the beginning of ubiquitous computing - and, using many of those same ideas, he backed the Newton, coining the term personal digital assistant. Under his leadership, Apple marketing sent 200,000 people home with a Mac to try it out. Putting the device in the hands of the people is probably one of the more important lessons they still teach newcomers who work in Apple Stores. Looking at the big financial picture it seems like Sculley did alright. But in Apple’s fourth-quarter earnings call in 1993, they announced a 97 percent drop from the same quarter in 1992. 
This was also when a serious technical debt problem began to manifest itself. The Mac operating system grew from the system those early pioneers built in 1984, with Macintosh System Software going from version 1 to version 7. But after annual releases leading to version 6, it took 3 years to develop System 7, and the direction to take with the operating system caused a schism in Apple engineering around what would happen once 7 shipped. Seems like most companies go through almost the exact same schism. Microsoft quietly grew NT to resolve their issues with Windows 3 and 95 until it finally became the thing in 2000. IBM had invested heavily into that same code, basically, with Warp - but wanted something new. Something happened while Apple was building System 7. They lost Jean-Louis Gassée, who had been head of development since Steve Jobs left. When Sculley gave everyone a copy of his memoir, Gassée provided a copy of The Mythical Man-Month, from Fred Brooks’ experience with the IBM System/360. It’s unclear today if anyone read it. To me this is really the first big sign of trouble. Gassée left to build another OS, BeOS. By the time System 7 was released, it was clear that the operating system was bloated and needed a massive object-oriented overhaul, and under Sculley the teams were split, with one team eventually getting spun off into its own company and then becoming a part of IBM to help with their OS woes. The team at Apple took 6 years to release the next operating system. Meanwhile, one of Sculley’s most defining decisions was to avoid licensing the Macintosh operating system. Probably because it was just too big a mess to do so. And yet everyday users didn’t notice all that much and most loved it. But third party developers left. And that was at one of the most critical times in the history of personal computers because Microsoft was gaining a lot of developers for Windows 3.1 and released the wildly popular Windows 95. 
The Mac accounted for most of the revenue of the company, but under Sculley the company dumped a lot of R&D money into the Newton. As with other big projects, the device took too long to ship and when it did, the early PDA market was a red ocean with inexpensive competitors. The Palm Pilot effectively ended up owning that pen computing market. Sculley was a solid executive. And he played the part of visionary from time to time. But under his tenure Apple found operating system problems, rumors about Windows 95, and developers leaving Apple behind for the Windows ecosystem - and whether those technical issues are on his lieutenants or him, the buck stops there. The Windows clone industry led to PC price wars that caused Apple revenues to plummet. And so Markkula was off to find a new CEO. Michael Spindler became the CEO from 1993 to 1996. The failures of the Newton and the Copland operating system are placed at his feet, even though they began in the previous regime. Markkula had hired Digital Equipment and Intel veteran Spindler to assist in European operations, and he rose to President of Apple Europe and then ran all international operations. He would become the only CEO to have no new Mac operating system released in his tenure. Missed deadlines abounded with Copland and then Tempo, which would become Mac OS 8. And those aren’t the only products that came out at the time. We also got the PowerCD, the Apple QuickTake digital camera, and the Apple Pippin - Bandai had begun trying to develop a video game system with a scaled down version of the Mac. The Apple Pippin realized Markkula’s idea from when the Mac was first conceived as an Apple video game system. There were a few important things that happened under Spindler though. First, Apple moved to the PowerPC architecture. Second, he decided to license the Macintosh operating system to companies wanting to clone the Macintosh. And he had discussions with IBM, Sun, and Philips to acquire Apple. Dwindling reserves, increasing debt. 
Something had to change and within three years, Spindler was gone. Gil Amelio was CEO from 1996 to 1997. He moved from Apple’s board, where he’d sat while CEO of National Semiconductor, to CEO of Apple. He inherited a company short on cash and high on expenses. He quickly pushed forward OS 8, cut a third of the staff, streamlined operations, dumped some poor quality products, and released new products Apple needed to be competitive, like the Apple Network Server. He also tried to acquire BeOS for $200 million, which would have brought Gassée back, but instead acquired NeXT for $429 million. But despite the good trajectory he had the company on, the stock was still dropping, Apple continued to lose money, and an immovable force was back - now with another decade of experience launching two successful companies: NeXT and Pixar. The end of the lost decade can be seen as the return of Steve Jobs. Apple didn’t have an operating system. They were in a lurch, so to speak. I’ve seen or read it portrayed that Steve Jobs intended to take control of Apple. And I’ve seen it portrayed that he was happy digging up carrots in the back yard but came back because he was inspired by Jony Ive. But I remember the feel around Apple changed when he showed back up on campus. As with other companies that dug themselves out of a lost decade, there was a renewed purpose. There was inspiration. By 1997, one of the heroes of the personal computing revolution, Steve Jobs, was back. But not quite… He became interim CEO in 1997 and immediately turned his eye to making Apple profitable again. Over the past decade, the product line had expanded to include a dozen models of the Mac. Anyone who’s read Geoffrey Moore’s Crossing the Chasm, Inside the Tornado, and Zone To Win knows this story all too well. We grow, we release new products, and then we eventually need to take a look at the portfolio and make some hard cuts. 
Apple released the Macintosh II in 1987, then the Macintosh Portable in 1989, then the IIcx and IIci in ’89 along with the Apple IIgs, the last of that series. Facing competition in different markets, we saw the LC line come along in 1990 and the Quadra in 1991, the same year three models of the PowerBook were released. Different printers, scanners, and CD-ROM drives had come along by then, and in 1993 we got a Macintosh TV, the Apple Newton, and more models of the LC. By 1994 there were even more of those plus the QuickTake, the Workgroup Server, and the Pippin. By 1995 there were a dozen Performas, half a dozen Power Macintosh 6400s, the Apple Network Server and yet another version of the Performa 6200, and we added the eMate and beige G3 in 1997. The SKU list was a mess. Cleaning that up took time but helped prepare Apple for a simpler sales process. Today we have a good, better, best with each device, with many a computer being built-to-order. Jobs restructured the board, ending the long tenure of Mike Markkula, who’d been so impactful at each stage of the company so far. One of the forces behind the rise of the Apple computer and the Macintosh was about to change the world again, this time as the CEO.
2/12/2021 • 15 minutes, 17 seconds
The Unlikely Rise Of The Macintosh
There was a nexus of Digital Research and Xerox PARC, along with Stanford and Berkeley, in the Bay Area. The rise of the hobbyists and the success of Apple attracted some of the best minds in computing to Apple. This confluence was about to change the world. One of those brilliant minds that landed at Apple started out as a technical writer. Apple hired Jef Raskin as their 31st employee, to write the Apple II manual. He quickly started harping on people to build a computer that was easy to use. Mike Markkula wanted to release a gaming console or a cheap computer that could compete with the Commodore and Atari machines at the time. He called the project “Annie.” The project began with Raskin, but he had a very different idea than Markkula’s. He summed it up in an article called “Computers by the Millions” that wouldn’t see publication until 1982. His vision was closer to his PhD dissertation, bringing computing to the masses. For this, he envisioned a menu-driven operating system that was easy to use and inexpensive. It was not yet a GUI in the sense of a windowing operating system, and so could run on chips that were rapidly dropping in price. He planned to use the 6809 chip for the machine and give it a five inch display. He didn’t tell anyone that he had a PhD when he was hired, as the team at Apple was skeptical of academia. Jobs provided input, but was off working on the Lisa project, which used the 68000 chip. So they had free rein over what they were doing. Raskin quickly added Joanna Hoffman for marketing. She was on leave from getting a PhD in archaeology at the University of Chicago and was the marketing team for the Mac for over a year. They also added Burrell Smith, employee #282, from the hardware technician team to do hardware. He’d run with the Homebrew Computer Club crowd since 1975 and had just strolled into Apple one day and asked for a job. 
Raskin also brought in one of his students from the University of California San Diego, who was taking a break from working on his PhD in neurochemistry. Bill Atkinson became employee 51 at Apple and joined the project. They pulled in Andy Hertzfeld, who Steve Jobs had hired when Apple bought one of his programs as he was wrapping up his degree at Berkeley, and who’d been sitting on the Apple services team and doing Apple III demos. They added Larry Kenyon, who’d worked at Amdahl and then on the Apple III team. Susan Kare came in to add art and design. They, along with Chris Espinosa - who’d been in the garage with Jobs and Wozniak working on the Apple I - ended up comprising the core team. Over time, the team grew. Bud Tribble joined as the manager for software development. Jerrold Manock, who’d designed the case of the Apple II, came in to design the now-iconic Macintosh case. The team would eventually expand to include Bob Belleville, Steve Capps, George Crow, Donn Denman, Bruce Horn, and Caroline Rose as well. It was still a small team. And they needed a better code name. But chronologically let’s step back to the early project. Raskin chose his favorite apple, the McIntosh, as the inspiration for the Macintosh codename. As far as codenames go it was a pretty good one. So their mission would be to ship a machine that was easy to use, would appeal to the masses, and be at a price point the masses could afford. They were looking at 64k of memory, a Motorola 6809 chip, and a 256 by 256 bitmap display. Small, light, and inexpensive. Jobs’ relationship with the Lisa team was strained, so he was taken off of that project and started moving in on the Macintosh team. It was quickly the Steve Jobs show. Having seen what could be done with the Motorola 68000 chip on the Lisa team, Jobs had them redesign the board to work with that. After visiting Xerox PARC at Raskin’s insistence, Jobs finally got the desktop metaphor and true graphical interface design. Xerox had not been quiet about the work at PARC. 
Going back to 1972 there were even television commercials. And Raskin had done time at PARC while on sabbatical from Stanford. Information about Smalltalk had been published and people like Bill Atkinson were reading about it in college. People had been exposed to the mouse all around the Bay Area in the 60s and 70s, or had read Engelbart’s scholarly works on it. Many of the people that worked on these projects had doctorates and were academics. They shared their research as freely as love was shared during that counter-culture time - just as knowledge had passed from MIT to Dartmouth and then, in the back of Bob Albrecht’s VW, spread around the country in the 60s. That spirit of innovation and the constant evolutions over the past 25 years found their way to Steve Jobs. He saw the desktop metaphor and mouse and fell in love with them, knowing they could build a mouse for less than the $400 unit Xerox had. He saw how an object-oriented programming language like Smalltalk made all that possible. The team was already on their way to the same types of things, and so Jobs told the people at PARC about the Lisa project, but not yet about the Mac. In fact, he was as transparent as anyone could be. He made sure they knew how much he loved their work and disclosed more than I think the team planned on him disclosing about Apple. This is the point where Larry Tesler and others realized that the group of rag-tag garage-building Homebrew hackers had actually built a company that had real computer scientists and was on track to changing the world. Tesler and some others would end up at Apple later - to see some of their innovations go to a mass market. Steve Jobs at this point totally bought into Raskin’s vision. Yet he still felt they needed to make compromises with the price and better hardware to make it all happen. Raskin couldn’t make the kinds of compromises Jobs wanted. He also had an immunity to the now-infamous Steve Jobs reality distortion field, and they clashed constantly. 
So eventually Raskin left the project just when it was starting to take off. He would go on to work with Canon to build his vision, which became the Canon Cat. With Raskin gone, and armed with a dream team of mad scientists, they got to work, tirelessly pushing towards shipping a computer they all believed would change the world. Jobs brought in Fernandez to help with projects like the Mac system software and later HyperCard. Wozniak had a pretty big influence over Raskin in the early days of the Mac project and helped here and there with the project, like with the bit-serial peripheral bus on the Mac. Steve Jobs wanted an inexpensive mouse that could be manufactured en masse. Jim Yurchenco from Hovey-Kelley, later called Ideo, got the task - given that trusted engineers at Apple had full dance cards. He looked at the Xerox mouse and other devices around - including trackballs in Atari arcade machines. Those used optics instead of mechanical switches: as the ball under the mouse rolled, beams of light would be interrupted, and the cost of those components had come down faster than the technology in the Xerox mouse. He used a ball from a roll-on deodorant stick and got to work. The rest of the team designed the injection molded case for the mouse. That work began with the Lisa and by the time they were done, the price was low enough that every Mac could get one. Armed with a mouse, they figured out how to move windows over the top of one another. Susan Kare designed iconography that is a bit less 8-bit today but often every bit as true to form. Learning how they wanted to access various components of the desktop, or find things, they developed the Finder. Atkinson gave us marching ants, the concept of double-clicking, the lasso for selecting content, the menu bar, MacPaint, and later, HyperCard. It was a small team, working long hours, driven by Jobs’ drive for perfection. Jobs made the Lisa team the enemy. Everything not the Mac just sucked. He took the team to art exhibits. 
He had the team sign the inside of the case to infuse them with the pride of an artist. He killed the idea of long product specifications before writing code and they just jumped in, building and refining and rebuilding and rapid prototyping. The team responded well to the enthusiasm and need for perfectionism. The Mac team was like a rebel squadron. They were like a start-up, operating inside Apple. They were pirates. They got fast and sometimes harsh feedback. And nearly all of them still look back on that time as the best thing they’ve done in their careers. As IBM and many others learned the hard way before them, a small, inspired team can get a lot done. With such a small team and the ability to parlay work done for the Lisa, the R&D costs were minuscule until they were ready to release the computer. And yet, one can’t change the world overnight. 1981 turned into 1982 turned into 1983. More and more people came in to fill gaps. Colette Askeland came in to design the printed circuit board. Mike Boich went to companies to get them to write software for the Macintosh. Berry Cash helped prepare sellers to move the product. Matt Carter got the factory ready to mass produce the machine. Donn Denman wrote MacBASIC (because every machine needed a BASIC back then). Martin Haeberli helped write MacTerminal and Memory Manager. Bill Bull got rid of the fan. Patti King helped manage the software library. Dan Kottke helped troubleshoot issues with motherboards. Brian Robertson helped with purchasing. Ed Riddle designed the keyboard. Linda Wilkin took on documentation for the engineering team. It was a growing team. Pamela Wyman and Angeline Lo came in as programmers. Hap Horn and Steve Balog came in as engineers. Jobs had agreed to bring in adults to run the company. So they recruited 44-year-old hotshot John Sculley to come change the world as their CEO rather than keep selling sugar water at Pepsi. Sculley and Jobs had a tumultuous relationship over time. 
While Jobs had made tradeoffs on cost versus performance for the Mac, Sculley ended up raising the price for business reasons. Regis McKenna came in to help with the marketing campaign. He would win over so much trust that he would later get called out of retirement to do damage control when Apple had an antenna problem on the iPhone. We’ll cover Antenna-gate at some point. They spearheaded the production of the now-iconic 1984 Super Bowl XVIII ad, which shows a woman running from conformity and depicted IBM as Big Brother from George Orwell’s novel, 1984. Two days after the ad, the Macintosh 128k shipped for $2,495. The price had jumped because Sculley wanted enough money to fund a marketing campaign. It shipped late, and the 128k of memory was a bit underpowered, but it was a success. Many of the concepts, such as a System and Finder, persist to this day. It came with MacWrite and MacPaint, and some of the other Lisa products were soon to follow, now as MacProject and MacTerminal. But the first killer app for the Mac was Microsoft Word, in the first graphical version of Word ever shipped. Every machine came with a mouse. The machines came with a cassette that featured a guided tour of the new computer. You could write programs in MacBASIC and my second language, MacPascal. They hit the initial sales numbers despite the higher price. But over time that price bit them with sluggish sales. Despite the early success, sales were declining. Yet the team forged on. They introduced the Apple LaserWriter at a whopping $7,000. This was a laser printer based on the Canon 300 dpi engine. Burrell Smith designed a board, and newcomer Adobe knew laser printers, given that the founders were Xerox alumni. They added PostScript, which had initially been thought up while John Warnock was working at Evans & Sutherland and was then implemented at PARC, to make for perfect printing at the time. The sluggish sales caused internal issues. There’s a hangover when we do something great. 
First there were the famous episodes between Jobs, Sculley, and the board of directors at Apple. Sculley seems to have been portrayed by many as either a villain or a court jester of sorts in the story of Steve Jobs. Across my research, which began with books and notes and expanded to include a number of interviews, I’ve found Sculley to have been admirable in the face of what many might consider a petulant child. But they all knew a brilliant one. And amidst Apple’s first quarterly loss, Sculley and Jobs had a falling out. Jobs tried to lead an insurrection and ultimately resigned. Wozniak had left Apple already, pointing out that the Apple II was still 70% of the revenues of the company. But the Mac was clearly the future. They had reached a turning point in the history of computers. The first mass-marketed computer featuring a GUI and a mouse came and went. And so many others were in development that a red ocean was forming. Microsoft released Windows 1.0 in 1985. Acorn, Amiga, IBM, and others were in rapid development as well. I can still remember the first time I sat down at a Mac. I’d used the Apple IIs in school and then we got a lab of Macs. It was amazing. I could open a file, change the font size and print a big poster. I could type up my dad’s lyrics and print them. I could play SimCity. It was a work of art. 
And so it was signed by the artists that brought it to us: Peggy Alexio, Colette Askeland, Bill Atkinson, Steve Balog, Bob Belleville, Mike Boich, Bill Bull, Matt Carter, Berry Cash, Debi Coleman, George Crow, Donn Denman, Christopher Espinosa, Bill Fernandez, Martin Haeberli, Andy Hertzfeld, Joanna Hoffman, Rod Holt, Bruce Horn, Hap Horn, Brian Howard, Steve Jobs, Larry Kenyon, Patti King, Daniel Kottke, Angeline Lo, Ivan Mach, Jerrold Manock, Mary Ellen McCammon, Vicki Milledge, Mike Murray, Ron Nicholson Jr., Terry Oyama, Benjamin Pang, Jef Raskin, Ed Riddle, Brian Robertson, Dave Roots, Patricia Sharp, Burrell Smith, Bryan Stearns, Lynn Takahashi, Guy "Bud" Tribble, Randy Wigginton, Linda Wilkin, Steve Wozniak, Pamela Wyman and Laszlo Zidek. Steve Jobs left to found NeXT. Some, like George Crow, Joanna Hoffman, and Susan Kare, went with him. Bud Tribble would become a co-founder of NeXT and then the Vice President of Software Technology after Apple purchased NeXT. Bill Atkinson and Andy Hertzfeld would go on to co-found General Magic and usher in the era of mobility. One of the best teams ever assembled slowly dwindled away. And the oncoming dominance of Windows in the market took its toll. It seems like every company has a “lost decade.” Some, like Digital Equipment, don’t recover from it. Others, like Microsoft and IBM (who has arguably had a few), emerge as different companies altogether. Apple seemed to go dormant after Steve Jobs left. They had changed the world with the Mac. They put swagger and an eye for design into computing. But in the next episode we’ll look at that long hangover, where they were left by the end of it, and how they emerged to change the world yet again. In the meantime, Walter Isaacson weaves together this story about as well as anyone in his book Steve Jobs. Steven Levy brilliantly tells it in his book Insanely Great. Andy Hertzfeld gives some of his stories at folklore.org. 
And countless other books, documentaries, podcasts, blog posts, and articles cover various aspects as well. The reason it’s gotten so much attention is that where the Apple II was the watershed moment to introduce the personal computer to the mass market, the Macintosh was that moment for the graphical user interface.
2/9/2021 • 21 minutes, 14 seconds
On Chariots of the Gods?
Humanity is searching for meaning. We binge TV shows. We get lost in fiction. We make up amazing stories about superheroes. We hunt for something deeper than what’s on the surface. We seek conspiracies or... aliens. I finally got around to reading a book that had been on my list for a long time, recently. Not because I thought I would agree with its assertions - but because it came up from time to time in my research. Chariots of the Gods? is a book written in 1968 by the German author Erich von Däniken. He goes through a few examples to, in his mind, prove that aliens not only had been to Earth but that they destroyed Sodom with fire and brimstone, which he said was a nuclear explosion. He also says the Ark of the Covenant was actually a really big walkie-talkie for calling space. Ultimately, the thesis centers around the idea that humans could not possibly have made the technological leaps we did, and so the technology must have been given to us by the gods. I find this to be a perfectly satisfactory science fiction plot. In fact, various alien conspiracy theories seemed to begin soon after Orson Welles’ 1938 live adaptation of H.G. Wells’ War of the Worlds and, like a virus, they mutated. But did this alien virus start in a bat in Wuhan or in Roman Syria? The ancient Greeks and then Romans had a lot of gods. Lucian of Samosata thought they should have a couple more. He wove together a story, which he called “A True Story.” In it, he says it’s all make-believe - playing on the multiple pantheons of gods people believed in, in modern-day Syria, in the second century AD. In the satire, Lucian and crew get taken to the Moon, where they get involved in a war between the Moon and the Sun kings over the rights to colonize the Morning Star. They then get eaten by a whale, escape, and travel on, meeting great Greeks through time, including Pythagoras, Homer, and Odysseus. And they find the new world. 
Think of how many modern plots are wrapped up in that book from the second century, made effectively to poke fun at storytellers like Homer. The 1800s brought a rapid merger and explosion of scientific understanding, and Edgar Allan Poe again took us to the moon in "The Unparalleled Adventure of One Hans Pfaall" in 1835. Then came Jules Verne, Mary Shelley, and H.G. Wells with that War of the Worlds in 1898. By then we’d mapped the surface of the moon with telescopes, so authors wrote of Mars and further. H.P. Lovecraft gave us the Call of Cthulhu. These authors predicted the future - but science fiction became a genre that did more. It gave us satire, allegory, and comparisons to those rapid global changes - ways to consider the social impact before or after we invent, and to just cope with evolving social norms. The magazine Amazing Stories came in 1926, and the greatest work of science fiction premiered in 1942 with Isaac Asimov’s Foundation. Science fiction was opening our eyes to what was possible and opening the minds of scientists to what we might create in the future. But it wasn’t real. Von Däniken and the French author Robert Charroux seemed to influence one another in taking history and science and turning them into pseudohistory and pseudoscience. Both got many of their initial ideas from the 1960 book The Morning of the Magicians. And Chariots of the Gods? was a massive success and a best seller. Rather than being dismissed, it has since spread to include conspiracy and other theories. Which is fine as fiction - not as non-fiction. Let’s look at some other specific examples from Chariots of the Gods? Von Däniken claims that Japanese Dogu figures were carvings of aliens. He claims there were alien helicopter carvings in an Egyptian temple.
He claims the Nazca lines in Peru were a way to call aliens, and that a map from 1513 actually showed the Earth from space - rather than thinking it possible that cartography was capable of a somewhat accurate representation of the world in the Age of Discovery. He claims stories in the Bible were often inspired by alien visits, much as some First Nations peoples and cargo cults thought people in ships visiting their lands for the first time might be gods. The one thing I’ve learned researching these episodes is that technology has been a constant evolution. Many of our initial discoveries, like fire, agriculture, and the six simple machines, could be observed in nature. From the time we learned to make fire, it was only a matter of time before humanity discovered that stones placed in or around fire might melt in certain ways - and so metallurgy was born. We went through population booms as we discovered each of these. We used the myths and legends that became religions to hand down knowledge, much as I was taught to use mnemonics to memorize the seven layers of the OSI model. That helped us preserve knowledge of astronomy across generations, so we could explore further and better maintain our crops. The ancient Sumerians and then the Babylonians gave us writing. But we had been drawing on cave walls for thousands of years before that. Which seems more likely: that we were gifted this advance, or that as we settled into denser urban centers we needed to scale operations, tracked the number of widgets we had with markings, and those markings over time evolved into a written language - first pictures, then words, then sentences, then epics? We could now pass down information more reliably across generations. Trade and commerce, and then ziggurats and pyramids, helped hone our understanding of mathematics. The study of logic and automata allowed us to build bigger and faster and process more raw materials.
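As a small aside on that OSI mnemonic: here’s a sketch in Python of how one common mnemonic phrase maps onto the seven layers. The phrase itself is just one of many in circulation, not anything specific to this episode.

```python
# The seven OSI layers, bottom to top, paired with one common mnemonic:
# "Please Do Not Throw Sausage Pizza Away".
osi_layers = [
    "Physical", "Data Link", "Network", "Transport",
    "Session", "Presentation", "Application",
]
mnemonic = "Please Do Not Throw Sausage Pizza Away".split()

# Each mnemonic word shares its first letter with the matching layer.
for word, layer in zip(mnemonic, osi_layers):
    assert word[0] == layer[0]
    print(f"{word:<8} -> {layer}")
```

The same trick - an easy-to-remember phrase standing in for an ordered list - is exactly the kind of oral-tradition compression the paragraph above describes.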
Knowledge of all of these discoveries spread across trade routes. So ask yourself this: which is more likely, that humans maintained a constant, ever-evolving stream of learned ingenuity that was passed down for tens of thousands of years and accelerated when we learned to write, or that aliens from outer space gave us technology? I find that asserting anything but the idea that humans are capable of the fantastic feats we have reached revokes our very agency, and I believe it insulting to take away from the great philosophers, discoverers, scientists, and thinkers who got us where we are today. Our species has long made up stories to explain that which the science of the day cannot. Before we understand the why, we make up stories about the how. This allowed us to pass knowledge down between generations. We see it in ancient explanations of the movements of stars, before we had astrolabes. We see humans wanting to leave something behind to help the next generations, as with burial sites like Stonehenge - not to summon Thor from an alien planet, as Marvel has rewritten its own epics to suggest, in part based on rethinking those mythos in the context of Chariots of the Gods? Ultimately, the greater our gaps in understanding, the more disconnected from ourselves I find most people to be. We listen to talking heads rather than think for ourselves. We get lost in theories of cabals. We seek a deeper, missing knowledge because we can’t understand everything in front of us. Today, if we know where to look and can decipher the scientific jargon, all the known knowledge of science and history is at our fingertips. But it can take a lifetime to master one of thousands of fields of scientific research.
If we don’t have that specialty, then we can perceive it as unreachable and think maybe this pseudohistorical account of humanity is true - maybe aliens really did give us our technology. If we feel left behind, then it becomes easier to blame others when we can’t get below the surface of complicated concepts. Getting left behind might mean that jobs don’t pay what they paid our parents. We may perceive others as getting attention or resources we feel we deserve. We may feel isolated and alone. And all of those are valid feelings. When they’re heard, then maybe we can look to the future instead of accepting pseudoscience and pseudohistory and conspiracies. Because while they make for fun romps on the big screen, they’re dangerous when taken as fact.
2/6/2021 • 13 minutes, 51 seconds
The Apple Lisa
Apple found massive success on the back of the Apple II. They went public like many of the late-70s computer companies, and the story could have ended there, as it did for many computer companies of the era - some of whom were potentially bigger, had better technology or better go-to-market strategies, or were even far more innovative. But it didn’t. The journey to the next stage began with the Apple IIc, Apple IIgs, and other incrementally better, faster, or smaller models. Those funded the research and development of a number of projects. One was a new computer: the Lisa. I bet you thought we were jumping into the Mac next. Getting there. But twists and turns, as the title suggests. The success of the Apple II led many of the best and brightest minds in computers to want to go work at Apple. Jobs came to be considered a visionary. The pressure to actually become one has been the fall of many a leader, and Jobs almost succumbed to it as well. Some go down due to a lack of vision, others because they don’t have the capacity for executional excellence. Some lack lieutenants they can trust. The story isn’t clear with Jobs. He famously sought perfection. And sometimes he got close. The Xerox Palo Alto Research Center, or PARC for short, had been a focal point of raw research and development since 1970. They inherited many great innovations, outlandish ideas, amazing talent, and decades of research from academia and Cold War-inspired government grants. Ever since Sputnik, the National Science Foundation and the US Advanced Research Projects Agency had funded raw research. During Vietnam, that funding dried up and private industry moved in to take products to market. Arthur Rock had come into Xerox in 1969, on the back of an investment in Scientific Data Systems. While on the board of Xerox, he got to see the advancements being made at PARC.
PARC hired some of the oNLine System (NLS) team, who helped ship the Xerox Alto in 1973 - a couple thousand computers. They followed that up with the Xerox Star in 1981, selling about 20,000. But PARC had been at it the whole time, inventing all kinds of goodness. And so, always thinking of the next computer, Apple started the Lisa project in 1978, the year after the release of the Apple II, when profits were just starting to roll in. Story has it that Steve Jobs secured a visit to PARC and made off with the idea for a windowing personal computer GUI, complete with a desktop metaphor. But not so fast. Apple had already begun the Lisa and Macintosh projects before Jobs visited Xerox. And the Alto had been shown off internally at Xerox in 1977, complete with Mother of All Demos-esque theatrics on stages using remote computers. They had the GUI, the mouse, and networking - while the other computers released that year, the Apple II, the Commodore, and the TRS-80, were still doing what Dartmouth, the University of Illinois, and others had been doing since the 60s - just at home instead of on time-sharing computers. In other words, enough people in computing had seen the oNLine System from SRI. The graphical interface was coming and wouldn’t be stopped. The mouse had been written about in scholarly journals. But it was all pretty expensive. The visits to PARC, and hiring some of its engineers, helped the teams at Apple figure out some of the problems they didn’t even know they had. They made things better and helped the team get there a little quicker. But by then the coming evolution in computing was inevitable. Still, the Xerox Star was considered a failure. But Apple said “hold my beer” and got to work on a project that would become the Lisa. It started off simply enough: some ideas from Apple executives like Steve Jobs, and then 10 people, led by Ken Rothmuller, to develop a system with windows and a mouse.
Rothmuller got replaced with John Couch, Apple’s 54th employee. Trip Hawkins got a great education in marketing on that team; he would later found Electronic Arts, one of the biggest video game publishers in the world. Larry Tesler, from the Stanford AI Lab and then Xerox PARC, joined the team to run the system software team. He’d been on the ARPANET since writing Pub, an early markup language, and was instrumental in the Gypsy word processor, Smalltalk, and inventing copy and paste. Makes you feel small to think of some of this stuff. Bruce Daniels, one of the Zork creators from MIT, joined the team from HP as the software manager. Wayne Rosing, formerly of Digital and Data General, was brought in to design the hardware. He’d later lead the SPARC team and then become a VP of Engineering at Google. The team grew. They brought in Bill Dresselhaus as a principal product designer for the look, the use, the design, and even the packaging. They started with a user interface and then created the hardware and applications. Eventually there would be nearly 100 people working on the Lisa project, and it would run over $150 million in R&D. After 4 years they were still facing delays, and while Jobs had become more and more involved, he was removed from the project. The personal accounts I’ve heard sound closer to other large, out-of-control projects I’ve seen at companies, though. The Apple II used that MOS 6502 chip, and life was good. The Lisa used the Motorola 68000 at 5 MHz. This was a new architecture to replace the 6800. It was time to go 32-bit. The Lisa was supposed to ship with between 1 and 2 megabytes of RAM. It had a built-in 12-inch screen at 720 x 364. They got to work building applications, releasing LisaWrite, LisaCalc, LisaDraw, LisaGraph, LisaGuide, LisaList, LisaProject, and LisaTerminal. They translated them into British English, French, German, Italian, and Spanish. All the pieces were starting to fall into place. But the project kept growing.
And the delays kept coming. Jobs got booted from the Lisa project amidst concerns that it was bloated, behind schedule, wasting company resources, and that Jobs’ perfectionism would result in a product that could never ship. The cost of the machine was over $10,000. Thing is, as we’ll get into later, every project went over budget and ran into delays for the next decade. Great ideas could then be capitalized on by others - even if a bit watered down. Some projects need to teach us how not to do projects - to improve our institutional knowledge of project or product discipline. That didn’t exactly happen with Lisa. We see times in the history of computing, and technology for that matter, when a product is just too far advanced for its time. That was the Xerox Alto. As costs come down, we can then bring ideas to a larger market. That should have been the Lisa. But it wasn’t. While it cost nearly half as much as a Xerox Star, it sold less than half the number of units. Following the release of the Lisa, we got other desktop metaphors and graphical interfaces: the Agat out of the Soviet Union, SGI, Visi On from VisiCorp (makers of VisiCalc), GEM from Digital Research, DeskMate from Tandy, Amiga Intuition, the Acorn Master Compact, Arthur for Acorn’s ARM-based machines, and the initial releases of Microsoft Windows. By the late 1980s the graphical interface was ubiquitous, and computers were easier for the novice to use than they’d ever been before. But developers didn’t flock to the Lisa as they’d done with the Apple II. You needed a specialized development workstation, so why would they? People didn’t understand the menuing system yet. As someone who’s written command line tools, sometimes they’re just easier than burying buttons in complicated graphical interfaces. “I’m not dead yet… just… badly burned. Or sick, as it were.” Apple released the Lisa 2 in 1984. It went for about half the price and was a little more stable.
One reason was that the Twiggy disk drives Apple built for the Lisa were replaced with Sony microfloppy drives. The Lisa 2 looked much more like what we’d get with the Mac, only with expansion slots. The end of the Lisa project was more of a fizzle. After the original Mac was released, the Lisa shipped as the Macintosh XL, for $4,000. Sun Remarketing built MacWorks to emulate the Macintosh environment, and that became the main application of the Macintosh XL. Sun Remarketing bought 5,000 of the Mac XLs and improved them somewhat. The last 2,700 Lisa computers were buried in a landfill in Utah in 1989. As the whole project had been, they ended up being a write-off; Apple traded them out for a deep discount on the Macintosh Plus. By then Steve Jobs was long gone, Apple was all about the Mac, and the next year General Magic would begin ushering in the era of mobile devices. The Lisa was a technical marvel at the time and a critical step in the evolution of the desktop metaphor - then nearly twenty years old, having begun at the Stanford Research Institute on NASA and ARPA grants, evolved further at PARC when members of the team went there, and continued on at Apple. The lessons learned in the Lisa project were immense and helped inform the evolution of the next project, the Mac. But might the product have actually gained traction in the market if Steve Jobs had not been telling people inside and outside Apple that the Mac was the next thing, while the Apple II line was still accounting for most of the revenue of the company? There’s really no way to tell. The Mac used a newer Motorola 68000 at nearly 8 megahertz, so it was faster; the OS was cleaner; the machine was prettier. It was smaller, boxier - like the newer Japanese cars at the time. It was just better. But it probably couldn’t have been if not for the Lisa. The Lisa was slower than it was supposed to be. The operating system tended to be fragile. There were recalls.
Steve Jobs was never afraid to cannibalize a product to make the next awesome thing. He did so with the Lisa. If we step back and look at the Lisa as an R&D project, it was a resounding success. But as a public company, the shareholders didn’t see it that way at the time. So next time there’s an R&D project running amok, think about this. The Lisa changed the world, ushering in the era of the graphical interface - all for the low cost of $50 million, after sales of the device are taken out of it. But they had to start anew with the Mac and bring in only the parts that worked. They had built up too much technical debt while developing the product to do anything else. While it can be painful, sometimes it’s best to start with a fresh circuit board and a blank command line editor. Then we can truly step back and figure out how we want to change the world.
2/2/2021 • 16 minutes, 2 seconds
Apple: The Apple I computer to the ///
I’ve been struggling with how to cover a few different companies, topics, or movements for awhile. The lack of coverage thus far has little to do with their impact - it’s just been hard to find where to put them in the history of computing. One of the most challenging is Apple. This is because there isn’t just one Apple. Instead there are stages of the company, each with its own place in the history of computers. Today we can think of Apple as one of the Big 5 tech companies, which include Amazon, Apple, Google, Facebook, and Microsoft. But there were times in the evolution of the company when things looked bleak - like maybe they would get gobbled up by another tech company. To oversimplify the development of Apple, we’ll break their storied ascent into four parts. Apple Computer: this story covers the mid-1970s to the mid-1980s, with Apple rising out of the hobbyist movement and into a gangbuster IPO. The Apple I through III families all centered on one family of chips and took the company into the 90s. The Macintosh: the rise and fall of the Mac covers the introduction of the now-iconic Mac through to the Power Macintosh era. Mac OS X: this part of the Apple story begins with the return of Steve Jobs to Apple and the acquisition of NeXT, looks at the introduction of the Intel Macs, and takes us through to the transition to the Apple M1 CPU. Post-PC: Steve Jobs announced the “post-PC” era in 2007, and in the coming years the sales of PCs fell for the first time, while tablets, phones, and other devices emerged as the primary means people used computing. We’ll start with the early days, which I think of as the first of the four key Apple stages of development. And those early days go back far past the days when Apple was hawking the Apple I. They go back to high school. Jobs and Woz: Bill Fernandez and Steve Wozniak built a computer they called “The Cream Soda Computer” in 1970, when Bill was 16 and Woz was 20.
It was a crude punch card processing machine built from parts Woz got from the company he was working for at the time. Fernandez introduced Steve Wozniak to a friend from middle school because they were both into computers and both had a flair for pranky rebelliousness. That friend was Steve Jobs. By 1972, the pranks turned into their first business. Wozniak designed blue boxes, initially conceived by Cap’n Crunch John Draper, who got his phreaker name from a whistle in a Cap’n Crunch box that made a tone at 2600 Hz, which sent AT&T phones into operator mode. Draper would actually be an Apple employee for a bit. They designed a digital version and sold a few thousand dollars’ worth. Jobs went to Reed College. Wozniak went to Berkeley. Both dropped out. Woz got a sweet gig at HP designing calculators, where Jobs had worked a summer job in high school. Jobs went to India to find enlightenment. When Jobs became employee number 40 at Atari, he got Wozniak to help create Breakout. That was the year the Altair 8800 was released, and Wozniak went to the first meeting of a little club called the Homebrew Computer Club in 1975, when they got an Altair so the People’s Computer Company could review it. And that was the inspiration. Having already built one computer with Fernandez, Woz designed schematics for another. Going back to the Homebrew meetings to talk through ideas and nerd out, he got it built and, proud of his creation, returned to Homebrew with Jobs to give out copies of the schematics for everyone to play with. This was the age of hackers and hobbyists. But that was about to change ever so slightly. The Apple I: Jobs had this idea. What if they sold the boards? They came up with a plan. Jobs sold his VW Microbus and Wozniak sold his HP-65 calculator, and they got to work. Simple math: they could sell 50 boards for $40 each and make some cash like they’d done with the blue boxes. But you know, a lot of people didn’t know what to do with a bare board.
Sure, you just needed a keyboard and a television, but that still seemed a bit much. Then a slightly bigger plan: what if they sold 50 full computers? They went to the Byte Shop and talked them into buying 50 at $500 each. They dropped $20,000 on parts and netted a $5,000 return. They’d go on to sell about 200 of the Apple Is between 1976 and 1977. It came with a MOS 6502 chip running at a whopping 1 MHz and with 4 KB of memory, which could go to 8. They provided Apple BASIC, as most vendors of the time did. That MOS chip was critical. Before it, many designs used an Intel chip or the Motorola 6800, which went for $175. But the MOS 6502 was just $25. It was an 8-bit microprocessor designed by a team that Chuck Peddle ran after leaving the 6800 team at Motorola. Armed with that chip at that price, and with Wozniak’s understanding of what it needed to do and how it interfaced with other chips to access memory and peripherals, the two could do something new. They started selling the Apple I, and to quote an ad: “the Apple comes fully assembled, tested & burned-in and has a complete power supply on-board, initial set-up is essentially “hassle free” and you can be running in minutes.” This really tells you something about the computing world at the time. There were thousands of hobbyists, and many had been selling devices. But this thing had on-board RAM, and you could just add a keyboard and video and not have to read LEDs to get output. The marketing descriptions were pretty technical by modern Apple standards, telling us something of the users. It sold for $666.66. They got help from Patty Jobs building logic boards. Jobs’ friend from college Daniel Kottke joined for the summer, as did Fernandez and Chris Espinosa - now Apple’s longest-tenured employee. It was a scrappy garage kind of company. The best kind. They made the Apple I until a few months after they released the successor.
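For what it’s worth, the Byte Shop arithmetic only works out if that $500 was per unit. A quick sanity check, with variable names of my own and numbers from the story:

```python
# Back-of-the-envelope on the Byte Shop deal as told above:
# 50 assembled computers at $500 each, with about $20,000 spent on parts.
units = 50
price_each = 500      # dollars, assuming the $500 was per machine
parts_cost = 20_000   # dollars

revenue = units * price_each
net = revenue - parts_cost
print(revenue, net)   # 25000 5000 - the $5,000 return in the story
```

$25,000 in and $20,000 out gives exactly the $5,000 net the story describes - a healthy 25% margin for a garage operation.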
But the problem with the Apple I was that there was only one person who could actually support it when customers called: Wozniak. And he was slammed, busy designing the next computer and all the components needed to take it to the mass market - monitors, disk drives, etc. So they offered a discount for anyone returning an Apple I, and destroyed most of the returns. Those Apple I computers have since been auctioned for hundreds of thousands of dollars, all the way up to $1.75 million. The Apple II: They knew they were on to something. But a lot of people were building computers. They needed capital if they were going to bring in a team and make a go at things. But Steve Jobs wasn’t exactly the type of guy venture capitalists liked to fund at the time. Mike Markkula was a product-marketing manager at chip makers Fairchild and Intel who retired early after making a small fortune on stock options. That is, until he got a visit from Steve Jobs. He brought money, but more importantly the kind of assistance only a veteran of a successful corporation who’d ridden that wave could bring. He brought in Michael "Scotty" Scott, employee #4, to be the first CEO, and they got to work on mapping out an early business plan. If you notice the overlapping employee numbers, Scotty might have had something to do with that… As you may notice from Wozniak selling his calculator, at the time computers weren’t that far removed from calculators. So Jobs brought in a calculator designer named Jerry Manock to design a plastic injection-molded case, or shell, for the Apple II. They used the same chip and a similar enough motherboard design. They stuck with the default 4 KB of memory and provided jumpers to make it easier to go up to 48 KB. They added a cassette interface for I/O. They had a toggle circuit that could trigger the built-in speaker. And they would include two game paddles. This is similar to bundles provided by Commodore and other vendors of the day.
And of course it still worked with a standard TV - but now that TVs were mostly color, so was the video coming out of the Apple II. And all of this came at a starting price of $1,298. The computer initially shipped with a version of BASIC written by Wozniak, but Apple later licensed Microsoft’s 6502 BASIC to ship what they called Applesoft BASIC - short for Apple and Microsoft. Here, they turned to Randy Wigginton, who was Apple’s employee #6 and had gotten rides to the Homebrew Computer Club from Wozniak as a teenager (since he lived down the street). He and others added features onto Microsoft BASIC to free Wozniak to work on other projects. They also decided they needed a disk operating system, or DOS. Here, rather than license CP/M, the industry standard at the time, Wigginton worked with Shepardson, who did various projects for CP/M and Atari. The motherboard of the Apple II remains an elegant design. There were certain innovations that Wozniak made, like cutting down the number of DRAM chips by sharing resources between other components. The design was so elegant that Bill Fernandez had to join them as employee number four, in order to help take the board and create schematics to have it silkscreened. The machines were powerful. And all that power needed juice. Jobs asked his former boss Al Alcorn for someone to help out with that. Rod Holt, employee number 5, was brought in to design the power supply. By implementing a switching power supply, as Digital Equipment had done in the PDP-11, rather than a transformer-based power supply, the Apple II ended up being far lighter than many other machines. The Apple II was released in 1977 at the West Coast Computer Faire. It, along with the TRS-80 and the Commodore PET, would become the 1977 Trinity - which isn’t surprising. Remember Peddle, who ran the 6502 design team? He designed the PET.
And Steve Leininger was also a member of the Homebrew Computer Club, who happened to work at National Semiconductor when Radio Shack/Tandy started looking for someone to build them a computer. The machine was stamped with an Apple logo. Jobs hired Rob Janoff, a local graphic designer, to create it: an apple rendered in rainbow stripes, showing off the Apple II’s color graphics. This rainbow Apple stuck and remained the logo for Apple Computer until 1998, after Steve Jobs returned to Apple, when the apple went monochrome - but the silhouette is now iconic, serving Apple for 45 years and counting. The computers were an instant success and sold quickly. But others were doing well in the market, some incumbents and some new. Red oceans mean we have to improve our effectiveness. So this is where Apple had to grow up to become a company. Markkula made a plan to get Apple to $500 million in sales in 10 years on the back of his $92,000 investment and another $600,000 in venture funding. They did $2.7 million in sales in 1977. This idea of selling a pre-assembled computer to the general public was clearly resonating. Parents could use it to help teach their kids. Schools could use it for the same. And when we were done with all that, we could play games on it, write code in BASIC, or use it for business - make some documents in WordStar, spreadsheets in VisiCalc, or use one of the thousands of other titles available for the Apple II. Sales grew 150x by 1980. Given that many thought cassettes were for home machines and floppies were for professional machines, it was time to move away from tape. Markkula realized this and had Wozniak design a floppy drive for the Apple II, which went on to be known as the Disk II. Wozniak had experience with disk controllers and studied the latest available. He again managed to come up with a value-engineered design that allowed Apple to produce a good drive for less than any other major vendor at the time.
Wozniak would later say it was one of his best designs (and many contemporaries agreed). Markkula filled gaps as well as anyone. He even wrote free software programs under the name Johnny Appleseed, a name also used for years in product documentation. He was a classic hacker-type entrepreneur on their behalf - sitting in the guerrilla marketing chair some days, acting as president of the company on others, and mentoring Jobs on still others. From Hobbyists to Capitalists: Here’s the thing - I’ve always been a huge fan of Apple. Even in their darkest days, which we’ll get to in later episodes, they represented an ideal. But going back to the Apple I, they were nothing special. Even the Apple II. Osborne, Commodore, Vector Graphic, Atari, and hundreds of other companies were springing up, inspired first by that Altair and then by the rapid drop in the prices of chips. The impact of the 1 megahertz barrier and the cost of those MOS 6502 chips was profound. The MOS 6502 would be used in the Apple II, the Atari 2600, the Nintendo NES, and the BBC Micro. And along with the Zilog Z80 and Intel 8080, it would spark a revolution in personal computers. Many of those companies would disappear - in what we’d think of as a personal computer bubble, had there been more money in it. But those that survived took things an order of magnitude higher. Instead of making millions they were making hundreds of millions. Many would even go to war in a race to the bottom on prices. And this is where Apple started to differentiate themselves from the rest. For starters, due to how anemic the default Altair was, most of the hobbyist computers were all about expansion. You can see it in the Apple I schematics and you can see it in the minimum of 7 expansion slots in the Apple II lineup of computers - all of them except the IIc, marketed as a more portable type of device, with a handle and an RCA connection to a television for a monitor. The media seemed to adore them.
In an era of J.R. Ewing of Dallas, Steve Jobs was just the personality to emerge and still somewhat differentiate the new wave of computer enthusiasts. Coming at the tail end of an era of social and political strife, many saw something of themselves in Jobs. He looked the counter-culture part. He had the hair, but also this drive. The early 80s were going to be all about the yuppies, though - and Jobs was putting on a suit. Many identified with that as well. Fueled by the 150x sales performance shooting them up to $117M in sales, Apple filed for an IPO, going public in 1980, creating hundreds of millionaires, including at least 40 of their own employees. It was the biggest IPO since Ford in 1956, the year after Steve Jobs was born. The stock was filed at $14 and shot up to $29 on the first day alone, leaving Apple sitting pretty on a $1.778 billion valuation. Scotty, who brought the champagne, made nearly a $100M profit. One of the venture capitalists, Arthur Rock, made over $21M on a $57,600 investment. Rock had been the one to convince the Shockley Semiconductor team to found Fairchild, a key turning point in putting the silicon into the name Silicon Valley. When Noyce and Moore left there to found Intel, he was involved. And he stayed in touch with Markkula, who was so enthusiastic about Apple that Rock invested and began a stint on the board of directors at Apple in 1978 - often portrayed as the villain in the story of Steve Jobs. But let’s think about something for a moment. Rock was a backer of Scientific Data Systems, purchased by Xerox in 1969, becoming the Xerox 500 series. Certainly not Xerox PARC - in fact, the anti-PARC - but certainly helping to connect Jobs to Xerox later, as Rock served on the board of Xerox. The IPO Hangover: Money is great to have but also causes problems. Teams get sidetracked trying to figure out what to do with their hauls. Like Rod Holt’s $67M haul that day. It’s a distraction in a time when executional excellence is critical.
Apple had to bring in more people fast, which created a scenario Mike Scott referred to as a “bozo explosion.” Suddenly, more people actually made them less effective. Growing teams all want a seat at a limited table. Innovation falls off as we rush to keep up with the orders and needs of existing customers. Bugs, bigger code bases to maintain, issues with people doing crazy things. Taking our eyes off the ball while normalizing the growth can be hard. By 1981, Scotty was out after leading some substantial layoffs. Apple stock was down. A big IPO also creates investments in competitors. Some of those would go on a race to the bottom in price. Apple didn’t compete on price. Instead, they started to plan the next revolution, a key piece of Steve Jobs emerging as a household name. They would learn what the research and computer science communities had been doing - and bring a graphical interface and mouse to the world with Lisa and a smaller project brought forward at the time by Jef Raskin that Jobs tried to kill - but one that Markkula not only approved, but kept Jobs from killing: the Macintosh. Fernandez, Holt, Wigginton, and even Wozniak just drifted away or got lost in the hyper-growth of the company, as is often the case. Some came back. Some didn’t. Many of us go through the same in rapidly growing companies. Next (but not yet NeXT) But a new era of hackers was on the way. And a new movement as counter to the big computer culture as Jobs. But first, they needed to take a trip to Xerox. In the meantime, the Apple III was an improvement but proved that the Apple computer line had run its course. They released it in 1980, recalled the first 14,000 machines, and never passed 75,000 machines sold, killing off the line in 1984. A special year.
1/30/2021 • 25 minutes, 33 seconds
A Steampunk's Guide To Clockworks: From The Cradle Of Civilization To Electromechanical Computers
We mentioned John Locke in the episode on the Scientific Revolution. And Leibniz. They not only worked in the new branches of science, math, and philosophy, but they put many of their theories to use and were engineers. Computing at the time was mechanical, what we might now think of as clockwork. And clockwork was starting to get some innovative new thinking. As we’ve covered, clockworks go back thousands of years. But with a jump in more and more accurate machining and more science, advances in timekeeping were coming. Hooke and Huygens worked on pendulum clocks and then moved to spring-driven clocks. Both sought English patents, and because the clocks didn’t work that well, neither was granted. But something more needed to happen to improve the accuracy of time. Time was becoming increasingly important. Not only to show up to appointments and compute ever-increasing math problems, but also for navigation. Going back to the Greeks, we’d been estimating our position on the Earth relative to seconds and degrees. And a rapidly growing maritime power like England at the time needed to use clocks to guide ships. Why? The world is a sphere. A sphere has 360 degrees, which multiplied by 60 minutes per degree is 21,600. The north-south circumference is 21,603 nautical miles. Actually, the world isn’t a perfect sphere, so the circumference around the equator is 21,639 nautical miles. Each nautical mile is 6,076 feet. When traveling by sea, trying to do all that math in feet and inches is terribly difficult, and so we came up with lines of latitude running east-west and lines of longitude running north-south. Each degree spans 60 nautical miles, or 60 minutes of arc. The distance between lines of longitude naturally goes down as one gets closer to the poles - and goes down as a percentage relative to the distance to those poles. Problem was that the most accurate time to check your position relative to the sun was at noon, or to use Polaris, the North Star, at night. 
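As a quick sketch of the idealized-sphere arithmetic above (not from the episode - the cosine factor for shrinking longitude lines is an addition I'm making to illustrate the point):

```python
import math

NM_PER_DEGREE = 60  # one degree of latitude spans 60 nautical miles (60 minutes of arc)

def equator_circumference_nm():
    # 360 degrees * 60 minutes per degree = 21,600 nautical miles on an idealized sphere
    return 360 * NM_PER_DEGREE

def longitude_nm_per_degree(latitude_deg):
    # A degree of longitude shrinks toward the poles by the cosine of the latitude
    return NM_PER_DEGREE * math.cos(math.radians(latitude_deg))

print(equator_circumference_nm())             # 21600
print(round(longitude_nm_per_degree(0), 1))   # 60.0 at the equator
print(round(longitude_nm_per_degree(60), 1))  # 30.0 at 60 degrees north
```

Which is exactly why a navigator needed accurate time: latitude came from the sun or Polaris, but longitude required comparing local noon against the time back home.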
Much of this went back to the Greeks and further. The Sumerians developed the sexagesimal system, or base 60, and passed it down to the Babylonians in the 3rd millennium BCE, and by 2000 BCE gave us the solar year and the sundial. As their empire grew rich with trade and growing cities, by 1500 BCE the Egyptians had developed the first water clock timers, evidenced by the Karnak water clock, beginning as a controlled amount of water filling up a vessel until it reached marks. Water could be moved - horizontal water wheels were developed as far back as the 4th millennium BCE. Both the sundial and the water clock became more precise in the ensuing centuries, taking location and the time of the year into account. Due to water reacting differently in various climates, we also got the sandglass, now referred to as the hourglass. The sundial became common in Greece by the sixth century BCE, as did the water clock, which they called the clepsydra. By then it had a float that would tell the time. Plato even supposedly added a bowl full of balls to his inflow water clock that would dump them on a copper plate as an alarm during the day for his academy. We still use the base 60 scale and the rough solar years from even more ancient times. But every time sixty seconds tick by, something needs to happen to increment a minute, and every 60 minutes needs to increment an hour. From the days of Thales in the 600s BCE and earlier, the Greeks had been documenting and studying math and engineering. And inventing. All that gathered knowledge was starting to come together. Ctesibius was potentially the first to head the Library of Alexandria and while there, developed the siphon, force pumps, compressed air, and so the earliest uses of pneumatics. He is credited with adding a scale and float to the water clock, and thus mechanics. And with expanding its use to include water-powered gearing that produced sound and moved dials with wheels. 
The Greek engineer Philo of Byzantium, in the 240s BCE if not further back, added an escapement to the water clock. He started by simply applying a counterweight to the end of a spoon; as the spoon filled, a ball was released. He also described a robotic maid who, when Greeks put a cup in her hand, poured wine. Archimedes added the idea that objects displaced water based on their volume, as well as a mathematical understanding of the six simple machines. He then gets credited for being the first to add a gear to a water clock. We now have gears and escapements. Here’s a thought: given their overlapping lifetimes, Philo, Archimedes, and Ctesibius could have all been studying together at the library. Archimedes certainly continued on with earlier designs, adding a chime to the early water clocks. And Archimedes is often credited for providing us with the first transmission gears. The Antikythera device proves the Greeks also made use of complex gearing, transferring energy in more complex gearing patterns. It is hand cranked but shows mathematical and gearing mastery: choose a day and year and see when the next eclipse and Olympiad would be. And the Greeks were all too happy to use gearing for other devices, such as an odometer in the first century BCE, and to build the Tower of the Winds, an entire building that acted as a detailed and geared water clock as well as perhaps a model of the universe. And we got the astrolabe at the same time, from Apollonius or Hipparchus. The astrolabe was a circle of metal with an arm called an alidade that users sighted to the altitude of a star and, based on that, could get their location. The gearing was simple but the math required to get accurate readings was not. These were analog computers of a sort - you gave them an input and they produced an output. At this point they were mostly used by astronomers and continued to be used by Western philosophers at least until the Byzantines. But a new empire had risen. 
The sundial, water clocks, and many of these engineering concepts were brought to Rome as the empire expanded, many from Greece. The Roman Vitruvius is credited with taking that horizontal water wheel and flipping it vertical in 14 CE. Around the same time, Augustus Caesar built a large sundial in Campus Martius. The Romans also added a rod to cranks, giving us sawmills in the third century. The larger the empire, the more time people spent in appointments and the more important time became - but also the more people could notice the impact that automata had. Granted, much of it was large, like a windmill at the time, but most technology starts huge and miniaturizes as more precision tooling becomes available to increasingly talented craftspeople and engineers. Marcus Vitruvius Pollio was an architect who wrote 10 books in the 20s BCE about technology. His works link aqueducts to water-driven machinations that could raise water from mines, driven by a man walking on a wheel above ground like a hamster does today but with more meaning. They took works from the Hellenistic era and put them in use on an industrial scale. This allowed them to terraform lands and spring new cities into existence. Sawing timber with mills using water to move saws allowed them to build faster. And grinding flour with mills allowed them to feed more people. Heron of Alexandria would study and invent at the Library of Alexandria, amongst scrolls piled to the ceilings in halls with philosophers and mechanics. The inheritor of so much learning, he developed vending machines, statues that moved, and even a steam engine. If the Greeks and early Roman conquerors of Alexandria could figure out how a thing worked, they could automate it. Many automations were to prove the divine, such as water-powered counterweights that opened doors when priests summoned a god and blew compressed air through trumpets. He also used a windmill to power an organ and built a programmable cart using a weight to turn a drive axle. 
He also developed an omen machine, with ropes and pulleys on a gear that caused a bird to sing, the song driven by a simple whistle being lowered into water. His inventions likely funded more and more research. But automations in Greek times were powered by natural forces, be they hand cranked, fire, or powered by water. Heron also created a chain-driven automatic crossbow, showing the use of a chain-driven machine, and he used gravity to power machines, automating devices as sand escaped from those sandglasses. He added pegs to pulleys so the distance travelled could be programmed. Simple and elegant machines. And his automata extended into the theater. He kept combining simple machines and ropes and gravity into more and more complex combinations, getting to the point that he could run an automated twenty-minute play. Most of the math and mechanics had been discovered and documented in the countless scrolls in the Library of Alexandria. And so we get the term automated from the Greek word for acting of oneself. But automations weren’t exclusive to the Greeks. By the time Caligula was emperor of the Roman Empire, bronze valves could be used to feed iron pipes in his floating ships that came complete with heated floors. People were becoming more and more precise in engineering, and many a device was for telling time. The word clock comes from the Latin for bell, clocca. I guess bells should automatically ring at certain times. Getting there... Technology spreads or is rediscovered. By Heron’s time the Greeks and Romans understood steam, pistons, gears, pulleys, programmable automations, and much of what would have been necessary for an industrial or steampunk revolution. But slaves were cheap and plentiful in the empire. The technology was used in areas where they weren’t. At Barbegal, to feed Arles in modern France, the Romans had a single hillside flour-grinding complex with automated hoppers, capable of supplying flour to thousands of Romans. 
Constantine, the first Christian Roman emperor, was based there before founding Constantinople. And as Christianity spread, the gimmicks that enthralled the people as magic were no longer necessary. The Greeks were pagans, and so many of their works would be cleansed or have Christian writings copied over them. Humanity wasn’t yet ready. Or so we’ve been led to believe. The inheritors of the Roman Empire were the Byzantines, based where Europe meets what we now think of as the Middle East. We have proof of geared portable sundials there, with fewer gears but showing evidence of the continuation of automata and the math used to drive it persisting in the empire through to the 400s. And maybe confirming written accounts that there were automated lions and thrones in the empire of Constantinople. And one way geared know-how continued and spread was along trade routes, which carried knowledge in the form of books and tradespeople and artifacts, sometimes looted from temples. One such trade route was the ancient Silk Road (or roads). Water clocks were being used in Egypt, Babylon, India, Persia, Greece, Rome, and China. The Tang Dynasty in China took or rediscovered the escapement to develop a water-powered clockwork escapement in the 700s, and then the Song Dynasty developed astronomical clock towers in the 900s. With escapements now established, Su Sung is often credited with the first mechanical water clock, in 1092. And his Cosmic Engine would mark the transition from water clocks to fully mechanical clocks, although still hydromechanical. The 1100s saw Bhoja in the Paramara dynasty of India emerge as a patron of the arts and sciences and write a chapter on mechanical bees and birds. These innovations could have been happening in a vacuum in each region - or word and works could have spread through trade. That technology disappeared in Europe, such as plumbing in towns that could bring tap water to homes, or clockworks, as the Roman Empire retreated. 
The specialists and engineers lacked the training to build new works or even maintain many that existed in modern England, France, and Germany. But the heads of rising eastern empires were happy to fund such efforts in a sprint to become the next Alexander. And so knowledge spread west from Asia and was infused with Greek and Roman know-how in the Middle East during the Islamic conquests. The new rulers expanded quickly, effectively taking possession of Egypt, Mesopotamia, parts of Asia, the Turkish peninsula, Greece, parts of Southern Italy, out towards India, and even Spain. In other words, all of the previous centers of science. And they were tolerant, not looking to convert conquered lands to Islam. This allowed them to learn from their subjects in what we now think of as the Arabic translation movement, beginning in the 7th century, when Arabic philosophers translated but also critiqued and refined works from the lands they ruled. This sparked the Muslim golden age, which became the new nexus of science at the time. Over time we saw the Abbasids, and then the Seljuks, ruling out of Baghdad, as Islamic empires who funded science and philosophy. They brought caravans of knowledge into their capitals. The Abbasids even insisted on a specific text from Ptolemy (the Almagest) when doing a treaty so they could bring it home for study. They founded schools of learning known as madrasas in every town. This would be similar to a university system today. Over the centuries following, they produced philosophers like Muhammad Ibn Musa Al-Khwarizmi, who solved quadratic equations, giving us algebra. This would become important in making clockwork devices more programmable (and for everything else algebra is great at helping with). They sent clockworks as gifts, such as a brass automatic water clock sent to Charlemagne between 802 and 807, complete with chimes. Yup, the clocca rang the bell. They went far past where Heron left off though. 
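Al-Khwarizmi solved quadratics by completing the square, which is where the modern quadratic formula comes from. A minimal sketch (the Python function is my illustration, not anything from the episode; the worked problem x² + 10x = 39 is one of Al-Khwarizmi's own examples):

```python
import math

def solve_quadratic(a, b, c):
    # Real roots of a*x^2 + b*x + c = 0 via the standard formula,
    # itself derived by Al-Khwarizmi's method of completing the square
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))

# Al-Khwarizmi's worked problem: x^2 + 10x = 39, i.e. x^2 + 10x - 39 = 0
print(solve_quadratic(1, 10, -39))  # (-13.0, 3.0)
```

He, of course, only reported the positive root, 3 - negative numbers weren't yet part of the toolkit.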
There was Ibn-Sina, Al-Razi, Al-Jazari, Al-Kindi, Thābit ibn Qurra, Ridwan, and countless other philosophers carrying on the tradition. The philosophers took the works of the Greeks, copied them, and studied them. They evolved the technology to increasing levels of sophistication. And many of the philosophers completed their works at what might be considered the Islamic version of the Library of Alexandria, the House of Wisdom in Baghdad. In fact, when Baghdad was founded about 50 miles north of ancient Babylon, the Al-Mansur Palace Library was part of the plan, and over subsequent Caliphs it was expanded, adding an observatory that would then be called the House of Wisdom. The Banu Musa brothers worked out of there and wrote twenty books, including the first Book of Ingenious Devices. Here, they took the principles the Greeks and others had focused on and got more into the applications of those principles. On the way to their compilation of devices, they translated books from other authors, including A Book on Degrees on the Nature of Zodiacal Signs from China and Greek works. The three brothers combined pneumatics and aerostatics. They added plug valves, taps, float valves, and conical valves. They documented the siphon and funnel for pouring liquids into the machinery and thought to put a float in a chamber to turn what we now think of as the first documented crankshaft. We had been turning circular motion into linear motion with wheels, but we were now able to turn linear motion into circular motion as well. They used all of this to describe in engineering detail, if not build and invent, marvelous fountains. Some with multiple jets alternating. Some were wind powered and showed worm-and-pinion gearing. Al-Biruni, around the turn of the first millennium, came out of modern Uzbekistan and learned Sanskrit, Persian, Hebrew, and Greek. He wrote 95 books on astronomy and math. 
He studied the speed of light vs the speed of sound and the axis of the earth, and applied the scientific method to statics and mechanics. This moved theories on balances and weights forward. He produced geared mechanisms that are the ancestor of modern astrolabes. The astrolabe was also brought to the Islamic world. Muslim astronomers added newer scales and circles. As in antiquity, they used it in navigation, but they had another use: to aid in prayer by showing the way to Mecca. Al-Jazari developed a number of water clocks and is credited with devices actually developed by others, due to penning another Book of Knowledge of Ingenious Mechanical Devices. Here, he describes a camshaft, crank-driven and reciprocating pumps, and two-way valves, and expands on the uses of pneumatic devices. He developed programmable humanoid robots in the form of automatic musicians on a boat. These complex automata included cams and pegs, similar to those developed by Heron of Alexandria, but with increasing levels of sophistication, showing we were understanding the math behind the engineering and it wasn’t just trial and error. All golden ages must end. Or maybe just evolve and migrate. Fibonacci and Bacon quoted them, showing yet another direct influence from multiple sources around the world flowing into Europe following the Holy Wars. Pope Urban II began inspiring European Christian leaders to wage war against the Muslims in 1095. And so the Holy Wars, or Crusades, would begin and rage until 1271. Here, we saw manuscripts copied and philosophy flow back into Europe. Equally important were the Muslim Caliphates in Spain and Sicily, and the trade routes. And another pair of threats were on the rise. The plague and the Mongols. The Mongol invasions began in the 1200s and changed the political makeup of the known powers of the day. The Mongols sacked Baghdad and burned the House of Wisdom. 
After the Mongols and Mughals, the Islamic Caliphates had warring factions internally, the empires fractured, and they turned towards more dogmatic approaches. The Ottoman Empire rose and would last until World War I, and while they continued to sponsor scientists and great learners, the nexus of scientific inquiry and the engineering it inspired shifted again, and the great works were translated with that shift, including into Latin - the language of learning in Europe. By 1492 the Moors would be kicked out of Spain. That link from Europe to the Islamic golden age is a critical aspect of the transfer of knowledge. The astrolabe was one such transfer. As early as the 11th century, metal astrolabes arrived in France over the Pyrenees to the north, and to the west in Portugal. By the 1300s it had been written about by Chaucer and spread throughout Europe. Something else happened in the Iberian peninsula in 1492. Columbus sailed off to discover the New World. He also used a quadrant, or a quarter of an astrolabe - first written about in Ptolemy’s Almagest but later further developed at the House of Wisdom as the sine quadrant. The Ottoman Empire had focused on trade routes and trade. But while they could have colonized the New World during the Age of Discovery, they didn’t. The influx of wealth coming from the Americas caused inflation to spiral, and the empire went into a slow decline over the ensuing centuries until the Turkish War of Independence, which began in 1919. In the meantime, the influx of money and resources and knowledge from the growing European empires saw clockworks and gearing arriving back in Europe in full force in the 14th century. In 1368 the first mechanical clock makers got to work in England. Innovation was slowed due to the Plague, which destroyed lives and property values, but clockwork had spread throughout Europe. 
The fall of Constantinople to the Ottomans in 1453 sent a wave of Greek scholars out of the Ottoman Empire and throughout Europe. Ancient knowledge, enriched with a thousand years of Islamic insight, was about to meet a new level of precision metalwork that had been growing in Europe. By 1495, Leonardo da Vinci showed off one of the first robots in the world - a knight that could sit, stand, and open its visor independently. He also made a robotic lion and repeated experiments from antiquity on self-driving carts. And we see a lot of toys following the mechanical innovations throughout the world. Because parents. We think of the Renaissance as coming out of Italy, but scholars had been back at it throughout Europe since the High Middle Ages. By 1490, a locksmith named Peter Hele is credited with developing the first mainspring in Nuremberg. This is pretty important for watches. You see, up to this point nearly every clockwork we’ve discussed was powered by water or humans setting a dial or fire or some other force. The mainspring stores energy as a small piece of metal ribbon is twisted around an axle, called an arbor, into a spiral and then wound tighter and tighter, thus winding a watch. The mainspring drove a gear train of increasingly smaller gears, which then sent energy into the escapement, but without a balance wheel those would not be terribly accurate just yet. But we weren’t powering clocks with water. At this point, clocks started to spread as expensive decorations, appearing on fireplace mantles and on tables of the wealthy. These were not small by any means. But Peter Henlein would get the credit in 1510 for the first real watch, small enough to be worn as a necklace. By 1540, screws were small enough to be used in clocks, allowing them to get even smaller. The metals for gears were cut thinner, and clock makers and toy makers were springing up all over the world. 
And money coming from speculative investments in the New World was starting to flow, giving way to fuel even more investment into technology. Jost Burgi invented the minute hand in 1577. But as we see with a few disciplines he decided to jump into, Galileo Galilei had a profound impact on clocks. Galileo documented the physics of the pendulum in 1581, and the center of watchmaking would move to Geneva later in that decade. Smaller clockworks spread with wheels and springs, and the 1600s would see an explosion in hundreds of different types of escapements and types of gearing. Galileo designed an escapement for a pendulum clock but died before building it. In 1610 watches got glass to protect the dials, and in 1635 the French inventor Paul Viet of Blois added enamel to the dials. Meanwhile, Blaise Pascal developed the Pascaline in 1642, giving the world the adding machine. But it took another real scientist to pick up Galileo’s work and put it into action to propel clocks forward. To get back to where we started, a golden age of clockwork was just getting underway. In 1657 Huygens created a clock driven by the pendulum, which by 1671 would see William Clement add the suspension spring, and by 1675 Huygens would give us the balance wheel, mimicking the back and forth motion of Galileo’s pendulum. The hairspring, or balance spring, then controlled the speed, making it smooth and more accurate. And the next year, we got the concentric minute hand. I guess Robert Hooke gets credit for the anchor escapement, but the verge escapement had been in use for a while by then. So who gets to claim inventing some of these devices is debatable. Leibniz then added a stepped reckoner to the mechanical calculator in 1672, going from adding and subtracting to multiplication and division. Still calculating and not really computing as we’d think of it today. At this point we see a flurry of activity in a proto-industrial revolution. 
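What Galileo noticed, and Huygens later formalized, is that for small swings a pendulum's period depends only on its length, which is what made it a usable timekeeper. A minimal sketch of that small-angle relation (the "seconds pendulum" length of roughly 0.994 m is my added figure, not from the episode):

```python
import math

def pendulum_period(length_m, g=9.81):
    # Small-angle period: T = 2*pi*sqrt(L/g), independent of the swing's amplitude
    return 2 * math.pi * math.sqrt(length_m / g)

# A "seconds pendulum" - one second per swing, two per full period - is about 0.994 m long
print(round(pendulum_period(0.994), 2))  # ~2.0 seconds
```

That amplitude independence is why a pendulum keeps good time even as friction slowly shrinks its swing.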
Descartes put forth that bodies are similar to complex machines and that various organs, muscles, and bones could be replaced with gearing, similar to how we can have a hip or heart replaced today. Consider this a precursor to cybernetics. We see even more mechanical toys for the rich - but labor was still cheap enough that automation wasn’t spreading faster. And so we come back to the growing British empire. They had colonized North America and the empire had grown wealthy. They controlled India, Egypt, Ireland, the Sudan, Nigeria, Sierra Leone, Kenya, Cyprus, Hong Kong, Burma, Australia, Canada, and so much more. And knowing the exact time was critical for a maritime empire, because we wouldn’t get radar until World War II. There were clocks, but the clocks built still had to be corrected at various times, based on a sundial. This is because we hadn’t yet gotten to the levels of constant power and precise gearing, and the ocean tended to mess with devices. The growing British Empire needed more reliable ways than those Ptolemy used to tell time. And so England would offer prizes ranging from 10,000 to 20,000 pounds for more accurate ways to keep time at sea in the Longitude Act of 1714. Crowdsourcing. It took until the 1720s. George Graham, yet another member of the Royal Society, picked up where Thomas Tompion left off and added a cylinder escapement to watches and then the deadbeat escapement. He chose not to file patents for these so all watchmakers could use them. He also added mercurial compensation to pendulum clocks. And John Harrison added the gridiron compensation pendulum alongside his H1 marine chronometer. 1737 or 1738 saw another mechanical robot, but this time Jacques de Vaucanson brought us a duck that could eat, drink, and poop. But that type of toy was a one-off. 
The Swiss Jaquet-Droz built automated dolls that were meant to help sell more watches, but here we see complex toys that make music (without a water whistle) and can even write using programmable text. The toys still work today and I feel lucky to have gotten to see them at the Museum of Art History in Switzerland. Frederick the Great became entranced by clockwork automations. Magicians started to embrace automations for more fantastical sets. At this point, our brave steampunks made other automations, and their automata got cheaper as the supply increased. By the 1760s Pierre Le Roy and Thomas Earnshaw had invented the temperature-compensated balance wheel. Around this time, the mainspring was moved into a going barrel so watches could continue to run while the mainspring was being wound. Many of these increasingly complicated components required a deep understanding of the math of the simple machines going back to Archimedes, but with all of the discoveries made in the 2,000 years since. And so in 1785 Josiah Emery made the lever escapement standard. The mechanical watch fundamentals haven’t changed a ton in the past couple hundred years (we’ll not worry about quartz watches here). But the 1800s saw an explosion in new mechanical toys using some of the technology invented for clocks. Time brings the cost of technology down so we can mass produce trinkets to keep the kiddos busy. This is really a golden age of dancing toys, trains, mechanical banks, and eventually spring-driven wind-up toys. Another thing happened in the 1800s. With all of this know-how on building automations, and all of this scientific inquiry requiring increasingly complicated mathematics, Charles Babbage started working on the Difference Engine in 1822 and then the Analytical Engine in 1837, bringing in the idea of a Jacquard loom punched card. 
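The Difference Engine's whole trick was tabulating polynomials using nothing but repeated addition of finite differences - something gears and carry mechanisms can do. A minimal sketch of the method (the Python function is my illustration; the engine itself was, of course, brass and crank-driven):

```python
def tabulate(initial_differences, count):
    # The Difference Engine's method: given a polynomial's leading value and its
    # finite differences, each new value needs only additions - no multiplication.
    diffs = list(initial_differences)
    out = []
    for _ in range(count):
        out.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]  # cascade the additions, like carry wheels
    return out

# f(x) = x^2 at x = 0, 1, 2, ...: value 0, first difference 1, second difference 2
print(tabulate([0, 1, 2], 6))  # [0, 1, 4, 9, 16, 25]
```

For a degree-n polynomial, the nth difference is constant, so a fixed column of adders can grind out the whole table - exactly what made mathematical tables mechanizable.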
The Babbage machines would become the precursor of modern computers, and while they would have worked if built to spec, were not able to be run in his lifetime. Over the next few generations, we would see his dream turn into reality, and the electronic clock from Frank Hope-Jones in 1895. There would be other innovations, such as in 1945 when the National Institute of Standards and Technology created the first atomic clock. But in general parts got smaller, gearing more precise, and devices more functional. We’d see fits and starts for mechanical computers, with Percy Ludgate’s Analytical Machine in 1909, the Marchant Calculator in 1918, the electromechanical Enigma in the 1920s, the Polish Enigma double in 1932, the Z1 from Konrad Zuse in 1938, and the Mark 1 Fire Control Computer for the US Navy in the World War II era, when computers went electromechanical and electric, effectively ending the era of clockwork-driven machinations out of necessity, instead putting that into what I consider fun tinkerations. Aristotle dreamed of automatic looms freeing humans from the trappings of repetitive manual labors so we could think. A Frenchman built them. Long before Aristotle, pre-Socratic Greek legends told of statues coming to life, fire-breathing statues, and tables moving themselves. Egyptian statues were also known to have come to life to awe and inspire the people. The philosophers of the Thales era sent Pythagoras and others to Egypt, where they studied with Egyptian priests. Why priests? They led ascetic lives, often dedicated to a branch of math or science. And that’s in the 6th century BCE. The Odyssey was written about events from the 8th century BCE. We’ve seen time and time again in the evolutions of science that we often understood how to do something before we understood why. 
The legendary King Solomon and King Mu of the Zhou dynasty are said to have had automata, or clockwork, or moving statues, or to have been presented with these kinds of gifts, going back thousands of years. And there is the chance that they were. Since then, we’ve seen a steady advent of this back and forth between engineering and science. Sometimes, we understand how to do something through trial and error or random discovery. And then we add the math and science to catch up to it. Once we do understand the science behind a discovery, we uncover better ways, and that opens up more discoveries. Aristotle’s dream was realized and extended to the point that we can now close the blinds, lock the doors, control the lights, build cars, and even print cars. We mastered time in multiple dimensions, including Einstein’s relative time. We mastered mechanics and then the electron and managed to merge the two. We learned to master space, mapping it to celestial bodies. We mastered mechanics and the math behind it. Which brings us to today. What do you have to do manually? What industries are still run by manual labor? How can we apply complex machines or enrich what those can do with electronics in order to free our fellow humans to think more? How can we make Aristotle proud? One way is to challenge and prove or disprove any of his doctrines in new and exciting ways. Like Newton and then Einstein did. We each have so much to give. I look forward to seeing or hearing about your contributions when it’s time to write their histories!
1/21/2021 • 40 minutes, 53 seconds
Connections: ARPA > RISC > ARM > Apple's M1
Let’s oversimplify something in the computing world. Which is what you have to do when writing about history. You have to put your blinders on so you can get to the heart of a given topic without overcomplicating the story being told. And in the evolution of technology we can’t mention all of the advances that led to each subsequent evolution. It’s wonderful and frustrating all at the same time. And that value judgement of what goes in and what doesn’t can be tough. Let’s start with the fact that there are two main types of processors in our devices. There’s the x86 chipset developed by Intel and AMD, and then there are the RISC-based processors, which include ARM and, for the old school people, PowerPC and SPARC. Today we’re going to set aside the x86 chipset that was dominant for so long and focus on how the RISC, and so the ARM, family emerged. First, let’s think about what the main difference is between ARM and x86. RISC, and so ARM, chips focus on reducing the number of instructions required to perform a task to as few as possible, and so RISC stands for Reduced Instruction Set Computing. Intel, other than with the Atom series chips, has focused on high performance and high throughput with x86. Big and fast, no matter how much power and cooling is necessary. The ARM processor uses simpler instructions, which means there’s less logic on the chip, and so more instructions are required to perform certain logical operations. This increases memory use and can increase the amount of time to complete an execution, which ARM developers address with techniques like pipelining, or instruction-level parallelism on a processor. Seymour Cray helped pioneer this approach, splitting instruction execution into stages so parts of several instructions could be processed at once, and Star, Amdahl, and then ARM implemented it as well. The x86 chips are Complex Instruction Set Computing chips, or CISC. Those will do larger, more complicated tasks, like floating point arithmetic or memory searches, on the chip. 
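Since pipelining comes up only in passing here, a minimal sketch may help show why it matters. This toy cycle-count model (an idealization that ignores stalls, hazards, and branches, and invented for illustration rather than modeling any real processor) shows how overlapping instruction stages lets a stream of simple RISC instructions finish far faster than running each one start to finish:

```python
# Toy model of instruction-level parallelism (pipelining).
# Assumes an idealized pipeline with no stalls or hazards - a sketch,
# not a model of any real processor.

def sequential_cycles(num_instructions: int, stages: int) -> int:
    """Each instruction runs through every stage before the next begins."""
    return num_instructions * stages

def pipelined_cycles(num_instructions: int, stages: int) -> int:
    """Once the pipeline is full, one instruction completes every cycle."""
    if num_instructions == 0:
        return 0
    return stages + (num_instructions - 1)

# With a classic 5-stage pipeline (fetch, decode, execute, memory access,
# write-back), 100 simple instructions finish far sooner when overlapped:
print(sequential_cycles(100, 5))  # 500 cycles, strictly one at a time
print(pipelined_cycles(100, 5))   # 104 cycles, stages overlapped
```

So even though a RISC task may need more instructions than its CISC equivalent, keeping the pipeline full recovers much of the difference: n instructions on a k-stage pipeline take roughly k + n - 1 cycles instead of k × n.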
Those complex operations often require more consistent and larger amounts of power. ARM chips are built for low power. The reduced complexity of operations is one reason, but it’s also in the design philosophy. This means fewer heat sinks and often accounting for less consistent streams of power. This 130 watt x86 vs 5 watt ARM trade-off can mean slightly lower clock speeds, but the chips can cost more since buyers will spend less on heat sinks and power supplies. This also makes the ARM excellent for mobile devices. The inexpensive MOS 6502 chips helped revolutionize the personal computing industry in 1975, finding their way into the Apple II and a number of early computers. They were RISC-like but CISC-like as well. They took cues from the instruction set architecture family of the IBM System/360 through to the PDP, Data General Nova, Intel 8080, and Zilog, and after the emergence of Windows, Intel finally captured the personal computing market and the x86 flourished. But the RISC architecture actually goes back to the ACE, designed in 1946 by Alan Turing. It wasn’t until the 1970s that Carver Mead from Caltech and Lynn Conway from Xerox PARC saw that the number of transistors was going to plateau on chips while workloads on chips were growing exponentially. ARPA and other agencies needed more and more instructions, so they instigated what we now refer to as the VLSI project, a DARPA program initiated by Bob Kahn to push into the 32-bit world. They would provide funding to different universities, including Stanford and the University of North Carolina. Out of those projects, we saw the Geometry Engine, which led to a number of computer aided design, or CAD, efforts to aid in chip design. Those workstations, when linked together, evolved into tools used on the Stanford University Network, or SUN, which would effectively spin out of Stanford as Sun Microsystems. 
And across the bay at Berkeley we got a standardized Unix implementation that could use the tools being developed in the Berkeley Software Distribution, or BSD, which would eventually become the basis of the operating systems used by Sun and SGI, and lives on in OpenBSD and other variants. And the efforts from the VLSI project led to Berkeley RISC in 1980 and Stanford MIPS, as well as the multi-chip wafer. The leader of that Berkeley RISC project was David Patterson, who still serves as vice chair of the RISC-V Foundation. The chips would add more and more registers but with fewer specializations. This led to the need for more memory. But UC Berkeley students shipped a faster chip than was otherwise on the market in 1981. And the RISC II was usually double or triple the speed of the Motorola 68000. That led to the Sun SPARC and DEC Alpha. There was another company paying attention to what was happening in the RISC project: Acorn Computers. They had been looking into using the 6502 processor until they came across the scholarly works coming out of Berkeley about their RISC project. Sophie Wilson and Steve Furber from Acorn then got to work building an instruction set for the Acorn RISC Machine, or ARM for short. They had the first ARM working by 1985, which they used to build the Acorn Archimedes. The ARM2 would be faster than the Intel 80286, and by 1990, Apple was looking for a chip for the Apple Newton. A new company called Advanced RISC Machines, or ARM, would be founded, and from there they grew, with Apple being a shareholder through the 90s. By 1992, they were up to the ARM6, and the ARM610 was used for the Newton. DEC licensed the ARM architecture to develop the StrongARM, selling chips to other companies. Acorn would be broken up in 1998 and parts sold off, but ARM would live on until it was acquired by SoftBank for $32 billion in 2016. SoftBank is currently in acquisition talks to sell ARM to Nvidia for $40 billion. 
Meanwhile, John Cocke at IBM had been working on the RISC concepts since 1975 for embedded systems, and by 1982 moved on to start developing their own 32-bit RISC chips. This led to the POWER instruction set, which they shipped in 1990 as the RISC System/6000, or as we called them at the time, the RS/6000. They scaled that down to the PowerPC and in 1991 forged an alliance with Motorola and Apple. DEC designed the Alpha. It seemed as though the computer industry was Microsoft and Intel vs the rest of the world, using a RISC architecture. But by 2004 the alliance between Apple, Motorola, and IBM began to unravel, and by 2006 Apple moved the Mac to an Intel processor. But something was changing in computing. Apple shipped the iPod back in 2001, effectively ushering in the era of mobile devices. By 2007, Apple released the first iPhone, which shipped with a Samsung ARM. You see, the interesting thing about ARM is that, unlike Intel, they don’t fab chips - they license technology and designs. Apple licensed the Cortex-A8 from ARM for the iPhone 3GS by 2009, but had an ambitious lineup of tablets and phones in the pipeline. And so in 2010 Apple did something new: they made their own system on a chip, or SoC, the A4. Continuing to license some ARM technology, Apple pushed on, getting between 800 MHz and 1 GHz out of the chip and using it to power the iPhone 4, the first iPad, and the long overdue second-generation Apple TV. The next year came the A5, used in the iPad 2 and first iPad Mini, then the A6 at 1.3 GHz for the iPhone 5, and the A7 for the iPhone 5s and iPad Air - the first 64-bit consumer SoC. In 2014, Apple released the A8 processor for the iPhone 6, which came in speeds ranging from 1.1 GHz to the 1.5 GHz chip in the 4th generation Apple TV. By 2015, Apple was up to the A9, which clocked in at 1.85 GHz for the iPhone 6s. 
Then we got the A10 in 2016, the A11 in 2017, the A12 in 2018, A13 in 2019, A14 in 2020 with neural engines, 4 GPUs, and 11.8 billion transistors compared to the 30,000 in the original ARM. And it’s not just Apple. Samsung has been on a similar tear, firing up the Exynos line in 2011 and continuing to license the ARM up to Cortex-A55 with similar features to the Apple chips, namely used on the Samsung Galaxy A21. And the Snapdragon. And the Broadcoms. In fact, the Broadcom SoC was used in the Raspberry Pi (developed in association with Broadcom) in 2012. The 5 models of the Pi helped bring on a mobile and IoT revolution. And so nearly every mobile device now ships with an ARM chip as do many a device we place around our homes so our digital assistants can help run our lives. Over 100 billion ARM processors have been produced, well over 10 for every human on the planet. And the number is about to grow even more rapidly. Apple surprised many by announcing they were leaving Intel to design their own chips for the Mac. Given that the PowerPC chips were RISC, the ARM chips in the mobile devices are RISC, and the history Apple has with the platform, it’s no surprise that Apple is going back that direction with the M1, Apple’s first system on a chip for a Mac. And the new MacBook Pro screams. Even software running in Rosetta 2 on my M1 MacBook is faster than on my Intel MacBook. And at 16 billion transistors, with an 8 core GPU and a 16 core neural engine, I’m sure developers are hard at work developing the M3 on these new devices (since you know, I assume the M2 is done by now). What’s crazy is, I haven’t felt like Intel had a competitor other than AMD in the CPU space since Apple switched from the PowerPC. Actually, those weren’t great days. I haven’t felt that way since I realized no one but me had a DEC Alpha or when I took the SPARC off my desk so I could play Civilization finally. And this revolution has been a constant stream of evolutions, 40 years in the making. 
It started with an ARPA grant, but various evolutions from there died out. And so really, it all started with Sophie Wilson. She helped give us the BBC Micro and the ARM. She was part of the move to Element 14 from Acorn Computers and then ended up at Broadcom when they bought the company in 2000 and continues to act as the Director of IC Design. We can definitely thank ARPA for sprinkling funds around prominent universities to get us past 10,000 transistors on a chip. Given that chips continue to proceed at such a lightning pace, I can’t imagine where we’ll be at in another 40 years. But we owe her (and her coworkers at Acorn and the team at VLSI, now NXP Semiconductors) for their hard work and innovations.
1/17/2021 • 14 minutes, 55 seconds
Bob Taylor: ARPA to PARC to DEC
Robert Taylor was one of the true pioneers in computer science. In many ways, he is the string (or glue) that connected the US government’s era of supporting computer science through ARPA to the innovations that came out of Xerox PARC and then to the work done at Digital Equipment Corporation’s Systems Research Center. Those are three critical aspects of the history of computing, and while Taylor didn’t write any of the innovative code or develop any of the tools that came out of those three research environments, he saw people and projects worth funding and made sure the brilliant scientists got what they needed to get things done. The 31 years in computing that his stops represented were some of the most formative years for the young computing industry, and the advances he inspired began with Vannevar Bush’s 1945 article called “As We May Think” and ended with the explosion of the Internet across personal computers. Bob Taylor inherited a world where computing was waking up with large, crusty, but finally fully digitized mainframes stuck to its eyes in the morning, and went to bed the year Corel bought WordPerfect because PCs needed applications, the year the Pentium 200 MHz was released, the year Palm Pilot and eBay were founded, the year AOL started to show articles from the New York Times, the year IBM opened a web shopping mall, and the year the Internet reached 36 million people. Excite and Yahoo went public. Sometimes big, sometimes small, all of these can be traced back to Bob Taylor - kinda’ how we can trace all actors to Kevin Bacon. But more like if Kevin Bacon found talent and helped them get started, by paying them during the early years of their careers… How did Taylor end up as the glue for the young and budding computing research industry? Going from tween to teenager during World War II, he went to Southern Methodist University in 1948, when he was 16. 
He jumped into the US Naval Reserves during the Korean War and then got his master’s in psychology at the University of Texas at Austin using the GI Bill. Many of those pioneers in computing in the 60s went to school on the GI Bill. It was a big deal across every aspect of American life at the time - paving the way to home ownership, college educations, and new careers in the trades. From there, he bounced around, taking classes in whatever interested him, before taking a job at Martin Marietta, helping design the MGM-31 Pershing, and ending up at NASA, where he discovered the emerging computer industry. Taylor was working on projects for the Apollo program when he met JCR Licklider, known as the Johnny Appleseed of computing. Lick, as his friends called him, had written an article called Man-Computer Symbiosis in 1960 and had laid out a plan for computing that influenced many. One such person was Taylor. And so it was that Licklider, who had gone to ARPA in 1962, succeeded in 1965 in recruiting Taylor away from NASA to take his place running ARPA’s Information Processing Techniques Office, or IPTO. Taylor had funded Douglas Engelbart’s research on computer interactivity at Stanford Research Institute while at NASA. He continued to do so when he got to ARPA, and that project resulted in the invention of the computer mouse and the Mother of All Demos, one of the most inspirational moments and a turning point in the history of computing. They also funded a project to develop an operating system called Multics. This would be a two million dollar project run by General Electric, MIT, and Bell Labs. Run through Project MAC at MIT, there were just too many cooks in the kitchen. Later, some of those Bell Labs cats would just do their own thing. Ken Thompson had worked on Multics and took the best and worst into account when he wrote the first lines of Unix and the B programming language, which led to one of the most important languages of all time, C. 
Interactive graphical computing and operating systems were great, but IPTO, and so Bob Taylor and team, would fund, straight out of the Pentagon, the ability for one computer to process information on another computer. Which is to say, they wanted to network computers. It took a few years, but eventually they brought in Larry Roberts, and by late 1968 they’d awarded an RFQ to build a network to a company called Bolt Beranek and Newman (BBN), who would build Interface Message Processors, or IMPs. The IMPs would connect a number of sites and route traffic, and the first one went online at UCLA in 1969, with additional sites coming on frequently over the next few years. That system would become ARPANET, the commonly accepted precursor to the Internet. There was another networking project going on at the time that was also getting funding from ARPA as well as the Air Force: PLATO, out of the University of Illinois. PLATO was meant for teaching and had begun in 1960, but by then they were on version IV, running on a CDC Cyber, and the time sharing system hosted a number of courses, as they referred to programs. These included actual courseware, games, content with audio and video, message boards, instant messaging, custom touch screen plasma displays, and the ability to dial into the system over phone lines, making the system another early network. Then things get weird. Taylor was sent to Vietnam as a civilian, although his rank equivalent would be a brigadier general. He helped develop the Military Assistance Command in Vietnam. Battlefield operations and reporting were entering the computing era. The only problem is, while Taylor was a war veteran and had been deep in the defense research industry for his entire career, Vietnam was an incredibly unpopular war, and seeing it first hand and getting pulled into the theater of war had him ready to leave. 
That, combined with interpersonal problems with Larry Roberts, who was running the ARPA project by then and had chafed at Taylor being his boss without a PhD or direct research experience, sealed it. And so Taylor joined a project ARPA had funded at the University of Utah and left ARPA. There, he worked with Ivan Sutherland, who wrote Sketchpad and is known as the Father of Computer Graphics, until he got another offer. This time, from Xerox, to go to their new Palo Alto Research Center, or PARC. One rising star in the computer research world was pretty against the idea of a centralized mainframe driven time sharing system. This was Alan Kay. In many ways, Kay was like Lick. And unlike the time sharing projects of the day, the Licklider and Kay inspiration was for dedicated cycles on processors. This meant personal computers. The Mansfield Amendment in 1973 restricted defense agencies to funding research with direct military applications. This meant that ARPA funding started to dry up and the scientists working on those projects needed a new place to fund their playtime. Taylor was able to pick the best of the scientists he’d helped fund at ARPA. He helped bring in people from Stanford Research Institute, where they had been working on the oN-Line System, or NLS. This new Computer Science Laboratory landed people like Charles Thacker, David Boggs, Butler Lampson, and Bob Sproull, and would develop the Xerox Alto, the inspiration for the Macintosh. The Alto contributed the very ideas of overlapping windows, icons, menus, cut and paste, and word processing. In fact, Charles Simonyi from PARC would work on Bravo before moving to Microsoft to spearhead Microsoft Word. Bob Metcalfe on that team was instrumental in developing Ethernet so workstations could communicate with ARPANET all over the growing campus-connected environments. Metcalfe would leave to form 3COM. SuperPaint would be developed there, and Alvy Ray Smith would go on to co-found Pixar, continuing the work begun by Richard Shoup. 
They developed the laser printer, some of the ideas that ended up in TCP/IP, and their research into page layout languages would end up with Chuck Geschke, John Warnock, and others founding Adobe. Kay would bring us the philosophy behind the DynaBook, which decades later would effectively become the iPad. He would also develop Smalltalk with Dan Ingalls and Adele Goldberg, ushering in the era of object oriented programming. They would do pioneering work on VLSI semiconductors, ubiquitous computing, and anything else to prepare the world to mass produce the technologies that ARPA had been spearheading for all those years. Xerox famously did not mass produce those technologies. And nor could they have cornered the market on all of them. The coming waves were far too big for one company alone. And so it was that PARC, unable to bring the future to the masses fast enough to impact earnings per share, got a new director in 1983: William Spencer, yet another boss that Taylor clashed with. Some resented that he didn’t have a PhD in a world where everyone else did. Others resented the close relationship he maintained with the teams. Either way, Taylor left PARC in 1983 and many of the scientists left with him. It’s both a curse and a blessing to learn more and more about our heroes. Taylor was one of the finest minds in the history of computing. His tenure at PARC certainly saw a lot of innovation and one of the most innovative teams to have ever been assembled. But as many of us who have been put into a position of leadership know, it’s easy to get caught up in the politics. I am ashamed every time I look back and see examples of building political capital at the expense of a project or letting an interpersonal problem get in the way of the greater good for a team. But also, we’re all human, and the people that I’ve interviewed seem to match the accounts I’ve read in other books. 
And so Taylor’s final stop was Digital Equipment Corporation, where he was hired to form their Systems Research Center in Palo Alto. They brought us the AltaVista search engine, the Firefly computer, Modula-3, and a few other advances. Taylor retired in 1996. DEC was acquired by Compaq in 1998, and when Compaq was acquired by HP, the SRC was merged with other labs at HP. From ARPA to Xerox to Digital, Bob Taylor certainly left his mark on computing. He had a knack for seeing the forest for the trees and inspired engineering feats the world is still wrestling with how to bring to fruition. Raw, pure science. He died in 2017. He worked with some of the most brilliant people in the world at ARPA. He inspired passion, and sometimes drama, in what Stanford’s Donald Knuth called “the greatest by far team of computer scientists assembled in one organization.” In his final email to his friends and former coworkers, he said “You did what they said could not be done, you created things that they could not see or imagine.” The Internet, the personal computer, the tech that would go on to become Microsoft Office, object oriented programming, laser printers, tablets, ubiquitous computing devices. So, he isn’t exactly understating what they accomplished in a false sense of humility. I guess you can’t do that often if you’re going to inspire the way he did. So feel free to abandon the pretense as well, and go inspire some innovation. Heck, who knows where the next wave will come from. But if we aren’t working on it, it certainly won’t come. Thank you so much and have a lovely, lovely day. We are so lucky to have you join us on yet another episode.
1/15/2021 • 14 minutes, 31 seconds
WordStar
We’ve covered Xerox PARC a few times - and one aspect that’s come up has been the development of the Bravo word processor from Butler Lampson, Charles Simonyi, and team. Simonyi went on to work at Microsoft and spearheaded the development of Microsoft Word. But Bravo was the first WYSIWYG tool for creating documents, which we now refer to as a word processor. That was 1974. Something else we’ve covered happened in 1974: the release of the Altair 8800. One aspect of the Altair we didn’t cover is that Michael Shrayer was a tinkerer who bought an Altair and wrote a program that allowed him to write manuals. This became the Electric Pencil. It was text based, though, and not a WYSIWYG like Bravo was. It ran in 8k of memory and would be ported to the Intel 8080, Zilog Z80, and other processors over the years leading into the 80s. But let’s step back to the 70s for a bit. Because bell bottoms. The Altair inspired a clone called the IMSAI 8080 in 1975. The director of marketing, Seymour Rubinstein, started tinkering with the idea of a word processor. He left IMSAI and by 1978, put together $8,500 and started a company called MicroPro International. He convinced Rob Barnaby, the head programmer at IMSAI, to join him. They did market research into the tools being used by IBM and Xerox. They made a list of what was needed and got to work. The word processor grew. They released their word processor, which they called WordStar, for CP/M running on the Intel 8080. By then it was 1979, and CP/M was a couple years old but already a pretty dominant operating system for microcomputers. Software was a bit more expensive at the time and WordStar sold for $495. At the time, you had to port your software to each OS running on each hardware build. And the code was in assembly, so not the easiest thing in the world. This meant they wanted to keep the feature set slim so WordStar could run on as many platforms as possible. 
They ran on the Osborne 1 portable, and with CP/M support they became the standard. They could wrap words automatically to the next line. Imagine that. They ported the software to other platforms. It was clear there was a new OS that they needed to run on. So they brought in Jim Fox, who ported WordStar to run on DOS in 1981. They were on top of the world. Sure, there was Apple Writer, Word, WordPerfect, and Samna, but WordStar was it. Arthur C. Clarke met Rubinstein and Barnaby and said they "made me a born-again writer, having announced my retirement in 1978, I now have six books in the works, all through WordStar." He would actually write dozens more works. They released the third version in 1982 and quickly grew into the most popular, dominant word processor on the market. The code base was getting a little stale, and so they brought in Peter Mierau to overhaul it for WordStar 4. The refactor didn’t come at the best of times. In software, you’re the market leader until… You thought I was going to say Microsoft moved into town? Nope, although Word would eventually dominate word processing. But there was one more step before computing got there. Next, along with the release of the IBM PC, WordPerfect took the market by storm. They had more features, and while WordStar was popular, it was the most pirated piece of software at the time. This meant less money to build features, like making better use of the keyboard to provide more productivity tools. This isn’t to say they weren’t making money. They’d grown to $72M in revenue by 1984. When they filed for their initial public offering, or IPO, they had a huge share of the word processing market and accounted for one out of every ten dollars spent on software. WordStar 5 came in 1989, and as we moved into the 90s, it was clear that WordStar 2000 had gone nowhere, so WordStar 6 shipped in 1990 and 7 in 1991. The buying tornado had slowed, and while revenues were great, copy-protecting disks were slowing the spread of the software. 
Rubinstein is commonly credited with creating the first end-user software licensing agreement, common with nearly every piece of proprietary software today. Everyone was pirating back then, so if you couldn’t use WordStar, move on to something you could steal. You know, like WordPerfect. MultiMate, AmiPro, Word, and so many other tools. Sales were falling. New features weren’t shipping. One pretty big one was support for Windows. By the time Windows support shipped, Microsoft had released Word, which had a solid two years to become the new de facto standard. SoftKey would acquire the company in 1994, and go on to acquire a number of other companies until 2002, when they themselves were acquired. But by then WordStar was so far forgotten that no one was sure who actually owned the WordStar brand. I can still remember using WordStar. And I remember doing work when I was a consultant for a couple of authors to help them recover documents, which were pure ASCII files, or computers that had files in WordStar originally but moved to the WSD extension later. And I can remember actually restoring a BAK file while working at the computer labs at the University of Georgia, common in the DOS days. It was a joy to use until I realized there was something better. Rubinstein went on to buy another piece of software, a spreadsheet. He worked with another team, got a little help from Barnaby and Fox, and eventually called it Surpass, which was acquired by Borland, who would rename it to Quattro Pro. That spreadsheet borrowed the concept of multiple sheets in tabs from Boeing Calc, now a standard metaphor. Amidst lawsuits with Lotus on whether you could copyright how software functions, or the UX of software, Borland sold Quattro Pro to Novell during a time when Novell was building a suite of products to compete with Microsoft. We can thank WordStar for so much. Inspiring content creators and creative new features for word processing. 
But we also have to remember that early successes are always going to inspire additional competition. Any company that grows large enough to file an initial public offering is going to face barbarian software vendors at their gates. When those vendors have no technical debt, they can out-deliver features. But as many a software company has learned, expanding to additional products by becoming a portfolio company is one buffer for this. As is excellent execution. The market was WordStar’s to lose. And there’s a chance that it was lost the second Microsoft pulled in Charles Simonyi, one of the original visionaries behind Bravo from Xerox PARC. But when you have 10% of all PC software sales it seems like maybe you got outmaneuvered in the market. But ultimately the industry was so small and so rapidly changing in the early 1980s that it was ripe for disruption on an almost annual basis. That is, until Microsoft slowly took the operating system and productivity suite markets and .doc, .xls, and .ppt files became the format all other programs needed to support. And we can thank Rubinstein and team for pioneering what we now call the software industry. He started on an IBM 1620 and ended his career with WebSleuth, helping to usher in the search engine era. Many of the practices he put in place to promote WordStar are now common in the industry. These days I talk to a dozen serial entrepreneurs a week. They could all wish to some day be as influential as he.
1/8/2021 • 10 minutes, 22 seconds
The Immutable Laws of Game Mechanics In A Microtransaction-Based Economy
Once upon a time, we put a quarter in a machine and played a game for a while. And life was good. The rise of personal computers and the subsequent fall in the cost of microchips allowed some of the same chips found in early computers, such as the Zilog Z80, to bring video game consoles into homes across the world. That one chip, and close derivatives of it, could be found in the ColecoVision, Nintendo Game Boy, and the Sega Genesis. Given that many of the cheaper early computers came with joysticks for gaming at the time, the line between personal computer and video game console seemed natural. Then came the iPhone, which brought an explosion of apps. Apps were anywhere from a buck to a hundred. We weren't the least surprised by the number of games that exploded onto the platform. Nor by the creativity of the developers. When the Apple App Store and Google Play added in-app purchasing and later in-app subscriptions, it all just seemed natural. But it has profoundly changed the way games are purchased, distributed, and the entire business model of apps.
The Evolving Business Model of Gaming
Video games were originally played in arcades, similar to pinball. The business model was that each game was a quarter or token. With the advent of PCs and video game consoles, games were bought in stores, as were records or cassettes that included music. The business model was that the store made money (40-50%), the distributor who got the game into a box and on the shelf in the store made money, and the company that made the game got some as well. And discounts to sell more inventory usually came out of someone not called the retailer. By the time everyone involved got a piece, it was common for the maker of the game to get between $5 and $10 per unit sold for a $50 game. No one was surprised that there was a whole cottage industry of software piracy. Especially given that most games could be defeated in 40 to 100 hours. 
This of course spawned a whole industry to thwart piracy, eating into margins but theoretically generating more revenue per game created. Industries evolve. Console and computer gaming split (although arguably consoles have always just been computers) and the gamer-verse further schism'd between those who played various types of games. Some games were able to move to subscription models, and some companies sprang up to deliver games through subscriptions or as rentals (game rentals over a modem was the business model that originally inspired the AOL founders). And that was ok for the gaming industry, which slowly grew to the point that gaming was a larger industry than the film industry.
Enter Mobile Devices and App Stores
Then came mobile devices, disrupting the entire gaming industry. Apple began the App Store model, establishing that the developer got 70% of the sale - much better than the 10 to 20 percent common in the retail days. Steve Jobs had predicted the coming App Store in a 1985 interview, and when the iPhone was released he tried to keep the platform closed, but eventually capitulated and opened up the App Store to developers. Those first developers made millions. Some developers were able to port games to mobile platforms and try to maintain a similar pricing model to the computer or console versions. But the number of games created a downward pressure that kept games cheap, and often free. The number of games in the App Store grew (today there are over 5 million apps between Apple and Google). With a constant downward pressure on price, the profits dropped. Suddenly, game developers forgot they often used to get only 10 percent of the sale of a game, and started to blame the companies that owned the stores the games were distributed through: Apple, Google, and in some cases, Steam. The rise and subsequent decrease in popularity of Pokémon Go was the original inspiration for this article in 2016, but since then a number of games have validated these perspectives. 
These free games provide a valuable case study into how the way we design a game to be played (known as game mechanics) impacts our ability to monetize the game in various ways. And there are lots and lots of bad examples in games (and probably legislation on the way to remedy abuses) that also tell us what not to do. The Microtransaction-Based Economy These days, game developers get us hooked on the game early, get us comfortable with the pace of the game, and give us an early acceleration. But then that slows down. Many a developer then points us to in-app purchases in order to unlock items that allow us to maintain the pace of a game, or even to hasten the pace. And given that we're playing against other people a lot of the time, they try and harness our natural competitiveness to get us to buy things. These in-app purchases are known as microtransactions. And the aggregate of these in-app purchases can be considered a microtransaction-based economy. As the microtransaction-based economy has arrived in full force, certain standards are emerging as cultural norms for these economies. And violating these rules causes vendors to get blasted on message boards and, more importantly, lose rabid fans of the game. As such, I’ve decided to codify my own set of laws for these, which are as follows: All items that can be purchased with real money should be available for free. For example, when designing a game that has users building a city and we develop a monument that users can pay $1 for and place in their city to improve the morale of those that live in the city, that monument should be able to be earned in the game as well. Otherwise, you’re able to pay for an in-app purchase that gives some players an advantage for doing nothing more than spending money. In-app purchases do not replace game play, but hasten the progression through the game.
For example, when designing a game that has users level up based on earning experience points for each task they complete, we never want to just gift experience points based on an in-app purchase. Instead, in-app purchases should provide a time-bound amplification to experience (such as doubling experience for 30 minutes in Pokémon Go or keeping anyone else from attacking a player for 24 hours in Clash of Clans so we can save enough money to buy that one Town Hall upgrade we just can’t live without). The amount paid for items in a game should correlate to the amount of time saved in game play. For example, say we get stuck on a level in Angry Birds. We could pay a dollar for a pack of goodies that will get us past that level (and probably 3 more), so we can move on. Or we could keep hammering away at that level for another hour. Thus, we saved an hour, but lost pride points in the fact that we didn’t conquer that level. Later in the game, we can go back and get three stars without paying to get past it. Do not allow real-world trading. This is key. If it’s possible to build an economy outside the game, players can then break your game mechanics. For example, in World of Warcraft, you can buy gold and magic items online for real money and then log into the game only to have another shady character add those items to your inventory. This leads to people writing programs known as bots (short for robots) to mine gold or find magic items on their behalf so they can sell them in the real world. There are a lot of negative effects to such behavior: the need to constantly monitor for bots (which wastes a lot of developer cycles), the fact that bots cause the in-game economy to practically crash when a game update (e.g. to a map) breaks them, and that they make games both more confusing for users and less controllable by the developer. Establish an in-game currency. You don’t want users of the game buying things with cash directly.
Instead, you want them to buy a currency, such as gold, rubies, gems, karma, or whatever you’d like to call that currency. Disassociating purchases from real-world money causes users to lose track of what they’re buying and spend more money. Seems shady, and it very well may be, but I don’t write games so I can’t say if that’s the intent or not. It’s a similar philosophy to buying poker chips, rather than using money in a casino (just without the free booze). Provide multiple goals within the game. Players will invariably get bored with the critical path in your game. When they do, it’s great for players to find other aspects of the game to keep them engaged. For example, in Pokémon Go, you might spend 2 weeks trying to move from level 33 to level 34. During that time, you might as well go find that last Charmander so you can evolve a Charizard. That’s two different goals: one to locate a creature, the other to gain experience. Or you can go take over some gyms in your neighborhood. Or you can power level by catching hundreds of Pidgeys. The point is, to keep players engaged during long periods with no progression, having choose-your-own-adventure-style game play is important. For massively multiplayer games (especially role playing games) this is critical, as players will quickly tire of mining for gold and want to go, for example, jump into the latest mass land war. To place a little context around this, there are also 28 medals in Pokémon Go (that I’m aware of), which keep providing more and more goals in the game. Allow for rapid progression early in the game in order to hook users, so they will pay for items later in the game. We want people to play our games because they love them. Less than 3% of players will transact an in-app purchase in a given game. But that number skyrockets as time is invested in a game. Quickly progressing through levels early in a game keeps users playing.
Once users have played a game for 8 or 9 hours, if you tell them that for a dollar it will seem like they kept playing for another 8 or 9 hours, based on the cool stuff they’ll earn, they’re likely to give up that dollar and keep playing for another couple of hours rather than get that much-needed sleep! We should never penalize players that don't pay up. In fact, players often buy things that simply change the look of their character in games like Among Us. There is no need to impact game mechanics with purchases if we build an awesome enough game. Create achievable goals in discrete amounts of time. Boom Beach villages range from level 1 to level 64. As players rise through them, the ability to reach the next stage becomes progressively more difficult, given other players are paying to play. Goals against computer players (or NPCs or AI, according to how we want to think of them) are similar. All should be achievable though. The game Runeblade for the Apple Watch was based on fundamentally sound game mechanics that could enthrall a player for months; however, there’s no way to get past a certain point. Therefore, players lose interest, Eric Cartman-style, and go home. Restrict the ability to automate the game. If we had the choice to run every day to lose weight or to eat donuts and watch people run and still lose weight, which would most people choose? Duh. The problem is that when players automate your game, they end up losing interest as their time investment in the game diminishes, as does the necessary skill level to shoot up through levels in games. Evony Online was such a game; and I’m pretty sure I still get an email every month chastising me for botting the game 8-10 years after anyone remembers that the game existed. When a game becomes too dependent on resources obtained by gold-mining bots, as in World of Warcraft, the economy of the game can crash when the bots are knocked offline.
Having said this, such drama adds to the intrigue - which can be a game inside a game for many. Pit players against one another. Leaderboards. Everyone wants to be in 1st place, all the time. Or to see themselves moving up in the rankings. By providing a ranking system, we increase engagement and drive people towards making in-app purchases. Those purchases just shouldn't directly buy a leg up, though. It's a slippery slope to allow a player to jump 30 people in front of them to get to #1,000 in the rankings, only to see those people make in-app purchases in response, creating an addiction to in-app purchases in order to maintain a position in the rankings. It's better to make smaller amounts and keep players around than to have them hate a developer once they've realized the game was making money off addiction. Don’t pit weak players against strong players unnecessarily. In Clash of Clans a player builds a village. As they build more cool stuff in the village, the village levels up. The player can buy rubies to complete buildings faster, and so can basically buy village levels. But, since a player can basically buy levels, the levels can exceed the player’s skill. Therefore, in order to pit matched players in battles, a second metric was introduced to match battles, based on won/lost ratios of battles. By ensuring that players of similar skill duel one another, the skill of players is more likely to progress organically and therefore they remain engaged with the game. The one exception to this rule that I’ve seen actually work well so far has been in Pokémon Go, where a player needs to be physically close to a gym rather than sitting in their living room playing on a console. That geographical alignment really changes this dynamic, as does the great way that gym matches heavily favor attackers, driving fast turnover in gyms and keeping the game accessible to lower-level players. Add time-based incentives.
If a player logs into a game every day, they should get a special incentive for the day that amplifies the more days they log in in a row. Or if they don’t log in, another player can steal all their stuff. Players get a push alert when another player attacks them. There are a number of different ways to incentivize players to keep logging into an app. The more we keep players in an app, the more likely they are to make a purchase. Until they get so many alerts that they delete your app. Don’t do that. Incentivize pure gameplay. It might seem counter-intuitive to incentivize players to not use in-app purchases. But not allowing for a perfect score on an in-app purchase (e.g. not allowing for a perfect level in Angry Birds if you used an in-app purchase) will drive more engagement in a game, while likely still allowing for an in-app purchase and then a late-game strategy of finding perfection to unlock that hidden extra level, or whatever the secret sauce is for your game. Apply maximum purchasing amounts. Games can get addictive for players. We want dolphins, not whales. This is to say that we want people to spend what they would have spent on a boxed game, say $50, or even that per month. But when players get into spending thousands per day, they're likely to at some point realize their error in judgement and contact Apple or Google for a refund. And they should get one. Don't take advantage of people. Make random returns on microtransactions transparent. There has been talk of regulating randomized loot boxes. Why? Because the numbers don't add up. Rampant abuse of in-app purchases for random gear means that developers who publish the algorithm or source code for how those rewards are derived will have a certain level of non-repudiation when the lawsuits start. Again, if those rewards can be earned during the game as well (maybe at a lower likelihood) then we're not abusing game mechanics. Conclusion The above list might seem manipulative at times.
Especially to those who don't write code for a living. And to some degree it is. But it can be done ethically, and when it is, the long-term returns are greater. If nothing else, these laws are a code of ethics of sorts. These are lessons that hundreds of companies are out there learning by trial and error, and hopefully documenting them can help emergent companies not have to repeat some of the same mistakes of others. We could probably get up to 100 of these (with examples) if we wanted to! What laws have you noticed?
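A couple of the laws above lend themselves to a sketch in code. Here's a minimal, hypothetical Python illustration of two of them - a time-bound experience amplifier and won/lost-based matchmaking. All class names, function names, and numbers here are invented for illustration; none of this is taken from any real game's implementation:

```python
import time


class XpBooster:
    """Sketch of the amplification law: a purchase multiplies earned
    experience for a window of time (e.g. double XP for 30 minutes),
    rather than gifting raw experience points outright."""

    def __init__(self, multiplier=2.0, duration_seconds=30 * 60):
        self.multiplier = multiplier
        self.expires_at = time.time() + duration_seconds

    def apply(self, base_xp):
        # XP earned through play is amplified while the booster is active;
        # after it expires, play continues at the normal rate.
        if time.time() < self.expires_at:
            return int(base_xp * self.multiplier)
        return base_xp


def update_rating(rating, won, step=30):
    """Sketch of the matchmaking law: rank players on battles won and
    lost, not on levels that can simply be bought."""
    return rating + step if won else max(0, rating - step)


def find_opponent(player_rating, pool, window=100):
    """Pair a player with the closest-rated opponent within a window,
    so purchased levels can't outrun matched skill."""
    candidates = [r for r in pool if abs(r - player_rating) <= window]
    if not candidates:
        return None
    return min(candidates, key=lambda r: abs(r - player_rating))


booster = XpBooster(multiplier=2.0, duration_seconds=1800)
print(booster.apply(100))                        # 200 while the booster is active
print(find_opponent(1200, [900, 1150, 1400, 1260]))  # 1150, the closest match
```

A real matchmaking system would also weight rating changes by opponent strength (as Elo does), but even this fixed-step version shows the point: the second metric tracks outcomes, not purchases.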
12/23/2020 • 22 minutes
RFC1
Months before the first node of ARPANET went online, the intrepid early engineers were just starting to discuss the technical underpinnings of what would evolve into the Internet some day. Here, we hear how hosts would communicate with the IMPs, or early routing devices (although maybe more like a Paleolithic version of what's in a standard network interface today). It's nerdy. There's discussion of packets and what bits might do what, and later Vint Cerf and Bob Kahn would redo most of this early work as the protocols evolved towards TCP/IP. But reading their technical notes and being able to trace them through thousands of RFCs that show the evolution into the Internet we know today is an amazing look into the history of computing.
12/18/2020 • 18 minutes, 54 seconds
The Spread of Science And Culture From The Stone Age to the Bronze Age
Humanity realized we could do more with stone tools some two and a half million years ago. We made stone hammers and cutting implements by flaking stone and sharpening deer bone and sticks, sometimes into spears. It took 750,000 years, but we figured out we could attach those to sticks to make hand axes and other cutting tools about 1.75 million years ago. Humanity had discovered the first of six simple machines, the wedge. During this period we also learned to harness fire. Because fire frightened off animals that liked to cart humans off in the night, the population increased; we began to cook food, and the mortality rate decreased. More humans. We learned to build rafts and began to cross larger bodies of water. We spread. Out of Africa, into the Levant, up into modern Germany, France, into Asia, Spain, and up to the British Isles by 700,000 years ago. And these hominid ancestors traded. Food, shell beads, bone tools, even arrows. By 380,000-250,000 years ago we got the first anatomically modern humans. The oldest of those remains has been found in modern-day Morocco in Northern Africa. We also have evidence of that spread from the African Rift to Turkey in Western Asia to the Horn of Africa in Ethiopia and Eritrea, across the Red Sea and then down into Israel, South Africa, the Sudan, the UAE, Oman, into China, Indonesia, and the Philippines. 200,000 years ago we had cored stone on spears and awls, and in the late Stone Age saw the emergence of craftsmanship and cultural identity. This might be cave paintings or art made of stone. We got clothing around 170,000 years ago, when the area of the Sahara Desert was still fertile ground, and as people migrated out of there we got the first structures of sandstone blocks at the border of Egypt and modern Sudan.
As societies grew, we started to decorate, first with seashell beads around 80,000 years ago, with the final wave of humans leaving Africa just in time for the Toba Volcano supereruption to devastate human populations 75,000 years ago. And still we persisted, with cave art arriving 70,000 years ago. And our populations grew. Around 50,000 years ago we got the first carved art and the first baby boom. We began to bury our dead and so got the first religions. In the millennia that followed we settled in Australia, Europe, Japan, Siberia, the Arctic Circle, and even into the Americas. This time period was known as the Great Leap Forward and we got microliths, or small geometric blades shaped into different forms. This is when the oldest settlements have been found from Egypt, the Italian peninsula, up to Germany, Great Britain, out to Romania, Russia, Tibet, and France. We got needles and deep sea fishing. Tuna sashimi, anyone? By 40,000 years ago the Neanderthals went extinct and modern humans were left to forge our destiny in the world. The first aboriginal Australians settled the areas we now call Sydney and Melbourne. We started to domesticate dogs and create more intricate figurines, often of a Venus. We made ivory beads, and even flutes of bone. We slowly spread. Nomadic peoples, looking for good hunting and gathering spots. In the Pavlov Hills in the modern Czech Republic they started weaving and firing figurines from clay. We began to cremate our dead. Cultures like the Kebaran spread, to just south of Haifa. But as those tribes grew, there was strength in numbers. The Bhimbetka rock shelters began in the heart of modern-day India, with nearly 800 shelters spread across 8 square miles from 30,000 years ago to well into the Bronze Age. Here, we see elephants, deer, hunters, arrows, battles with swords, and even horses. A snapshot into the lives of generation after generation.
Other cave systems have been found throughout the world, including Belum in India, but also Germany, France, and most other areas humans settled. As we found good places to settle, we learned that we could do more than forage and hunt for our food. Our needs became more complex. Over those next ten thousand years we built ovens and began using fibers, twisting some into rope, making clothing out of others, and fishing with nets. We got our first semi-permanent settlements, such as Dolní Věstonice in the modern-day Czech Republic, where they had a kiln that could be used to fire clay, such as the Venus statue found there - and a wolf bone possibly used as a counting stick. The people there had woven cloth, a boundary made of mammoth bones, useful to keep animals out - and a communal bonfire in the center of the village. A similar settlement in modern Siberia shows a 24,000-year-old village, except the homes were a bit more subterranean. Most parts of the world began to cultivate agriculture between 20,000 and 15,000 years ago, according to location. During this period we solved the age-old problem of food supplies, which introduced new needs. And so we saw the beginnings of pottery and textiles. Many of the cultures for the next 15,000 years are now often referred to based on the types of pottery they would make. These cultures settled close to the water, surrounding seas or rivers. And we built large burial mounds. Tools from this time have been found throughout Europe, Asia, Africa, and in modern Mumbai in India. Some cultures were starting to become sedentary, such as the Natufian culture, which collected grains, started making bread, and cultivated cereals like rye; we got more complex socioeconomics, and these villages were growing to support upwards of 150 people. The Paleolithic time of living in caves and huts, which began some two and a half million years ago, was ending. By 10,000 BCE, Stone Age technology evolved to include axes, chisels, and gouges.
This is a time many parts of the world entered the Mesolithic period. The earth was warming and people were building settlements. Some were used between cycles of hunting. As the plants we left in those settlements grew more plentiful, people started to stay there more, some becoming permanent inhabitants. Settlements like Nanzhuangtou, China, where we saw dogs, stones used to grind, and the cultivation of seed grasses. The Mesolithic period is when we saw a lot of cave paintings and engraving. And we started to see a division of labor. A greater amount of resources led to further innovation. Some of the inventions would then have been made in multiple times and places, again and again, until we got them right. One of those was agriculture. The practice of domesticating barley, grains, and wheat began in the millennia leading up to 10,000 BCE and spread up from Northeast Africa and into Western Asia and throughout. There was enough of a surplus that we got the first granary by 9500 BCE. This is roughly the time we saw the first calendar circles emerge. Tracking time would be done first with rocks used to form early megalithic structures. Domestication then spread to animals, with sheep coming in around the same time, then cattle, all of which could be done in a pastoral or somewhat nomadic lifestyle. Humans then began to domesticate goats and pigs by 8000 BCE, in the Middle East and China. Something else started to appear in the eighth millennium BCE: a copper pendant was found in Iraq. Which brings us to the Neolithic Age. And people were settling along the Indus River, forming larger complexes such as Mehrgarh, also from 7000 BCE. The first known dentistry dates back to this time, showing drilled molars. People in the Timna Valley, located in modern Israel, also started to mine copper. This gave us the second real crafting specialists after pottery. Metallurgy was born. Those specialists sought to improve their works.
Potters started using wheels, although we wouldn’t think to use them vertically to pull a cart until somewhere between 6000 BCE and 4000 BCE. Again, there are six simple machines. The next is the wheel and axle. Humans were nomadic, or mostly nomadic, up until this point, but settlements and those who lived in them were growing. We started to settle in places like Lake Nasser and along the river banks from there, up the Nile to modern-day Egypt. Nomadic people settled into areas along the eastern coast of the Mediterranean and between the Tigris and Euphrates Rivers, with Maghzaliyah being another village supporting 150 people. They began building using packed earth, or clay, for walls and stone for foundations. This is where one of the earliest copper axes has been found. And from those early beginnings, copper and so metallurgy spread for nearly 5,000 years. Cultures like the Yangshao culture in modern China first began with slash-and-burn cultivation: plant a crop until the soil stops producing and move on. They built rammed-earth homes with thatched, or wattle, roofs. They were the first to show dragons in artwork. In short, with our bellies full, we could turn our attention to the crafts and increasing our standard of living. And those discoveries were passed from complex to complex in trade, and then in trade networks. Still, people gotta’ eat. Those who hadn’t settled would raid these small villages, if only out of hunger. And so the cultural complexes grew so Neolithic people could protect one another. Strength in numbers. Like a force multiplier. By 6000 BCE we got predynastic cultures flourishing in Egypt. With the final remnants of the ice age retreating, raiders moved in on the young civilization complexes from the spreading desert in search of food.
The area from the Nile Valley in northern Egypt, up the coast of the Mediterranean and into the Tigris and Euphrates, is now known as the Fertile Crescent - and given the agriculture and then pottery found there, known as the cradle of civilization. Here, we got farming. We weren’t haphazardly putting crops we liked in the ground; we started to irrigate and learned to cultivate. Information about when to plant various crops was handed down through the generations. Time was kept by the season and the movement of the stars. People began settling into larger groups in various parts of the world. Small settlements at first. Rice was cultivated in China, along the Yangtze River. This led to the rise of the Beifudi and Peiligang cultures, with the first site at Jiahu with over 45 homes and between 250 and 800 people. Here, we see raised altars, carved pottery, and even ceramics. We also saw the rise of the Houli culture in Neolithic China. Similar to other sites from the time, we see hunting, fishing, early rice and millet production, and semi-subterranean housing. But we also see cooked rice, jade artifacts, and enough similarities to show technology transfer between Chinese settlements, and so trade. Around 5300 BCE we saw them followed by the Beixin culture, netting fish, harvesting hemp seeds, building burial sites away from settlements, burying the dead with tools and weapons. The foods included fruits, chicken and eggs, and lives began getting longer with more nutritious diets. Cultures were mingling. Trading. Horses started to be tamed around 5000 BCE, spreading out from Kazakhstan. The third simple machine, the lever, saw its first use around 5000 BCE, although it wouldn’t truly be understood until Archimedes. Polished stone axes emerged in Denmark and England. Suddenly people could clear out larger and larger amounts of forest and settlements could grow.
Larger settlements meant more to hunt, gather, or farm food - and more specialists to foster innovation. In today’s southern Iraq this led to the growth of a city called Eridu. Eridu was the city of the first Sumerian kings. The bay on the Persian Gulf allowed trading, and being situated at the mouth of the Euphrates it was at the heart of the cradle of civilization. The original Neolithic Sumerians had been tribal fishers and told stories of kings from before the floods, tens of thousands of years before the era. They were joined by the Samarra culture, which dates back to 5,700 BCE, to the north, who brought knowledge of irrigation, and by nomadic herders coming up from lands we would think of today as the Middle East. The intermixing of skills and strengths allowed the earliest villages to be settled in 5,300 BCE and grow into an urban center we would consider a city today. This was the beginning of the Sumerian Empire. Going back to 5300 BCE, houses had been made of mud bricks and reeds. But they would build temples and ziggurats, and grow to cover over 25 acres with over 4,000 people. As the people moved north and gradually merged with other cultural complexes, the civilization grew. Uruk grew to over 50,000 people and is the etymological source of the name Iraq. And the population of all those cities and the surrounding areas that became Sumer is said to have grown to over a million people. They carved anthropomorphic furniture. They made jewelry of gold and created crude copper plates. They made music with flutes and stringed instruments, like the lyre. They used saws and drills. They went to war with arrows and spears and daggers. They used tablets for writing, using a system we now call cuneiform. Perhaps they wrote to indicate lunar months, as they were the first known people to use twelve 29- to 30-day months. They could sign writings with seals, which they are also credited with inventing.
How many months would it be before Abraham of Ur would become the central figure of the Old Testament in the Bible? With scale they needed better instruments to keep track of people, stock, and other calculations. The Sumerian abacus was later used by the Egyptians, and then the device we know of as an abacus today entered widespread use in the sixth century BCE in the Persian Empire. More and more humans were learning larger precision counting and numbering systems. They didn’t just irrigate their fields; they built levees to control floodwaters and canals to channel river water into irrigation networks. Because water was so critical to their way of life, the Sumerian city-states would war, and so built armies. Writing and arithmetic don’t learn themselves. The Sumerians also developed the concept of going to school for twelve years. This allowed someone to be a scribe or writer, which were prestigious, as they were as necessary in early civilizations as they are today. In the meantime, metallurgy saw gold appear in 4,000 BCE. Silver and lead in 3,000 BCE, and then copper alloys, eventually with a little tin added to the copper. By 3000 BCE this ushered in the Bronze Age. And the need for different resources to grow a city or empire moved centers of power to where those resources could be found. The Mesopotamian region also saw a number of other empires rise and fall. The Akkadians, Babylonians (where Hammurabi would eventually give one of the first written sets of laws), Chaldeans, Assyrians, Hebrews, Phoenicians, and one of the greatest empires in history, the Persians, who came out of villages in modern Iran that went back past 10,000 BCE to rule much of the known world at the time. The Persians were able to inherit all of the advances of the Sumerians, but also those of the other cultures of Mesopotamia and those they traded with. One of the trading partners that the Persians conquered later in the life of the empire was Egypt.
Long before the Persians and then Alexander conquered Egypt, it was a great empire. Wadi Halfa had been inhabited going back 100,000 years. Industries, complexes, and cultures came and went. Some would die out but most would merge with other cultures. There is not much archaeological evidence of what happened from 9,000 to 6,000 BCE, but around this time many from the Levant and Fertile Crescent migrated into the area, bringing agriculture, pottery, then metallurgy. These were the Nabta, then Tasian, then Badarian, then Naqada, then Amratian, and in around 3500 BCE we got the Gerzean, who set the foundation for what we may think of as Ancient Egypt today. With a drop in rain, people moved more quickly from the desert-like lands around the Nile into the increasingly metropolitan centers. Cities grew, and with trade routes between Egypt and Mesopotamia they frequently mimicked the larger culture. From 3200 BCE to 3000 BCE we saw irrigation begin in protodynastic Egypt. We saw them importing obsidian from Ethiopia, cedar from Lebanon, and growing. The Canaanites traded with them, and often through those types of trading partners, Mesopotamian know-how infused the empire. As did trade with the Nubians to the south, who had pioneered astrological devices. At this point we got Scorpion, Iry-Hor, Ka, Scorpion II, Double Falcon. This represented the confederation of tribes who under Narmer would unite Egypt, and he would become the first Pharaoh. They would all be buried in Umm El Qa’ab, along with kings of the first dynasty, who went from a confederation to a state to an empire. The Egyptians would develop their own written language, using hieroglyphs. They took writing to the next level, using ink on papyrus. They advanced geometry and mathematics. They invented toothpaste. They built locked doors. They took the calendar to the next level as well, giving us 365-day years and three seasons. They’d have added a fourth if they’d ever visited Minnesota, don’tchaknow.
And many of those obelisks raided by the Romans and then everyone else that occupied Egypt were often used as sun clocks. They drank wine, which is traced in its earliest form to China. Imhotep was arguably one of the first great engineers and philosophers. Not only was he the architect of the first pyramid, but he supposedly wrote a number of great wisdom texts, was a high priest of Ra, and acted as a physician. And for his work in the 27th century BCE, he was made a deity, one of the few outside the royal family of Egypt to receive such an honor. Egyptians used a screw cut from wood around 2500 BCE, the fourth simple machine. They used it to press olives and make wine. They used the fifth, the inclined plane, to build pyramids. And they helped bring us the last of the simple machines, the pulley. And those pyramids. Where the Mesopotamians built ziggurats, the Egyptians built more than 130 pyramids from 2700 BCE to 1700 BCE. And the Great Pyramid of Giza would remain the largest building in the world for 3,800 years. It is built out of 2.3 million blocks, some of which weigh as much as 80 tonnes. Can you imagine 100,000 people building a grave for you? The sundial emerged in 1,500 BCE, presumably in Egypt - and so while humans had always had limited lifespans, our lives could then be divided up into increments of time. The Chinese cultural complexes grew as well. Technology and evolving social structures allowed the first recorded unification of all those Neolithic peoples when Yu the Great and his father brought flood control. That family, as the Pharaohs had, claimed direct heritage to the gods, in this case the Yellow Emperor. The Xia Dynasty began in China in 2070 BCE. They would flourish until 1600 BCE, when they were overthrown by the Shang, who lasted until 1046 BCE, when they were overthrown by the Zhou - the last ancient Chinese dynasty before Imperial China. Greek civilizations began to grow as well.
Minoan civilization, at its height from 1600 to 1400 BCE, grew to house up to 80,000 people in Knossos. Crete is a large island a little less than halfway from Greece to Egypt. There are sites throughout the islands south of Greece that show a strong Aegean and Anatolian Cycladic culture emerging from 4,000 BCE, but given the location, Crete became the seat of the Minoans - first an agricultural community and then merchants, facilitating trade with Egypt and throughout the Mediterranean. The population went from less than 2,000 people in 2500 BCE to up to 100,000 in 1600 BCE. They were one of the first able to import knowledge, in the form of papyrus from Egypt. The Mycenaeans in mainland Greece, along with earthquakes that destroyed a number of the buildings on Crete, contributed to the fall of the Minoan civilization, and alongside the Hittites, Assyrians, Egyptians, and Babylonians, we got the rise of the first mainland European empire: Mycenaean Greece. Sparta would rise, then Athens, Corinth, Thebes. After conquering Troy in the Trojan War, the empire went into decline with the Bronze Age collapse. We can read about the war in the Iliad and the return home in the Odyssey, written down by Homer nearly 400 years later. The Bronze Age ended in around 1,200 BCE - as various early empires outgrew the ability to rule ancient metropolises and lands effectively, as climate change forced increasingly urbanized centers to de-urbanize, as the sources of tin dried up, and as smaller empires banded together to attack larger empires. Many of these empires became dependent on trade. Trade spread ideas and technology and science. But tribalism and warfare disrupted trade routes and fractured societies. We had to get better at re-using copper to build new things. The fall of cultures caused refugees, as we see today. It’s likely a confluence of changing climates, changing cultures, and what we now call the Sea Peoples that caused the collapse. 
These Sea Peoples included refugees, foreign warlords, and mercenaries used by existing empires. They could have been the former Philistines, Minoans, warriors coming down from the Black Sea, Italians, people escaping a famine on the Anatolian peninsula, Mycenaeans fleeing the Dorian invasion, Sardinians, Sicilians, or even Hittites after the fall of that empire. The likely story is a little bit of each of these. The Neo-Assyrians who took Mesopotamia were weakened, then the Neo-Babylonians after them, and finally the Persian Empire would be the biggest winner. But at the end of the Bronze Age, we had all the components for the birth of the Iron Age. Humans had writing, formally educated their young, had codified laws, mined, practiced metallurgy, tamed nature with animal husbandry, developed dense agriculture, architected, warred, destroyed, rebuilt, healed, and began to explain the universe. We started to harness multiple of the six simple machines to do something more in the world. We had epics that taught the next generation to identify places in the stars and pass on important knowledge. And precision was becoming more important - like being able to predict an eclipse. This led Chaldean astronomers to establish the Saros, a period of 223 synodic months, to predict the eclipse cycle. And instead of humans computing those times, within just a few hundred years Archimedes would document the use of, and begin putting math behind, many of the six simple machines, so we could take interdisciplinary approaches to leveraging compound and complex machines to build devices like the Antikythera mechanism. We were computing. We also see that precision in the way buildings were created. After the collapse of the Bronze Age there would be a time of strife. Warfare, famines, disrupted trade. 
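To put a number on that precision: 223 synodic months works out to a touch over 18 years. A quick sketch in Python (using the modern mean synodic month of about 29.53059 days - a figure the Chaldeans could only approximate from centuries of records):

```python
# The Saros eclipse cycle: 223 synodic (new moon to new moon) months.
SYNODIC_MONTH_DAYS = 29.53059  # modern mean value

saros_days = 223 * SYNODIC_MONTH_DAYS
saros_years = saros_days / 365.25

print(f"Saros: {saros_days:.2f} days, about {saros_years:.2f} years")
# About 6585 days - roughly 18 years and 11 days - after which the Sun,
# Earth, and Moon return to nearly the same relative geometry, so a
# similar eclipse is likely to recur.
```

The leftover third of a day is why each successive eclipse in a Saros series lands about eight hours of longitude further west.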
The great works of the Pharaohs, Mycenaeans and other world powers of the time would be put on hold until a new world order started to form. As those empires grew, the impacts would be lasting and the reach would be greater than ever. We’ll add a link to the episode that looks at these, taking us from the Bronze Age to antiquity. But humanity slowly woke up to proto-technology. And certain aspects of our lives have been inherited over so many generations from then.
12/11/2020 • 31 minutes, 35 seconds
The Printing Press
The written word allowed us to preserve human knowledge, or data, from generation to generation. We know only what we can observe from ancient remains from before writing, but we know more and more about societies as generations of people literate enough to document their stories spread. And the more that was documented, the more knowledge there was to easily find and build upon, and thus the more rapid the innovation available to each generation... The Sumerians established the first written language in the third millennium BCE. They carved data into clay. Written languages spread, and by the 26th century BCE the Diary of Merer was written to document building the Great Pyramid of Giza. They started with papyrus, made from the papyrus plant. They would extract the pulp and make thin sheets from it. The sheets of papyrus ranged in color and in how smooth the surface was. But papyrus doesn’t grow everywhere. People had painted on pots and other surfaces and ended up writing on leather at about the same time. Over time, it is only natural that they moved on to parchment, or stretched and dried goat, cow, and sheep skins, to write on. Vellum is another material we developed to write on - similar, but made from calfskin. The Assyrians and Babylonians started to write on vellum in the 6th century BCE. The Egyptians wrote what we might consider data, encoded into the pictograms we now call hieroglyphs, on papyrus and parchment with ink. For example, per Unicode Standard 13.0, my cat would be the hieroglyph at code point U+130E0. But digital representations of characters wouldn’t come for a long time. It was still carved in stone or laid out in ink back then. Ink was developed by the Chinese thousands of years ago, possibly first by mixing soot from a fire and various minerals. 
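That cat hieroglyph lives on today as an ordinary character you can look up like any other. A minimal Python check (whether anything actually renders depends on your fonts):

```python
import unicodedata

# Unicode's Egyptian Hieroglyphs block runs from U+13000 to U+1342F;
# U+130E0 is Gardiner sign E13, a seated cat.
cat = chr(0x130E0)
print(f"U+{ord(cat):05X}", unicodedata.name(cat))
```

Four-plus millennia from ink on papyrus to a code point in a lookup table.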
It’s easy to imagine early neolithic peoples stepping in a fire pit after it had cooled and realizing they could use first their hands to smear it on cave walls, then a stick, then a brush to apply it to other surfaces, like pottery. By the time the Egyptians were writing with ink, they were using iron and ocher for pigments. India ink was introduced in the second century in China. They used it to write on bamboo, wooden tablets, and even bones. It was used in India from the fourth century BCE, made from burnt bits of bone, a powder made from petroleum called carbon black, and pigments bound with hide glue, then ground and dried. This allowed someone writing to dip a wet brush into the mixture in order to write. And these were used up through the Greek and then Roman times. More innovative chemical compounds would be used over time. We added lead, pine soot, vegetable oils, animal oils, and mineral oils, and while the Silk Road is best known for bringing silks to the west, Chinese ink was the best and another of the luxuries transported across it, well into the 17th century. Ink wasn’t all the Silk Road brought. Paper was first introduced in the first century in China. During the Islamic Golden Age, the Islamic world expanded its use in the 8th century, adding the science to build larger mills to make pulp and paper. Paper then made it to Europe in the 11th century. So ink and paper laid the foundation for the mass duplication of data. But how to duplicate? We passed knowledge down verbally for tens of thousands of years. Was it accurate with each telling? Maybe. And then we preserved our stories in written form for a couple thousand years in a one-to-one capacity. The written word was produced manually, one scroll or book at a time. And so they were expensive. But a family could keep them from generation to generation, and they were accurate across the generations. 
Knowledge passed down in written form, and many a manuscript was copied ornately, with beautiful pictures drawn on the page. But in China they were again innovating. Woodblock printing goes back at least to the second century, to print designs on cloth, and had grown to include books by the seventh century. The Diamond Sutra, a Tang Dynasty book from 868, may be the first printed book, using wood blocks that had been carved in reverse. And moveable type came along in 1040, from Bi Sheng in China. He carved letters into clay. Wang Chen in China then printed a text on farming practices called Nung Shu in 1297 and added a number of innovations to the Chinese presses. And missionaries and trade missions from Europe to China likely brought reports home, including copies of the books. Intaglio printing emerged, where lines were cut, etched, or engraved into metal plates, dipped into ink, and then pressed onto paper. Similar tactics had been used by goldsmiths for some time. But then a goldsmith named Johannes Gutenberg began to experiment with similar ideas, adding the concept of moveable metal type. He used different alloys to get the letter pressing just right - including antimony, lead, and tin. He created a matrix to mold new type blocks, which we now refer to as a hand mould. He experimented with different kinds of oil- and water-based inks. And vellum and paper. And so Gutenberg would get credit for inventing the printing press in 1440. This took the basic concept of the screw press, which the Romans introduced in the first century to press olives and wine, and added moveable type with lettering made of metal. He was at it for a few years. Just one problem: he needed to raise capital in order to start printing at a larger scale. So he went to Johann Fust and took out a loan for 800 guilders. He printed a few projects and then thought he should start printing Bibles. 
So he took out another loan from Fust for 800 more guilders to print what we now call the Gutenberg Bible, and printed indulgences for the church as well. By 1455 he’d printed 180 copies of the Bible and seemed on the brink of finally making a profit. But the loans from Fust at 6% interest had grown to over 2,000 guilders, and once Fust’s son-in-law was ready to run the press, Fust sued Gutenberg, ending up with Gutenberg’s workshop and all of the Bibles, basically bankrupting Gutenberg by 1460. He would die in 1468. The Mainz Psalter was commissioned by the Mainz archbishop in 1457, and Fust, along with Peter Schöffer, a Gutenberg assistant, would use the press to print it - the first book to carry the mark of its printer. They would continue to print books, and Schöffer added dates in books, colored ink, type-founding, punch cutting, and other innovations. And Schöffer’s sons would carry on the art, as did his grandson. As word spread of the innovation, Italians started printing presses by 1470. German printers went to the Sorbonne, and by 1476 they set up companies to print. Printing showed up in Spain in 1473, England in 1476, and Portugal by 1495. In a single generation, the price of books plummeted and the printed word exploded, with over 20 million works printed by 1500 and 10 times that by 1600. Before the printing press, a single scribe could spend years copying only a few editions of a book; with a press, up to 3,600 pages a day could be printed. The Catholic Church had cornered the market on Bibles, and facing a cash crunch, Pope Alexander VI threatened to excommunicate those printing manuscripts without approval. Within two decades, John Calvin and Martin Luther changed the world with their books - and Copernicus, followed quickly by other scientists, published works, even under threat of excommunication or the Inquisition. As presses grew, new innovative uses also grew. We got the first newspaper in 1605. 
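The arithmetic of Gutenberg's debt is easy to reconstruct in spirit, if not in detail. As an illustration only - the exact terms and timing of Fust's loans are debated - here is 1,600 guilders compounding annually at 6%:

```python
# Two loans of 800 guilders each, compounding at 6% per year.
# The real contract terms are not known precisely; this is illustrative.
principal = 800 + 800
rate = 0.06

for years in range(1, 7):
    balance = principal * (1 + rate) ** years
    print(f"after {years} years: {balance:7.0f} guilders")
```

At that rate the balance passes 2,000 guilders within four to five years - roughly the window between the first loan and the 1455 lawsuit.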
Literacy rates were going up, people were becoming more educated, and science and learning were spreading in ways they never had before. Freedom to learn became freedom of thought, and Christianity became fragmented as other thinkers had other ideas of spirituality. We were ready for the Enlightenment. Today we can copy and paste text from one screen to the next on our devices. We can make a copy of a single file and have tens of thousands of ancient or modern works available to us in an instant. In fact, plenty of my books are available to download for free on sites, with or without my or my publisher’s consent. Or we can just do a quick Google search and find most any book we want. And with the ubiquity of literacy we moved from printed paper to disks to online, and our content creation has exploded. 90% of the data in the world was created in the past two years. We are producing over 2 quintillion bytes of data daily. Over four and a half billion people are connected. What’s crazy is that still leaves nearly three and a half billion people who aren’t online. Imagine having nearly double the live streamers on Twitch and dancing videos on TikTok! I have always maintained a large physical library. And while writing many of these episodes and the book, it’s only grown. Because some books just aren’t available online, even if you’re willing to pay for them. So here’s a parting thought I’d like to leave you with today: history is also full of anomalies, moments when someone got close to a discovery but we would have to wait thousands of years for it to come up again. The Phaistos Disc is a Minoan fired clay tablet from Greece, made by stamping symbols into the clay with pre-made seals - effectively moveable type, thousands of years before Bi Sheng. And just like sometimes it seems something may have come before its time, we also like to return to the classics here and there. Up until the digital age, paper was one of the most important industries in the world. Actually, it still is. 
But this isn’t to say that we haven’t occasionally busted out parchment for uses in manual writing. The Magna Carta and the US Constitution were both written on parchment. So think about what you see that is before its time, or after. And keep a good relationship with your venture capitalists so they don’t take the printing presses away.
12/8/2020 • 13 minutes, 23 seconds
The Scientific Revolution: Copernicus to Newton
Following the Renaissance, Europe had an explosion of science. The works of the Greeks had been lost during the Dark Ages while other civilizations caught up to their technical progress. Or so we were taught in school. Previously, we looked at the contributions during the Golden Age of the Islamic Empires and the Renaissance, when that science returned to Europe following the Holy Wars. The great thinkers of the Renaissance pushed boundaries and opened minds. But the revolution coming after them would change the very way we thought of the world. It was a revolution based in science and empirical thought, lasting from the middle of the 1500s to late in the 1600s. There are three main aspects I’d like to focus on in terms of taking all the knowledge of the world from that point and preparing it to give humans the Enlightenment, as we call the age after the Scientific Revolution. These are new ways of reasoning and thinking, specialization, and rigor. Let’s start with rigor. My cat jumps on the stove and burns herself. She doesn’t do it again. My dog gets too playful with the cat and gets smacked. Both then avoid doing those things in the future. Early humans learned that we could forage certain plants, then realized we could take those plants to another place and have them grow. And then we realized they grow best when planted at certain times of the year. And watching the stars can provide guidance on when to do so. This evolved over generations of trial and error. Yet we believed those stars revolved around the earth for much of our existence. Even after designing orreries and mapping the heavens, we still hung on to this belief until Copernicus. His 1543 work “On The Revolutions of the Heavenly Spheres” marks the beginning of the Scientific Revolution. Here, he almost heretically claimed that the planets in fact revolved around the sun, as did the Earth. This wasn’t exactly new. Aristarchus had theorized this heliocentric model in Ancient Greece. 
Ptolemy had disagreed in the Almagest, where he provided tables to compute location and dates using the stars. Tables that had taken rigor to produce. And that Ptolemaic system came to be taken for granted. It worked fine. The difference was, Copernicus had newer technology. He had newer optics, thousands more years of recorded data (some of it contributed by philosophers during the golden age of Islamic science), the texts of ancient astronomers, and newer ecliptical tables and techniques with which to derive them. Copernicus didn’t accept what he was taught but instead looked to prove or disprove it with mathematical rigor. The printing press came along in 1440, and within 100 years Luther was lambasting the church, Columbus had discovered the New World, and the printing press had helped disseminate information in a way that was less controllable by governments and religious institutions, who at times felt threatened by that information. For example, Outlines of Pyrrhonism, from Sextus Empiricus, was printed in 1562, adding skepticism to the growing European thought. In other words, human computers were becoming more sentient and needed more input. We couldn’t trust what the ancients were passing down, and the doctrine of the church was outdated. Others began to ask questions. Johannes Kepler published Mysterium Cosmographicum in 1596, in defense of Copernicus. He would go on to study math, such as the relationship between math and music, and the relationship between math and the weather. He became the imperial mathematician to Emperor Rudolf II, where he could work with other court scholars, and worked on optical theory, publishing Astronomiae Pars Optica, or The Optical Part of Astronomy, in 1604, where he proposed a new method to measure eclipses of the moon. He published numerous other works that pushed astronomy, optics, and math forward. 
His Epitome of Copernican Astronomy would go further than Copernicus, assigning ellipses to the movements of celestial bodies, and while it didn’t catch on immediately, his inductive reasoning and the rigor that followed were enough to have him conversing with Galileo. Galileo furthered the work of Copernicus and Kepler. He picked up a telescope in 1609 and in his lifetime saw magnification go from 3 to 30 times. This allowed him to map Jupiter’s moons, proving the orbits of other celestial bodies. He identified sunspots. He observed the motion of bodies and developed formulas for inertia and parabolic trajectories. We were moving from deductive reasoning, or starting our scientific inquiry with a theory, to inductive reasoning, or creating theories based on observation. Galileo’s observations expanded our knowledge of Venus, the moon, and the tides. He helped to transform how we thought, despite ending up in an Inquisition over his findings. The growing quantity and types of systematic experimentation represented a shift in values. Empiricism, observing evidence for yourself, and the review of peers - whether they disagreed or not. These methods were being taught in growing schools, but also in salons and coffee houses and, as was done in Athens, in paid lectures. Sir Francis Bacon argued for basing scientific knowledge only on inductive reasoning. We now call this the Baconian method, which he wrote about in 1620 when he published his book Novum Organum, Latin for “new instrument.” This was the formalization of eliminative induction. He was building on, if not replacing, the inductive-deductive method in Aristotle’s Organon. Bacon was the Attorney General of England and actually wrote Novum Organum while sitting as the Lord Chancellor of England, who presides over the House of Lords and was also the highest judge - or was, before Tony Blair’s reforms. Bacon’s method built on ancient works from not only Aristotle but also Al-Biruni, al-Haytham, and many others. 
And it has influenced generations of scientists, like John Locke. René Descartes helped lay the further framework for rationalism, coining the phrase “I think, therefore I am.” He became by many accounts the father of modern Western philosophy, asking what can we be certain of - what is true? This helped him rethink various works and develop Cartesian geometry. Yup, he was the one who developed its standard notation in 1637, a thought process that would go on to impact many other great thinkers for generations - especially with the development of calculus. As with many other great natural scientists, or natural philosophers, of the age, he also wrote on the theory of music and anatomy, and some of his works could be considered a proto-psychology. Another method that developed in the era was empiricism, which John Locke proposed in An Essay Concerning Human Understanding in 1689. George Berkeley, Thomas Hobbes, and David Hume would join that movement and develop a new basis for human knowledge in that empirical tradition: that the only true knowledge accessible to our minds was that based on experience. Optics and simple machines had been studied and known since antiquity. But tools that deepened the understanding of the sciences began to emerge during this time. We got the steam digester, new forms of telescopes, vacuum pumps, the mercury barometer. And, most importantly for this body of work, we got the mechanical calculator. Robert Boyle was influenced by Galileo, Bacon, and others. He gave us Boyle’s Law, explaining how the pressure of a gas increases as the volume of the container holding the gas decreases. He built air pumps. He investigated how freezing water expands, and he experimented with crystals. He experimented with magnetism and early forms of electricity. He published The Sceptical Chymist in 1661 and another couple of dozen books. Before him we had alchemy; after him, we had chemistry. One of his students was Robert Hooke. 
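Boyle's relationship is compact enough to write down directly: at constant temperature, pressure times volume stays constant. A minimal sketch (the function name and example numbers are mine, for illustration):

```python
def boyle_pressure(p1: float, v1: float, v2: float) -> float:
    """New pressure after an isothermal volume change: p1 * v1 == p2 * v2."""
    return p1 * v1 / v2

# Halve the volume of a gas at 100 kPa and the pressure doubles.
print(boyle_pressure(p1=100.0, v1=2.0, v2=1.0))  # 200.0
```

Squeeze the container to half its size and the gas pushes back twice as hard - the inverse relationship Boyle measured with his air pumps.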
Hooke defined the law of elasticity. He experimented with everything. He made music tones from brass cogs that had teeth cut in specific proportions - storing data on a disk, in a way. Hooke coined the term cell in Micrographia, published in 1665, and studied gravitation. And Hooke argued, conversed, and exchanged letters at great length with Sir Isaac Newton, one of the greatest scientific minds of all time. Newton gave us the first theory on the speed of sound, Newtonian mechanics, and the binomial series. He also gave us Newton’s Rules for Science, which are as follows: We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore to the same natural effects we must, as far as possible, assign the same causes. The qualities of bodies, which admit neither intension nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever. In experimental philosophy we are to look upon propositions collected by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, until such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions. These appeared in the Principia, which gave us the laws of motion and a mathematical description of gravity, leading to universal gravitation. Newton never did find the secret to the Philosopher’s Stone while working on it, although he did become Master of the Royal Mint at a pivotal time of recoining, and so who knows. But he developed the first reflecting telescope and made observations about prisms that led to his book Opticks in 1704. And ever since he and Leibniz developed calculus, high school and college students alike have despised him. Leibniz also did a lot of work on calculus but was a great philosopher as well. 
His work on logic held that all our ideas are compounded from a very small number of simple ideas, which form the alphabet of human thought, and that complex ideas proceed from these simple ideas by a uniform and symmetrical combination, analogous to arithmetical multiplication. This would ultimately lead to an algebra of concepts, and after a century and a half of great mathematicians and logicians would result in Boolean algebra - the zero-and-one foundations of computing - once Claude Shannon gave us information theory a century after that. Blaise Pascal was another of these philosopher-mathematician-physicists who also happened to dabble in inventing. I saved him for last because he didn’t just do work on probability theory, do important early work on vacuums, give us Pascal’s Triangle for binomial coefficients, and invent the hydraulic press. Nope. He also developed Pascal’s Calculator, an early mechanical calculator that is the first known to have worked. He didn’t build it to do much, just to help with the tax collecting work he was doing for his family. The device could easily add and subtract two numbers and then loop through those tasks in order to do rudimentary multiplication and division. He would only build about 50, but the Pascaline, as it came to be known, was an important step in the history of computing. And that Leibniz guy - he invented the Leibniz wheel to make multiplication automatic rather than just looping through addition steps. It wouldn’t be until 1851 that the Arithmometer made a real commercial go at mechanical calculators in a larger and more business-like way. While Thomas, the inventor of that device, is best known today for his work on the calculator, his real legacy is the 1,000 families who get their income from the insurance company he founded, which is still in business as GAN Assurances, and the countless families who have worked there or used their services. That brings us to the next point, about specialization. 
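That loop-through-addition trick is worth seeing. A sketch of the idea, not of the Pascaline's actual gearing:

```python
def multiply_by_addition(a: int, b: int) -> int:
    """Multiply by adding a to itself b times, as a Pascaline operator would."""
    total = 0
    for _ in range(b):
        total += a
    return total

def divide_by_subtraction(a: int, b: int) -> tuple[int, int]:
    """Divide by repeated subtraction, returning (quotient, remainder)."""
    quotient = 0
    while a >= b:
        a -= b
        quotient += 1
    return quotient, a

print(multiply_by_addition(6, 7))    # 42
print(divide_by_subtraction(45, 6))  # (7, 3)
```

Leibniz's stepped wheel mechanized exactly this repetition, which is why it counts as a step beyond the Pascaline rather than a reinvention of it.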
Since the Egyptians and Greeks, we’ve known that the more specialists we had in fields, the more discoveries they made. Many of these were philosophers or scientists. They studied the stars and optics and motion and mathematics and geometry for thousands of years, and an increasingly large amount of information was available to the generations that followed, starting with the written word first being committed to clay tablets in Mesopotamia. The body of knowledge had grown to the point where one could study a branch of science, such as mathematics, physics, astronomy, biology, or chemistry, for an entire life - improving each field in their own way. Every few generations, this transformed societal views about nature. We also increased our study of anatomy, with a return to the dissection of human corpses after an era when that was not allowed. And these specialties began to diverge into their own fields in the next generations. There was certainly still collaboration, and in fact the new discoveries only helped to make science more popular than ever. Given the increased popularity, there was more work done, more theories to prove or disprove, more scholarly writing, which was then given to more and more people through innovations to the printing press and an increasingly literate population. Seventeenth-century scientists and philosophers were able to collaborate with members of the mathematical and astronomical communities to effect advances in all fields. All of this rapid change in science since the end of the Renaissance created a groundswell of interest in new ways to learn about findings and who was doing what. There was a Republic of Letters, a community of intellectuals spread across Europe and America. These informal networks sprang up and spread information that might have been considered heretical before, transmitted through secret societies of intellectuals and through encrypted letters. 
And they fostered friendships, like in the early days of computer science. There were groups meeting in coffee houses and salons. Such groups had been gathering in London since around 1600, and then the Royal Society was founded there in 1660. They started a publication called Philosophical Transactions in 1665. More than 8,000 fellows have been elected to the society, which runs to this day - fellows including Robert Hooke, Newton, Darwin, Faraday, Einstein, Francis Crick, Turing, Tim Berners-Lee, Elon Musk, and Stephen Hawking. And this inspired Colbert to establish the French Academy of Sciences in 1666. They swapped papers, read one another’s works, and that peer review would evolve into the journals and institutions we have today. There are so many more than the ones mentioned in this episode. Great thinkers like Otto von Guericke, Otto Brunfels, Giordano Bruno, Leonhart Fuchs, Tycho Brahe, Samuel Hartlib, William Harvey, Marcello Malpighi, John Napier, Edme Mariotte, Santorio Santorio, Simon Stevin, Franciscus Sylvius, Jan Baptist van Helmont, Andreas Vesalius, Evangelista Torricelli, François Viète, John Wallis - the list goes on. Scientific communities were finally beyond where the Greeks had left off with Plato’s Academy and the letters sent between ancient thinkers. The scientific societies had emerged similarly, centuries later, but the empires had more people, more resources, and traditions of science to build on. This massive jump in learning then prepared us for a period we now call the Enlightenment, which opened minds, and humanity was ready to accept a new level of science in the Age of Enlightenment. The books, essays, society periodicals, universities, discoveries, and inventions are often lost in the classroom, where the focus can be on the wars and revolutions they often inspired. 
But those who emerged in the Scientific Revolution acted as guides for the Enlightenment philosophers, scientists, engineers, and thinkers that would come next. But we’ll have to pick that back up in the next episode!
12/5/2020 • 20 minutes, 50 seconds
The First Analog Computer: The Antikythera Device
Sponges are some 8,000 species of animals that grow in the sea and lack tissues and organs. Fossil records go back over 500 million years, and they are found throughout the world. Two types of sponges are soft and can be used to hold water that can then be squeezed out, or used to clean. Homer wrote about using sponges as far back as the 7th century BCE, in the Odyssey. Hephaestus cleaned his hands with one - much as you and I do today. Aristotle, Plato, the Romans, even Jesus Christ all discussed cleaning with sponges. And many likely came from places like the Greek island of Kalymnos, where people have harvested and cultivated sponges in the ocean since that time. They would sail boats with glass bottoms looking for sponges, then dive into the water carrying a weight, long before humans developed diving equipment, cut the sponge free, and toss it into a net. Great divers could stay on the floor of the sea for up to 5 minutes. Some 2,600 years after Homer, diving for sponges was still very much alive and well in the area. The people of Kalymnos have been ruled by various Greek city-states, the Roman Empire, the Byzantines, the Venetians, and, still in 1900, the Ottomans. Archaeologist Charles Newton had excavated a Temple of Apollo on the island in the 1850s, just before he went to Turkey to excavate one of the Seven Wonders of the Ancient World: the Mausoleum of Halicarnassus, built for Mausolus - such a grand tomb that we still call buildings that are tombs mausoleums in his honor, to this day. But 1900 was the dawn of a new age. Kalymnos had grown to nearly 1,000 souls. Proud of their Greek heritage, the people of the island didn’t really care which world power claimed their lands. They carved out a life in the sea, grew food and citrus, drank well, made headscarves, and despite the waning Ottoman rule, practiced Orthodox Christianity. The sponges were still harvested from the floor of the sea rather than made from synthetic petroleum products. 
In 1900, Captain Dimitrios Kontos and his team of sponge divers were sailing home from a successful run to the island of Symi, just as their ancestors had done for thousands of years, when someone spotted something. They were off the coast of Antikythera, another Greek island, one inhabited since the 4th or 5th millennium BCE, which had been a base for Cilician pirates from the 4th to 1st centuries BCE and was at the time the southernmost point in Greece. They dove down and, after hearing stories from the previous archaeological expedition, knew they were on to something. Something old. They brought back a few smaller artifacts, like a bronze arm, as proof of their find, noting the seabed was littered with statues that looked like corpses. They recorded the location and returned home. They went to the Greek government in Athens, thinking they might get a reward for the find, where Professor Ikonomu took them to meet with the Minister of Education, Spyridon Stais. Kontos offered to have his divers bring up the treasure in exchange for pay equal to the value of the haul, and the Greek government sent a ship to help winch up the treasures. They brought up bronze and marble statues, and pottery. When they realized the haul was bigger than they thought, the navy sent a second ship. They used diving suits, which were just emerging as a technology. One diver died. The ship turned out to be over 50 meters long, with wreckage strewn across 300 meters. The shipwreck happened somewhere between 80 and 50 BCE. The ship was carrying cargo from Asia Minor, probably bound for Rome; it wasn't sunk by pirates, who had just recently been cleared from the area, but likely went down in a storm. There are older shipwrecks, such as the Dokos from around 2200 BCE, just 60 miles east of Sparta, but few have given up as precious a cargo. We still don't know how the ship came to be where it was, but there is speculation that it was sailing from Rhodes to Rome for a parade marking the victories of Julius Caesar. 
Everything brought up went on to live at the National Museum of Archaeology in Athens. There were fascinating treasures to be cataloged, so it isn't surprising that between the bronze statues, the huge marble statues of horses, glassware, and other Greek treasures, a small corroded bronze lump in a wooden box would go unloved. That is, until archaeologist Valerios Stais noticed a gear wheel in it. He thought it must belong to an ancient clock - but surely that was far too complex for the Greeks. Or was it? It is well documented that Archimedes had been developing the use of gearwheels. And Hero of Alexandria had supposedly developed a number of great mechanical devices while at the Library of Alexandria. Kalymnos was taken by the Italians in the Italo-Turkish War in 1912. World War I came and went. After the war, the Ottoman Empire fell and, with Turkish nationalists taking control, went to war with Greece. The Ottoman Turks killed between 750,000 and 900,000 Greeks. The Second Hellenic Republic came and went. World War II came and went. And Kalymnos was finally returned to Greece from Italy. With so much unrest, archaeology wasn't on nearly as many minds. But after the end of World War II, a British historian of science who was teaching at Yale at the time took interest in the device. His name was Derek de Solla Price. In her book Decoding the Heavens, Jo Marchant takes us through a hundred-year journey in which scientists and archaeologists used the most modern technology available to them at the time to document the device and publish theories as to what it could have been used for. This began with drawings and moved into X-ray technology, becoming better and more precise with each generation. And this mirrors other sciences. We make observations and form theories as to the nature of the universe, only to be proven right or wrong when the technology of the next generation uncovers more clues. 
It's a great book and a great look at the history of archaeology across the different stages of the 20th century. She tells of times before World War II, when John Svoronos and Adolf Wilhelm uncovered the first inscriptions and when Pericles Redials was certain the device was a navigational instrument used to sail the ship. She tells of Theophanidis publishing a theory in 1934 that it might be driven by a water clock. She weaves in Jacques Cousteau and Maria Savvatianou and Gladys Weinberg and Peter Throckmorton and Price and Wang Ling and Arthur C. Clarke and nuclear physicist Charalambos Karakolos and Judith Field and Michael Wright and Allan Bromley and Alan Crawley and Mike Edmunds and Tony Freeth and Nastulus, a tenth-century astronomer in Baghdad. Reverse engineering the 37 gears took a long time. I mean, figuring out the number of teeth per gear, how they intersected, what drove them, and then trying to figure out why this prime number appears, or what calendar cycle that other thing might have represented. Because the orbit isn't exactly perfect and the earth is tilted and all kinds of stuff. Each person unraveled their own piece, and it's a fantastic journey through history and discovery. So read the book, and we'll skip to what exactly the Antikythera Device was. Some thought it an astrolabe, which had come into use around 200 BCE and which measured the altitude of the sun or stars to help sailors navigate the seas. Not quite. Some theorized it was a clock, but not the kind we use to tell time today - more to measure aspects of the celestial bodies than minutes. After generations of scientists studied it, most of the secrets of the device are now known. We know it was an orrery - a mechanical model of the solar system. It was an analog computer, driven by a crank, that predicted the positions of various celestial bodies and when eclipses would occur many, many decades in advance - on a 19-year cycle that was borrowed from cultures far older than the Greeks. 
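To make that gear arithmetic concrete, here's a minimal sketch in Python. The tooth counts are the ones given in published reconstructions of the device's main lunar gear train (secondary sources, not anything I've measured myself), and the point is just how multiplying a few small ratios yields the astronomical constant the makers wanted:

```python
from fractions import Fraction

# Tooth counts from published reconstructions of the main lunar train
# (an assumption drawn from secondary sources, not a measurement).
# Each pair is (driving wheel, driven wheel); the ratios multiply
# together along the train, just like real meshing gears.
gear_pairs = [(64, 38), (48, 24), (127, 32)]

ratio = Fraction(1)
for driving, driven in gear_pairs:
    ratio *= Fraction(driving, driven)

# One full turn of the input (one year) turns the moon pointer 254/19
# times: 254 sidereal months in the 19-year cycle the Greeks borrowed
# from the Babylonians. Note the prime 127 doing the heavy lifting.
print(ratio)  # 254/19
```

That prime tooth count of 127 is exactly the kind of clue the researchers chased: it only makes sense as half of 254, the number of sidereal months in 19 years.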
The device would have had some kind of indicator, like gems or glass orbs, that moved around representing the movements of Jupiter, Mars, Mercury, Saturn, and Venus. It showed the movements of the sun and moon, representing the 365 days of the year as a solar calendar and the 19-year lunar cycle inherited from the Babylonians - and those were plotted relative to the zodiac, or 12 constellations. It forecast eclipses, and even the color of each eclipse. And phases of the moon. Oh, and for good measure, it also tracked when the Olympic Games were held. One aspect of the device that I love - and of most clockwork devices, in fact - is the analogy that can be made to a modern microservice architecture in software design. Think of a wheel in clockwork. Then think of each wheel being a small service or class of code. One triggers the next, and so on. The difference being that any service could call any other and wouldn't need a shaft or the teeth of only one other wheel to interact - or even differential gearing. Reading the story of decoding the device, it almost feels like trying to decode someone else's code across all those services. I happen to believe that most of the stories of gods are true. Just exaggerated a bit. There probably was a person named Odin with a son named Thor, or a battle of the Ten Kings in India. I don't believe any of them were supernatural, but over time their legends grew. Those legends often start to include that which the science of a period cannot explain. The more that science explains, the less of those legends, or gods, we need. And here's the thing. I don't think something like this just appears out of nowhere. It's not the kind of thing a lone actor builds in a workshop in Rhodes. It's the kind of device that evolves over time. One great crafter adds another idea and another philosopher influences another. 
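The gear-as-microservice analogy can be sketched in a few lines of code - a purely hypothetical illustration, not anything from the device itself. Each "gear" is a tiny class that performs one transformation and drives exactly the next one in the train, which is precisely the constraint that separates clockwork from a modern service mesh:

```python
# A hypothetical sketch of the gear-as-microservice analogy:
# each "gear" performs one small transformation and drives the next.
class Gear:
    def __init__(self, name, transform, next_gear=None):
        self.name = name
        self.transform = transform
        self.next_gear = next_gear

    def turn(self, value):
        result = self.transform(value)
        # Like teeth meshing, the output drives the next wheel in the train.
        if self.next_gear:
            return self.next_gear.turn(result)
        return result

# Chain three "wheels": crank turns -> lunar months -> dial position.
# The 254/19 and 235 figures echo the device's 19-year cycle; the
# wiring itself is invented for illustration.
pointer = Gear("pointer", lambda months: months % 235)
months = Gear("months", lambda years: years * 254 // 19, pointer)
crank = Gear("crank", lambda turns: turns, months)

print(crank.turn(19))  # 19 years -> 254 months -> dial cell 19
```

In a microservice design, any service could call any other over the network; here, as in clockwork, each wheel can only drive the one it physically meshes with - which is why reverse engineering the train meant tracing one fixed call chain.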
There could have been a dozen or two dozen that evolved over time, the others lost to history. Maybe melted down to forge bronze weapons, hiding in a private collection, or sitting in a shipwreck or temple elsewhere, waiting to be discovered. The Greek philosopher Thales was said to have built a golden orb. Hipparchus of Rhodes was a great astronomer. The Antikythera device was likely built between 200 and 100 BCE, when he would have been alive. Was he consulted during its creation, or even involved in building it? Between Thales and Hipparchus, we got Archimedes, Euclid, Pythagoras, Aristotle, Philo, Ctesibius, and so many others. Their books would be in the Library of Alexandria for anyone to read. You could learn of the increasingly complicated Ctesibius water clocks along with their alarms, or the geometry of Euclid, or the inventions of Philo. Or you could read about how Archimedes continued that work and added a chime. We can assign the device to any of them - or its heritage to all of them. And we can assume that, as with legends of the gods, it was an evolution of science, mathematics, and engineering. And that the science and technology wasn't lost, as has been argued, but instead moved around as great thinkers moved around. Just as the water clock had been in use since nearly 4000 BCE in modern-day India and China, and became increasingly complicated over time until the Greeks called them clepsydra and anaphoric clocks. Yet replacing water with gears wasn't considered for a while. Just as it took Boolean algebra and flip-flop circuits to bring us into the age of binary and then digital computing. The power of these analog computers could have allowed for simple mathematical devices, like deriving angles or fractions when building. But given that people gotta' eat, and that ancient calculation devices and maps of the heavens helped guide when to plant crops, that came first in the Maslovian hierarchy of technological determinism. 
So until our next episode, consider this: what technology is lying dormant at the bottom of the sea - or in your closet? Buried under silt but waiting to be dug up by intrepid divers and put into use again, in a new form. What is the next generation of technical innovation for each of the specialties you see? Maybe it helps people plant crops more effectively, this time using digital imagery to plot where to place a seed. Or maybe it's to help people zero in on information that matters, or open trouble tickets more effectively, or share their discoveries, or claim them, or who knows - but I look forward to finding out what you come up with and hopefully someday telling the origin story of that innovation!
11/28/2020 • 16 minutes, 26 seconds
The Evolution and Spread of Science and Philosophy from the Classical Age to the Age of Science
The Roman Empire grew. Philosophy and the practical applications derived from great thinkers were no longer just to impress peers or mystify the commoners into passivity, but to help humans do more. The focus on practical applications was clear. This isn't to say there weren't great Romans. We got Seneca, Pliny the Elder, Plutarch, Tacitus, Lucretius, Plotinus, Marcus Aurelius, one of my favorites, Hypatia, and as Christianity spread we got the Christian philosophers in Rome, such as Saint Augustine. The Romans reached into new lands and those lands reached back, with attacks coming from the Goths, Germanic tribes, and Vandals, finally resulting in the sack of Rome. They had been weakened by an overreliance on slaves, overspending on the military to fuel constant expansion, government corruption due to a lack of control given the sheer size of the empire, and the need to outsource the military because Roman citizens were needed to run the empire. Rome split in 285, and the Western empire fell by the end of the fifth century. Again, as empires fall, new ones emerge. As the Classical Period ended in each area with the decline of the Roman Empire, we were plunged into the Middle Ages, which I was taught were the Dark Ages in school. But they weren't dark. Byzantium, the Eastern Roman Empire, survived. The Franks founded Francia in northern Gaul. The Celtic Britons emerged. The Visigoths set up shop in Northern Spain. The Lombards in Northern Italy. The Slavs spread through Central and Eastern Europe, and the Latin language splintered into the Romance languages. And that spread involved Christianity, whose doctrine often clashed with the ancient philosophies. And great thinkers weren't valued. Or so it seemed when I was taught about the Dark Ages. But words matter. The Prophet Muhammad was born in this period, and Islamic doctrine spread rapidly throughout the Middle East. He united the tribes of Medina and established a constitution in the seventh century. 
After years of war with Mecca, he later seized the land. He then went on to conquer the Arabian Peninsula, up into the lands of the Byzantines and Persians. With the tribes of Arabia united, Muslims would conquer the last remains of Byzantine Egypt, Syria, and Mesopotamia, and take large areas of Persia. This rapid expansion, as it had with the Greeks and Romans, led to new trade routes, and new ideas finding their way to the emerging Islamic empire. In the beginning they destroyed pagan idols, but over time they adapted Greek and Roman technology and thinking into their culture. They brought maps, medicine, calculations, and agricultural implements. They learned papermaking from the Chinese and built paper mills, allowing for an explosion in books. Muslim scholars in Baghdad - often referred to as New Babylon, given that it's only 60 miles away - began translating some of the most important works from Greek and Latin, and Islamic teachings encouraged the pursuit of knowledge at the time. Many a great work from the Greeks and Romans is preserved because of those translations. And as with each empire before them, the Islamic philosophers and engineers built on the learning of the past. They used astrolabes in navigation, used chemistry in ceramics and dyes, and researched acids and alkalis. They took knowledge from Pythagoras and the Babylonians and studied lines and spaces and geometry and trigonometry, integrating them into art and architecture. Because Islamic law forbade dissections, they used the Greek texts to study medicine. The technology and ideas of their predecessors helped them retain control throughout the Islamic Golden Age. The various Islamic empires spread east into China, down the African coast, into Russia, into parts of Greece, and even north into Spain, where they ruled for 800 years. Some grew to control over 10 million square miles. They built fantastic clockworks, documented by al-Jazari in the waning days of the golden age. 
And the writings included references to influences in Greece and Rome, including the Book of Optics by Ibn al-Haytham in the eleventh century, which is heavily influenced by Ptolemy's book, Optics. But over time, empires weaken. Throughout the Middle Ages, monarchs began to be deposed by rising merchant classes, or oligarchs - something the framers of the US Constitution sought to block with the way the government is structured. You can see this in the way the House of Lords had such power in England even after the move to a constitutional monarchy. And after the fall of the Soviet Union, Russia moved more and more towards rule by oligarchs, first under Yeltsin and then under Putin. Because you see, we continue to re-learn the lessons learned by the Greeks. But differently. Kinda' like bell bottoms are different each time they come back into fashion. The names of European empires began to resemble what we know today: Wales, England, Scotland, Italy, Croatia, Serbia, Sweden, Denmark, Portugal, Germany, and France were becoming dominant forces again. The Catholic Church was again on the rise as Rome practiced a new form of conquering the world. Two main religions came more and more into conflict for souls: Christianity and Islam. And so began the Crusades of the High Middle Ages. Crusaders brought home trophies. Many were books and scientific instruments. And then came the Great Famine, followed quickly by the Black Death, which spread along the Silk Road with trade, science, and knowledge. Climate change and disease might sound familiar today. France and England went to war for a hundred years. Disruption in the global order again allowed for new empires. 
Genghis Khan built a horde of Mongols that over the next few generations spread through China, Korea, India, Georgia and the Caucasus, Russia, Central Asia and Persia, Hungary, Lithuania, Bulgaria, Vietnam, Baghdad, Syria, Poland, and even Thrace throughout the 13th and 14th centuries. Many great works were lost in the wars, although the Mongols often allowed their subjects to continue life as before - with a hefty tax, of course. They would grow to control 24 million square kilometers before the empire became unmanageable. This disruption caused various peoples to move, and one was a Turkic tribe fleeing Central Asia that, under Osman I in the 13th century, founded what became the Ottoman Empire. That empire went Islamic and grew to include much of the former Islamic regime as it expanded out of Turkey, including Greece and Northern Africa. Over time the Ottomans would push almost all the way north to Kiev and south through the lands of the former Mesopotamian empires. While they didn't conquer the Arabian peninsula, ruled by other Islamic empires, they did conquer all the way to Basra in the south and took Damascus, Medina, Mecca, and Jerusalem. Still, given the density of population in some cities, they couldn't grow past the same amount of space controlled in the days of Alexander. But again, knowledge was transferred to and from Egypt, Greece, and the former Mesopotamian lands. And with each turnover to a new empire, more of the great works were taken from these cradles of civilization, but kept alive to evolve further. And one way science and math and philosophy and the understanding of the universe evolved was to influence the coming Renaissance, which began in the late 13th century and spread - along with Greek scholars fleeing the Ottoman Turks after the fall of Constantinople - throughout the Italian city-states and into England, France, Germany, Poland, Russia, and Spain. Hellenism was on the move again. 
The works of Aristotle, Ptolemy, Plato, and others heavily influenced the next wave of mathematicians, astronomers, philosophers, and scientists. Copernicus studied Aristotle. Leonardo da Vinci gave us the Mona Lisa, the Last Supper, the Vitruvian Man, Salvator Mundi, and Virgin of the Rocks. His works are amongst the most recognizable paintings of the Renaissance. But he was also a great inventor, sketching and perhaps building automata, parachutes, helicopters, and tanks, and along the way putting optics, anatomy, hydrodynamics, and engineering concepts in his notebooks. And his influences certainly included the Greeks and Romans, including the Roman physician Galen. Given that his notebooks weren't published, they offer a snapshot in time rather than a heavy impact on the evolution of science - although his influence is often seen as a contribution to the scientific revolution. Da Vinci, like many of his peers in the Renaissance, learned the great works of the Greeks and Romans. And they learned the teachings in the Bible. But they didn't just take the word of either; they studied nature directly. The next couple of generations of intellectuals included Galileo. Galileo, like Socrates and countless other thinkers, bucked the prevailing political and religious climate of his time by writing down what he saw with his own eyeballs. He picked up where Copernicus left off and discovered the four largest moons of Jupiter. And while astronomers continued to espouse that the sun revolved around the Earth, Galileo kept proving that the Earth was in fact suspended in space, and mapped out the movement of the heavenly bodies. Clockwork had been used since Greek times, as proven by the Antikythera device and mentions of Archytas's dove. Mo Zi and Lu Ban built flying birds. As the Greeks and then the Romans fell, that automata, as with philosophy and ideas, moved to the Islamic world. 
The ability to build a gear with a number of teeth to perform a function had been building over time. As had ingenious ways to put rods and axles together and attach differential gearing. Yi Xing, a Buddhist monk in the Tang Dynasty, developed an escapement along with Liang Lingzan in the eighth century, and the practice spread through China and then beyond. But now clockwork would get pendulums and springs, and Robert Hooke would give us the anchor escapement in the 17th century, making clocks accurate. And that brings us to the scientific revolution, when most of the stories in the history of computing really start to take shape. Thanks to great thinkers, philosophers, scientists, artists, engineers, and yes, merchants who could fund innovation and spread progress through formal and informal ties, the age of science is when too much began happening too rapidly to really be able to speak about it all meaningfully. The great mathematics and engineering led to industrialization and further branches of knowledge and specializations - eventually including Boolean algebra. And armed with thousands of years of slow and steady growth in mechanics and theory and optics and precision, we would get early mechanical computing, beginning the much quicker migration out of the Industrial Age and into the Information Age. These explosions in technology allowed the British Empire to grow to control 34 million square kilometers of territory and the Russian Empire to grow to control 17 million before each overextended. Since writing was developed, humanity has experienced a generation-to-generation passing of the torch of science, mathematics, and philosophy. 
From before the Bronze Age, ideas were sometimes independently conceived and sometimes spread through trade from the Chinese, Indian, Mesopotamian, and Egyptian civilizations (and others), through traders like the Phoenicians to the Greeks and Persians, then from the Greeks to the Romans and the Islamic empires during the dark ages, then back to Europe during the Renaissance. And some of that went both ways. Ultimately, who first introduced each innovation, and who influenced whom, cannot be pinpointed in a lot of cases. The Greeks were often given more credit than they deserved - because, I think, most of us have really fond memories of toga parties in college. But there were generations of people studying all the things and thinking through each field when their other Maslovian needs were met - and those evolving thoughts and philosophies were often attributed to one person rather than all the parties involved in the findings. After World War II there was a Cold War - and one of the ways that manifested itself was a race to recruit the best scientists from the losing factions of that war, namely Nazi scientists. Some died while being taken to a better new life, as Archimedes had died when the Romans tried to make him an asset. For better or worse, world powers know they need the scientists if they're gonna' science - and that you gotta' science to stay in power. When the masses start to doubt science, they're probably gonna' burn the Library of Alexandria, poison Socrates, or exile Galileo for proving that planets revolve around suns and have their own moons that revolve around them, rather than the stars all revolving around the Earth. There wasn't necessarily a dark age - but given what the Greek and Roman and Chinese thinkers knew, and the substantial slowdown in those in-between periods of great learning, the Renaissance and Enlightenment could have actually come much sooner. Think about that next time you hear people denying science. 
To research this section, I read and took copious notes from the following, and apologize that each passage is not credited specifically - it would just look like a regular expression if I tried: The Evolution of Technology by George Basalla; Civilizations by Felipe Fernández-Armesto; A Short History of Technology: From the Earliest Times to AD 1900 by TK Derry and Trevor I Williams; Communication in History: Technology, Culture, Society by David Crowley and Paul Heyer; Leonardo da Vinci by Walter Isaacson; Timelines in Science by the Smithsonian; Wheels, Clocks, and Rockets: A History of Technology by Donald Cardwell; a few PhD dissertations and post-doctoral studies from journals; and then I got to the point where I wanted the information from as close to the sources as I could get, so I went through Dialogues Concerning Two New Sciences by Galileo Galilei, Meditations by Marcus Aurelius, Pneumatics by Philo of Byzantium, The Laws of Thought by George Boole, Natural History by Pliny the Elder, Cassius Dio's Roman History, Annals by Tacitus, Orations by Cicero, Ethics, Rhetoric, Metaphysics, and Politics by Aristotle, and Plato's Symposium and The Trial & Execution of Socrates. For a running list of all books used in this podcast, see the GitHub page at https://github.com/krypted/TheHistoryOfComputingPodcast/blob/master/Books.md
11/24/2020 • 20 minutes, 15 seconds
The Evolution and Spread of Science and Philosophy from the Bronze Age to The Classical Age
Science in antiquity was at times devised to be useful and at other times to prove to the people that the gods looked favorably on the ruling class. Greek philosophers tell us a lot about how the ancient world developed. Or at least, they tell us a Western history of antiquity. Humanity began working with bronze some 7,000 years ago, and the Bronze Age came in force in the centuries leading up to 3,000 BCE. By then there were city-states and empires. The Mesopotamians brought us the wheel around 3500 BCE, and the chariot by 3200 BCE. Writing formed in Sumer, a region of Mesopotamia, around 3000 BCE. Urbanization required larger cities and walls to keep out invaders. King Gilgamesh built huge walls. The Mesopotamians used a base-60 system to track time, giving us the 60 seconds in a minute and the 60 minutes in an hour. That sexagesimal system also gave us the 360 degrees in a circle. They plowed fields and sailed. And sailing led to maps, which they had by 2300 BCE. And they gave us the epic, with the Epic of Gilgamesh, which could be as old as 2100 BCE. At this point, the Egyptian empire had grown to 150,000 square kilometers and the Sumerians controlled around 20,000 square kilometers. Throughout, they grew a great trading empire. They traded with China, India, and Egypt, with some routes dating back to the fourth millennium BCE. And commerce and trade mean the spread of not only goods but also ideas and knowledge. The earliest known writing of complete sentences in Egypt came a few hundred years after it did in Mesopotamia, as the Early Dynastic period ended and the Old Kingdom, or the Age of the Pyramids, began. Perhaps over a trade route. The ancient Egyptians used numerals, multiplication, fractions, geometry, architecture, algebra, and even quadratic equations. They even had a documented base-10 numbering system on a tomb from 3200 BCE. 
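That base-60 inheritance is easy to see in a few lines of Python - a small illustrative sketch of how we still decompose time sexagesimally, exactly as the Mesopotamians did:

```python
# The Mesopotamian base-60 system survives in how we split up hours
# and degrees today.
def to_sexagesimal(total_seconds):
    """Decompose a count of seconds into (hours, minutes, seconds)."""
    minutes, seconds = divmod(total_seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return hours, minutes, seconds

# 2 hours, 14 minutes, 5 seconds, expressed as a plain count of seconds:
print(to_sexagesimal(2 * 3600 + 14 * 60 + 5))  # (2, 14, 5)

# The same base underlies the 360 degrees of a circle: 6 x 60.
print(6 * 60)  # 360
```

Base 60 stuck around for time and angles partly because 60 divides evenly by 2, 3, 4, 5, 6, 10, 12, 15, 20, and 30 - handy when you're splitting things without fractions.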
We also have the Moscow Mathematical Papyrus, which includes geometry problems; the Egyptian Mathematical Leather Roll, which covers how to add fractions; the Berlin Papyrus, with geometry; the Lahun Papyri, with arithmetical progressions to calculate the volume of granaries; the Akhmim tablets; the Reisner Papyrus; and the Rhind Mathematical Papyrus, which covers algebra and geometry. And there's the Cairo Calendar, an ancient Egyptian papyrus from around 1200 BCE with detailed astronomical observations - important because the Nile flooded, bringing critical crops to Egypt. The Mesopotamians traded with China as well. As the Shang dynasty of the 16th to 11th centuries BCE gave way to the Zhou Dynasty, which ran from the 11th to 3rd centuries BCE, and the Bronze Age gave way to the Iron Age, science was spreading throughout the world. The I Ching is one of the oldest Chinese works showing math, dating back to the Zhou Dynasty, possibly as old as 1000 BCE. This was also when the Hundred Schools of Thought began, which Confucius inherited around the 5th century BCE. Along the way the Chinese gave us the sundial, the abacus, and the crossbow. And again, the Bronze Age signaled trade empires that were spreading ideas and texts from the Near East to Asia to Europe and Africa and back again. For a couple thousand years, the transfer of spices, textiles, and precious metals fueled the Bronze Age empires. Along the way, the Minoan civilization in modern Greece had been slowly rising out of the Cycladic culture. Minoan artifacts have been found in Canaanite palaces, and as the Minoans grew they colonized and traded. They began a decline around 1500 BCE, likely due to a combination of raiders and volcanic eruptions. The crash of the Minoan civilization gave way to the Mycenaean civilization of early Greece. Competition for resources and land in these growing empires helped to trigger wars over those resources. 
Around 1250 BCE, Thebes burned and attacks against city-states increased, sometimes by emerging empires of previously disassociated tribes (as would happen later with the Vikings) and sometimes by other city-states. This triggered the collapse of Mycenaean Greece, the splintering of the Hittites, the fall of Troy, the absorption of the Sumerian culture into Babylon, and attacks that weakened the Egyptian New Kingdom. Weakened and disintegrating empires leave room for new players. The Iranian tribes emerged to form the Median empire in today's Iran. The Assyrians and Scythians rose to power, and the world moved into the Iron Age. And the Greeks fell into the Greek Dark Ages, until they slowly clawed their way out in the 8th century BCE. Around this time, Babylonian astronomers in the capital of Mesopotamia were making astronomical diaries, some of which are now stored in the British Museum. Greek and Mesopotamian societies weren't the only ones flourishing. The Indus Valley Civilization had blossomed from 2500 to 1800 BCE, only to go into a dark age of its own. It boasted 5 million people across 1,500 cities, with some of the larger cities reaching 40,000 people - about the same size as Mesopotamian cities. About two thirds of its sites are in modern-day India and a third in modern Pakistan, a civilization that stretched across 120,000 square kilometers. As the Babylonian control of the Mesopotamian city-states broke up, the Assyrians began their own campaigns and conquered Persia, parts of Ancient Greece, and down to Ethiopia, Israel, and Babylon. As their empire grew, they followed into the Indus Valley, which Mesopotamians had been trading with for centuries. What we think of as modern Pakistan and India is where Medhatithi Gautama founded the anviksiki school of logic in the 6th century BCE. And so the modern sciences of philosophy and logic were born. As mentioned, we'd had math in the Bronze Age. 
The Egyptians couldn't have built pyramids and mapped the stars without it. Hammurabi and Nebuchadnezzar couldn't have built the Mesopotamian cities and walls and laws without it. But something new was coming as the Bronze Age began to give way to the Iron Age. The Indians brought us an early origin of logic, which would morph into an almost Boolean logic as Pāṇini codified Sanskrit grammar, linguistics, and syntax - almost like a nearly 4,000-verse manual on programming languages. Pāṇini even mentions Greeks in his writings, because they apparently had contact going back to the sixth century BCE, when Greek philosophy was about to get started. The Neo-Assyrian empire grew to 1.4 million square kilometers of control, and the Achaemenid empire grew to control nearly 5 million square kilometers. The Phoenicians arose out of the crash of the Late Bronze Age, becoming important traders between the former Mesopotamian city-states and the Egyptians. As their people settled lands and Greek city-states colonized others, one of those colonies produced the Greek philosopher Thales, who documented the use of lodestones going back to 600 BCE, when the Greeks were able to use magnetite, which gets its name from the Magnesia region of Thessaly, Greece. He is known as the first philosopher, and by the time of Socrates had become one of the Seven Sages, which, according to Socrates, included: "Thales of Miletus, and Pittacus of Mytilene, and Bias of Priene, and our own Solon, and Cleobulus of Lindus, and Myson of Chenae, and the seventh of them was said to be Chilon of Sparta." Many of the fifth and sixth century Greek philosophers were actually born in colonies on the western coast of what is now Turkey. Thales's theorem is said to have originated in India or Babylon. But as we see a lot in the times that followed, it is credited to Thales. Given the trading empires they were all a part of, though, these thinkers certainly could have brought such ideas back from previous generations of unnamed thinkers. 
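Earlier in that passage, Pāṇini's grammar was compared to a manual on programming languages. That comparison can be made concrete with a toy rewrite-rule system - purely illustrative, with rules I invented for the sketch, not Pāṇini's actual sutras:

```python
# A toy rewrite-rule engine, in the spirit of Panini's generative
# grammar rules. These example rules are invented for illustration,
# not actual sutras.
rules = [
    ("SENTENCE", "SUBJECT VERB"),
    ("SUBJECT", "the scholar"),
    ("VERB", "writes"),
]

def derive(symbol):
    """Expand a symbol by applying the first matching rule, recursively."""
    for lhs, rhs in rules:
        if lhs == symbol:
            # Non-terminal: rewrite it and expand each part in turn.
            return " ".join(derive(part) for part in rhs.split())
    return symbol  # terminal word: no rule applies

print(derive("SENTENCE"))  # the scholar writes
```

This rule-then-rewrite structure is the same shape modern compilers use to parse source code, which is why linguists and computer scientists alike point back to Pāṇini as an early formal-grammar pioneer.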
I like to think of him as one of the synthesizers that Daniel Pink refers to so often in his book A Whole New Mind. Thales studied in Babylon and Egypt, bringing back thoughts and ideas, and perhaps intermingling them with those coming in from other areas as the Greeks settled colonies in other lands. Given how critical astrology was to the agricultural societies, this meant bringing astronomy, math to help with the architecture of the Pharaohs, new ways to use calendars, likely adopted through the Sumerians, and coinage through trade with the Lydians and then the Persians when they conquered the Lydians, Babylon, and the Medes. So Thales taught Anaximander, who taught Pythagoras of Samos, born a few decades later in 570 BCE. He studied in Egypt as well. Most of us would know the Pythagorean theorem, which he’s credited for, although there is evidence from Egypt that predates him. Whether new to the emerging Greek world or new to the world writ large, his contributions were far beyond that, though. They included a new student-oriented way of life, numerology, the idea that the world is round, applying math to music and applying music to lifestyle, and an entire school of philosophers emerged from his teachings to spread Pythagoreanism. And the generations of philosophers that followed devised both important philosophical contributions and practical applications of new ideas in engineering. The ensuing schools of philosophy that rose out of those early Greeks spread. By 508 BCE, the Greeks gave us Democracy. And oligarchy, defined as a government where a small group of people have control over a country. Many of these words, in fact, come from Greek forms. As does the month of May, names for symbols and theories in much of the math we use, and many a constellation. That tradition began with the sages but grew, being spread by trade, by need, and by religious houses seeking to use engineering as a form of subjugation. 
Philosophy wasn’t exclusive to the Greeks or Indians, or to Assyria and then Persia through conquering the lands and establishing trade. Buddha came out of modern India in the 5th to 4th century BCE, around the same time Confucianism was born from Confucius in China. And Mohism from Mo Di. Again, trade and the spread of ideas. However, there’s no indication that they knew of each other, or that Confucius could have competed with the other 100 schools of thought alive and thriving in China. Nor that Buddhism would begin spreading out of the region for a while. But some cultures were spreading rapidly. The spread of Greek philosophy reached a zenith in Athens. Thales’ pupil Anaximander also taught Anaximenes, the third philosopher of the Milesian school, which is often included with the Ionians. The thing I love about those three, beginning with Thales, is that they were able to evolve the school of thought without rejecting the philosophies before them. Because ultimately they knew they were simply devising theories as yet to be proven. Another Ionian was Anaxagoras, who served in the Persian army after Persia conquered Ionia in 547 BCE. A Greek citizen living in what was then Persia, Anaxagoras moved to Athens in 480 BCE, teaching Archelaus and, either directly or indirectly through him, Socrates. This provides a link, albeit not a direct one, from the philosophy and science of the Phoenicians, Babylonians, and Egyptians through Thales and others, to Socrates. Socrates was born in 470 BCE and mentions several influences, including Anaxagoras. Socrates spawned a level of intellectualism that would go on to have as large an impact on what we now call Western philosophy as anyone in the world ever has. And given that we have no writings from him, we have to take the word of his students to know his works. 
He gave us the Socratic method and his own spin on satire, which ultimately got him executed for effectively being critical of the ruling elite in Athens and for calling democracy into question, corrupting young Athenian students in the process. You see, in his lifetime the Athenians lost the Peloponnesian War to Sparta - and as societies often do when they hit a speed bump, they started to listen to those who call intellectuals or scientists into question. That would be Socrates for questioning democracy, and many an Athenian for using Socrates as a scapegoat. One student of Socrates, Critias, would go on to lead a group called the Thirty Tyrants, who would terrorize Athenians and take over the government for a while. They would establish an oligarchy and appoint their own ruling class. As with many coups against democracy over the millennia, they were ultimately found corrupt and removed from power. But the end of that democratic experiment in Greece was coming. Socrates also taught other great philosophers, including Xenophon, Antisthenes, Aristippus, and Alcibiades. But the greatest of his pupils was Plato. Plato was as much a scientist as a philosopher. He had the works of Pythagoras and studied under the Libyan Theodorus of Cyrene. He codified a theory of Ideas, or Forms. He used as examples the Pythagorean theorem and geometry. He wrote many of the dialogues featuring Socrates and codified ethics, and wrote of a working, protective, and governing class, looking to produce philosopher kings. He wrote about the dialectic, using questions, reasoning, and intuition. He wrote of art and poetry and epistemology. His impact was vast. He would teach mathematics to Eudoxus, who in turn taught Euclid. But one of his greatest contributions to the evolution of philosophy, science, and technology was in teaching Aristotle. Aristotle was born in 384 BCE and founded a school of philosophy called the Lyceum. 
He wrote about rhetoric, music, poetry, and theater - as one would expect given the connection to Socrates - but also expanded far past Plato, getting into physics, biology, and metaphysics. And he had a direct impact on the world at the time with his writings on economics and politics. He inherited a confluence of great achievements: describing motion, defining the five elements, writing about a camera obscura, and researching optics. He wrote about astronomy and geology, observing both theory and fact, such as ways to predict volcanic eruptions. He made observations that would later be proven (or sometimes disproven), as with modern genomics. He began a classification of living things. His work “On the Soul” is one of the earliest looks at psychology. His study of ethics wasn’t as theoretical as Socrates’ but practical, teaching virtue and how that leads to wisdom, to become a greater thinker. He wrote of economics: of taxes, managing cities, and property. And this is where he’s speaking almost directly to one of his most impressive students, Alexander the Great. Philip II of Macedon hired Aristotle to tutor Alexander starting in 343 BCE. Nine years later, when Alexander inherited his throne, he was armed with arguably the best education in the world combined with one of the best trained armies in history. This allowed him to defeat Darius in 334 BCE, the first of 10 years’ worth of campaigns that finally gave him control in 323 BCE. In that time, he conquered Egypt, which had been under Persian rule on and off, and founded Alexandria. And so what the Egyptians had given to Greece had come home. Alexander died in 323 BCE. He followed the path set out by philosophers before him. Like Thales, he visited Babylon and Egypt. But he went a step further and conquered them. This gave the Greeks more ancient texts to learn from, but also more people who could become philosophers and more people with time to think through problems. 
By the time he was done, the Greeks controlled nearly 5 million square kilometers of territory. This would be the largest empire until the Romans. But Alexander never truly ruled. He conquered. Some of his generals and other Greek aristocrats, now referred to as the Diadochi, split up the young, new empire. You see, while teaching Alexander, Aristotle had taught two other future kings: Ptolemy I Soter and Cassander. Cassander would rule Macedonia, and Ptolemy ruled Egypt from Alexandria, where, with other Greek philosophers, he founded the Library of Alexandria. Ptolemy and his son amassed hundreds of thousands of scrolls in the Library from 331 BCE on. The Library was part of a great campus, the Musaeum, where they also supported great minds, starting with Ptolemy I’s patronage of Euclid, the father of geometry, and later including Archimedes, the father of engineering, Hipparchus, the founder of trigonometry, Heron, the father of mechanics, and Herophilus, who codified the scientific method - and countless other great Hellenistic thinkers. Rome had begun in the 6th century BCE. By the third century BCE the Romans were expanding out of the Italian peninsula. This was the end of Greek expansion, and as Rome conquered the Greek colonies, it signified the waning of Greek philosophy. Philosophy helped build Rome as well, both through a period of colonization and then in spreading democracy to the young republic, with the kings, or rex, elected by the senate and, by 509 BCE, the rise of the consuls. After studying at the Library of Alexandria, Archimedes returned home to start his great works, full of ideas having been exposed to so many others. He did rudimentary calculus, proved geometrical theories, approximated pi, explained levers, and founded statics and hydrostatics. And his work extended into the practical. He built machines, pulleys, the infamous Archimedes’ screw pump, and supposedly even a deadly heat ray of mirrors that could burn ships in seconds. 
He was sadly killed by Roman soldiers when Syracuse was taken. But, and this is indicative of how the Romans pulled in Greek know-how, the Roman general Marcus Claudius Marcellus was angry that he lost an asset who could have benefited his war campaigns. In fact, Cicero, who was born in the first century BCE, mentioned that Archimedes built mechanical devices that could show the motions of the planetary bodies. He claimed Thales had designed these and that Marcellus had taken one as his only personal loot from Syracuse and donated it to the Temple of Virtue in Rome. The math, astronomy, and physics that went into building a machine like that were the culmination of hundreds, if not thousands, of years of building knowledge of the cosmos, machinery, mathematics, and philosophy. Machines like that would have been the first known computers. Machines like the first or second century BCE Antikythera mechanism, discovered in 1902 in a shipwreck off Greece. Initially thought to be a one-off, the device is more likely to represent the culmination of generations of great thinkers and doers. Generations that came to look to the Library of Alexandria as almost a Mecca. Until they didn’t. The splintering of the lands Alexander conquered, the cost of the campaigns, the attacks from other empires, and the rise of the Roman Empire ended the age of Greek enlightenment. As is often the case when there is political turmoil and those seeking power hate being challenged by intellectuals - as had happened with Socrates and the philosophers in Athens - Ptolemy VIII sent the Library of Alexandria into a slow decline that began with the expulsion of intellectuals from Alexandria in 145 BCE. The decline continued until the library burned, first in a small fire accidentally set by Caesar in 48 BCE and then for good in the 270s. But before the great library was gone for good, it would produce even more great engineers. Heron of Alexandria is one of the greatest. 
He created vending machines that would dispense holy water when you dropped a coin in them. He made small mechanical archers, models of dancers, and even a statue of a horse that could supposedly drink water. He gave us early steam engines two thousand years before the industrial revolution and ran experiments in optics. He gave us Heron’s formula and an entire book on mechanics, codifying the known works on automation at the time. In fact, he designed a programmable cart using strings wrapped around an axle, powered by falling weights. Claudius Ptolemy came to the empire from its holdings in Egypt, living in the second century. He wrote about harmonics, math, and astronomy, computed the distance of the sun to the earth, and also computed positions of the planets and eclipses, summarizing them into simpler tables. He revolutionized map making and the study of the properties of light. By then, the Romans had emerged as the first true world power, and with them the Classical Age reached its height. To research this section, I read and took copious notes from the following, and apologize that each passage is not credited specifically - it would just look like a regular expression if I tried: The Evolution of Technology by George Basalla. 
Civilizations by Felipe Fernández-Armesto, A Short History of Technology: From the Earliest Times to A.D. 1900 by T.K. Derry and Trevor I. Williams, Communication in History: Technology, Culture, Society by David Crowley and Paul Heyer, Leonardo da Vinci by Walter Isaacson, Timelines in Science by the Smithsonian, Wheels, Clocks, and Rockets: A History of Technology by Donald Cardwell, a few PhD dissertations and post-doctoral studies from journals, and then I got to the point where I wanted the information from as close to the sources as I could get, so I went through Dialogues Concerning Two New Sciences by Galileo Galilei, Meditations by Marcus Aurelius, Pneumatics by Philo of Byzantium, The Laws of Thought by George Boole, Natural History by Pliny the Elder, Cassius Dio’s Roman History, Annals by Tacitus, Orations by Cicero, Ethics, Rhetoric, Metaphysics, and Politics by Aristotle, and Plato’s Symposium and The Trial & Execution of Socrates.
11/21/2020 • 31 minutes, 24 seconds
From Antiquity to Bitcoin: A Brief History of Currency, Banking, and Finance
Today we’re going to have a foundational episode, laying the framework for further episodes on digital piracy, venture capital, accelerators, Bitcoin, PayPal, Square, and others. I’ll try to keep from dense macro- and microeconomics and instead just lay out some important times from antiquity to the modern financial system, so we don’t have to repeat all of this in those episodes. I apologize to professionals in these fields whose life work I am about to butcher in oversimplification. Like a lot of nerds who found themselves sitting behind a keyboard writing code, I read a lot of science fiction growing up. The dystopian and utopian outlooks on what the future holds for humanity give us a peek into what progress is. Dystopian interpretations tell of what amount to warlords and a fragmentation of humanity back to what things were like thousands of years ago. The utopian interpretations often revolve around questions about how society will react to social justice, or a market in equilibrium. The dystopian science fiction represents the past of economics and currency. And the move to online finances and digital currency tracks against what science fiction told us was coming in a future, more utopian world. My own mental model of economics began with classes on micro- and macroeconomics in college but evolved when I was living in Verona, Italy. We visited several places built by a family called the Medici. I’d had bank accounts up until then, but that’s the first time I realized how powerful banking and finance as an institution was. Tombs, villas, palaces. The Medici built lasting edifices to the power of their clan. They didn’t invent money, but they made enough to be on par with the richest modern families. It’s easy to imagine humans from the times of hunter-gatherers trading an arrowhead for a chunk of meat. As humanity moved to agriculture and farming, we began to use grain and cattle as currency. By 8000 BC, people began using tokens for trade in the Middle East. 
And metal objects came to be traded as money around 5,000 BC. And around 3,000 BC we started to document trade. Where there’s money and trade, there will be abuse. By 1,700 BC, early Mesopotamians even issued early regulations for the banking industry in the Code of Hammurabi. By then private institutions were springing up to handle credit, deposits, interest, and loans. Some of this was handled on clay tablets. And that term private is important. These banking institutions were private endeavors. As the Egyptian empire rose, farmers could store grain in warehouses and then, during the Ptolemaic era, began to trade the receipts of those deposits. We can still think of these as tokens and barter items, though. Banking had begun around 2000 BC in Assyria and Sumeria, but these were private institutions effectively setting their own splintered and sometimes international markets. Gold was being used, but it had to be measured and weighed each time a transaction was made. Until the Lydian stater. Lydia was an empire that began in 1200 BC and was conquered by the Persians around 546 BC. It covered western Anatolia in modern Turkey, including Salihli and Manisa, before the Persians took it. One of its most important contributions to the modern world was the first state-sponsored coinage, around 700 BC. The coins were electrum, which is a mix of gold and silver. And here’s the most important part. The standard weight was guaranteed by an official stamp. The Lydian king Croesus then added the concept of bimetallic coinage: one coin made of gold and the other of silver, each a different denomination, with a dozen of the lower denomination worth one of the higher. They then figured out a way to keep counterfeit coins off the market with a Lydian stone, a touchstone on which the mark made by a coin could be compared against marks made by true gold coins. And thus modern coinage was born. And the Lydian merchants became the merchants that helped move goods between Greece and Asia, spreading the concept of the coin. 
Cyrus II defeated the Lydians, and Darius the Great would issue the gold daric, with a warrior king wielding a bow. And so heads of state came to adorn coins. As with most things in antiquity, there are claims that China or India introduced coins first. Bronzed shells have been discovered in the ruins of Yin, the old capital of the Shang dynasty, dating back hundreds of years before the Lydians. But if we go there, this episode will be 8 hours long. Exodus 22:25-27: “If you lend money to my people—to any poor person among you—never act like a moneylender. Charge no interest.” Let’s put that Bible verse in context. So we have coins and banks. And international trade. It’s mostly based on the weight of the coins. Commerce rises, and over the centuries banks got so big they couldn’t be allowed to fail without crashing the economy of an empire. Julius Caesar expands the empire of Rome and gold flows in from conquered lands. One thing that seems constant through history is that interest rates from legitimate lenders tend to range from 3 to 14 percent. Anything less and you are losing money. Anything more and you’ve penalized the borrower to the point they can’t repay the loan. The more scarce capital is, the more you have to charge. Like the US in the 80s. So old Julius meets an untimely fate, there are wars, and Augustus manages to solidify the empire. Augustus reformed taxes and introduced a lot of new services to the state: building roads, establishing a standing army and the Praetorian Guard, creating official firefighting and police forces, and laying down much of the Roman road system the empire is now so well known for. It was a reign of over 40 years, and one of the greatest in history. But greatness is expensive. Tiberius had to bail out banks and companies in the year 33. Moneylending sucks when too many people can’t pay you back. Augustus had solidified the Roman Empire, and by the time Tiberius came around, Rome was a rich import destination. 
Money was being lent abroad, and so there was less and less gold in the city. Interest rates had plummeted to 4 percent. Again, we’re in a time when money is based on the weight of a coin, and there simply weren’t enough coins in circulation due to the reach of the empire. And so, for all my Libertarian friends - empires learned the hard way that business and commerce are essential services and must be regulated. If money cannot be borrowed, then crime explodes. People cannot be left to starve. Especially when we don’t all live on land that can produce food any more. Any time the common people are left behind, there is a revolt. The greater the disparity, the greater the revolt. The early Christians were heavily impacted by the moneylending practices in that era between Julius Caesar and Tiberius, and the Bible, read as an economic textbook, is littered with references to usury, showing the blame placed on emerging financial markets for the plight of the commoner. Progress often involves two steps forward and one back, to let all of the people in a culture reap the rewards of innovations. The Roman Empire continued on gloriously for a long, long time. Over time, Rome fell. Other empires came and went. As they did, they minted coins to prove how important the ruling faction was. It’s easy to imagine a farmer in the dark ages following the collapse of the Roman Empire dying and leaving half of the farm to each of two children. Effectively each owns one share. That stock can then be used as debt, and during the rise of the French empire, 12th century courretiers de change found they could regulate debts as brokers. The practice grew. Bankers work with money all day. They get crafty and think of new ways to generate income. The Venetians were trading government securities, and in 1351 outlawed spreading rumors to lower the prices of those securities - and thus laws against market manipulation were born. 
By 1409, Flemish traders began to broker the trading of debts in Bruges at an actual market. Italian companies began issuing shares, and joint stock companies were born, allowing for the colonization of the Americas as extensions of European powers. That colonization increased the gold supply in Europe fivefold, resulting in the first great gold rush. European markets, flush with cash and speculation and investments, grew, and by 1611 in Amsterdam the stock market was born. The Dutch East India Company sold shares to the public and brought us options, bonds, and derivatives. Dutch perpetual bonds were introduced, and one issued in 1629 is still paying interest. So we got the bond market for raising capital. Over the centuries leading to the industrial revolution, banking, finance, and markets became the means with which capitalism and private property replaced totalitarian regimes, the power of monarchs, and the centralized control of production. As the markets rose, modern economics was born, with Adam Smith codifying much of the known work at that point, including that of the French physiocrats. The gold standard began around 1696 and gained in popularity. The concept was to allow paper money to be freely convertible into a pre-defined amount of gold. Therefore, paper money could replace gold and still be backed by gold, just as it was in antiquity. By 1789 we were running a bit low on gold, so we introduced the bimetallic standard, where silver was worth one fifteenth of gold and a predefined market ratio was set. Great thinking in economics goes back to antiquity, but since the time of Tiberius, rulers had imposed regulation. This came in the form of taxes to pay for public goods, bailouts for businesses that had to be rescued, and tariffs to control the movement of goods in and out of a country. 
To put it simply, if too much gold left the country, interest rates would shoot up, inflation would devalue the ability to buy goods, and as people specialized in industries, those who didn’t produce food, like the blacksmiths or cobblers, wouldn’t be able to buy food. And when people can’t buy food, bad things happen. Adam Smith believed in self-regulation though, which he codified in his seminal work Wealth of Nations, in 1776. He believed that what he called the “invisible hand” of the market would create economic stability, which would lead to prosperity for everyone. And that became the framework for modern capitalistic endeavors for centuries to come. But not everyone agreed. Economics was growing and there were other great thinkers as well. Again, things fall apart when people can’t get access to food, and so Thomas Malthus responded with a theory that the rapidly growing populations of the world would outgrow the ability to feed all those humans. Where Smith had focused on the demand for goods, Malthus focused on scarcity of supply. This led another economist, Karl Marx, to see the means of production as key to providing Maslow’s hierarchy of needs. He saw capitalism as unstable and believed the creation of an owner (or stock trader) class and a working class was contrary to finding balance in society. He accurately predicted the growing power of business and how that power would control and so hurt the worker to the benefit of the business. We got marginalism, general equilibrium theory, and over time we could actually test theories, and the concepts that began with Smith became a science, economics, with that branch known as neoclassical. Lots of other fun things happen in the world. Bankers begin instigating innovation and progress. Booms or bull markets come, markets over-index and/or supplies become scarce, and recessions or bear markets ensue. Such is the cycle. 
To ease the burdens of an increasingly complicated financial world, England officially adopted the gold standard in 1821, which led to the emergence of the international gold standard, adopted by Germany in 1871 and, by 1900, most of the world. Gaining in power and influence, the nations of the world stockpiled gold up until World War I in 1914. The international political upheaval led to a loss of faith in the gold standard, and the global gold supply began to fall behind the growth in the global economy. JP Morgan dominated Wall Street in what we now call the Gilded Age. He made money by reorganizing and consolidating railroad businesses throughout America. He wasn’t just the banker; he was the one helping businesses become more efficient, digging into how they worked and reorganizing and merging corporate structures. He then financed Edison’s research and instigated the creation of General Electric. He lost money investing in a Tesla project when Tesla wanted to go wireless. He bought Carnegie Steel in 1901, the first modern buyout, which gave us US Steel. The industrialists from the turn of the century increased productivity at a rate humanity had never seen. We had the biggest boom market humanity had ever seen, and then, when the productivity gains slowed and the profits and earnings masked the slowdown in output, a bubble of sorts formed and the market crashed in 1929. These markets are about returns on investments. Those require productivity gains, as they are usually based on margin, or the ability to sell more goods without increasing the cost - thus the need for productivity gains. That crash in 1929 sent panic through Wall Street and wiped out investors around the world. Consumer confidence, and so spending and investment, was destroyed. With a sharp reduction needed in supply, industrial output faltered and workers were laid off, creating a vicious cycle. The crash also signaled the end of the gold standard. 
The pound and franc were mismanaged, the new power Germany was having trouble repaying war debts, commodity prices collapsed, and, thinking a reserve of gold would keep them legitimate, countries raised interest rates, further damaging the global economy. High interest rates reduce investment. England finally suspended the gold standard in 1931, which sparked other countries to do the same, with the US raising the number of dollars per ounce of gold from $20 to $35 and so obtaining enough gold to back the US dollar as the de facto standard. Meanwhile, science was laying the framework for the next huge boom - which would be greater in magnitude, margins, and profits. Enter John Maynard Keynes, Keynesian economics, and the rise of macroeconomics. In a departure from neoclassical economics, he believed that the world economy had grown to the point that aggregate supply and demand would not find equilibrium without government intervention. In short, the invisible hand would need to be a visible hand, by way of the government. By then, the Bolsheviks had established the Soviet Union and Mao had helped found the communist party in China. The idea that there had been a purely capitalist society since the time the Egyptian government built grain silos, or since Tiberius had rescued the Roman economy with bailouts, was a fallacy. The US and other governments began spending, incurring debt to do so, and we began to dig the world out of a depression. But it took another world war to get there. And that war did more than just end the Great Depression. World War II was one of the greatest rebalancings of power the world has known - arguably even greater than the fall of the Roman and Persian empires and the shifts between Chinese dynasties. In short, we implemented a global world order of sorts in order to keep another war like that from happening. Globalism works for some and doesn’t work well for others. 
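The scale of that move from $20 to $35 per ounce is easy to work out: if a dollar is redeemable for a fixed amount of gold, raising the dollar price of gold means each dollar buys less of it. A quick sketch of the arithmetic:

```python
# Revaluation described in the text: gold moves from $20 to $35 per ounce.
# A dollar redeemed 1/20 oz of gold before and 1/35 oz after.
old_price, new_price = 20.0, 35.0
devaluation = 1 - (1 / new_price) / (1 / old_price)  # fraction of gold value lost per dollar
print(f"{devaluation:.0%}")  # → 43%
```

So the dollar lost roughly 43% of its gold value, which is what let the Treasury accumulate enough gold behind the remaining dollars.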
It’s easy to look at the global institutions built in that time as problematic. And organizations like the UN and the World Bank should evolve so they do more to lift all people up, so not as many around the world feel left behind. The systems of governance changed world economics. The Bretton Woods Agreement would set the framework for global currency markets until 1971. Here, all currencies were valued in relation to the US dollar, which, after that great rebalancing, now sat on 75% of the world’s gold. The gold was still backed at a rate of $35 per ounce. And the Keynesian International Monetary Fund would begin managing the balance of payments between nations. Today there are 190 countries in the IMF. Just as implementing the gold standard set the framework that allowed the investments that sparked capitalists like JP Morgan, an indirect financial system backed by gold through the dollar allowed for the next wave of investment, innovation, and so productivity gains. This influx of money and investment meant there was capital to put to work, and so bankers and financiers working with money all day derived new and witty instruments with which to do so. After World War II, we got the rise of venture capital. These are a number of financial instruments that have evolved so qualified investors can effectively make bets on a product or idea. Derivatives of venture include incubators and accelerators. The best example of an early venture capital deal would be when Ken Olsen and Harlan Anderson raised $70,000 in 1957 to usher in the age of transistorized computing. DEC rose to become the second largest computing company - helping revolutionize knowledge work and introduce a new wave of productivity gains and innovation. They went public in 1968 and the investor made over 500 times the investment, receiving $38 million in stock. More importantly, he stayed friends and a confidant of Olsen and invested in over 150 other companies. 
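Running the DEC numbers from the text (the dates and dollar amounts are the ones given above) shows why that deal became the template for venture capital:

```python
# The DEC deal as described: $70,000 invested in 1957, $38 million in stock at the 1968 IPO.
invested = 70_000
value_at_ipo = 38_000_000
years = 1968 - 1957

multiple = value_at_ipo / invested        # the "over 500 times" return
annualized = multiple ** (1 / years) - 1  # compound annual growth rate

print(f"{multiple:.0f}x over {years} years, about {annualized:.0%} per year")
```

A return compounding at roughly 77% a year for over a decade is the kind of outlier that makes a whole portfolio of failed bets worthwhile, which is the core logic of venture investing.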
The ensuing neoclassical synthesis of economics basically informs us that free markets are mostly good and efficient, but if left to just Smith’s invisible hand, from time to time they will threaten society as a whole. Rather than falling into dark ages, we can continue to evolve by keeping markets moving and so keeping large scale revolts at bay. As Asimov effectively pointed out in Foundation, this preserves human knowledge. And it strengthens economies, as we can apply math, statistics, and the rising computers to help apply monetary rather than fiscal policy, as Friedman would say, to keep the economy in equilibrium. Periods of innovation like we saw in the computer industry in the post-war era always seem to leave the people the innovation displaces behind. When enough people are displaced, we return to tribalism, nationalism, thoughts of fragmentation, and moves back in the direction of dystopian futures. Acknowledging people are left behind and finding remedies is better than revolt and retreating from progress - and showing love to your fellow human is just the right thing to do. Not doing so creates recessions, like the ups and downs of the market in the years when gaps between innovative periods formed. The stock market went digital in 1966, allowing more and more trades to be processed every day. Instinet was founded in 1969, allowing brokers to make after-hours trades. NASDAQ went online in 1971, removing the trading floor that had been around since the 1600s. And as money poured in, ironically, gold reserves started to go down a little. Just as the Romans under Tiberius saw money leave the country as investment, US gold was moving to other central banks to help rebuild countries, mostly those allied with NATO. But countries continued to release bank notes to pay to rebuild, creating a period of hyperinflation. 
As with other times when gold became scarce, interest rates became unpredictable, moving from 3 to 17 percent and back again until they began to steadily decline in 1980. Gold would be removed from the London market in 1968 and other countries began to cash out their US dollars for gold. Belgium, the Netherlands, then Britain cashed in their dollars for gold, and much as had happened under the reign of Tiberius, there wasn’t enough to sustain the financial empires created. This was the turning point for the end of the informal links back to the gold standard. By 1971 Nixon was forced to sever the relationship between the dollar and gold, and the US dollar, by then the global standard going back to the Bretton Woods Agreement, became what’s known as fiat money. The Bretton Woods agreement was officially over and the new world order was morphing into something else. Something less easily explainable to common people. A system where the value of currency was based not on the link to gold but on the perception of a country, as stocks were about to move from an era of performance and productivity to something more speculative. Throughout the 80s more and more orders were processed electronically, and by 1996 we were processing online orders. The 2000s saw algorithmic and high frequency trading. By 2001 we could trade in pennies, and the rise of machine learning created billionaire hedge fund managers. Although earlier versions were probably more about speed. Like: if reported EPS is greater than expected EPS, and guidance EPS is greater than reported EPS, then buy real fast, analyze the curve, and sell when it tops out. Good for them for making all the moneys, but while each company is required to be transparent about its financials, high frequency trading has gone from rewarding companies with high earnings to seeming more like a social science, where the rising and falling is based on confidence about an industry and the management team. 
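That kind of early earnings-surprise rule can be sketched in a few lines. This is a toy illustration of the logic described above, not any real trading system - all names, thresholds, and prices here are invented:

```python
# A toy version of the naive earnings-surprise rule: buy when reported EPS
# beats the analyst estimate AND forward guidance beats the reported number,
# then sell once the price curve tops out.

def should_buy(reported_eps, expected_eps, guidance_eps):
    """Buy signal: reported beats expectations and guidance beats reported."""
    return reported_eps > expected_eps and guidance_eps > reported_eps

def sell_at_top(prices):
    """Return the index of the first local peak in a price series, or None."""
    for i in range(1, len(prices) - 1):
        if prices[i] >= prices[i - 1] and prices[i] > prices[i + 1]:
            return i
    return None

# Example: reported EPS of $1.10 vs. an expected $1.00, guidance of $1.20
signal = should_buy(1.10, 1.00, 1.20)          # buy signal fires
peak = sell_at_top([100, 104, 109, 107, 105])  # sell at index 2 (price 109)
```

The point, of course, is that there is no fundamental analysis here at all - just a mechanical reaction to numbers, executed faster than a human could.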
It became harder and harder to explain how financial markets work. Again, bankers work with money all day and come up with all sorts of financial instruments to invest in with their time. The quantity and types of these became harder to explain. Junk bonds, penny stocks, and, to an outsider, strange derivatives. And so moving to digital trading is only one of the ways the global economy no longer makes sense to many. Gold and other precious metals can’t be produced at a rate faster than humans are produced. And so they had to give way to other forms of money and currency, which diluted the relationship between people and a finite, easy to understand market of goods. As we moved to a digital world there were thinkers who saw the future of currency as flowing electronically. The Russian cyberneticist Kitov theorized electronic payments, then came ATMs back in the 50s, and the rise of digital devices paved the way for these ideas to finally manifest over the ensuing decades. Credit cards made the credit market more micro-transactional, creating industries where shopkeepers had once kept debts in a more distributed ledger. As the links between financial systems increased and innovators saw the rise of the Internet on the way, more and more devices got linked up. This, combined with the libertarianism shown by many in the next wave of Internet pioneers, led people to think up new digital currencies. David Chaum thought up ecash in 1983, using encrypted keys, much as PGP did for messages, to establish a digital currency. In 1998, Nick Szabo came up with the idea for what he called bit gold, a digital currency based on cryptographic puzzles, where solved puzzles would be sent to a public registry and assigned to the solver’s public key. This was kinda’ like using a mark on a Lydian rock to make sure coins were gold. 
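The “cryptographic puzzle” at the heart of this idea is easy to demonstrate. Here’s a minimal hashcash-style sketch: find a nonce so that the hash of the challenge plus the nonce falls below a target. Solving takes brute-force work; verifying takes one hash. This illustrates the general technique only - Szabo never implemented bit gold, and its actual design differed:

```python
import hashlib

def solve_puzzle(challenge, difficulty_bits=12):
    """Brute-force a nonce whose SHA-256 hash has difficulty_bits leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

def verify(challenge, nonce, difficulty_bits=12):
    """Check a claimed solution with a single hash."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return int(digest, 16) < 2 ** (256 - difficulty_bits)

nonce = solve_puzzle("bit-gold-demo")
assert verify("bit-gold-demo", nonce)
```

Raising `difficulty_bits` doubles the expected work per bit, which is how such schemes tie the token to a provable expenditure of computation - the digital analog of the effort it takes to mine physical gold.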
He didn’t implement the system but had the initial concept that it would work similar to the gold standard - just without a central authority, like the World Bank. This was all happening concurrently with the rise of ubiquitous computing, the move away from checks to debit and credit cards, and the continued mirage that clouded what was really happening in the global financial system. There was a rise in online e-commerce, with various sites emerging to buy products in a given industry online. Speculation increased, creating a bubble around Internet companies. That dot com bubble burst in 2001 and markets briefly retreated from the tech sector. Another bull market was born around the rise of Google, Netflix, and others. Productivity gains were up and a lot of money was being put to work in the market, creating another bubble. Markets are cyclical and need to be reined back in from time to time. That’s not to minimize the potentially devastating impacts to real humans. The Global Financial Crisis of 2008 came along for a number of reasons, mostly tied, to oversimplify the matter, to the bursting of a housing bubble. The lack of liquidity with banks caused a crash, and the lack of regulation caused many to think through the nature of currency and money in an increasingly globalized and digital world. After all, if the governments of the world couldn’t protect the citizenry of the world from seemingly unscrupulous markets, then why not have completely deregulated markets where the invisible hand does so? Which brings us to the rise of cryptocurrencies. Who is John Galt? Bitcoin was invented by Satoshi Nakamoto, who created the first blockchain database and brought the world into peer-to-peer currency in 2009 when Bitcoin 0.1 was released. Satoshi mined block 0 of bitcoin for 50 bitcoins. Over the next year Satoshi mined an estimated million bitcoins. Back then a bitcoin was worth less than a penny. 
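The “blockchain database” part is worth a quick sketch too. Each block commits to its predecessor by including the predecessor’s hash, which is why you can’t quietly rewrite history. This toy chain leaves out everything real Bitcoin blocks carry (Merkle roots, difficulty targets, timestamps) and just shows the chaining:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, data):
    return {"prev": prev_hash, "data": data}

# A three-block toy chain, starting from a genesis block.
genesis = make_block("0" * 64, "block 0: 50 BTC coinbase")
chain = [genesis]
chain.append(make_block(block_hash(chain[-1]), "block 1"))
chain.append(make_block(block_hash(chain[-1]), "block 2"))

def valid(chain):
    """Every block's prev field must match the hash of the block before it."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

assert valid(chain)
chain[1]["data"] = "tampered"  # edit history...
assert not valid(chain)        # ...and every later link breaks
```

Combine this chaining with the proof-of-work puzzle and you have the core of why tampering with an old block means redoing all the work after it.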
As bitcoin grew and more of the fixed supply was mined into the blockchain, scarcity increased and the value skyrocketed, making those early holdings worth over $15 billion as of this writing. Who is Satoshi Nakamoto? No one knows - the name is a pseudonym. Other cryptocurrencies have risen, such as Ethereum. And the market has largely been allowed to evolve on its own, with regulators and traditional financiers seeing it as a fad. Is it? Only time will tell. There are an estimated 200,000 tonnes of gold in the world, worth about $9.3 trillion, if so much of it weren’t stuck in necklaces and teeth buried in the ground. The US sits on the largest stockpile of it today, at 8,000 tonnes worth about a third of a trillion dollars, then Germany, Italy, and France. By contrast there are 18,000,000 bitcoins with a value of about $270 billion, a little less than the US supply of gold. And the global stock market is valued at over $85 trillion. The global financial markets are vast. They include the currencies of the world and the money markets that trade those. Commodity markets, real estate, the international bond and equity markets, and derivative markets, which include contracts, options, and credit swaps. This becomes difficult to conceptualize because, as one small example in the world financial markets, over $190 billion is traded on stock markets a day. Seemingly, rather than running on gold reserves, markets are increasingly driven by how well they put debt to work. National debts are an example of that. The US National Debt currently stands at over $27 trillion. Much is held by our people as bonds, although some countries hold some as security as well, including governments like Japan and China, who hold about the same amount of debt if you include Hong Kong with China. But what does any of that mean? The US GDP sits at about $22.3 trillion. So we owe a little more than we make in a year. 
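A quick back-of-the-envelope check helps keep these magnitudes straight. Assuming a gold price of roughly $1,450 per troy ounce (an illustrative figure, not a live quote):

```python
# Rough sanity checks on the magnitudes above. The gold price is an
# assumption for illustration; actual prices move daily.
TROY_OZ_PER_TONNE = 32_150.7
PRICE_PER_OZ = 1_450  # assumed USD per troy ounce

world_gold_value = 200_000 * TROY_OZ_PER_TONNE * PRICE_PER_OZ
us_gold_value = 8_000 * TROY_OZ_PER_TONNE * PRICE_PER_OZ

print(f"World gold: ${world_gold_value / 1e12:.1f} trillion")  # ~$9.3 trillion
print(f"US gold:    ${us_gold_value / 1e12:.2f} trillion")     # ~$0.37 trillion

# Implied bitcoin price for a ~$270B market cap on ~18M coins:
print(f"Per coin:   ${270e9 / 18e6:,.0f}")                     # ~$15,000
```

So a third of a trillion for the US gold stockpile and roughly $15,000 per bitcoin both check out against the figures in the episode.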
Much as many families with mortgages, credit cards, etc might owe about as much as they make. And roughly 10% of our taxes go to pay interest, just as we pay interest on mortgages. Most of this is transparent. As an example, government debt is often held in the form of a treasury bond. The treasury.gov website lists who holds what bonds: https://ticdata.treasury.gov/Publish/mfh.txt. Nearly every market discussed here can be traced to a per-transaction basis, with many transactions being a matter of public record. And yet there is a common misconception that the market is controlled by a small number of people. Like a cabal. But as with most perceived conspiracies, the global financial markets are much more complex. There are thousands of actors who think they are acting rationally but are simply speculating. And there are a few who are committing a crime by inorganically manipulating markets, as has been illegal since the Venetians passed their first laws on the matter. Most day traders will eventually lose all of their money. Most market manipulators will eventually go to jail. But there’s a lot of grey in between. And that can’t entirely be planned for. At the beginning of this episode I mentioned it was a prelude to a deeper dive into digital piracy, venture capital, Bitcoin, PayPal, Square, and others. Piracy, because it potentially represents the greatest redistribution of wealth since the beginning of time. Baidu and Alibaba have made their way onto public exchanges. Ant Group has the potential to be the largest IPO in history. Huawei is supposedly owned by employees. You can also buy stocks in Russian banking, oil, natural gas, and telecom. Does this mean that the split created when the ideas of Marx became a political movement that resulted in communist regimes is over? No. These have the potential of creating a bubble. One that will then need correcting, maybe even based on intellectual property damage claims. 
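Since that Treasury file is plain text, pulling per-country holdings out of it is a small scripting exercise. The sample below is illustrative only - the real mfh.txt has its own layout, columns, and country list, so a parser for it would need to match the actual format:

```python
import re

# A hypothetical sample in a simple "country, two-plus spaces, number" layout.
# Figures are for illustration, not live Treasury data.
sample = """\
Japan            1276.0
China, Mainland  1062.0
United Kingdom    425.3
"""

def parse_holdings(text):
    """Map country name -> holdings for lines shaped like the sample above."""
    holdings = {}
    for line in text.splitlines():
        m = re.match(r"^(.+?)\s{2,}([\d.]+)$", line)
        if m:
            holdings[m.group(1).strip()] = float(m.group(2))
    return holdings

data = parse_holdings(sample)
print(data["Japan"])  # 1276.0 (billions of dollars, in this sample)
```

The larger point stands either way: the data on who holds US debt is public and machine-readable, which is about as far from a shadowy cabal as record-keeping gets.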
The seemingly capitalistic forays made by socialist or communist countries just go to show that there really isn’t, and has never been, a purely capitalist, socialist, or communist market. Instead, they’re spectrums separated by a couple of percentages of tax here and there to pay for various services or goods to the people that each nation holds as important enough to be universal, to whatever degree that tax can provide the service or good. So next time you hear “you don’t want to be a socialist country, do you?” keep in mind that every empire in history has simply been somewhere in a range from a free market to a state-run market. The Egyptians provided silos, the Lydians coined gold, the Romans built roads and bailed out banks, nations adopted gold as currency, then built elaborate frameworks to gain market equilibrium. Along the way markets have been abused, and then regulated, and then deregulated. The rhetoric used today, though, is really a misdirection play handed down by people with ulterior motives. You know, like back in the Venetian times. I immediately think of dystopian futures when I feel I’m being manipulated. That’s what charlatans do. That’s not quite so necessary in a utopian outlook.
11/8/2020 • 39 minutes, 20 seconds
How Not To Network A Nation: The Russian Internet That Wasn't
I just finished reading a book by Ben Peters called How Not To Network A Nation: The Uneasy History of the Soviet Internet. The book is an amazing deep dive into the Soviet attempts to build a national information network, primarily in the 60s. The book covers a lot of ground and has a lot of characters, although the most recurring is Viktor Glushkov - and if the protagonist isn’t the Russian scientific establishment, perhaps it is Glushkov himself. And if there’s a primary theme, it’s looking at why the Soviets were unable to build a data network that covered the Soviet Union, allowing the country to leverage computing at a micro and a macro scale. The final chapter of the book is one of the best summaries and most insightful I’ve ever read on the history of computers. While he doesn’t directly connect the command and control heterarchy of the former Soviet Union to how many modern companies are run, he does identify a number of ways that the Russian scientists were almost more democratic, or at least in their zeal for a technocratic economy, than the US Military-Industrial-University complex of the 60s. The sources and bibliography are simply amazing. I wish I had time to read and listen and digest all of the information that went into the making of this amazing book. And the way he cites notes that build to conclusions. Just wow. In a previous episode, we covered the memo, “Memorandum for Members and Affiliates of the Intergalactic Computer Network” - sent by JCR Licklider in 1963. This was where the US Advanced Research Projects Agency instigated a nationwide network for research. That network, called ARPAnet, would go online in 1969, and the findings would evolve and change hands when privatized into what we now call the Internet. We also covered the emergence of Cybernetics, which Norbert Wiener defined in 1948 as the systems-based science of communication and automatic control systems - and we covered the other individuals influential in its development. 
It’s easy to draw a straight line between that line of thinking and the evolution that led to the ARPAnet. In his book, Peters shows how Glushkov discovered cybernetics and came to the same conclusion that Licklider had: that the USSR needed a network that would link the nation. He was a communist, and so the network would help automate the command economy of the growing Russian empire - an empire that would need more people managing it than there were people in Russia if the bureaucracy continued to grow at the pace required to do the manual computing to get resources to factories and goods to people. He had this epiphany after reading Wiener’s book on cybernetics - which had been hidden away from the Russian people as American propaganda. Glushkov’s contemporary, Anatoly Kitov, had come to the same realization back in 1959. By 1958 the US had developed the Semi-Automatic Ground Environment, or SAGE. The last of that equipment went offline in 1984. The environment was a system of networked radar equipment that could be used as eyes in the sky to detect a Soviet attack. A few years ago that seemed like a relic, but think today about a system capable of detecting influence in elections, and it seems less so. SAGE linked computers built by IBM. The Russians saw such a defense as cost prohibitive. Yet at Stalin’s orders they began to develop a network of radar sites of sorts around Moscow in the early 50s, extending to Leningrad. They developed the BESM-1 mainframe from 1952 to 1953, and while Stalin was against computing and western cybernetic doctrine outside of the military, as in America, they were certainly linking sites to launch missiles. Lev Korolyov worked on BESM and then led the team to build the ballistic missile defense system. So it should come as no surprise that after a few years Soviet scientists like Glushkov and Kitov would look to apply military computing know-how to fields like running the economics of the country. 
Kitov had seen technology patterns before they came. He studied nuclear physics before World War II, then rocketry after the war, and then went to the Ministry of Defence at Bureau No 245 to study computing. This is where he came in contact with Wiener’s book on cybernetics in 1951, which had been banned in Russia at the time. Kitov would work on ballistic missiles and his reputation in the computing field would grow over the years. Kitov would end up with hundreds of computing engineers under his leadership, rising to the rank of Colonel in the military. By 1954 Kitov was tasked with creating the first computing center for the Ministry of Defence, which would take on the computing tasks for the military. He would oversee the development of the M-100 computer and the transition into transistorized computers. By 1956 he would write a book called “Electronic Digital Computers” and over time his views on computers grew to include solving problems that went far beyond science and the military. Kitov came up with the Economic Automated Management System in 1959. This was denied because the military didn’t want to share their technology. Khrushchev sent Brezhnev, who was running the space program and an expert in all things tech, to meet with Kitov. Kitov was suggesting they use this powerful network of computer centers to run the economy when the Soviets were at peace and the military when they were at war. Kitov would ultimately realize that the communist party did not want to automate the economy. His “Red Book” project would fizzle over the years into one of reporting rather than command and control. The easy answer as to why would be that Stalin had considered computers the tool of imperialists, and that feeling continued with some in the communist party. The issues are much deeper than that though, and go to the heart of communism. 
You see, while we want to think that communism is about the good of all, it is irrational to think that people will not act in their own self-interest. That’s true in microeconomics and macroeconomics alike. And automating command certainly seems to reduce the power of those in power who see that command taken over by a machine. And so Kitov was expelled from the communist party and could no longer hold a command. Glushkov then came along recommending the National Automated System for Computation and Information Processing, or OGAS for short, in 1962. He had worked on computers in Kyiv and then moved to become the Director of the Computer Center in Ukraine at the Academy of Science. Being even more bullish on the rise of computing, Glushkov went further still and added an electronic payment system on top of controlling a centrally planned economy. Computers were on the rise in various computer centers and other locations and it just made sense to connect them. And they did, at small scales. As was done at MIT, Glushkov built a walled garden of researchers in his own secluded nerd-heaven. He too made a grand proposal. He too saw the command economy of the USSR as one that could be automated with a computer, much as many companies around the world would employ ERP solutions in the coming decades. The Glushkov proposal continued all the way to the top. They were able to show substantial return on investment, yet the proposal to build OGAS was ultimately shot down in 1970 after years of development. While the Soviets were attempting to react to the development of the ARPAnet, they couldn’t get past infighting. The finance minister opposed it and flatly refused. There were concerns about which ministry the system would belong to, and basically political infighting much as I’ve seen at many of the top companies in the world (and increasingly in the US government). 
A major thesis of the book is that the Soviet entrepreneurs trying to build the network acted more like capitalists than communists, and the Americans building our early networks acted more like socialists than capitalists. This isn’t about individual financial gains though. Glushkov and Kitov in fact saw how computing could automate the economy to benefit everyone. But a point that Peters makes in the book is centered around informal financial networks. Peters points out that blat, the informal trading of favors that we might call a black market or corruption, was commonplace. An example he uses in the book is that if a factory performs at 101% of expected production, the manager can just slide under the radar. But if they perform at 120%, then those gains will be expected permanently, and if they ever dip below the expected productivity they might meet a poor fate. Thus blat provides a way to trade goods informally and keep the status quo. A computer doing daily reports would make this kind of flying under the radar of Gosplan, the Soviet State Planning Committee, difficult. Thus factory bosses would likely enter inaccurate information into computers, furthering the tolkachi, or pushers, of blat. A couple of points I’d love to add onto those Peters made, which wouldn’t be obvious without that amazing last chapter in the book. The first is that I’ve never read Bush, Licklider, or any of the early pioneers claim computers should run a macroeconomy. The closest thing was computers helping to run a capitalist economy. The New York Stock Exchange began the process of going digital in 1966, when the Dow was at 990. The Dow sat at about that same place until 1982. Can you imagine that these days? Things looked bad when it dropped to 18,500. And the London Stock Exchange held out going digital until 1986 - just a few years after the Dow finally moved over a thousand. Think about that as it hovers around 26,000 today. 
And look at the companies and imagine which could get by without computers running their company - much less which are computer companies. There are 2 to 6 billion trades a day. It would probably take more than the population of Russia just to push those numbers if it all weren’t digital. In fact now, there’s an app (or a lot of apps) for that. But the point is, going back to Bush’s Memex, computers were to aid in human decision making. In a world with an exploding amount of data about every domain, Bush had prophesied the Memex would help connect us to data and help us to do more. That underlying tenet infected everyone that read his article, and is something I think of every time I evaluate an investment thesis based on automation. There’s another point I’d like to add to this most excellent book. Computers developed in the US were increasingly general purpose and democratized. This led to innovative new applications just popping up and changing the world, like spreadsheets and word processors. Innovators weren’t just taking a factory “online” to track the number of widgets sold and deploying ICBMs - they were building foundations for anything a young developer wanted to build. The uses in education with PLATO, in creativity with Sketchpad, in general purpose languages and operating systems, in early online communities with mail and bulletin boards, in the democratization of the computer itself with the rise of the PC and the rapid proliferation with the introduction of games, and then the democratization of raw information with the rise of gopher and the web and search engines. Miniaturized and in our pockets, those are the building blocks of modern society. And the word democratization means a lot to me. But as Peters points out, sometimes the capitalists act like communists. Today we close down access to various parts of those devices from the developers in order to protect people. 
I guess the difference is that now we can build our own, but since so many of us do that at #dayjob we just want the phone to order us dinner. Such is life and OODA loops. In retrospect, it’s easy to see how technological determinism would lead to global information networks. It’s easy to see electronic banking and commerce and that people would pay for goods in apps. Look at Amazon stock soaring over $3,000, what Jack Ma has done with Alibaba, and the empires built by the technopolies at Amazon, Apple, Microsoft, and dozens of others. In retrospect, it’s easy to see the productivity gains. But at the time, it was hard to see the forest for the trees. The infighting got in the way. The turf-building. The potential of a bullet in the head from your contemporaries when they get in power can do that, I guess. And so the networks failed to be developed in the USSR, ARPAnet would be transferred to the National Science Foundation in 1985, and the other nets would grow until it was all privatized into the network we call the Internet today, around the same time the Soviet Union was dissolved. As we covered in the episode on the history of computing in Poland, empires simply grow beyond the communications mediums available at the time. By the fall of the Soviet Union, US organizations were networking in a build-up from early adopters, who made great gains in productivity and signaled the chasm-crossing that was the merging of the nets into the Internet. And people were using modems to connect to message boards and work with data remotely. Ironically, it’s that merged Internet that China has since “splinterneted” and that Russia seems poised to splinter further. But just as hiding Wiener’s cybernetics book from the Russian people slowed technological determinism in that country, cutting various parts of the Internet off in Russia will slow progress if it happens. The Soviets did great work on macro and micro economic tracking and modeling under Glushkov and Kitov. 
Understanding what you have and how data and products flow is one key aspect of automation. And sometimes even more important in helping humans make better-informed decisions. Chile tried something similar with Project Cybersyn under Salvador Allende, but that system died with the coup in 1973. And there’s a lot to digest in this story. But that word progress is important. Let’s say that Russian or Chinese crackers steal military-grade technology from US or European firms. Yes, they get the tech, but not the underlying principles that led to the development of that technology. And by restricting the proliferation of that technology in foreign markets, the US and its partners also fail to proliferate the ideas and ideals behind it. Phil Zimmermann opened the floodgates when he printed the PGP source code to enable the export of military-grade encryption. The privacy gained in foreign theaters contributed to greater freedoms around the world. And crime. But crime will happen in an oppressive regime just as it will in one espousing freedom. So for you hackers tuning in - whether you’re building apps, hacking business, or reengineering for a better tomorrow: next time you’re sitting in a meeting and progress is being smothered at work, or next time you see progress being suffocated by a government, remember that those who you think are trying to hold you back either don’t see what you see, are trying to protect their own power, or might just be trying to keep progress from outpacing what their constituents are ready for. And maybe those are sometimes the same thing, just from a different perspective. Because going fast at all costs not only leaves people behind but sometimes doesn’t build a better mousetrap than what we have today. Or, go too fast and, like Kitov, you get stripped of your command. No matter how much of a genius you, or a contemporary like Glushkov, might be. 
The YouTube video called “Internet of Colonel Kitov” has a great quote: “pioneers are recognized by the arrows sticking out of their backs.” But hey, at least history was on their side! Thank you for tuning in to the History of Computing Podcast. We are so, so, so lucky to have you. Have a great day and I hope you too are on the right side of history!
11/2/2020 • 20 minutes, 7 seconds
From The Press To Cambridge Analytica
Welcome to the History of Computing podcast. Today we’re going to talk about the use of big data in elections. But first, let’s start with a disclaimer. I believe the problems outlined in this episode are apolitical. Given the chance, I believe most politicians (or marketers), regardless of party, would have jumped on what is outlined in this podcast. Just as most marketers are more than happy to buy data, even without knowing the underlying source of that data. No offense to the parties, but marketing is marketing. Just as it is in companies. Data will be used to gain an advantage in the market. Understanding the impacts of our decisions and the values of others is an ongoing area of growth for all of us. Even when we have quotas on sales qualified leads to be delivered. Now let’s talk about data sovereignty. Someone pays for everything. The bigger and more lucrative the business, the more that has to be paid to keep alive the organizations formed to support an innovation. If you aren’t paying for a good or service, then you yourself are the commodity. In social media, this is represented in the form of a company making their money from data about you and from the ads you see. The only other viable business model used is to charge for the service, like a premium LinkedIn account as opposed to the ones used by us proletariat. Our devices can see so much about us. They know our financial transactions, where we go, what we buy, what content we consume, and apparently what our opinions and triggers are. Sometimes, that data can be harnessed to show us ads. Ads about things to buy. Ads about apps to install. Ads about elections. My crazy uncle Billy sends me routine invitations to take personality quizzes. No thanks. Never done one. Why? I worked on one of the first dozen Facebook apps. A simple rock, paper, scissors game. 
At the time, it didn’t at all seem weird to me as a developer that there was an API endpoint to get a list of friends from within my app. It’s how we had a player challenge other players in a game. It didn’t seem weird that I could also get a list of their friends. And it didn’t seem weird that I could get a lot of personal data on people through that app. I mean, I had to display their names and photos when they played a game, right? I just wanted to build a screen to invite friends to play the app. I had to show a photo so you could see who you were playing. And to make the game more responsive I needed to store the data in my own SQL tables. It didn’t seem weird then. I guess it didn’t seem weird until it did. What made it weird was the introduction of highly targeted analytics and retargeting. I have paid for these services. I have benefited from these services in my professional life, and to some degree I have helped develop some. I’ve watched the rise of large data warehouses. I’ve helped buy phone numbers and other personally identifiable information of humans and managed teams of sellers to email and call those humans. Ad targeting, drip campaigns, lead scoring, and providing very specific messages based on attributes you know about a person are all a part of the modern sales and marketing machine at any successful company. And at some point, it went from being crazy how much information we had about people to being - well, just a part of doing business. The former Cambridge Analytica CEO Alexander Nix once said, “From Mad Men in the day to Math Men today.” From Don Draper to Betty’s next husband Henry (a politician), there are informal ties between advertising, marketing, and politics. Just as one of the founders of SCL, the parent company of Cambridge Analytica, had ties with royals, having dated one and gone to school with others in political power. But there have also always been formal ties. 
Publick Occurrences Both Forreign and Domestick was the first colonial newspaper in America and was formally suppressed after its first edition in 1690. But the Boston News-Letter was formally subsidized in 1704. Media and propaganda. Most newspapers were just straight up sponsoring, or sponsored by, a political platform in the US until the 1830s. To some degree, that began with Ben Franklin’s big brother James Franklin in the early 1700s with the New England Courant. Franklin would create partnerships for content distribution throughout the colonies, spreading his brand of moral virtue. And the papers were stoking the colonies into revolution. After the revolution, Hamilton instigated American Minerva as the first daily paper in New York - to be a Federalist paper. Of course, the Jeffersonian Republicans called him an “incurable lunatic.” And yet they still guaranteed us the freedom of the press. And that freedom grew to investigative reporting, especially during the Progressive Era, from the tail end of the 19th century up until the start of the roaring twenties. While Teddy Roosevelt would call them muckrakers, their tradition extends from Nellie Bly and Fremont Older to Seymour Hersh, Jonathan Kwitny, even the most modern Woodward and Bernstein. They led to stock reform, civic reforms, uncovering corruption, exposing crime in labor unions, laying bare monopolistic behaviors, improving sanitation, and forcing us to confront racial injustices. They have been independent of party affiliation and yet constantly accused over the last hundred years of being against whomever is in power at the time. Their journalism extended to radio and then to television. I think the founders would be proud of how journalism evolved, and also unsurprised at some of the ways it has devolved. But let’s get back to the point that someone is always paying. The people can subscribe to a newspaper, but advertising is a huge source of revenue. 
With radio and television flying across the airwaves for free, advertising became exclusively what paid for content, and the ensuing decades became the golden age of that industry. And politicians bought ads. If there is zero chance a politician can win a state, why bother buying ads in that state? That’s a form of targeting with a pretty simple set of data. In Mad Men, Don is sent to pitch the Nixon campaign. There has always been a connection between disruptive new media and politics. Offices have been won by politicians able to gain access to early printing presses to spread their messages to the masses, by those connected to print media to get articles and advertising, by great orators at the advent of the radio, and by good-looking, charismatic politicians first able to harness television - especially in the Mad Men-fueled, ad-exec-inspired era that saw the Nixon campaigns in the 60s. The platforms to advertise become ubiquitous, they get abused, and then they become regulated. After television came news networks specifically meant to prop up an agenda, although unable to be directly owned by a party. None are “fake news” per se, but once abused by any, they can all be cast in doubt, even if most especially by the abuser. The Internet was no different. The Obama campaign was really the first to leverage social media and serious data analytics, orchestrating what can be considered the first big data campaign. And after his campaign carried him to a first term, the opposition was able to make great strides in countering that. Progress is often followed by laggards who seek to subvert the innovations of an era. And they often hire the teams who helped with previous implementations. Obama had a chief data scientist, Rayid Ghani. And a chief analytics officer. They put apps in the hands of canvassers and they mined Facebook networks of friends to try to persuade voters. 
They scored voters and figured out how to influence votes for certain segments. That was supplemented by thousands of interviews and thousands of hours building algorithms. By 2012 they were pretty confident they knew exactly who the nearly 70 million Americans that put him in the White House would be. And that gave the Obama campaign the confidence to spend $52 million in online ads against Romney’s $26 million to bring home the win. And through all that, the Democratic National Committee ended up with information on 180 million voters. That campaign would prove the hypothesis that big data could win big elections. Then comes the 2016 election. Donald Trump came from behind, out of a crowded field of potential Republican nominees, to not only secure the Republican nomination for president but then to win the election. He won the votes to be elected in the electoral college while losing the popular vote. That had happened when John Quincy Adams defeated Andrew Jackson in 1824, although it took a vote in the House of Representatives to settle that election. Rutherford B. Hayes defeated Samuel Tilden in 1876 in the electoral college but lost the popular vote. And it happened again when Grover Cleveland lost to Benjamin Harrison in 1888. And in 2000 when Bush beat Gore. And again when Trump beat Hillary Clinton. He solidly defeated her in the electoral college, with 304 votes to her 227. Every time it happens, there seems to be plenty of rhetoric about changing the process. But keep in mind the framers built the system for a reason: to give the constituents of every state a minimum amount of power to elect officials that represent them. Each state gets two electors for its two senators, plus one for each of its members of the House of Representatives. States can choose how the electors are instructed to vote. Most states (except Maine and Nebraska) have all of their electors vote for a single ticket, the one that won the state. 
Most of the states instruct their electors to vote based on who won the popular vote in their state. Once all the electors cast their votes, Congress counts the votes and the winner of the election is declared. So how did he come from behind? One easy place to put blame is data. I mean, we can blame data for putting Obama into the White House, or we can accept a message of hope and change that resonated with the people. Just as we can blame data for Trump or accept a message that government wasn’t effective for the people. Since this is a podcast on technology, let’s focus on data for a bit. And more specifically, let’s look at the source of one trove of data used for micro-targeting, because data is a central strategy for most companies today. And it was a central part of the past four elections. We see the ads on our phones, so we know that companies have this kind of data about us. Machine learning had been on the rise for decades. But a little company called SCL was started in 1990 as the Behavioural Dynamics Institute by a British ad man named Nigel Oakes after leaving Saatchi & Saatchi. Something dangerous happens when someone like him makes a comparison like this: “We use the same techniques as Aristotle and Hitler.” Behavioural Dynamics studied how to change mass behavior through strategic communication - which US Assistant Secretary of Defense for Public Affairs Robert Hastings described in 2008 as the “synchronization of images, actions, and words to achieve a desired effect.” Sounds a lot like state-conducted advertising to me. And sure, reminiscent of Nazi tactics. You might also think of it as propaganda. Or “psy ops” in the Vietnam era. And they were involved in elections in the developing world. In places like Ukraine, Italy, South Africa, Albania, Taiwan, Thailand, Indonesia, Kenya, Nigeria, even India. And of course in the UK. Or at least on behalf of the UK and, whether directly or indirectly, the US. 
After Obama won his second term, SCL started Cambridge Analytica to go after American elections. They began to assemble a similar big data warehouse. They hired people like Brittany Kaiser, who’d volunteered for Obama and would become director of Business Development. Ted Cruz used them in 2016, but it was the Trump campaign that was really able to harness their intelligence. Their principal investor was Robert Mercer, former co-CEO of the huge fund Renaissance Technologies. He’d gotten his start at IBM Research working on statistical machine translation and was recruited in the 90s to apply data modeling and computing resources to financial analysis. This allowed them to earn nearly 40% per year on investments. An American success story. He was key in the Brexit vote, donating analytics to Nigel Farage, and was an early supporter of Breitbart News. Cambridge Analytica would get involved in 44 races in the 2014 midterm elections. By 2016, Project Alamo was running at a million bucks a day in Facebook advertising. In the documentary The Great Hack, they claim this was to harvest fear. And Cambridge Analytica allowed the Trump campaign to get really specific with targeting. So specific that they were able to claim to have 5,000 pieces of data per person. Enter whistleblower Christopher Wylie, who claims over a quarter million people took a quiz called “This is Your Digital Life,” which exposed the data of around 50 million users. That data was moved off Facebook servers and stored in a warehouse where it could be analyzed and fields merged with other data sources, all without the consent of the people who played the game or the people in their friend networks. Dirty tactics. Alexander Nix admitted to using bribery stings and prostitutes to influence politicians. So it should come as no surprise that they stole information on well over 50 million Facebook users in the US alone. 
And of course then they lied about it when being investigated by the UK for Russian interference and fake news in the lead-up to the Brexit referendum. Investigations go on. After investigations started piling up, some details started to emerge. This is Your Digital Life was written by Dr. Spectre. It gets better. That’s actually Aleksandr Kogan of Cambridge Analytica. He had received research funding from the University of St Petersburg and was then lecturing in the Psychology department at the University of Cambridge. It would be easy to make the jump that he was working for the Russkies, but here’s the thing: he also got research funding from Canada, China, the UK, and the US. He claimed he didn’t know what the app would be used for. That’s crap. When I got a list of friends and friends of friends whom I could spider through, I parsed the data and displayed it on a screen as a pick list. He piped it out to a data warehouse. When you do that, you know exactly what’s happening with it. So the election comes and goes. Trump wins. And people start asking questions. As they do when one party wins the popular vote and not the electoral college. People misunderstand and think that, due to redistricting, you can win districts and thus carry a state, without realizing most states award electors by straight majority. Other Muckraker reporters from around the world start looking into Brexit and US elections and asking questions. Enter Paul-Olivier Dehaye. While an assistant professor at the University of Zurich, he was working on Coursera. He started asking about the data collection. The word spread slowly but surely. Then enter American professor David Carroll, who sued Cambridge Analytica to see what data they had on him. Dehaye contributed to his Subject Access Request, and suddenly the connections between Cambridge Analytica and Brexit started to surface, as did the connection between Cambridge Analytica and the Trump campaign, including photos of the team working with key members of the campaign. 
And ultimately of the checks cut. Cause there’s always a money trail. I’ve heard people claim that there was no interference in the 2016 elections, in Brexit, or in other elections. Now, if you think the American taxpayer didn’t contribute to some of the antics by Cambridge Analytica before they turned their attention to the US, I think we’re all kidding ourselves. And there was Russian meddling in US elections, and illegally obtained materials were used, whether that’s emails on servers then leaked to WikiLeaks or stolen Facebook data troves. Those same tactics were used in Brexit. And here’s the thing: it’s been this way for a long, long time - it’s just so much more powerful today than ever before. And given how fast data can travel, every time it happens, unless done in a walled garden, the truth will come to light. Cambridge Analytica kinda’ shut down in 2018 after all of this came to light. What do I mean by kinda’? Well, former employees set up a company called Emerdata Limited, which then bought the SCL companies. Why? There were contracts and data. They brought on the founder of Blackwater, Mercer’s daughter Rebekah, and others to serve on the board of directors, and she was suddenly the “First Lady of the Alt-Right.” Whether or not Emerdata got all of the company, they got some of the scraped data from 87 million users. No company with the revenues they had goes away quietly or immediately. Robert Mercer donated the fourth largest amount in the 2016 presidential race. He was also the one who supposedly introduced Trump to Steve Bannon. In the fallout of the scandals, if you want to call them that, Mercer stepped down from Renaissance and sold his shares of Breitbart to his daughters. Today, he’s a benefactor of the Make America Number 1 Super PAC and remains one of the top donors to conservative causes. 
After leaving Cambridge Analytica, Nix was under investigation for a few years before settling with the Federal Trade Commission, agreeing to delete illegally obtained data, and settling with the UK Secretary of State over having offered unethical services, agreeing not to act as a director of another company for at least 7 years. Brittany Kaiser fled to Thailand and is now a proponent of banning political advertising on Facebook and of being able to own your own data. Facebook paid a $5 billion fine for data privacy violations and has overhauled its APIs and privacy options. It’s better but not great. I feel like they’re doing as well as they can, and they’ve been accused of tampering with feeds by conservative and liberal media outlets alike. To me, if they all hate you, you’re probably either doing a lot right, or basically screwing all of it up. I wouldn’t be surprised to see fines continue piling up. Kogan left the University of Cambridge in 2018. He founded Philometrics, a firm applying big data and AI to surveys. Their website isn’t up as of the recording of this episode. His Tumblr seems to be full of talk about acne and trying to buy cheat codes for video games these days. Many, including Kogan, have claimed that micro-targeting (or psychographic modeling techniques) against large enhanced sets of data isn’t effective. If you search for wedding rings and I show you ads for wedding rings, then maybe you’ll buy my wedding rings. If I see you bought a wedding ring, I can start showing you ads for wedding photographers and bourbon instead. Hey dummy, advertising works. Disinformation works. Analyzing and forecasting and modeling with machine learning works. Sure, some of it is snake oil. But early adopters made billions off it. Problem is, like that perfect gambling system, you wouldn’t tell people about something if it meant you lost your edge. Sell a book about how to weaponize a secret and suddenly you probably are selling snake oil. 
As for regulatory reactions, can you say GDPR and all of the other privacy regulations that have come about since? Much as Sarbanes-Oxley introduced regulatory controls for corporate auditing and transparency, we regulated the crap out of privacy. And by regulated, I mean a bunch of people that didn’t understand the way data is stored and disseminated over APIs made policy to govern it. But that’s another episode waiting to happen. Suffice it to say the lasting impact on the history of computing is both the regulations on privacy and the impact on identity providers and other API endpoints, where we needed to lock down entitlements to access various pieces of information due to rampant abuses. So here’s the key question in all of this: did the data help Obama and Trump win their elections? It might have moved a few points here and there. But it was death by a thousand cuts. Missteps by the other campaigns, political tides, segments of American populations desperately looking for change and feeling left behind while other segments of the population got all the attention, foreign intervention, voting machine tampering, not having a cohesive opposition party, and so many other aspects of those elections also played a part. And as the Hari Seldon-esque George Friedman called it in his book, it’s just The Storm Before the Calm. So whether the data did or did not help the Trump campaign, the next question is whether using the Cambridge Analytica data was wrong. This is murky. The data was illegally obtained. The Trump campaign was playing catch-up with the maturity of the data held by the opposition. But the campaign can claim they didn’t know that the data was illegally obtained. It is illegal to employ foreigners in political campaigns, and Bannon was warned about that. And then-CEO Nix was warned. But they were looking to instigate a culture war, according to Christopher Wylie, who helped found Cambridge Analytica. And look around, did they? 
Getting data models to a point where they have a high enough confidence interval that they are weaponizable takes years. Machine learning projects are very complicated, very challenging, and very expensive. And they are being used by every political campaign now insofar as the law allows. To be honest though, troll farms of cheap labor are cheaper and faster. Which is why three more got taken down just a month before the recording of this episode. But AI doesn’t do pillow talk, so eventually it will displace even the troll farm worker if only ‘cause the muckrakers can’t interview the AI. So where does this leave us today? Nearly every time I open Facebook, I see an ad to vote for Biden or an ad to vote for Trump. The US Director of National Intelligence recently claimed the Russians and Iranians were interfering with US elections. To do their part, Facebook will ban political ads indefinitely after the polls close on Nov. 3. They and Twitter are taking proactive steps to stop disinformation on their networks, including by actual politicians. And Twitter has actually just outright banned political ads. People don’t usually want regulations. But just as political ads in print, on the radio, and on television are regulated - they will need to be regulated online as well. As will the use of big data. The difference is the rich metadata collected in micro-targeting, the expansive comments areas, and the anonymity of those commenters. But I trust that a bunch of people who’ve never written a line of code in their life will do a solid job handing down those regulations. Actually, the FEC probably never built a radio - so maybe they will. So as the election season comes to a close, think about this. Any data from large brokers about you is fair game. What you’re seeing in Facebook and even the ads you see on popular websites are being formed by that data. Without it, you’ll see ads for things you don’t want. Like the Golden Girls Season 4 boxed set. 
Because you already have it. But with it, you’ll get crazy uncle Billy at the top of your feed talking about how the earth is flat. Leave it or delete it, just ask for a copy of it so you know what’s out there. You might be surprised, delighted, or even a little disgusted by that site uncle Billy was looking at that one night you went to bed early. But don’t, don’t, don’t think that any of this should impact your vote. Conservative, green, liberal, progressive, communist, social democrat, or whatever you subscribe to. In whatever elections in your country or state or province or municipality. Go vote. Don’t be intimidated. Don’t let fear stand in the way of your civic duty. Don’t block your friends with contrary opinions. If nothing else, listen to them. They need to be heard. Even if uncle Billy just can’t be convinced the world is round. I mean, he’s been to the beach. He’s been on an airplane. He has GPS on his phone… And that site. Gross. Thank you for tuning in to this episode of the History of Computing podcast. We are so, so, so lucky to have you. Have a great day.
10/28/2020 • 28 minutes, 35 seconds
The Troubled History Of Voting Machines
In representative democracies, voters elect officials who pass laws, interpret laws, enforce laws, or appoint various other representatives to do one of the above. The terms of elected officials, the particulars of their laws, the structure of courts that interpret laws, and the makeup of the bureaucracies that are necessarily created to govern are different in every country. In China, the people elect the People’s Congresses, who then elect the nearly 3,000 National People’s Congress members, who then elect the President and State Council. The United States has a more direct form of democracy, and the people elect a House of Representatives, a Senate, and a president who the founders intentionally locked into a power struggle to keep any part of the government from becoming authoritarian. Russia is set up similarly. In fact, the State Duma, like the House in the US, is elected by the people, and the 85 states, or federal subjects, then send a pair of delegates to a Federal Council, like the Senate in the US, which has 170 members. It works similarly in many countries. Some, like England, still provide for hereditary titles, such as in the House of Lords - but even there, the Sovereign - currently Queen Elizabeth II - nominates a peer to a seat. That peer is these days selected by the Prime Minister. It’s weird but I guess it kinda’ works. Across democracies - communist, socialist, capitalist, even constitutional monarchies - countries practice elections. The voters elect these representatives to supposedly do what’s in the best interest of the constituents. That vote cast is the foundation of any democracy. We think our differences are greater than they are, but it mostly boils down to a few percentage points of tax and a slight difference in the level of expectation around privacy, whether that expectation is founded or not. 2020 poses a turning point for elections around the world. 
After allegations of attempted election tampering in previous years, the president of the United States will be voted on. And many of those votes are being carried out by mail. But others will be performed in person at polling locations, done on voting machines. At this point, I would assume that, given how nearly every other aspect of American life has a digital equivalent, I could just log into a web portal and cast my vote. No. That is not the case. In fact, we can’t even seem to keep the voting machines from being tampered with. And we have physical control over those! So how did we get to such an awkward place, where the most important aspect of a democracy is so backwater? Let’s start here: maybe it’s ok that voting machines and hacking play less of a role than they might. Without being political, there is no doubt that Russia and other foreign powers have meddled in US elections. In fact, there’s probably little doubt we’ve interfered in theirs. Russian troll farms and disinformation campaigns are real. Paul Manafort maintained secret communications with the Kremlin. Former US generals were brought into the administration either during or after the election to make a truce with the Russians. And then there were the allegations about tampering with voting machines. Now add effectively stealing information about voters from Facebook using insecure API permissions. I get that. Disinformation goes back to posters in the time of Thomas Jefferson. I get that too. But hacking voting machines? I mean, these are vetted, right? For $3,000 to $4,500 each, and when bought in bulk orders of 16,000 machines like Maryland bought from Diebold in 2005, you really get what you pay for, right? Wait, did you say 2005? Let’s jump forward to 2017. That’s the year DefCon opened the Voting Machine Hacking Village. And in 2019 not a single voting machine there was secure. 
In fact, one report from the conference said “we fear that the 2020 presidential elections will realize the worst fears only hinted at during the 2016 elections: insecure, attacked, and ultimately distrusted.” I learned to pick locks, use L0phtCrack, run a fuzzer, and so much more at DefCon. Now I guess I’ve learned to hack elections. So again, every democracy in the world has one thing it just has to get right: voting. But we don’t. Why? Before we take a stab at that, let’s go back in time just a little. The first voting machine used in US elections was a guy with a bible. This is pretty much how it went up until the 1900s in most districts. People walked in and told an election official their vote, the votes were tallied on the honor of that person, and everyone got good and drunk. People love to get good and drunk. Voter turnout was in the 85 percent range. Votes were logged in poll books. The voter would say the name of the official they were voting for, and a poll worker would write their name and vote into a poll book. There was no expectation that the vote would be secret. Not yet, at least. Additionally, you could campaign at the polling place - a practice now illegal in most places. Now let’s say the person taking the votes fudged something. There’s a log. People knew each other. Towns were small. Someone would find out. Now, digitizing a process usually goes from vocal or physical to paper to digital to database to networked database to machine learning. It’s pretty much the path of technological determinism. As is failing because we didn’t account for adjacent advancements in technology when moving a paper process to a digital process. We didn’t refactor around the now-computational advances. Paper ballots showed up in the 1800s. Parties would print small fliers that looked like train tickets so voters could show up and drop their ballot off. Keep in mind, adult literacy rates still weren’t all that high at this point. 
One party could print a ticket that looked kinda’ like the others. All kinds of games were being played. We needed a better way. The 1800s were a hotbed of invention. 1838 saw the introduction of a machine where each voter got a brass ball, which was then dropped into a machine that used mechanical counters to increment a tally. Albert Henderson developed a precursor to a computer that would record votes using a telegraph that printed ink in a column based on which key was held down. This was in 1850, with US Patent 7521. Edison took the idea to US Patent 90,646 and automated the counters in 1869. Henry Spratt developed a push-button machine. Anthony Beranek continued on with that but made one row per office and reset after the last voter, similar to how machines work today. Jacob Myers built on Beranek’s work and added levers in 1889, and Alfred Gillespie made the levered machine programmable. He and others formed the US Standard Voting Machine Company and slowly grew it. But something was missing, so we’ll step back a little in time. Remember those tickets and poll books? They weren’t standardized. The Australians came up with a wacky idea in 1858 to standardize on ballots printed by the government, which made it to the US in 1888. And like many things in computing, once we had a process on paper, the automation of knowledge work - tabulating votes - would soon be ready to take into computing. Herman Hollerith brought punched card data processing to the US Census in 1890, and his company would merge with others at the time to form IBM. Towards the end of the 1890s John McTammany added the concept that voters could punch holes in paper to cast votes and even went so far as to add pneumatic tabulation. They were using rolls of paper rather than cards. And so IBM started tabulating votes in 1936 with a dial-based machine that could count 400 votes a minute from cards. Frank Carrell at IBM got a patent for recording ballot choices on standardized cards. 
The stage was set for the technology to meet paper. By 1958 IBM had standardized punch cards to 40 columns and released the Port-A-Punch so people in the field could punch information into a card to record findings and then bring it back to a computer for processing. Based on that, Joseph Harris developed the Votomatic punched cards in 1965 and IBM licensed the technology. In the meantime, a science teacher, Reynold Johnson, had developed Mark Sense in the 1930s, which over time evolved into optical mark recognition, allowing us to fill in bubbles with a pencil. So rather than punch holes we could vote by filling in a bubble on a ballot. All the pieces were in place, and the technology slowly proliferated across the country, representing over a third of votes when Clinton beat Dole and Ross Perot in 1996. And then 2000 came. George W. Bush defeated Al Gore in a bitterly contested and narrow margin. It came down to Florida and issues with the ballots there. By some tallies, as few as 300 people decided the outcome of that election. Hanging chads are little pieces of paper that don’t get punched out of a card. Maybe unpunched holes in just a couple of locations caused the entire election to shift between parties. You could get someone drunk or document their vote incorrectly when it was orally provided in the early 1800s, or provide often illiterate people with mislabeled tickets prior to the Australian ballots. But this was the first time since the advent of the personal computer - when most people in the US had computers in their homes and the Internet bubble was growing by the day - that there was a problem with voting ballots, and suddenly people started wondering why we were still using paper. The answer isn’t as simple as the fact that the government moves slowly. I mean, the government can’t maintain the rate of technical innovation and progress anyway. But there are other factors as well. One is secrecy. 
Anywhere that has voting will eventually have some kind of secret ballot. This goes back to the ancient Greeks but also the French Revolution. Secret ballots came to the UK in the 1840s with the Chartists and to the US after the 1884 election. As the democracies matured, the concept of voting rights matured, and secret ballots were part of that. Making sure a ballot is secret means we can’t just allow any old person to look at a ballot. Another issue is decentralization. Each state selects its own machines and systems and sets dates and requirements. We see that with the capacity and allocation of mail-in voting today. Another issue is cost. Each state also has a different budget. Meaning that there are disparities between how well a given state can reach all voters. When we go to the polls we usually work with volunteers. This doesn’t mean voting isn’t big business. States (and countries) have entire bureaucracies around elections. Bureaucracies necessarily protect themselves. So why not have a national voting system? Some countries do. Although most use electronic voting machines in polling places. But maybe something based on the Internet? Security. Estonia tried a purely Internet vote, and due to hacking and malware it was determined to have been a terrible idea. That doesn’t mean we should not try again. The response to the 2000 election results was the Help America Vote Act of 2002, which defined standards managed by the Election Assistance Commission in the US. The result was the proliferation of new voting systems. ATM maker Diebold entered the US election market in 2002 and quickly became a large player. The CEO ended up claiming he was “committed to helping Ohio deliver its electoral votes to” Bush. They accidentally leaked their source code due to a misconfigured server, and they installed software patches that weren’t approved. In short, it was a typical tech empire that grew too fast and had issues we’ve seen with many companies. 
Just with way more on the line. After a number of transitions between divisions and issues, the business unit was sold to Election Systems & Software, now with coverage in 42 states. And having sold hundreds of thousands of voting machines, they now have over 60% of the market share in the US. That company goes back to the dissolution of a ballot tabulation division of Westinghouse and the Votronic. They are owned by a private equity firm called the McCarthy Group. They are sue-happy, though, stifling innovation. The problems are not just with ES&S. Hart InterCivic and Dominion are the next two biggest competitors, with equal issues. And no voting machine company has a great track record with security. They are all private companies. They have all been accused of vote tampering. None of that has been proven. They have all had security issues. In most of these episodes I try to focus on the history of technology or technocratic philosophy and maybe look to the future. I rarely offer advice or strategy. But there are strategies not being employed. The first strategy is transparency. In life, I assume positive intent. But transparency is really the only proof of that. Any company developing these systems should have transparent financials, provide transparency around the humans involved, provide transparency around the source code used, and provide transparency around the transactions, or votes in this case, that are processed. In an era of disinformation and fake news, transparency is the greatest protection of democracy. Providing transparency around financials can be a minefield. Yes, a company should make a healthy margin to continue innovating. That margin funds innovators and great technology. Financials around elections are hidden today because the companies are private. Voting doesn’t have to become a public utility, but it should be regulated. Transparency of code is simpler to think through. Make it open source. Firefox gave us an open source web browser. 
Tor gave us transparent anonymity. The mechanism by which each transaction occurs is transparent, and any person with knowledge of open source systems can look for flaws in the system. Those flaws are then corrected, as with most common programming languages and protocols, by anyone with the technical skills to do so. I’m not the type that thinks everything should be open source. But this should be. There is transparency in simplicity. The more complex a system, the more difficult it is to unravel. The simpler a program, the easier it is for anyone with a working knowledge of programming to review and, if needed, correct. So a voting system should be elegant in its simplicity. Verifiability. We could look at poll books in the 1800s and punch the vote counter in the mouth if they counted our vote wrong. The transparency of the transaction was verifiable. Today, there are claims of votes being left buried in fields and of fraudulent voters. Technologies like blockchain can protect against that, much as currency transactions can be verified in Bitcoin. I usually throw up a little when I hear the term blockchain bandied about by people who have never written a line of code. Not this time. Let’s take hashing as a fundamental building block. Let’s say you vote for a candidate and the candidate is stored as a text field, or varchar, that is their name (or names) and the position they are running for. We can easily take all of the votes cast by a voter, store them in a JSON blob, commit them to a database, add a record in a database that contains the vote supplied, and then add a block to a chain to provide a second point of verification. The voter would receive a GUID, randomly assigned and unique to them, thus protecting the anonymity of the vote. The microservices here are to create a form for them to vote, capture the vote, hash the vote, commit the vote to a database, duplicate the transaction into the voting blockchain, and allow for vote lookups. 
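The capture, hash, chain, and lookup steps just described can be sketched in a few lines of Python. Everything here is an illustrative assumption - the field names, SHA-256 as the hash, and an in-memory list standing in for both the database and the blockchain - a sketch, not a production voting system:

```python
import hashlib
import json
import uuid

chain = []  # in-memory stand-in for the database plus blockchain


def cast_ballot(selections):
    """Record a ballot and return the voter's anonymous receipt GUID."""
    receipt = str(uuid.uuid4())          # random GUID, unique to the voter
    record = {
        "receipt": receipt,
        "selections": selections,        # e.g. {"governor": "Jane Doe"}
        "prev": chain[-1]["hash"] if chain else "0" * 64,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return receipt


def verify_chain():
    """Re-hash every block and confirm the links are intact."""
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True


def look_up(receipt):
    """Let a voter confirm their vote was counted, by GUID only."""
    return next((r["selections"] for r in chain if r["receipt"] == receipt), None)
```

The voter keeps only the GUID receipt, and because each block’s hash covers the previous block’s hash, tampering with any stored selection breaks every hash downstream - which is the second point of verification the chain is meant to provide.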
Each can be exposed from an API gateway that allows systems built by representatives of voters at the federal, state, and local levels to look up their votes. Any person who votes is now capable of verifying that their vote was counted. If bad data is injected at the time of the transaction, the person can report the voter fraud, and a separate table connecting vote GUIDs to IP addresses or any other PII can be accessed only by the appropriate law enforcement, with any attempt by law enforcement to access a record logged as well. Votes can be captured with web portals, with voting machines that have privileged access, by 1800s-style voice counts, etc. Here we have a simple and elegant system that allows for transparency, verifiability, and privacy. But we need to gate who can cast a vote. I have a PIN to access my IRS returns using my Social Security Number or tax ID. But federal elections don’t require paying taxes. Nextdoor sent a card to my home and I entered a PIN printed on the card on their website. But that system has many a flaw. Section 303 of the Help America Vote Act of 2002 compels the State Motor Vehicle Office in each state to validate the name, date of birth, Social Security Number, and whether someone is alive. Not every voter drives. Further, not every driver meets voting requirements. And those are different per state. And so it becomes challenging to authenticate a voter. We do so in person, en masse, at every election thanks to the staff and volunteers of various election precincts. In Minnesota I provided my driver’s license number when I submitted my last ballot by mail. If I had moved since the last time I voted, I would also need a utility bill to validate my physical address. A human will verify that. Theoretically I could vote in multiple precincts if I were able to fabricate a paper trail to do so. If I did, I would go to prison. 
Providing a web interface is incredibly dangerous unless browsers support a mechanism to validate the authenticity of the source and destination. Especially when state-sponsored actors have been proven able to bypass safeguards such as HTTPS. And then there’s the source. It used to be common practice to use Social Security Numbers or cards as a form of verification for a lot of things. That isn’t done any more due to privacy concerns and, of course, identity theft. You can’t keep plain usernames and passwords in a database any more. So the only real answer here is a federated identity provider. This is where OAuth, OpenID Connect, and/or SAML come into play. This is a technology that retains a centralized set of information about people. Other entities then tie into the centralized identity sources and pull information from them. The technology they use to authenticate and authorize users is then one of the protocols mentioned. I’ve been involved in a few of these projects and, to be honest, they kinda’ all suck. Identities would need to be created and the usernames and passwords distributed. This means we have to come up with a scheme that everyone in the country (or at least the typically ill-informed representatives we put in place to make choices on our behalf) can agree on. And even if a perfect scheme for usernames is found, there are crazy levels of partisanship. The passwords should be complex, but with all of the factors that come into play it’s hard to imagine consensus being found on the right level of complexity to protect people while still letting passwords be remembered. The other problem with a federated identity is privacy. Let’s say you forget your password. You need information about a person to reset it. There’s also this new piece of information out there that represents yet another piece of personally identifiable information. Why not just use a social security number? 
That would require a whole other episode to get into, but it’s not an option. Suddenly you need a date of birth, a phone number (for two-factor authentication), the status of whether a human is alive or not, possibly a driver’s license number, and maybe a Social Security Number in a table somewhere to communicate with the Social Security databases to update that whole alive status. It gets complicated fast. It’s no less private than voter databases that have already been hacked in previous elections, though. Some may argue to use biometric markers instead of all the previous whatnot. Take your crazy uncle Larry, who thinks the government already collects too much information about him and tells you so when he’s making off-color jokes. Yah, now tell him to scan his eyeball or fingerprint into the database. When he’s done laughing at you, he may show you why he has a concealed-carry permit. And then there’s ownership. No department within an organization I’ve seen wants to allow an identity project unless they get budget and permanent head count. And no team wants another team to own it. When bureaucracies fight, it takes time to come to the conclusion that a new bureaucracy needs to be formed if we’re going anywhere. Then the other bureaucracies make the life of the new one hard and thus slow down the whole process. Sometimes needfully, sometimes accidentally, and sometimes out of pure spite or bickering over power. The most logical bureaucracy in the federal government to own such a project would be the Social Security Administration or the Internal Revenue Service. Some will argue states should each have their own identity provider. We need one for taxes, social security, benefits, and entitlement programs. And by the way, we’re at a point in history when people move between states more than ever. If we’re going to protect federal and state elections, we need a centralized provider of identities. 
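As a toy illustration of what a federated identity provider hands a service provider, here is a minimal signed-assertion sketch. The names (mint_assertion, the demo secret) and the HMAC scheme are hypothetical stand-ins of my own; a real deployment would use OAuth, OpenID Connect, or SAML with proper key management, not this:

```python
import base64
import hashlib
import hmac
import json
import time

# In reality the provider would hold an asymmetric key pair; a shared
# secret keeps this sketch self-contained.
IDP_SECRET = b"demo-secret"


def mint_assertion(subject, audience, ttl=300):
    """Identity provider signs a short-lived claim about a user."""
    claims = {"sub": subject, "aud": audience, "exp": int(time.time()) + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(IDP_SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig


def verify_assertion(token, audience):
    """Service provider checks the signature without ever seeing a password."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(IDP_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                      # forged or corrupted token
    claims = json.loads(base64.urlsafe_b64decode(body.encode()))
    if claims["aud"] != audience or claims["exp"] < time.time():
        return None                      # wrong service, or expired
    return claims["sub"]
```

The point is the shape of the trust relationship: the service provider never stores credentials, it only verifies assertions from the centralized source.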
And this is going to sound crazy, but the federal government should probably just buy a company that already sells an IdP (like most companies would do if they wanted to build one) rather than contract with one or build their own. If you have to ask why, you’ve never tried to build one yourself or been involved in any large-scale software deployments or development operations at a governmental agency. I could write a book on each. There are newer types of options. You could roll with an IndieAuth identity provider, which is a decentralized approach, but that’s for logging into apps using Facebook or Apple or Google - use it to shop and game, not to vote. NIST should make the standards, FedRAMP should provide assessment, and we can loosely follow the model of the European self-sovereign identity framework, or ESSIF, but build on top of an existing stack so we don’t end up taking 20 years to get there. Organizations that can communicate with an identity provider are called service providers. Only FedRAMP-certified public entities should be able to communicate with a federal federated identity provider. Let’s just call it the FedIdP. Enough on the identity thing. Suffice it to say, it’s necessary if we’re to successfully go from trusting poll workers to being able to communicate online. And here’s the thing about all of this: confidence intervals. What I mean by this is that we have gone from being able to verify our votes in poll books and being able to see other people in our communities vote to trusting black boxes built by faceless people whose political allegiances are unknown. And as is so often the case when the technology fails us, rather than think through the next innovation we retreat back to the previous step in the technological cycle: if we get stuck at localized digitization, we retreat back to paper; if we get stuck taking those local repositories online, we retreat back to the localized digital repository. 
If we’re stuck at punch cards due to hanging chads, then we might have to retreat back to voice voting. Each has a lower confidence interval than a verifiable and transparent online alternative. And the chances of voter fraud by mail are still around 0.00006%, which puts the integrity of mail-in voting better than five nines. We need to move forward. It’s called progress. The laws of technological determinism are such that taking the process online is the next step. And it’s crucial for social justice. I’ve over-simplified what it will take. Anything done on a national scale is hard. And time consuming. So it’s a journey that should be begun now. In the meantime, there’s a DARPA prize. Given the involvement of a few key DARPA people with DEF CON and the findings on voting machine security (whether that’s computers being online and potentially fallible, physically hackable, or just plain bad), DARPA offered a prize to the organization that could develop a tamper-proof, open-source voting machine. I actually took a crack at this, not because I believed it to be a way to make money but because after the accusations of interference in the 2016 election I just couldn’t not. Ultimately I decided this could be solved with an app in single app mode, a printer to produce a hash and a GUID, and some microservices, but that the voting machine was the wrong place for the effort and that the effort should instead be put into taking voting online. Galois theory gives us a connection between field theory and group theory. You simplify field theory problems so they can be solved by group theory. And I’ve oversimplified the solution for this problem. But just as with studying the roots of polynomials, sometimes simplicity is elegance rather than hubris. In my own R&D efforts I struggle to understand when I’m exuding each. The 2020 election is forcing many to vote by mail. As with other areas that have not gotten the innovation they needed, we’re having to rethink a lot of things. And voting in person at a polling place should certainly be one. 
As should the cost of physically delivering those ballots and the human cost to get them entered. The election may or may not be challenged by luddites who refuse to see the technological determinism staring them in the face. This is a bipartisan issue. No matter who wins or loses, the other party will cry foul. It’s their job as politicians. But it’s my job as a technologist to point out that there’s a better way. The steps I outlined in this episode might be wrong. But if someone can point out a better way, I’d like to volunteer my time and focus to propelling it forward. And dear listener, think about this. When progress is challenged, what innovation can you bring, or contribute to, that helps keep us from retreating to increasingly analog methods? Herman Hollerith brought the punch card, which had been floating around since the Jacquard loom in 1801. Individuals like him moved technology forward in fundamental ways. In case no one ever told you, you have even better ideas locked away in your head. Thank you for letting them out. And thank you for tuning in to this episode of the History of Computing Podcast. We are so, so lucky to have you.
10/20/2020 • 32 minutes, 33 seconds
The Intergalactic Memo That Was The Seed Of The Internet
J.C.R. Licklider sent a memo called "Memorandum For Members and Affiliates of the Intergalactic Computer Network" in 1963 that is quite possibly the original spark that lit the bonfire called the ARPANET, the nascent beginning of what we now call the Internet. In the memo, “Lick,” as his friends called him, documented early issues in building out a time-sharing network of computers available to the research scientists of the early 60s. The memo is a bit long, so I’ll include quotes followed by explanations, or I guess you might call them interpretations. Let’s start with the second paragraph: The need for the meeting and the purpose of the meeting are things that I feel intuitively, not things that I perceive in clear structure. I am afraid that that fact will be too evident in the following paragraphs. Nevertheless, I shall try to set forth some background material and some thoughts about possible interactions among the various activities in the overall enterprise for which, as you may have detected in the above subject, I am at a loss for a name. Intuition, to me, is important. Lick had attended conferences on cybernetics and artificial intelligence going back to the 40s. He had been MIT faculty and was working for a new defense research organization. He was a visionary. The thing is, let’s call his vision a hypothesis. During the 1960s, the Soviets would attempt to build multiple networks similar to the ARPANET. Thing is, much like a modern product manager, he chunked the work up and had various small teams tackle parts of projects, each building a part but, on the whole, proving the theory in a decentralized way, as compared to Soviet projects that went all-in. A couple of paragraphs later, Lick goes on to state: In pursuing the individual objectives, various members of the group will be preparing executive the monitoring routines, languages amd [sic.] 
compilers, debugging systems and documentation schemes, and substantive computer programs of more or less general usefulness. One of the purposes of the meeting–perhaps the main purpose–is to explore the possibilities for mutual advantage in these activities–to determine who is dependent upon whom for what and who may achieve a bonus benefit from which activities of what other members of the group. It will be necessary to take into account the costs as well as the values, of course. Nevertheless, it seems to me that it is much more likely to be advantageous than disadvantageous for each to see the others’ tentative plans before the plans are entirely crystalized. I do not mean to argue that everyone should abide by some rigid system of rules and constraints that might maximize, for example, program interchangeability. Here, he’s acknowledging that stakeholders have different needs, goals and values, but stating that if everyone shared plans the outcome could be greater across the board. He goes on to further state that: But, I do think that we should see the main parts of the several projected efforts, all on one blackboard, so that it will be more evident than it would otherwise be, where network-wide conventions would be helpful and where individual concessions to group advantage would be most important. These days we prefer a whiteboard or maybe even a Miro board. But this act of visualization would let research from disparate fields, like Paul Baran at RAND working on packet switching at the time, be pulled in to think about how networks would look and work. 
While the government was providing money to different institutes, the research organizations were autonomous. And by having each node able to operate on its own rather than employing a centralized approach, the network could be built such that signals could travel along multiple paths in case one path broke down - thus getting at the heart of the matter: having a network that could survive a nuclear attack provided some link or links survived. He then goes on to state: It is difficult to determine, of course, what constitutes “group advantage.” Even at the risk of confusing my own individual objectives (or ARPA’s) with those of the “group,” however, let me try to set forth some of the things that might be, in some sense, group or system or network desiderata. This is important. In this paragraph he acknowledges his own motive, but sets up a value proposition for the readers. He then goes on to lay out a future that includes an organization like what we now use the IETF for in: There will be programming languages, debugging languages, time-sharing system control languages, computer-network languages, data-base (or file-storage-and-retrieval languages), and perhaps other languages as well. It may or may not be a good idea to oppose or to constrain lightly the proliferation of such. However, there seems to me to be little question that it is desireable to foster “transfer of training” among these languages. One way in which transfer can be facilitated is to follow group consensus in the making of the arbitrary and nearly-arbitrary decisions that arise in the design and implementation of languages. There would be little point, for example, in having a diversity of symbols, one for each individual or one for each center, to designate “contents of” or “type the contents of.” The IETF and IEEE now manage the specifications that lay out the structure that controls protocols and hardware, respectively. 
The early decisions made were for a small collection of nodes on the ARPANET, and as the nodes grew and the industry matured, protocols began to be defined very specifically, such as DNS, covered in the what, second episode of this podcast. It’s important to note that Lick didn’t yet know what he didn’t know, but he knew that if things worked out, these governing bodies would need to emerge in order to keep splinter nets to a minimum. At the time, though, they weren’t thinking much of network protocols. They were speaking of languages, but he then goes on to lay out a network-control language, which would emerge as protocols. Is the network control language the same thing as the time-sharing control language? (If so, the implication is that there is a common time-sharing control language.) Is the network control language different from the time-sharing control language, and is the network-control language common to the several netted facilities? Is there no such thing as a network-control language? (Does one, for example, simply control his own computer in such a way as to connect it into whatever part of the already-operating net he likes, and then shift over to an appropriate mode?) In the next few paragraphs he lays out a number of tasks that he’d like to accomplish - or at least that he can imagine others would like to accomplish - such as writing programs to run on computers, accessing files over the net, or reading in teletypes remotely. And he lays out storing photographs on the internet and running applications remotely, much the way we do with microservices today. He refers to information retrieval, searching for files based on metadata, natural language processing, accessing research from others, and bringing programs into a system from a remote repository, much as we do with CPAN, Python imports, and GitHub today. Later, he looks at how permissions will be important on this new network: here is the problem of protecting and updating public files. 
I do not want to use material from a file that is in the process of being changed by someone else. There may be, in our mutual activities, something approximately analogous to military security classification. If so, how will we handle it? It turns out that the first security issues arose because of eased restrictions on resources. Whether that was viruses, spam, or just accessing protected data. Keep in mind, the original network was to facilitate research during the Cold War. Can’t just have commies accessing raw military research, can we? As we near the end of the memo, he says: The fact is, as I see it, that the military greatly needs solutions to many or most of the problems that will arise if we tried to make good use of the facilities that are coming into existence. Again, it was meant to be a military network. It was meant to be resilient and withstand a nuclear attack. That had already been discussed in meetings before this memo. Here, he’s shooting questions to stakeholders. But consider the name of the memo: Memorandum For Members and Affiliates of the Intergalactic Computer Network. Not “A” network but “the” network. And not just any network, but THE Intergalactic Network. Sputnik had been launched in 1957. The next year we got NASA. Eisenhower then began the process that resulted in the creation of ARPA to do basic research so the US could leapfrog the Soviets. The Soviets had beaten the US to a satellite by using military rocketry to get to space. The US chose to use civilian rocketry and so set a standard that space (other than the ICBMs) would be outside the Cold War. Well, ish. But here, we were mixing military and civilian research in the hallowed halls of universities. We were taking the best and brightest and putting them into the employ of the military without putting them under the control of the military. 
A relationship that worked well until the Mansfield Amendment to the 1970 Military Authorization Act ended the military funding of research that didn’t have a direct or apparent relationship to a specific military function. What happened between when Lick started handing out grants to people he trusted and that act would change the course of the world and allow the US to do what the Soviets and other countries had been tinkering with: effectively develop a nationwide link of computers to provide for one of the biggest eras of collaborative research the world has ever seen. What the world wanted was an end to violence in Vietnam. What they got was a transfer of technology from the military-industrial complex to corporate research centers like Xerox PARC, Digital Equipment Corporation, and others. Lick then goes on to wrap the memo up: In conclusion, then, let me say again that I have the feeling we should discuss together at some length questions and problems in the set to which I have tried to point in the foregoing discussion. Perhaps I have not pointed to all the problems. Hopefully, the discussion may be a little less rambling than this effort that I am now completing. The researchers would continue to meet. They would bring the first node of the ARPANET online in 1969. In that time they’d also help fund research such as the NLS, or oN-Line System. That eventually resulted in mainstreaming the graphical user interface and the mouse. Lick would found the Information Processing Techniques Office and launch Project MAC, the first big, serious research into personal computing. They’d fund Transit, an important navigation system that ran until 1996, when it was replaced by GPS. They built Shakey the robot. And yes, they did a lot of basic military research as well. And today, modern networks are intergalactic. A bunch of nerds did their time planning and designing and took UCLA online, then the Stanford Research Institute, then UCSB, and then a PDP-10 at the University of Utah. 
Four nodes, four types of computers, four operating systems. Leonard Kleinrock and the next generation would then take the torch and bring us into the modern era. But that story is another episode. Or a lot of other episodes. We don’t have a true Cold War today. We do have some pretty intense rhetoric. And we have a global pandemic. Kinda’ makes you wonder what basic research is being funded today and how that will shape the world in the next 57 years, the way this memo has shaped the world. Or, given that there were programs in the Soviet Union and other countries to do something similar, was it really a matter of technological determinism? Not to take anything away from the hard work put in at ARPA and abroad. But for me at least, the jury is still out on that. But I don’t have any doubt that the next wave of changes will be even more impactful. Crazy to think, right?
10/12/2020 • 15 minutes, 25 seconds
Cybernetics
The prefix “cyber” is pretty common in our vernacular today. Actually, it was in the 90s; now it seems reserved mostly for governmental references. But that prefix has a rich history. We got cyborg in 1960 from Manfred Clynes and Nathan S. Kline. And X-Men issue 48 in 1968 introduced a race of robots called Cybertrons, likely the inspiration for the name of the planet the Transformers would inhabit as they morphed from the Japanese Microman and Diaclone toys. We got cyberspace from William Gibson in 1982 and cyberpunk from the underground art scene in the 1980s. We got cybersex in the mid-90s with AOL. The term cybercrime rose to prominence in that same timeframe, being formalized in use by the G8 Lyons Group on High-Tech Crime. And we get cybercafes, cyberstalking, cyberattack, cyberanarchism, cyberporn, and even cyberphobia. All of those sound kinda’ ick. And so today, the word cyber is used to prefix a meaning around the culture of computers, information technology, and virtual reality, and the meaning is pretty instantly identifiable. But where did it come from? The word is actually short for cybernetic, which is Greek for skilled in steering or governing. And cybernetics is a multi-disciplinary science, or pseudo-science depending on who you talk to, that studies systems. And it’s defined in its truest form by the original 1948 definition from the author who pushed it into the mainstream, Norbert Wiener: “the scientific study of control and communication in the animal and the machine.” Aaaactually, let’s back up a minute. French physicist André-Marie Ampère coined the term cybernétique in 1834, in his attempt to classify human knowledge. His work on electricity and magnetism would result in studies that would earn him the honor of having the amp named after him. 
But jump forward to World War II. After huge strides in General Systems Theory and negative feedback loops and the amazing work done at Bell Labs, we got MIT’s Jay Forrester (who would invent magnetic-core computer memory) and Gordon Brown, who defined automatic-feedback control systems and solidified servomechanisms, or servos, in engineering, applying systems thinking all over the place. That also resulted in Forrester applying the thinking to management, leading to his system dynamics work at the MIT Sloan School of Management. And Deming applied these concepts to process, resulting in Total Quality Management, which has been a heavy influence on what we call Six Sigma today. And John Boyd would apply systems thinking and feedback loops to military strategy. So a lot of people around the world were taking a deeper look at process and feedback and loops and systems in general. During World War II, systems thinking was on the rise. And seeing the rise of the computer, Norbert Wiener worked on anti-aircraft guns and was looking into what we now call information theory at about the same time Claude Shannon was. Whereas Claude Shannon went on to formalize Information Theory, Wiener formalized his work as cybernetics. He had published “A Simplification of the Logic of Relations” in 1914, so he wasn’t new to this philosophy of melding systems and engineering. But things were moving quickly. ENIAC had gone live in 1946. Claude Shannon published a paper in 1948 that would emerge as a book called “A Mathematical Theory of Communication” by 1949. So Wiener published his book, Cybernetics: Or Control and Communication in the Animal and the Machine, in 1948. And Donald MacKay was releasing his book on multiplication and division by electronic analogue methods in 1948 in England. Turing’s now famous work during World War II had helped turn the tides, and after the war he was working on the Automatic Computing Engine. 
John von Neumann had gone from developing game theory to working on the Manhattan Project and nuclear bombs, and from working with ENIAC to working on computing at Princeton and starting to theorize on cellular automata. J.C.R. Licklider was just discovering the computer while working on psychoacoustics research at Harvard - work that would propel him to become the Johnny Appleseed of computing and the instigator at the center of what we now call the Internet and personal computers. Why am I mentioning so many of the great early thinkers in computing? Because while Wiener codified it, he was not alone responsible for cybernetics. In fact, Cybernetics was the name of a set of conferences held from 1946 to 1953 and organized by the Josiah Macy, Jr. Foundation. Those conferences, that foundation, and the principles that sprang from them and went around the world are far more influential on Western computing in the 50s and 60s than they are usually given credit for. All of the people mentioned, and dozens of others responsible for so many massive, massive discoveries, were at those conferences and in the clubs around the world that sprang up from their alumni. They were looking for polymaths who could connect dots and deep thinkers in specialized fields to bring science forward through an interdisciplinary lens. In short, we had gone beyond a time when a given polymath could excel at various aspects of the physical sciences and into a world where we needed brilliant specialists connected with those polymaths to gain quantum leaps in one discipline, effectively from another. And so Wiener took his own research, sprinkled in bits from others, and formalized cybernetics in his groundbreaking book. From there, nearly every discipline integrated the concept of feedback loops. Plato, to whom the concept can be traced back, would have been proud. And from there, the influence was massive. The Cold War military-industrial-university complex was coming into focus. 
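That negative feedback loop at the heart of cybernetics can be shown in a few lines: a controller measures the error between a setpoint and the system's output and feeds a correction back in. The thermostat-style numbers below are arbitrary illustrations of the idea, not anything from Wiener's book:

```python
def regulate(setpoint, reading, gain=0.5):
    """Proportional control: the correction opposes the error."""
    error = setpoint - reading
    return gain * error


# A room at 10 degrees, a thermostat set to 20: each pass through the
# loop feeds the measured error back in, and the error shrinks.
temperature = 10.0
for _ in range(20):
    temperature += regulate(setpoint=20.0, reading=temperature)
# temperature has converged toward the 20.0 setpoint
```

The same shape - measure, compare, correct - is what servomechanisms, anti-aircraft gun directors, and Deming's process loops all share.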
Paul Baran from RAND would read McCulloch and Pitts’ work on cybernetics and neural nets and use it as inspiration for packet switching. That work and the work of many others in the field are now the basis for how computers communicate with one another. The Soviets, beginning with Glushkov, would hide cybernetics and dig it up from time to time, restarting projects to network their cities and automate the command-and-control economy. Second-order cybernetics would emerge to address observing systems, and third-order cybernetics would emerge as applied cybernetics from the first and second orders. We would get system dynamics, behavioral psychology, cognitive psychology, organizational theory, neuropsychology, and the list goes on. The book would go into a second edition in 1965. While at MIT, Wiener was also influential in early theories around robotics and automation. Applied cybernetics. But at the Dartmouth workshop in 1956, John McCarthy, along with Marvin Minsky and Claude Shannon, would effectively split the field into what they called artificial intelligence. The book Emergence is an excellent look at applying the philosophies to ant colonies and analogizing what human enterprises can extract from that work. Robotics is made possible by self-correcting mechanisms in the same way learning organizations and self-organization factor in. Cybernetics led to control theory, dynamic systems, and even chaos theory. We’ve even grown to bring biocybernetics into ecology, and into synthetic and systems biology. Engineering and even management. The social sciences have been heavily inspired by cybernetics. Attachment theory, the cognitive sciences, and psychovector analysis are areas where psychology took inspiration. Sociology, architecture, law. The list goes on. And still, we use the term artificial intelligence a lot today. 
This is because we are more focused on productivity gains and the truths the hard sciences can tell us with statistical modeling than with the feedback loops and hard study we can apply to correcting systems. I tend to think this is related to what we might call “trusting our guts.” Or just moving so fast that it’s easier to apply a simplistic formula to an array to find a k-nearest neighbor than it is to truly analyze patterns and build feedback loops into our systems. It’s easier to do things because “that’s the way we’ve always done it” than to set our ego to the side and look for more efficient ways. That is, until any engineer on a production line at a Toyota factory can shut the whole thing down due to a defect. But even then it’s easier to apply principles from lean manufacturing than to truly look at our own processes, even if we think we’re doing so by implementing the findings from another. I guess no one ever said organizational theory was easy. And so whether it’s the impact on the Internet, the revolutions inspired in the applied sciences, or just that Six Sigma Black Belt we think we know, we owe Wiener and all of the others involved in the early and later days of Cybernetics a huge thank you. The philosophies they espoused truly changed the world. And so think about this. The philosophies of Adam Smith were fundamental to a new world order in economics. At least, until Marx inspired Communism and the Great Depression inspired English economist John Maynard Keynes to give us Keynesian economics. Which is still applied to some degree, although one could argue incorrectly, as with stimulus checks when compared to the New Deal. Necessity is the mother of invention. So what are the new philosophies emerging from the hallowed halls of academia? Or from the rest of the world at large? What comes after Cybernetics and Artificial Intelligence? 
Is a tough economy when we should expect the next round of innovative philosophy, one that could then be applied to achieve the same kinds of productivity gains we got out of the digitization of the world? Who knows. But I’m an optimist that we can get inspired - or I wouldn’t have asked. Thank you for tuning in to this episode of the History of Computing Podcast. We are so lucky to have you. Have a great day.
10/8/2020 • 12 minutes, 50 seconds
PGP and the First Amendment
I was giving a talk at DefCon one year and this guy starts grilling me at the end of the talk about the techniques Apple was using to encrypt home directories at the time with new technology called FileVault. It went on a bit, so I did that thing you sometimes have to do when it’s time to get off stage and told him we’d chat after. And of course he came up - and I realized he was really getting at the mechanism used to decrypt and the black box around decryption. He knew way more than I did about encryption so I asked him who he was. When he told me, I was stunned. It was Phil Zimmerman, the creator of PGP. Turns out that like me, he enjoyed listening to A Prairie Home Companion. And on that show, Garrison Keillor would occasionally talk about Ralph’s Pretty Good Grocery in a typical Minnesota hometown he’d made up for himself called Lake Wobegon. Zimmerman liked the name and so called his new encryption tool PGP, short for Pretty Good Privacy. It was originally written to encrypt messages being sent to bulletin boards. That original tool didn’t require any special license, provided it wasn’t being used commercially. And today, much to the chagrin of the US government at the time, it’s been used all over the world to encrypt emails, text files, text messages, directories, and even disks. But we’ll get to that in a bit. Zimmerman had worked for the Nuclear Weapons Freeze Campaign in the 80s after getting a degree in computer science from Florida Atlantic University in 1978. And after seeing the government infiltrate organizations organizing Vietnam protests, he wanted to protect the increasingly electronic communications of anti-nuclear protests and activities. The world was just beginning to wake up to a globally connected Internet. And the ARPAnet had originally been established by the military industrial complex, so it was understandable that he’d want to keep messages private that just happened to be flowing over a communications medium that many in the defense industry knew well. 
So he started developing his own encryption algorithm called BassOmatic in 1988. That cipher used symmetric keys with control bits and pseudorandom number generation as a seed - resulting in 8 permutation tables. He named BassOmatic after a Saturday Night Live skit. I like him more and more. He’d replace BassOmatic with IDEA in version 2 in 1992. And thus began the web of trust, which survives to this day in PGP, OpenPGP, and GnuPG. Here, a message is considered authentic based on it being bound to a public key - one issued in a decentralized model, with no certificate authority required. Each user generates a public and private key pair, and messages can only be decrypted or signed with the private key. Back then you would show your ID to someone at a key signing event or party in order to get your key signed. Public keys could then be used to check that the individual you thought was the signer really is. Once verified, a separate key could be used to encrypt messages between the parties. But by then, there was a problem. The US government began a criminal investigation against Zimmerman in 1993. You see, the encryption used in PGP was too good. Anything over a 40 bit encryption key was subject to US export regulations as a munition. Remember, the Cold War. And PGP used 128 bit keys at a minimum. So Zimmerman did something that the government wasn’t expecting. Something that would make him a legend. He went to MIT Press and published the PGP source code in a physical book. Now, you could OCR the source code and run it through a compiler. Suddenly, his code was protected as an exportable book by the First Amendment. The government dropped the investigation and found something better to do with their time. And from then on, source code for cryptographic software became an enabler of free speech, which has been held up repeatedly in the appellate courts. So 1996 comes along and PGP 3 is finally available. This is when Zimmerman founded PGP as a company so they could focus on PGP full-time. 
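To make the public and private key idea concrete, here’s a toy RSA-style sketch in Python. The primes, exponent, and message are illustrative assumptions of mine - real PGP pairs a fast symmetric cipher like IDEA with much larger asymmetric keys, and tiny primes like these offer no security at all.

```python
# Toy illustration of the asymmetric-key idea behind PGP's web of trust.
# Anyone holding the public key can encrypt (or verify a signature);
# only the private key holder can decrypt (or sign).

def make_keypair(p, q, e=17):
    """Build an RSA-style keypair from two (toy-sized) primes."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)           # private exponent: e * d = 1 (mod phi)
    return (e, n), (d, n)         # (public key, private key)

def encrypt(message, key):
    e, n = key
    return pow(message, e, n)

def decrypt(ciphertext, key):
    d, n = key
    return pow(ciphertext, d, n)

public, private = make_keypair(61, 53)
c = encrypt(42, public)           # anyone with the public key can encrypt
assert decrypt(c, private) == 42  # only the private key holder can decrypt

# Signing is the reverse: transform with the private key, check with the public
sig = pow(42, private[0], private[1])
assert pow(sig, public[0], public[1]) == 42
```

The web of trust then layers social verification on top: signing someone’s public key at a key party vouches that the key really belongs to them.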
Due to a merger with Viacrypt they jumped to PGP 5 in 1997. Towards the end of 1997 Network Associates acquired PGP and they expanded to add things like intrusion detection, full disk encryption, and even firewalls. Under Network Associates they stopped publishing their source code and Zimmerman left in 2001. Network Associates couldn’t really find the right paradigm and so merged some products together, and what was PGP command line ended up becoming McAfee E-Business Server in 2013. But by 2002 PGP Corporation was born out of a few employees securing funding from Rob Theis to help start the company and buy the rest of the PGP assets from Network Associates. They managed to grow it enough to sell it for $300 million to Symantec and PGP lives on to this day. But I never felt like they were in it just for the money. The money came from a centralized policy server that could do things like escrow keys. But for that core feature of encrypting emails and later disks, I really always felt like they wanted a lot of that free. And while you can buy Symantec Encryption Desktop and command it from a server, S/MIME and OpenPGP live on in ways that real humans can encrypt their communications - some of them in areas where their messages might get them thrown in jail. By the mid-90s, mail wasn’t just about the text in a message. It was more. RFC 934 in 1985 had started the idea of encapsulating messages so you could get metadata. RFC 1521 in 1993 formalized MIME and by 1996, MIME was getting really mature in RFC 2045. But by 1999 we wanted more and so S/MIME went out as RFC 2633. Here, we could use CMS to “cryptographically enhance” a MIME body. In other words, we could suddenly encrypt more than the text of an email, and since it was an accepted Internet standard, it could be encrypted and decrypted with standard mail clients rather than just with a PGP client that didn’t have all the bells and whistles of pretty email clients. 
That included signing information, which by 2004 would evolve to include attributes for things like signingTime, SMIMECapabilities, algorithms and more. Today, iOS can use S/MIME and keys can be stored in Exchange or Office 365, and that’s compatible with any other mail client that has S/MIME support, making it easier than ever to get certificates, sign messages, and encrypt messages. Much of what PGP was meant for is also available in OpenPGP. OpenPGP is defined by the OpenPGP Working Group and you can see the names of some of these guardians of privacy in RFC 4880 from 2007. Names like J. Callas, L. Donnerhacke, H. Finney, D. Shaw, and R. Thayer. Despite the corporate acquisitions, the money, the reprioritization of projects, these people saw fit to put powerful encryption into the hands of real humans - and once that Pandora’s box had been opened and the First Amendment was protecting that encryption as free speech, to keep it that way. Use Apple Mail? GPGTools puts all of this in your hands. Use Android? Get FairEmail. Use Windows? Grab EverDesk. This specific entry felt a little timely. Occasionally I hear senators tell companies they need to leave backdoors in products so the government can decrypt messages. And a terrorist forces us to rethink that basic idea of whether software that enables encryption is protected by freedom of speech. Or we choose to attempt to ban a company like WeChat, testing whether foreign entities who publish encryption software are also protected. Especially when you consider whether Tencent is harvesting user data or if the idea they are doing that is propaganda. For now, US courts have halted a ban on WeChat. Whether it lasts is one of the more intriguing things I’m personally watching these days, despite whatever partisan rhetoric gets spewed from either side of the aisle, simply for the refinement to the legal interpretation that to me began back in 1993. 
After over 25 years we still continue to evolve our understanding of what truly open and peer reviewed cryptography being in the hands of all of us actually means to society. The inspiration for this episode was a debate I got into about whether the framers of the US Constitution would have considered encryption, especially in the form of open source public and private key encryption, to be free speech. And it’s worth mentioning that Washington, Franklin, Hamilton, Adams, and Madison all used ciphers to keep their communications private. And for good reason, as they knew what could happen should their communications be leaked, given that Franklin had actually leaked private communications when he was the postmaster general. Jefferson even developed his own wheel cipher, which was similar to the one the US Army used in 1922. It comes down to privacy. The Constitution does not specifically call out privacy; however, the First Amendment guarantees the privacy of belief, the Third, the privacy of home, the Fourth, privacy against unreasonable search, and the Fifth, privacy of personal information in the form of the privilege against self-incrimination. And giving away a private key is potentially self-incrimination. Further, the Ninth Amendment has broadly been defined as the protection of privacy. So yes, it is safe to assume they would have supported the transmission of encrypted information and therefore the cipher used to encrypt to be a freedom. Arguably the contents of our phones are synonymous with the contents of our homes though - and if you can have a warrant for one, you could have a warrant for both. The difference is you have to physically come to my home to search it - whereas a foreign government with the same keys might be able to decrypt other data. Potentially without someone knowing what happened. 
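As an aside, the wheel cipher idea is simple enough to sketch. This is a hypothetical Python reconstruction of the concept, not Jefferson’s actual device: each wheel carries a scrambled alphabet, the sender lines the plaintext up in one row and transmits a different row, and the receiver, owning identically ordered wheels, reverses the shift. In the real device the receiver scans all the rows for the one that reads as sense; a fixed offset stands in for that step here, and the seed and offset values are made up for illustration.

```python
import random
import string

# Toy Jefferson-style wheel cipher: the shared secret is the ordering
# of the letters on each wheel (here derived from a seed).

def make_wheels(n, seed=1789):
    rng = random.Random(seed)
    wheels = []
    for _ in range(n):
        wheel = list(string.ascii_uppercase)
        rng.shuffle(wheel)
        wheels.append(wheel)
    return wheels

def encrypt(plaintext, wheels, offset=6):
    # Find each letter on its wheel, then read the row `offset` below it
    return "".join(wheel[(wheel.index(ch) + offset) % 26]
                   for ch, wheel in zip(plaintext, wheels))

def decrypt(ciphertext, wheels, offset=6):
    return "".join(wheel[(wheel.index(ch) - offset) % 26]
                   for ch, wheel in zip(ciphertext, wheels))

wheels = make_wheels(6)
c = encrypt("ATTACK", wheels)
assert decrypt(c, wheels) == "ATTACK"
```

Note how the same plaintext letter encrypts differently depending on its position, since every wheel is scrambled differently - the property that made the device far stronger than a simple substitution cipher.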
The Electronic Communications Privacy Act of 1986 helped with protections. But with more and more data residing in the cloud - or, as with our mobile devices, synchronized with the cloud - and with potentially harmful data about people around the globe residing in (or being analyzed by people in) countries that might not share the same ethics, it’s becoming increasingly difficult to know the difference between keeping our information private, which the framers would likely have supported, and keeping people safe. Jurisprudence has never kept up with the speed of technological progress, but I’m pretty sure that Jefferson would have liked to have shared a glass of his favorite drink, wine, with Zimmerman. Just as I’m pretty sure I’d like to share a glass of wine with either of them. At DefCon or elsewhere!
9/28/2020 • 14 minutes, 17 seconds
1996: A Declaration of the Independence of Cyberspace
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to cover a paper by one of the more colorful characters in the history of computing. John Perry Barlow wrote songs for the Grateful Dead, ran a cattle ranch, was a founder of the Electronic Frontier Foundation, was a founder of the Freedom of the Press Foundation, was a fellow emeritus at Harvard, and was an early Internet pioneer. A bit of an old-school libertarian, he believed the Internet should be free. And to this end, he published an incredibly influential paper in Davos, Switzerland in 1996. That paper did as much during the foundational years of the still-nascent Internet as anything else. And so here it is. ————— A Declaration of the Independence of Cyberspace Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather. We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear. Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions. 
You have not engaged in our great and gathering conversation, nor did you create the wealth of our marketplaces. You do not know our culture, our ethics, or the unwritten codes that already provide our society more order than could be obtained by any of your impositions. You claim there are problems among us that you need to solve. You use this claim as an excuse to invade our precincts. Many of these problems don't exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different. Cyberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live. We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity. Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here. Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge. Our identities may be distributed across many of your jurisdictions. The only law that all our constituent cultures would generally recognize is the Golden Rule. We hope we will be able to build our particular solutions on that basis. But we cannot accept the solutions you are attempting to impose. 
In the United States, you have today created a law, the Telecommunications Reform Act, which repudiates your own Constitution and insults the dreams of Jefferson, Washington, Mill, Madison, DeToqueville, and Brandeis. These dreams must now be born anew in us. You are terrified of your own children, since they are natives in a world where you will always be immigrants. Because you fear them, you entrust your bureaucracies with the parental responsibilities you are too cowardly to confront yourselves. In our world, all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits. We cannot separate the air that chokes from the air upon which wings beat. In China, Germany, France, Russia, Singapore, Italy and the United States, you are trying to ward off the virus of liberty by erecting guard posts at the frontiers of Cyberspace. These may keep out the contagion for a small time, but they will not work in a world that will soon be blanketed in bit-bearing media. Your increasingly obsolete information industries would perpetuate themselves by proposing laws, in America and elsewhere, that claim to own speech itself throughout the world. These laws would declare ideas to be another industrial product, no more noble than pig iron. In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish. These increasingly hostile and colonial measures place us in the same position as those previous lovers of freedom and self-determination who had to reject the authorities of distant, uninformed powers. We must declare our virtual selves immune to your sovereignty, even as we continue to consent to your rule over our bodies. We will spread ourselves across the Planet so that no one can arrest our thoughts. We will create a civilization of the Mind in Cyberspace. 
May it be more humane and fair than the world your governments have made before. ——— Thank you to John Perry Barlow for helping keep the Internet as de-regulated as it can be. Today, as we are overwhelmed by incorrect tweets (no matter what side of the political aisle you fall on), disinformation, and political manipulation, we have to rethink this foundational concept. And I hope we keep coming back to the same realization - the government has no sovereignty where we gather. Thank you for tuning in to this episode of the History of Computing Podcast. We are so, so lucky to have you. Have a great day.
9/15/2020 • 8 minutes, 57 seconds
Claude Shannon and the Origins of Information Theory
The name Claude Shannon has come up 8 times so far in this podcast. More than any other single person. We covered George Boole and the concept that Boolean is a 0 and a 1 and that using Boolean algebra you can abstract simple circuits into practically any higher level concept. And Boolean algebra had been used by a number of mathematicians to perform some complex tasks. Including by Lewis Carroll in Through The Looking Glass to make words into math. And binary had effectively been used in Morse code to enable communications over the telegraph. But it was Claude Shannon who laid the foundation for a theory that combined the concept of communicating over the telegraph with Boolean algebra to make a higher level of communication possible. And it all starts with bits, which we can thank Shannon for. Shannon grew up in Gaylord, Michigan. His mother was a high school principal and his grandfather had been an inventor. He built a telegraph as a child, using a barbed wire fence. But barbed wire isn’t the greatest conductor of electricity and so… noise. And thus information theory began to ruminate in his mind. He went off to the University of Michigan and got a bachelor’s in electrical engineering and another in math. A perfect combination for laying the foundation of the future. And he got a job as a research assistant to Vannevar Bush, who wrote the seminal paper, As We May Think. At that time, Bush was working at MIT on The Thinking Machine, or Differential Analyzer. This was before World War II and they had no idea, but their work was about to reshape everything. At the time, what we think of as computers today were electro-mechanical. They had gears that were used for the more complicated tasks, and switches, used for simpler tasks. Shannon devoted his master’s thesis to applying Boolean algebra, thus getting rid of the wheels, which moved slowly, and allowing the computer to go much faster. 
He broke down Boole’s Laws of Thought into a form that could be applied to relay and switching circuitry. That paper, A Symbolic Analysis of Relay and Switching Circuits, came in 1937 and helped set the stage for the Hackers revolution that came shortly thereafter at MIT. At the urging of Vannevar Bush, he got his PhD with a thesis applying a similar algebra to theoretical genetics, theorizing that you could break the genetic code down into a matrix. Watson and Crick would describe the double helix structure of DNA in 1953, with Rosalind Franklin’s X-ray crystallography capturing the first photo of the structure, and George Gamow would go on to theorize about how the code might be read. He headed off to Princeton in 1940 to work at the Institute for Advanced Study, where Einstein and von Neumann were. He quickly moved over to the National Defense Research Committee, as the world was moving towards World War II. A lot of computing was going into making projectiles, or bombs, more accurate. He co-wrote a paper called Data Smoothing and Prediction in Fire-Control Systems during the war. He’d gotten a primer in early cryptography, reading The Gold-Bug by Edgar Allan Poe as a kid. And it struck his fancy. So he started working on theories around cryptography, everything he’d learned forming into a single theory. He would have lunch with Alan Turing during the war. And it was around this work that he first coined the term “information theory” in 1945. A universal theory of communication gnawed at him and formed during this time, from the Institute, to the National Defense Research Committee, to Bell Labs, where he helped encrypt communications between world leaders. He hid it from everyone, including failed relationships. He broke information down into the smallest possible unit, a bit, short for a binary digit. He worked out how to compress information that was most repetitive. Similar to how Morse code compressed the number of taps on the electrical wire by making the most common letters the shortest to send. 
Eliminating redundant communications established what we now call compression. Today we use the term lossless compression frequently in computing. He worked out that the minimum amount of information to send would be H = -Σ p_i log2 p_i, summing over the probability p_i of each symbol - or entropy. His paper, put out while he was at Bell, was called “A Mathematical Theory of Communication” and came out in 1948. You could now change any data to a zero or a one and then compress it. Further, he had to find a way to calculate the maximum amount of information that could be sent over a communication channel before it became garbled, due to loss. We now call this the Shannon Limit. And so once we have that, he derived how to analyze information with math to correct for noise. That barbed wire fence could finally be useful. This would be used in all modern information connectivity. For example, when I took my Network+ we spent an inordinate amount of time learning about Carrier-sense multiple access with collision detection (CSMA/CD) - a media access control (MAC) method that used carrier-sensing to defer transmissions until no other stations are transmitting. And as his employer, Bell Labs helped shape the future of computing. Along with Unix, C, C++, the transistor, and the laser, information theory is a less tangible discovery - yet, given what we all have in our pockets or on our wrists these days, maybe a more tangible one. Having mapped the limits, Bell started looking to reach the limit. And so the digital communication age was born when the first modem would come out of his former employer, Bell Labs, in 1958. And just across the way in Boston, ARPA would begin working on the first Interface Message Processor in 1967, the humble beginnings of the Internet. His work done, he went back to MIT. His theories were applied to all sorts of disciplines. But he comes in less and less. Over time we started placing bits on devices. We started retrieving those bits. We started compressing data. Digital images, audio, and more. 
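Shannon’s entropy measure is easy enough to play with yourself. A quick sketch, assuming nothing beyond the Python standard library: count symbol frequencies in a message and compute H = -Σ p_i log2 p_i, the minimum average number of bits per symbol any lossless compressor can achieve.

```python
from collections import Counter
from math import log2

def entropy(message):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Two equiprobable symbols carry exactly one bit each...
assert entropy("abab") == 1.0
# ...while a perfectly predictable message carries none.
assert entropy("aaaa") == 0.0
# Skewed frequencies fall in between - that gap is exactly
# the redundancy a lossless compressor can squeeze out.
assert 0 < entropy("aab") < log2(3)
```

This is why Morse code’s short dot for the common letter E was already on the right track: frequent symbols should cost fewer bits.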
It would take 35 or so years for all of that to fully come to pass. He consulted with the NSA on cryptography. In 1949 he published Communication Theory of Secrecy Systems, which pushed cryptography to the next level. His paper Prediction and Entropy of Printed English in 1951 practically created the field of natural language processing, which evolved into various branches of machine learning. He helped give us the Nyquist–Shannon sampling theorem, used in aliasing, deriving maximum throughput, RGB, and of course signal to noise. He loved games. He theorized the Shannon Number, or the game-tree complexity of chess. In case you’re curious, the reason Deep Blue can win at chess is that it can brute force its way through a meaningful slice of those 10 to the 120th power possible games. His love of games continued and in 1949 he presented Programming a Computer for Playing Chess. That was the first time we thought about computers playing chess. And he’d have a standing bet that a computer would beat a human grand master at chess by 2001. Garry Kasparov lost to Deep Blue in 1997. That curiosity extended far beyond chess. He would make Theseus in 1950 - a mechanical mouse that learned how to escape a maze, using relays from phone switches. One of the earliest forms of machine learning. In 1961 he would co-invent the first wearable computer to help win a game of roulette. That same year he designed the Minivac 601 to help teach how computers worked. So we’ll leave you with one last bit of information. Shannon’s maxim is that “the enemy knows the system.” I used to think it was just a shortened version of Kerckhoffs’s principle, which is that it should be possible to understand a cryptographic system, for example modern public key ciphers, but not be able to break the encryption without a private key. Thing is, the more I know about Shannon the more I suspect that what he was really doing was giving the principle a broader meaning. So think about that as you try and decipher what is and what is not disinformation in such a noisy world. 
Lots and lots of people would carry on the great work in information theory. Like Kullback–Leibler divergence, or relative entropy. And we owe them all our thanks. But here’s the thing about Shannon: math. He took things that could have easily been theorized - and he proved them. Because science can refute disinformation. If you let it.
9/9/2020 • 11 minutes, 27 seconds
A Retrospective On Google, On Their 22nd Birthday
We are in strange and uncertain times. The technology industry has always managed to respond to strange and uncertain times with incredible innovations that lead to the next round of growth. Growth that often comes with much higher rewards and leaves the world in a state almost unimaginable in previous iterations. The last major inflection point for the Internet, and computing in general, was when the dot com bubble burst. The companies that survived that time in the history of computing and stayed true to their course sparked the Web 2.0 revolution. And their shareholders were rewarded: exits and valuations that were in the millions in the dot com era went into the billions in the Web 2.0 era. None as iconic as Google. They finally solved how to make money at scale on the Internet and in the process validated that search was a place to do so. Today we can think of Google, or the resulting parent Alphabet, as a multi-headed hydra. The biggest of those heads is Search, which includes AdWords and AdSense. But Google has long since stopped being a one-trick pony. They also include Google Apps, Google Cloud, Gmail, YouTube, Google Nest, Verily, self-driving cars, mobile operating systems, and one of the more ambitious, Google Fiber. But how did two kids going to Stanford manage to become the third US company to be valued at a trillion dollars? Let’s go back to 1998. The Big Lebowski, Fear and Loathing in Las Vegas, There’s Something About Mary, The Truman Show, and Saving Private Ryan were in the theaters. Puff Daddy hadn’t transmogrified into P Diddy. And Usher had three songs in the Top 40. Boyz II Men, Backstreet Boys, Shania Twain, and Third Eye Blind couldn’t be avoided on the airwaves. They’re now pretty much relegated to 90s disco nights. But technology offered a bright spot. 
We got the first MP3 player, the Apple Newton, the Intel Celeron and Xeon, the Apple iMac, MySQL, v.90 modems, StarCraft, and two Stanford students named Larry Page and Sergey Brin took a research project they started in 1996 with Scott Hassan, and started a company called Google (although Hassan would leave Google before it became a company). There were search engines before Page and Brin. But most produced search results that just weren’t that great. In fact, most were focused on becoming portals. They took their cue from AOL and other ISPs who had springboarded people onto the web from services that had been walled gardens. As they became interconnected into a truly open Internet, the amount of diverse content began to explode and people just getting online found it hard to actually find things they were interested in. Going from ISPs who had portals to getting on the Internet, many began using a starting page like Archie, LYCOS, Jughead, Veronica, Infoseek, and of course Yahoo! Yahoo! had grown fast out of Stanford, having been founded by Jerry Yang and David Filo. By 1998, the Yahoo! page was full of text. Stock tickers, links to shopping, and even horoscopes. It took a lot of the features from the community builders at AOL. The model to take money was banner ads and that meant keeping people on their pages. Because it wasn’t yet monetized and in fact acted against the banner loading business model, searching for what you really wanted to find on the Internet didn’t get a lot of love. The search engines or portals of the day had pretty crappy search engines compared to what Page and Brin were building. They initially called the search engine BackRub back in 1996. As academics (and the children of academics) they knew that the more papers that cited another paper, the more valuable the paper was. Applying that same logic allowed them to rank websites based on how many other sites linked into them. 
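That citation intuition translates into a surprisingly small amount of code. Here’s a sketch of the power-iteration form of PageRank - the tiny graph, damping factor, and iteration count are illustrative assumptions, not Google’s production system, which also had to cope with dangling links and billions of pages.

```python
# Minimal power-iteration PageRank over a toy link graph: a page is
# important if important pages link to it, with each page splitting its
# own rank evenly across its outbound links. The 0.85 damping factor
# models a surfer who occasionally jumps to a random page.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Sum the shared-out rank of every page that links to p
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" is linked by both "a" and "b", so it ends up ranked highest
assert max(ranks, key=ranks.get) == "c"
```

Notice that a link from a highly ranked page counts for more than a link from an obscure one - the recursion that set BackRub apart from portals that only counted raw popularity.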
This became the foundation of the original PageRank algorithm, which continues to evolve today. The name BackRub came from the concept of weighting based on back links. That concept had come from a tool called RankDex, which was developed by Robin Li, who went on to found Baidu. Keep in mind, it started as a research project. The transition from research project meant finding a good name. Being math nerds they landed on “Google,” a play on “googol,” or a 1 followed by a hundred zeros. A year in, they were still running off Stanford University computers. As their crawlers searched the web they needed more and more computing time. So they went out looking for funding and in 1998 got $100,000 from Sun Microsystems cofounder Andy Bechtolsheim. Jeff Bezos from Amazon, David Cheriton, Ram Shriram and others kicked in some money as well and they got a million dollar round of angel investment. And their algorithm kept getting more and more mature as they were able to catalog more and more sites. By 1999 they went out and raised $25 million from Kleiner Perkins and Sequoia Capital, insisting the two invest equally, which hadn’t been done. They were frugal with their money, which allowed them to weather the coming storm when the dot com bubble burst. They built computers to process data using off the shelf hardware they got at Fry’s and other computer stores, and they brought in some of the best talent in the area as other companies were going bankrupt. They also used that money to move into offices in Palo Alto and in 2000 started selling ads through a service they called AdWords. It was a simple site and ads were text instead of the banners popular at the time. It was an instant success and I remember being drawn to it after years of looking at that increasingly complicated Yahoo! landing page. And they successfully inked a deal with Yahoo! to provide organic and paid search, betting the company that they could make lots of money. And they were right. 
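For the curious, the per-click bidding AdWords popularized eventually settled into a generalized second-price auction, where each winner pays just enough to beat the bidder below them rather than their own bid. The sketch below is a simplified illustration with made-up advertiser names; real ad auctions also weight bids by ad quality, which this deliberately ignores.

```python
# Toy generalized second-price (GSP) ad auction. Bids are in cents to
# keep the arithmetic exact. Slots go to the highest bidders, and each
# winner pays one increment above the next bid down, capped at their own.

def gsp_auction(bids_cents, slots=2, increment=1):
    """bids_cents: dict advertiser -> bid per click in cents.
    Returns a list of (advertiser, price paid) per slot, best slot first."""
    ranked = sorted(bids_cents.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(slots, len(ranked))):
        name, bid = ranked[i]
        # Pay just enough to beat the next-highest bid, never more than bid
        next_bid = ranked[i + 1][1] if i + 1 < len(ranked) else 0
        results.append((name, min(bid, next_bid + increment)))
    return results

# acme bids $1.50 but only pays a cent more than globex's $1.20
auction = gsp_auction({"acme": 150, "globex": 120, "initech": 80})
assert auction == [("acme", 121), ("globex", 81)]
```

The second-price structure is the clever bit: lowering your bid doesn’t lower what you pay unless it changes your slot, which nudges advertisers toward bidding what a click is actually worth to them.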
The world was ready for simple interfaces that provided relevant results. And the results were relevant for advertisers who could move to a pay-per-click model and bid on how much they wanted to pay for each click. They could serve ads for nearly any company and with little human interaction because they spent the time and money to build great AI to power the system. You put in a credit card number and they got accurate projections on how successful an ad would be. In fact, ads that were relevant often charged less for clicks than those that weren’t. And it quickly became apparent that they were just printing money on the back of the new ad system. They brought in Eric Schmidt to run the company, per the agreement they made when they raised the $25 million, and by 2002 they were booking $400M in revenue. And they operated at a 60% margin. These are crazy numbers and enabled them to continue aggressively making investments. The dot com bubble may have burst, but Google was a clear beacon of light that the Internet wasn’t done for. In 2003 Google moved into a space now referred to as the Googleplex, in Mountain View, California. In a sign of the times, that was land formerly owned by Silicon Graphics. They saw how the ad model could be improved beyond paid placement and banners, and that’s when they launched AdSense. They could afford to, with $1.5 billion in revenue. Google went public in 2004, with revenues of $3.2 billion. Underwritten by Morgan Stanley and Credit Suisse, who took half the standard fees for leading the IPO, Google sold nearly 20 million shares. By then they were basically printing money, and the company had a market cap of $23 billion, just below that of Yahoo! That’s the year they acquired Where 2 Technologies to convert their mapping technology into Google Maps, which was launched in 2005. They also bought Keyhole in 2004, which the CIA had invested in, and that was released as Google Earth in 2005. 
That technology then became critical for turn-by-turn directions, and the directions were enriched using another 2004 acquisition, ZipDash, to get real-time traffic information. At this point, Google wasn’t just responding to queries about content on the web, but was able to respond to queries about the world at large. They also released Gmail and Google Books in 2004. By the end of 2005 they were up to $6.1 billion in revenue and they continued to invest money back into the company aggressively, looking not only to point users to pages but to get into content. That’s when they bought Android in 2005, allowing them to answer queries using their own mobile operating system rather than just on the web. On the back of $10.6 billion in revenue they bought YouTube in 2006 for $1.65 billion in Google stock. This is also when they brought Gmail into Google Apps for Your Domain, now simply known as G Suite - and when they acquired Upstartle to get what we now call Google Docs. At $16.6 billion in revenues, they bought DoubleClick in 2007 for $3.1 billion to get the relationships DoubleClick had with the ad agencies. They also acquired Tonic Systems in 2007, which would become Google Slides, thus completing a suite of apps that could compete with Microsoft Office. The first Android release came in 2008 on the back of $21.8 billion in revenue. They also released Chrome that year, a project that came out of hiring a number of Mozilla Firefox developers, even after Eric Schmidt had stonewalled doing so for six years. The project had been managed by an up-and-coming Sundar Pichai. That year they also released Google App Engine, to compete with Amazon’s EC2. 
They bought On2, reCAPTCHA, AdMob, VOIP company Gizmo5, Teracent, and AppJet in 2009 on $23.7 billion in revenue and Aardvark, reMail, Picnik, DocVerse, Episodic, Plink, Agnilux, LabPixies, BumpTop, Global IP Solutions, Simplify Media, Ruba.com, Invite Media, Metaweb, Zetawire, Instantiations, Slide.com, Jambool, Like.com, Angstro, SocialDeck, QuickSee, Plannr, BlindType, Phonetic Arts, and Widevine Technologies in 2010 on $29.3 billion in revenue. In 2011, Google bought Motorola Mobility for $12.5 billion to get access to patents for mobile phones, along with almost two dozen other companies. This was on the back of nearly $38 billion in revenue. The battle with Apple intensified when Apple removed Google Maps from iOS 6 in 2012. But on $50 billion in revenue, Google wasn’t worried. They released the Chromebook in 2012 and announced Google Fiber, to be rolled out in Kansas City. They also launched Google Drive. They bought Waze for just shy of a billion dollars in 2013 to get crowdsourced data that could help bolster what Google Maps was doing. That was on $55.5 billion in revenue. In 2014, at $65 billion in revenue, they bought Nest, getting thermostats and cameras in the portfolio. Pichai, who had worked in product on Drive, Gmail, Maps, and Chromebook, took over Android and by 2015 was named the next CEO of Google when Google restructured, with Alphabet being created as the parent of the various companies that made up the portfolio. By then they were up to $74.5 billion in revenue. And they needed a new structure, given the size and scale of what they were doing. In 2016 they launched Google Home, which has now brought AI into 52 million homes. They also bought nearly 20 other companies that year, including Apigee, to get an API management platform. By then they were up to nearly $90 billion in revenue. 2017 saw revenues rise to $110 billion and 2018 saw them reach $136 billion. 
In 2019, Pichai became the CEO of Alphabet, now presiding over a company with over $160 billion in revenues. One that has bought over 200 companies and employs over 123,000 humans. Google’s mission is “to organize the world's information and make it universally accessible and useful” and it’s easy to connect most of the acquisitions with that goal. I have a lot of friends in and out of IT that think Google is evil. Despite their desire not to do evil, any organization that grows at such a mind-boggling pace is bound to rub people wrong here and there. I’ve always gladly used their free services, even knowing that when you aren’t paying for a product, you are the product. We have a lot to be thankful to Google for on this birthday. As Netscape was the symbol of the dot com era, they were the symbol of Web 2.0. They took the mantle for free mail from Hotmail after Microsoft screwed the pooch with that. They applied math to everything, revolutionizing marketing and helping people connect with information they were most interested in. They cobbled together a mapping solution and changed the way we navigate through cities. They made Google Apps and evolved the way we use documents, making us more collaborative and forcing the competition, namely Microsoft Office, to adapt as well. They dominated the mobility market, capturing over 90% of devices. They innovated cloud stacks. And here’s the crazy thing: from the beginning, they didn’t make up a lot. They borrowed the foundational principles of that original algorithm from RankDex, Gmail was a new and innovative approach to Hotmail, Google Maps was a better Encarta, their cloud offerings were structured similarly to those of Amazon. And the list of acquisitions that helped them get patents or talent or ideas to launch innovative services is just astounding. Chances are that today you do something that touches on Google. 
Whether it’s the original search, controlling the lights in your house with Nest, using a web service hosted in their cloud, sending or receiving email through Gmail or one of the other hundreds of services. The team at Google has left an impact on each of the types of services they enable. They have innovated business and reaped the rewards. And on their 22nd birthday, we all owe them a certain level of thanks for everything they’ve given us. So until next time, think about all the services you interact with. And think about how you can improve on them. And thank you, for tuning in to this episode of the history of computing podcast.
9/4/2020 • 18 minutes, 44 seconds
The Oregon Trail
The Oregon Trail is a 2,100-plus mile wagon route that stretched from the Missouri River to settleable lands in Oregon. Along the way it cuts through Kansas, Nebraska, Wyoming, and Idaho as well. After parts were charted by Lewis and Clark from 1804 to 1806, it was begun by fur traders in 1811, but in the 1830s Americans began to journey across the trail to settle the wild lands of the Pacific Northwest. And today, Interstates 80 and 84 follow parts of it. But the game is about the grueling journey that people made from 1824 and on, which saw streams of wagons flow over the route in the 1840s. And over the next hundred years it became a thing talked about in textbooks but difficult to relate to in a land of increasing abundance. So flash forward to 1971. America is a very different place than those wagonloads of humans would have encountered in Fort Boise or on the Bozeman Trail, both of which now have large cities named after them. Instead, in 1971, NPR produced their first broadcast. Amtrak was created in the US. Greenpeace was founded. Fred Smith created Federal Express. A Clockwork Orange was released. And Don Rawitsch wrote The Oregon Trail while he was a senior at Carleton College to help teach an 8th grade history class in Northfield, Minnesota. It’s hard to imagine these days, but this game was cutting edge at the time. Another event in 1971: the Intel 4004 microprocessor comes along, which will change everything in computing in just 10 short years. In 1971, when Apollo 14 landed on the moon, the computer was made of hand-crafted coils and chips and a 10-key pad was used to punch in code. When Ray Tomlinson invented email that year, computers weren’t interactive. When IBM invented the floppy disk that year, no one would have guessed they would some day be used to give school children dysentery all across the world. 
When he first wrote OREGON, as the game was originally known, Don was using a time-shared HP 2100 minicomputer at Pillsbury (yes, the Pillsbury of doughboy fame who makes those lovely, flaky biscuits). The HP was running Time-Share BASIC and Don roped in his roommates, Paul Dillenberger and Bill Heinemann, to help out. Back then, the computer wrote output to teletype and took data in using tape terminals. But the kids loved it. They would take a wagon from Independence, Missouri to Willamette Valley, Oregon - making a grueling journey in a covered wagon in 1848. And they might die of dysentery, starvation, mountain fever or any other ailment Rawitsch could think of. Gaming on paper tape was awkward, but the kids were inspired. They learned about computers and the history of how the West was settled at the same time. When the class was over, Don printed the code for the game, probably not thinking much would happen with it after that. But then he got hired by the Minnesota Educational Computing Consortium, or MECC, in 1974. Back in the 60s and 70s, Minnesota was a huge hub of computing. Snow White and the Seven Dwarfs had offices in the state: early pioneers of mainframes like Honeywell, Unisys, ERA (and so Control Data Corporation and Cray from there), and IBM all did a lot of work there. The state had funded MECC to build educational software for classrooms following the successes at TIES, or Total Information for Educational Systems, which had brought a time-sharing service on an HP 2000, along with training and software (which they still do), to Minnesota schools. From there, the state created MECC to create software for schools. Don dug that code from 1971 back up and typed it back into the time-sharing computers at MECC. He tweaked it a little and made it available on the CDC Cyber 70 at MECC and before you knew it, thousands of people were playing his game. In 1978 he published the source code in Creative Computing magazine as The Oregon Trail. 
And then JP O’Malley would modify the BASIC program to run on an Apple II and the Apple Puget Sound Program Library Exchange would post the game on their user group. The Oregon Trail 2 would come along that year as well and by 1980, MECC would release it along with better graphics as a part of an Elementary Series of educational titles - but the graphics got better with a full release as a standalone game in 1985. Along the way it had gotten ported for the Atari in 1983 and the Commodore 64 in 1984. But the 1985 version is the one we played in my school. We loved getting to play on the computers in school. The teachers seemed to mostly love getting a break as we were all silent while playing, until we lost one of our party - and then we’d laugh and squeal at the same time! We’d buy oxen, an extra yoke for our wagon, food, bullets, and then we’d set off on our journey to places many of us had never heard of. We’d get diseases, break limbs, get robbed, and watch early versions of cut scenes in 8-bit graphics. And along the way, we learned. We learned about a city called Independence, Missouri. And that life was very different in 1848. We learned about history. We learned about game mechanics. We started with $800. We learned about bartering and how carpenters were better at fixing wagon wheels than bankers were. We tried to keep our party alive and we learned that it’s a good idea to save a little money to ferry across rivers, so you didn’t sink or have one of your party drown. We learned the rudimentary shooting mechanics of games, as we tried to kill a bear here and there. We learned that rabbits didn’t give us much meat. We learned to type BANG and WHAM fast so we could shoot animals and later we learned to aim with arrow keys and fire with a space bar. The bison moved slowly and gave more meat than the 100 pounds you could carry back to your wagon. So we shot them. 
We learned that you got double the points for playing the carpenter and triple for playing the farmer. We wanted to keep our family alive not only because we got to name them (often making fun of our friends in class) but also because they gave us more points. As did the possessions we were able to keep. By 1990, with a changing tide, the game came to DOS and by 1991 it was ported to the Mac. Mouse support was added in 1992 and it came to Windows 3 in 1993. Softkey released The Oregon Trail: Classic Edition. And by 1995 The Oregon Trail made up a third of the MECC budget, raking in $30 million per year, and helped fund other titles. Oregon Trail II came in 95, 3 in 97, 4 in 99, and 5 made it into the new millennium in 2001. All being released for Windows and Mac. And 10 years later it would come to the modern era of console gaming, making it to the Wii and 3DS. And you can learn all of what we learned by playing the game on Archive.org ( https://archive.org/details/msdos_Oregon_Trail_The_1990 ). The Internet Archive page shows the 1990 version that was ported and made available for the Apple II, Macintosh, Windows, and DOS. The Internet Archive page alone has had nearly 7.2 million views. But the game has sold over 65 million copies as well. The Oregon Trail is beloved by many. I see shirts with You Have Died of Dysentery and card versions of the game in stores. I’ve played Facebook games and mobile versions. It’s even been turned into plays and parodied in TV shows. That wagon is one of the better known symbols of all time in gaming lore. And we still use many of the game mechanics introduced then, in games from Dragon Warrior to the trading and inventory systems that inspired World of Warcraft. We can thank The Oregon Trail for giving our teachers a break from teaching us in school and giving us a break from learning. Although I suspect we learned plenty. And we can thank MECC for continuing the fine tradition of computer science in Minnesota. 
And we can thank Don for inspiring millions, many of whom went on to create their own games. And thank you, listener, for tuning in to this episode of The History of Computing Podcast. We are so so so lucky to have you. Have a great day! And keep in mind, a steady pace will get you to the end of the trail before the snows come in, with plenty of time to take ferries across the rivers. Rest when you need it. And no, you probably aren’t likely to beat my high score.
8/15/2020 • 11 minutes, 48 seconds
SimCity
SimCity is one of those games that helped expand the collective public consciousness of humans. I have a buddy who works on traffic flows in Minneapolis. When I asked how he decided to go into urban planning, he quickly responded with “playing SimCity.” Imagine that, a computer game inspiring a generation of people that wanted to make cities better. How did that come to be? Will Wright was born in 1960. He went to Louisiana State University, then Louisiana Tech, and then to The New School in New York. By then, he was able to get an Apple II+ and start playing computer games, including Life, a game initially conceived by mathematician John Conway in 1970. A game that expanded the minds of every person that came in contact with it. That game had begun on the PDP, then in BBC BASIC before spreading around. It allowed players to set an initial configuration for cells and watch them mutate over time. After reading about Life, Wright wanted to port it to his Apple, so he learned Applesoft BASIC and Pascal. He tinkered, and by 1984 was able to produce a game called Raid on Bungeling Bay. And as many a Minecrafter can tell you, part of the fun was really building the islands in a map editor he built for the game. He happened to also be reading about urban planning and system dynamics. He just knew there was something there. Something that could take part of the fun from Life and editing maps in games and this newfound love of urban planning and give it to regular humans. Something that just might expand our own mental models about where we live and about games. This led him to build software that gamified editing maps. Where every choice we made impacted the map over time. Where it was on us to build the perfect map. That game was called Micropolis and would become SimCity. One problem: none of the game publishers wanted to produce it when it was ready for the Commodore 64 in 1985. After Brøderbund turned him down, he had to go back to the drawing board. 
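As an aside, the Life rules that so captivated Wright fit in just a few lines of modern code. Here’s a minimal sketch (the “blinker” pattern below is a standard example, not something from the episode):

```python
from collections import Counter

# Conway's Game of Life: each generation, a live cell survives with
# 2 or 3 live neighbors, and a dead cell comes alive with exactly 3.

def step(live):
    """live is a set of (x, y) cells; returns the next generation."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker": a row of three cells that flips to a column and back.
blinker = {(0, 1), (1, 1), (2, 1)}
gen1 = step(blinker)   # the row becomes a column: {(1, 0), (1, 1), (1, 2)}
gen2 = step(gen1)      # and flips back to the original row
```

Set any initial configuration and keep calling step() to watch it evolve - the whole fascination of the game in one function.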
So Wright would team up with his friend Jeff Braun and found Maxis Software in 1987. They would release SimCity in 1989 for Mac and Amiga and, once it had been ported, for the Atari ST, DOS-based PCs, and the ZX Spectrum. Brøderbund did eventually agree to distribute it as it matured. And people started to get this software, staring at a blank slab of land where we zone areas as commercial and residential. We tax areas and can adjust those rates, giving us money to zone other areas, provide electricity, water, and other services, and then build parks, schools, hospitals, police stations, etc. The more dense and populous the city becomes, the more difficult the game gets. The population fluctuates and we can tweak settings to grow and shrink the city. I was always playing to grow, until I realized sometimes it’s nice to stabilize and look for harmony instead. And we see the evolution over time. The initial choices we made could impact the ability to grow forever. But unlike Life we got to keep making better and better (or worse and worse) choices over time. We delighted in watching the population explode. In watching the city grow and flourish. And we had to watch parts of our beloved city decay. We raised taxes when we were running out of money and lowered them when population growth was negatively impacted. We built parks and paid for them. We tried to make people love our city. We are only limited in how great a city we can build by our own creativity. And our own ability to place parts of the city alongside the features that let the people live in harmony with the economic and ecological impacts of other buildings and zones. For example, build a power plant as far from residential buildings as you can, because people don’t want to live right by a power plant. But running power lines is expensive, so it can’t be too far away in the beginning. The game mechanics motivate us to push the city into progress. To build. To develop. 
People choose to move to our cities based on how well we build them. It was unlike anything else out there. And it was a huge success. SimCity 2000 came along in 1993. Graphics had come a long way and you could now see the decay in the icons of buildings. It expanded the types of power plants we could build, added churches, museums, prisons and zoos - each with an impact on the way the city grows. As the understanding of both programming and urban planning grew for the development team, they added city ordinances. The game got more and more popular. SimCity 3000 was the third installment in the series, which came out in 1999. By then, the game had sold over 5 million copies. That’s when they added slums and median incomes to create a classification. And large malls, which negatively impact smaller commercial zones. And toxic waste conversion plants. And prisons, which hit residential areas. And casinos, which increase crime. But each has huge upside as well. As graphics cards continued to get better, the simulation also increased, giving us waterfalls, different types of trees, more realistic grass, and even snow. Maxis even dabbled with using their software to improve businesses. Maxis Business Simulations built software for refineries and health as well. And then came The Sims, which Wright thought of after losing his house to a fire in 1991. Here, instead of simulating a whole city of people at once, we simulated a single person, or a Sim. And we attempted to lead a fulfilling life by satisfying the needs and desires of our sim, buying furniture, building larger homes, having a family, and just… well, living life. But the board at Maxis didn’t like the idea. Maxis was acquired by Electronic Arts in 1997. And they were far more into the Sims idea, so The Sims was released in 2000. And it has sold nearly 200 million copies and raked in over $5 billion in sales, making it one of the best-selling games of all time. 
Even though now it’s free on mobile devices with tons of in-app purchases… And after the acquisition of Maxis, SimCity is now distributed by EA. SimCity 4 would come along in 2003, continuing to improve the complexity and game play. And with processors getting faster, cities could get way bigger and more complex. The sixth major installment, simply titled SimCity, came in 2013, from lead designer Stone Librande and team. They added a Google Earth type of zoom effect to see cities and some pretty awesome road creation tools. And the sounds of cars honking on streets, birds chirping, airplanes flying over, and fans cheering in stadiums were amazing. They added layers so you could look at a colorless model of the city highlighting crime or pollution, to make tracking each of the main aspects of the game easier. Like layers in Photoshop. It was pretty CPU and memory intensive but came with some pretty amazing gameplay. In fact, some of the neighborhood planning has been used to simulate neighborhood development efforts in cities. And the game spread beyond the desktop as well, coming to the iPhone and web browsers in 2008. I personally choose not to play anymore because I’m not into in-app purchasing. A lot of science fiction films center around two major themes: societies entering a phase of either utopia or dystopia. The spread of computing into first our living rooms in the form of PCs and then into our pockets via mobile devices has helped push us in the utopian direction. SimCity inspired a generation of city planners and was inspired by more and more mature research done on urban planning. A great step on the route to a utopia and eye opening as to the impact our city planning has on advances toward a dystopian future. We were all suddenly able to envision better city planning and design, making cities friendlier for walking, biking, and being outdoors. Living better. Which is important in a time of continued mass urbanization. 
Computer games could now be about more than moving a dot with a paddle or controlling a character to shoot other characters. Other games with an eye-opening and mind-expanding game play were feasible. Like Sid Meier’s Civilization, which came along in 1991. But SimCity, like Life, was another major step on the way to where we are today. And it’s so relatable now that I’ve owned multiple homes and seen the impact of tax rates and services the governments in those areas provide. So thank you to Will Wright. For inspiring better cities. And thank you to the countless developers, designers, and product managers, for continuing the great work at Maxis then EA.
7/29/2020 • 12 minutes, 3 seconds
Pixar
Today, we think of Pixar as the company that gave us such lovable characters as Woody and Buzz Lightyear, Monsters Mike Wazowski and James P. Sullivan, Nemo, Elastigirl, and Lightning McQueen. But all that came pretty late in the history of the company. Let’s go back to the 70s. Star Wars made George Lucas a legend. His company Lucasfilm produced American Graffiti, the Star Wars franchise, the Indiana Jones franchise, Labyrinth, Willow, and many others. Many of those movies were pioneering in the use of visual effects in storytelling. At a time when the use of computer-aided visual effects was just emerging. So Lucas needed world-class computer engineers. Lucas found Ed Catmull and Alvy Ray Smith at the New York Institute of Technology Computer Graphics Lab. They had been hired by the founder, Alexander Schure, to help create the first computer-animated film in the mid-70s. But Lucas hired Catmull (who had been a student of the creator of the first computer graphics software, Sketchpad) and Smith (who had worked on SuperPaint at Xerox PARC) away to run the computer division of Lucasfilm, which by 1979 was simply called the Graphics Group. They created REYES and developed a number of the underlying techniques used in computer graphics today. They worked on movies like Star Trek II, where the graphics still mostly stand up nearly 40 years later. And as the group grew, the technology got more mature and more useful. REYES would develop into RenderMan and become one of the best computer graphics products on the market. Pioneering, they won prizes in science and film. RenderMan is still one of the best tools available for computer-generated lighting, shading, and shadowing. John Lasseter joined in 1983. And while everything was moving in the right direction, in the midst of a nasty divorce when he needed the cash, Lucas sold the group as a spin-off to Steve Jobs in 1986. Jobs had just been ousted from Apple and was starting NeXT. 
He had the vision to bring computer graphics to homes. They developed The Pixar Image Computer for commercial sales, which would ship just after Jobs took over the company. It went for $135,000 and still required an SGI or Sun computer to work. They’d sell just over 100 in the first two years - most to Disney. The name came from Alvy Ray Smith’s original name he suggested for the computer, Picture Maker. That would get shortened to Pixer, and then Pixar. The technology they developed along the way to the dream of a computer animated film was unparalleled in special effects. But CPUs weren’t going fast enough to keep up. The P-II model came with a 3 gig RAID (when most file systems couldn’t even access that much space), 4 processors, multiple video cards, 2 video processors, and a channel each for red, green, blue, and alpha. It was a beast. But that’s not what we think of when we think of Pixar today. You see, they had always had the desire to make a computer animated movie. And they were getting closer and closer. Sure, selling computers to aid in the computer animation is the heart of why Steve Jobs bought the company - but he, like the Pixar team, is an artist. They started making shorts to showcase what the equipment and software they were making could do. Lasseter made a film called Luxo Jr in 1986 and showed it at SIGGRAPH, which was becoming the convention for computer graphics. They made a movie every year, but they were selling into a niche market and sales never really took off. Jobs pumped more money into the company. He’d initially paid $5 million and capitalized the company with another $5 million. By 1989 he’d pumped $50 million into the company. But when sales were slow and they were bleeding money, Jobs realized the computer could never go down market into homes and that part of the business was sold to Vicom in 1990 for $2 million, who then went bankrupt. 
But the work Lasseter was doing blended characters made purely with computer graphics and delicious storytelling. Their animated short Tin Toy won an Academy Award in 1988. And, being artists, that group just continued to grow through repeated layoffs. They would release more and more software - and while they weren’t building computers, the software could be run on other computers like Macs and Windows. The one bright spot was that Pixar and the Walt Disney Animation Studio were inseparable. By 1991 though, computers had finally gotten fast enough, and the technology mature enough, to make a computer-animated feature. And this is when Steve Jobs and Lasseter sold the idea of a movie to Disney. In fact, they got $24 million to make three features. They got to work on the first of those movies. Smith would leave in 1994, supposedly over a screaming match he had with Jobs over the use of a whiteboard. But if Pixar was turning into a full-on film studio, it was about to realize the original dream they all had of creating a computer-animated motion picture, and it’s too bad Smith missed it. That movie was called Toy Story. It would bring in $362 million globally, becoming the highest-grossing movie of 1995, and allow Steve Jobs to renegotiate the Pixar deal with Disney and take the company public in 1995. His $60 million investment would convert into over a billion dollars in Pixar stock that became over a hundred million shares of Disney stock worth over $4 billion, making him the largest single shareholder. Those shares were worth $7.4 billion when he passed away in 2011. His wife would sell half in 2017 as she diversified the holdings. 225x on the investment. After Toy Story, Pixar would create Cars, Finding Nemo, Wall-E, Up, Onward, Monsters, Inc., Ratatouille, Brave, The Incredibles, and many other films. Movies that have made close to $15 billion. But more importantly, they mainstreamed computer animated films. 
And another huge impact on the history of computing was that they made Steve Jobs a billionaire and proved to Wall Street that he could run a company. After a time I think of as “the dark ages” at Apple, Jobs came back in 1996, bringing along an operating system and reinventing Apple - giving the world the iMac, the iPod, and the iPhone. And streamlining the concept of multimedia enough that music, and later film, and then software, would be sold through Apple’s online services, setting the groundwork for Apple to become the most valuable company in the world. So thank you to everyone from Pixar for the lovable characters, but also for inventing so much of the technology used in modern computer graphics - both for film and the tech used in all of our computers. And thank you for the impact on the film industry and keeping characters we can all relate to at the forefront of our minds. And thank you dear listener for tuning in to yet another episode of the History of Computing Podcast. We are so lucky to have you. And lucky to have all those Pixar movies. I think I’ll go watch one now. But I won’t be watching them on the Apple streaming service. It’ll be on the Disney service. Funny how that worked out, ain’t it.
7/16/2020 • 10 minutes, 50 seconds
Sketchpad
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to cover yet another of the groundbreaking technologies to come out of MIT: Sketchpad. Ivan Sutherland is a true computer scientist. After getting his masters from Caltech, he migrated to the land of the Hackers and got a PhD from MIT in 1963. The great Claude Shannon supervised his thesis and Marvin Minsky was on the thesis review committee. But he wasn’t just surrounded by awesome figures in computer science; he would develop a critical piece between the Memex in Vannevar Bush’s “As We May Think” and the modern era of computing: graphics. What was it that propelled him from PhD candidate to becoming the father of computer graphics? The 1962-1963 development of a program called Sketchpad. Sketchpad was the ancestor of the GUI, object oriented programming, and computer graphics. In fact, it was the first graphical user interface. And it was all made possible by the TX-2, a computer developed at the MIT Lincoln Laboratory by Wesley Clark and others. The TX-2 was transistorized and so fast. Fast enough to be truly interactive. A lot of innovative work had come with the TX-0, and that program would effectively spin off as Digital Equipment Corporation and the PDP series of computers. So it was bound to inspire a lot of budding computer scientists to build some pretty cool stuff. Sutherland’s Sketchpad used a light pen. These were photosensitive devices that worked like a stylus but detected light from the display, letting the computer locate the pen against the dots on a cathode ray tube (CRT). Users could draw shapes on a screen for the first time. Whirlwind at MIT had allowed highlighting objects, but this graphical interface to create objects was a new thing altogether, inputting data into a computer as an object instead of loading it as code, as could then be done using punch cards.
Suddenly the computer could be used for art. There were toggle-able switches that made lines bigger. The extra memory that was pretty much only available in the hallowed halls of government-funded research in the 60s opened up so many possibilities. Suddenly, computer-aided design, or CAD, was here. Artists could create a master drawing and then additional instances on top, with changes to the master reverberating through each instance. They could draw lines, concentric circles, change ratios. And it would be two decades before MacPaint would bring the technology into homes across the world. And of course AutoCAD, making Autodesk one of the greatest software companies in the world. The impact of Sketchpad would be profound. Sketchpad would be another of Doug Engelbart’s inspirations when building the oN-Line System, and there are clear correlations in the human interfaces. For more on NLS, check out the episode of this podcast called the Mother of All Demos, or watch it on YouTube. And Sutherland’s work would inspire the next generation: people who read his thesis, as well as his students and coworkers. Sutherland would run the Information Processing Techniques Office for the US Defense Department’s Advanced Research Projects Agency after Lick returned to MIT. He also taught at Harvard, where he and students developed the first virtual reality system in 1968, over 15 years before VPL Research patented related technology in 1984. Sutherland then went to the University of Utah, where he taught Alan Kay, who gave us object oriented programming in Smalltalk and the concept of the tablet in the Dynabook, and Ed Catmull, who co-founded Pixar, and many other computer graphics pioneers. He founded Evans and Sutherland with David Evans, the man who built the computer science department at the University of Utah, and their company launched the careers of John Warnock, the founder of Adobe, and Jim Clark, the founder of Silicon Graphics. His next company would be acquired by Sun Microsystems and become Sun Labs.
He would remain a Vice President and fellow at Sun and a visiting scholar at Berkeley. For Sketchpad and his other contributions to computing, he would be awarded a Computer Pioneer Award, become a fellow at the ACM, receive the John von Neumann Medal, receive the Kyoto Prize, become a fellow at the Computer History Museum, and receive the Turing Award. I know we’re not supposed to make a piece of software an actor in a sentence, but thank you, Sketchpad. And thank you, Sutherland. And thank you to his students and colleagues who continued to build upon his work.
7/13/2020 • 6 minutes, 31 seconds
One Year Of History Podcasts
The first episode of this podcast went up on July 7th, 2019. One year later, we’ve managed to cover a lot of ground, but we’re just getting started. Over 70 episodes so far, and my favorite was on Mavis Beacon Teaches Typing. They may seem disconnected at times, but they’re not. There’s a large outline and it’s all research being included in my next book. The podcast began with an episode on the prehistory of the computer. And we’ve had episodes on the history of batteries, electricity, superconductors, and more - to build up to what was necessary in order for these advances in computing to come to fruition. We’ve celebrated Grace Hopper and her contributions. But we’d like to also cover a lot of other diverse voices in computing. There was a series on Windows, covering Windows 1, 3, and 95. But we plan to complete that series with a look at 98, Millennium, NT, 2000, and on. We covered Android, CP/M, OS/2, and VMS, but want to get into the Apple operating systems, Sun, Linux, etc. Speaking of Apple… We haven’t gotten started with Apple. We covered the lack of an OS story in the 90s - but there’s a lot to unpack around the founding of Apple, Steve Jobs and Woz, and the re-emergence of Apple and their impact there. And since that didn’t happen in a vacuum, there were a lot of machines in that transition from the PC being a hobbyist market to being a full-blown industry. We talked through RadioShack, Commodore, the Altair, and the Xerox Alto. We have covered some early mainframes like the Atanasoff-Berry Computer, ENIAC, the story of the Z1 and Zuse, and even supercomputers like Cray, but still need to tell the later story, bridging the gap between the mainframe, the minicomputer, and the traditional servers we might find in a data center today. We haven’t told the history of the Internet. We’ve touched on bits and pieces, but want to get into those first nodes that got put onto ARPAnet, the transition to NSFnet, and the merging of the nets into the Internet.
And we covered sites like Friendster, Wikipedia, and even the Netscape browser, but the explosion of the Internet has so many other stories left to tell. Literally a lifetime’s worth. For example, we covered Twitter and Snapchat, but not yet Google and Facebook. We covered the history of object-oriented languages. We also covered BASIC, PASCAL, FORTRAN, ALGOL, and Java, but still want to look at AWS and the modern web service architecture that’s allowed for an explosion of apps and web apps. Mobility. We covered the Palm Pilot and a little on device management, but still need to get into the iPhone and Samsung and the underlying technology that enabled mobility. And enterprise software and compliance. Knowing the past informs each investment thesis. We covered Y Combinator, but there are a lot of other VC and private equity firms to look at. But what I thought I knew of the past isn’t always correct. As an example, coming from the Apple space, we have a hero worship of Steve Jobs that often conflicts with, for example, what you read in the Walter Isaacson book. He was a brilliant man, but complicated. And the more I read and research, the more I need to unpack many of my own assumptions across the industry. I was here for a lot of this, yet my understanding is still not what it could be. I’ve done interviews with people who wrote code to put on lunar landers and people who invented technologies like the spreadsheet. I wish more people could talk about their experiences openly, but even 40 years later, some are still bound by NDAs. I’ve learned so much and I look forward to learning so much more!
7/7/2020 • 7 minutes, 16 seconds
The History Of Python
Haarlem, 1956. No, this isn’t an episode about New York; we’re talking Haarlem, Netherlands. Guido van Rossum was born there that year, and went on to college in Amsterdam, where he got a degree in math and computer science. He went on to work at the Centrum Wiskunde & Informatica, or CWI. Here, he worked on BSD Unix and the ABC programming language, which had been written by Lambert Meertens, Leo Geurts, and Steven Pemberton from CWI. He worked on ABC for a few years through the 1980s and started to realize some issues. It had initially been a monolithic implementation, which made it hard to implement certain new features, like being able to access file systems and functions within operating systems. But Meertens was an editor of the ALGOL 68 Report, and so ABC did have a lot of the ALGOL 68 influences that are prevalent in a number of more modern languages, and could compile for a number of operating systems. It was a great way to spend your 20s if you’re Guido. But after some time building interpreters and operating systems, many programmers think they have some ideas for what they might do if they just… started over. Especially when they hit their 30s. And so as we turned the corner towards the increasingly big hair of the 1990s, Guido started a new hobby project over the holiday break for Christmas 1989. He had been thinking of a new scripting language, loosely based on ABC. One that Unix and C programmers would be interested in, but maybe not as cumbersome as C had become. So he got to work on an interpreter. One that those open source type hackers might be interested in. ALGOL had been great for math, but we needed so much more flexibility in the 90s, unlike bangs. Bangs just needed Aquanet. He named his new creation Python because he loved Monty Python’s Flying Circus. They had a great TV show from 1969 to 1974, and a string of movies in the 70s and early 80s. They’ve been popular amongst people in IT since I got into IT. Python is a funny language.
It’s incredibly dynamic. Like bash or a shell, we can fire it up, define a variable, and echo that out on the fly. But it can also be procedural, object-oriented, or functional. And it has a standard library but is extensible, so you can add libraries to do tons of new things that wouldn’t make sense to be built in (and so bloat and slow down) other apps. For example, need to get started with big array processing for machine learning projects? Install TensorFlow or NumPy. Or depending on your machine learning needs, there are PyTorch, SciPy, Pandas, and the list goes on. In 1994, 20 developers met at NIST (formerly the National Bureau of Standards) in Maryland for the first workshop, and the first Python evangelists were minted. It was obvious pretty quickly that the modular nature and ease of scripting, but with an ability to do incredibly complicated tasks, was something special. What was drawing this community in? Well, let’s start with the philosophy, the Zen of Python, as Tim Peters wrote it in 1999: Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one—and preferably only one—obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than right now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea—let's do more of those! Those are important enough to be semi-official and can be found by entering “import this” into a Python shell. Another reason Python became important is that it’s multi-paradigm.
When I said it could be kinda’ functional? Sure. Use one big old function for everything if you’re moving from COBOL and just don’t wanna’ rethink the world. Or be overly object-oriented when you move from Java and build 800 functions to echo hello world in 800 ways. Wanna map-reduce your Lisp code? Bring it. Or add an extension and program in paradigms I’ve never heard of. The number of libraries and other ways to extend Python out there is pretty much infinite. And that extensibility was the opposite of ABC and why Python is special. This isn’t to take anything away from the syntax. It’s meant to be, and is, an easily readable language. It’s very Dutch, with not a lot of frills like that. It uses white space much as the Dutch use silence. I wish it could stare at me like I was an idiot the way the Dutch often do. But alas, it doesn’t have eyeballs. Wait, I think there’s a library for that. So what I meant by white space instead of punctuation is that it uses an indent instead of a curly bracket or keyword to delimit blocks of code. Increase the indentation and you move to a new block. Many programmers do this in other languages just for readability. Python does it for code. Basic statements, which match or are similar to those in most languages, include if, for, while, try, raise, except, class, def, with, break, continue, pass, assert, yield, and import - plus print, until Python 3, when print became a function. It’s amazing what you can build with just a dozen and a half statements in programming. You can have more, but interpreters get slower and compilers get bigger and all that… Python also has all the expressions you’d expect in a modern language, especially lambdas. And methods. And duck typing, where suitability for a method is determined by the properties of an object rather than its type. This can be great. Or a total pain. Which is why Python has been moving toward gradual typing with optional type hints.
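A minimal sketch of those last two ideas - whitespace-delimited blocks and duck typing. The Duck and Robot classes here are made up purely for illustration, not from the episode:

```python
class Duck:
    def quack(self):
        return "quack"

class Robot:
    # Not a duck, but it has a quack() method, so duck typing accepts it.
    def quack(self):
        return "beep"

def make_it_quack(thing):
    # No isinstance() check: suitability is determined by the presence of
    # quack(), not by the object's declared type. And note the function body
    # is delimited purely by indentation, no braces or end keywords.
    return thing.quack()

print(make_it_quack(Duck()))   # quack
print(make_it_quack(Robot()))  # beep
```

Swap in any object with a quack() method and the function keeps working, which is exactly what makes duck typing both great and, on a bad day, a total pain.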
The types of objects are bool, bytearray, bytes, complex, dict, ellipsis (which I overuse), float, frozenset, int, list, NoneType (which I try to never use), NotImplementedType, range, set, str, and tuple, so you can pop mixed tapes into a given object. Not to be confused with a thruple, but not to not be confused I guess… Another draw of Python was the cross-compiler concept. An early decision was to make Python able to talk to C. This won over the Unix and growing Linux crowds. And today we have cross-compilers for C and C++, Go, .NET, R, machine code, and of course, Java. Python 2 came in 2000. We got a garbage collection system and a few other features, and 7 point releases over the next 10 years. Python 3 came in 2008 and represented a big change. It was partially backward-compatible but was the first Python release that wasn’t fully backward-compatible. We have had 7 point releases in the past 10 years as well. Python 3 turned print into a function, simplified syntax, moved to storing strings in Unicode by default, made range a lazy sequence, changed how loop variables leak out of comprehensions, implemented a simpler set of rules for order comparisons, and much more. At this point developers were experimenting with deploying microservices. Microservices is a software development architecture where we build small services, perhaps just a script or a few scripts daisy-chained together, that do small tasks. These are then more highly maintainable, more easily testable, often more scalable, can be edited and deployed independently, can be structured around capabilities, and each of the services can be owned by the team that created it, with a contract to ensure we don’t screw over other teams as we edit them. Amazon introduced AWS Lambda in 2014 and it became clear quickly that the new microservices paradigm was accelerating the move of many SaaS-based tools to a microservices architecture.
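To illustrate how small such a service can be, here is a hypothetical Lambda-style Python handler. The event shape and the "name" field are made-up examples for this sketch, not a real API contract:

```python
import json

def lambda_handler(event, context):
    # A tiny hypothetical microservice: read a field from the incoming
    # event and return an HTTP-style response. The "name" key is invented
    # for illustration.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoked locally here for illustration; in AWS, the Lambda runtime
# would call the handler with the event and a context object.
response = lambda_handler({"name": "Guido"}, None)
print(response["body"])  # {"message": "hello, Guido"}
```

The whole service is one function with no server to stand up, which is the shift from the EC2 model the episode describes.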
Now, teams could build in Node or Python or Java or Ruby or C# or, heaven forbid, Go. They could quickly stand up a small service and get teams able to consume the back end service in a way that is scalable and doesn’t require standing up a server or even a virtual server, which is how we did things in EC2. The containerization concept is nothing new. We had chroot in 1979 with Unix v7, and Solaris brought us containerization in 2004. But those were more about security. Docker had shown up in 2013, and the idea of spinning up a container to run a script with its own libraries, that was special. And Amazon made it more so. Again, libraries and modularization. And the modular nature is key for me. Let’s say you need to do image processing. Pillow makes it easier to work with images of almost any image type you can think of. For example, it can display an image, convert it into different types, automatically generate thumbnails, run smooth, blur, contour, and even increase the detail. Libraries like that take a lot of the friction out of learning to display and manage images. But Python can also create its own imagery. For example, Matplotlib generates two-dimensional graphs and plots points on them. These can look as good as you want them to look and allow us to integrate with a ton of other systems. Van Rossum’s career wasn’t all Python though. He would go on to work at NIST, then CNRI and Zope, before ending up at Google in 2005, where he created Mondrian, a code review system. He would go to Dropbox in 2013 and retire from professional life in 2019. He stepped down as the “Benevolent Dictator For Life” of the Python project in 2018 and sat on the Python Steering Council for a term but is no longer involved. It’s been one of the most intriguing transfers of power I’ve seen, but Python is in great hands to thrive in the future. This is also the point when Python 2 was officially discontinued, and Python 3 was thriving.
By thriving, I mean that as of mid-2020, there are over 200,000 packages in the Python Package Index. Things from web frameworks and web scraping to automation, graphical user interfaces, documentation, databases, analytics, networking, systems administration, science, mobile, and image management and processing. If you can think of it, there’s probably a package to help you do it. And it’s one of the easier languages. Here’s the thing. Python grew because of how flexible and easy it is to use. It didn’t have the same amount of baggage as other languages. And that flexibility and modular nature made it great for workloads in a changing and more microservice-oriented world. Or did it help make the world more microservice-oriented? It was a Christmas hobby project that has now ballooned into one of the most popular languages to write software in the world. You know what I did over my last holiday break? Sleep. I clearly should have watched more Monty Python, so the short skits could embolden me to write a language perfect for making the programmer’s equivalent: smaller, more modular scripts and functions. So as we turn the corner into all the holidays in front of us, consider this while stuck at home: what hobby project can we propel forward and hopefully end up with the same type of impact Guido had? A true revolutionary in his own right. So thank you to everyone involved in Python and everyone that’s contributed to those 200k+ projects. And thank you, listeners, for continuing to tune in to the History of Computing Podcast. We are so lucky to have you.
7/6/2020 • 15 minutes, 44 seconds
The Great Firewall of China
“If you open the window, both fresh air and flies will be blown in.” Deng Xiaoping perfectly summed up the Chinese perspective on the Internet during his tenure as the paramount leader of the People’s Republic of China, a role he held from 1978 to 1989. Yes, he opened up China with a number of market-economy reforms and so is hailed as the “Architect of Modern China.” However, he did so with his own spin. The Internet had been on the rise globally and came to China in 1994. The US had been passing laws since the 1970s to both aid and limit the uses of this new technology, but China was slow to the adoption up until this point. In 1997, the Ministry of Public Security prohibited the use of the Internet to “disclose state secrets or injure the interests of the state or society.” The US had been going through similar attempts to limit the Internet with the Communications Decency Act in 1996, and the US Supreme Court ended up striking that down in 1997. And this was a turning point for the Internet in the US and in China. Many a country saw what was about to happen, and governments were grappling with how to handle the cultural impact of technology that allowed for unfettered, globally interconnected humans. By 1998, the Communist Party stepped in to start a project to build what we now call the Great Firewall of China. They took their time, and over eight years built a technology that they could fully control. Fang Binxing graduated with a PhD from the Harbin Institute of Technology and moved to the National Computer Network Emergency Response Technical Team, where he became the director in 2000. It’s in this capacity that he took over creating the Great Firewall. They watched what people were putting on the Internet and by 2002 were able to make 300 arrests. They were just getting started and brought tens of thousands of police in to get their first taste of internet and video monitoring and of this crazy facial recognition technology.
By 2003 China was able to launch the Golden Shield Project. Here, they straight-up censored a number of web sites, looking for pro-democracy terms, news sources that spoke out in favor of the Tiananmen Square protests, anyone that covered police brutality, and locked down the freedom of speech. They were able to block blogs and religious organizations, lock down pornography, and block anything the government could consider subversive, like information about the Dalai Lama. And US companies played along. Because money. Organizations like Google and Cisco set up systems in the country and made money off China. But they also gave people ways around it, like providing proxy servers and VPN software. We typically lump Golden Shield and the Great Firewall of China together, but Golden Shield was built by Shen Changxiang, and the Great Firewall is mainly run in the three big internet pipes coming into the country, basically tapping the gateway in and out, where Golden Shield is more distributed and affiliated with public security, and so used to monitor domestic connections. As anyone who has worked on proxies and various filters knows, blocking traffic is a constantly moving target. The Chinese government blocks IP addresses and ranges. New addresses are always coming online though. They implement lying DNS and hijack DNS, sometimes providing the wrong IP to honeypot certain sites. But people can build local hosts files and do DNS over TLS. They use transparent proxies to block, or filter, specific URLs and URI schemes. That can be keyword-based and bypassed by encrypting server names. They also use more advanced filtering options. Like packet forging, where they can do a TCP reset attack, which can be thwarted by ignoring the resets. And of course man-in-the-middle attacks, because, you know, state-owned certificate authorities, so they can just replace the GitHub, Google, or iCloud certs - which has happened with each. They employ quality of service filtering.
This is deep packet inspection that mirrors traffic and then analyzes it and creates packet loss to slow traffic to unwanted sites. This helps thwart VPNs, SSH tunneling, and Tor, but can be bypassed by spoofing good traffic or using pluggable transports. Regrettably, that can be as processor intensive as the act of blocking. Garlic routing is used when onion routing can’t be. All of this is aided by machine learning. Because like we said, it’s a constantly moving target. And ultimately, pornography and obscene content is blocked. Discussion about protests is stomped out. Any dissent about whether Hong Kong or Taiwan are part of China is disappeared. Democracy is squashed. By 2006, Chinese authorities could track access both centrally and from local security bureaus. The government could block and watch what the people were doing. Very 1984. By 2008, Internet cafés were logging which customers used which machines. Local officials could crack down further than the central government or toe the party line. In 2010, Google decided they weren’t playing along any more and shut down their own censoring. In 2016, the WTO defined the Great Firewall as a trade barrier. Wikipedia has repeatedly been blocked and unblocked since the Chinese version was launched in 2001, but as of 2019 all Wikipedia versions are completely blocked in China. The effect of many of these laws and engineering projects has been to exert social control over the people of China. But it also acts as a form of protectionism. Giving the people Baidu and not Google means a company like Baidu has a locked-in market, making Baidu worth over $42 billion. Sure, Alphabet, the parent of Google, is worth almost a trillion dollars, but in their minds, at least China is protecting some market for Baidu.
And giving the people Alibaba instead of Amazon gives people the ability to buy goods, and China protects a half-trillion dollar market capitalized company, in moneys that would otherwise be capitalizing Amazon, which currently stands at $1.3 trillion. Countries like Cuba and Zimbabwe then leverage technology from China to run their own systems. With such a large number of people only able to access the parts of the Internet that their government feels are ok, many have referred to the Internet as the Splinternet. China has between 700 and 900 million internet users, with over half using broadband and over 500 million using a smartphone. But the government owns the routes they use in the form of CSTNET, ChinaNet, CERNET, and CHINAGBN, expanding to 10 access points in the last few years to handle the increased traffic. Sites like Tencent and Sina.com provide access to millions of users. With that much traffic, they’re now starting to export some technologies, like TikTok, launched in 2016. And whenever a new app or site comes along based in China, it often comes with plenty of suspicion. And sometimes that comes with a new version of TikTok that removes potentially harmful activity. Baidu Maps and Tianditu are like Google Maps but Chinese, much like the skit in the show Silicon Valley. AliPay is like Stripe. Soso Baike is like Wikipedia. And there are plenty of viral events in China that many Americans miss, like the Black Dorm Boys or Sister Feng. Or “very erotic, very violent” or the Baidu 10 Mythical Creatures, and the list goes on. And there’s Chinese slang, like 520 meaning “I love you” or 995 meaning “help.” More examples of splinternetting, or just cultural differences? You decide. And the protectionism goes a lot of different ways. N Jumps is Chinese slang referring to the number of people that jump out of windows at Foxconn factories. We benefit from not-great working conditions.
The introduction of services and theft of intellectual property would be a place where the price for that benefit is paid in full. And I’ve seen it estimated that roughly a third of sites are blocked by the firewall, a massive percentage, and places where some of the top sites do not benefit from Chinese traffic. But suffice it to say that the Internet is a large and sprawling place. And I never want to be an apologist. But some of this is just cultural differences. And who am I to impose my own values on other countries when at least they have the Interwebs - unlike North Korea. Oh, who am I kidding… Censorship is bad. And groups have risen up to give people the Internet and the right to access it, and to help people bypass controls put in place by oppressive governments. Those people deserve our thanks. So thank you to everyone involved. Except the oppressors. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. Now go install Tor, if only to help those who need to access modern memes to do so. Your work is awesome sauce. Have a great day.
7/1/2020 • 11 minutes, 43 seconds
The Great Web Blackout of 1996
The killing of George Floyd at the hands of police in Minneapolis gave the Black Lives Matter movement a new level of prominence, and protesting racial injustice jumped into the global spotlight with protests spreading first to Louisville and then to practically every major city in the world. Protesting is nothing new, but the impacts can be seen far and wide. From the civil rights protests and Vietnam War protests in the 60s onward, they are a way for citizens to use their free speech to enact social change. After all, Amendment I states that "Congress shall make no law ... abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble." The 90s was a weird time. In many ways secularization was gaining momentum in the US, and many of the things people feared have turned out to become reality. Many have turned their backs on religion in favor of technology. Neil Gaiman brought this concept to television by turning technology into a god. And whether they knew that was what they were worried about or not, the 90s saw a number of movements meant to let the thought police intrude into everyday life. Battle lines were drawn by people like Tipper Gore, who wanted to slap a label on music, and a long and steady backlash to those efforts led to many of the culture battles we are fighting today. These days we say “All Lives Matter” but we often really mean that life was simpler when we went to church. And many go to church still. But not like we used to. Consider this. 70% of Americans went to church in 1976. Now it’s less than half. And less than a third have been to church in the past week. That shouldn’t take anything away from the impact religion has in the lives of many. But a societal shift has been occurring for sure. And the impact of a global, online, interconnected society is often under-represented. Imagine this. A way of talking to other humans in practically every country in the world was emerging.
Before, we paid hefty long distance bills or sent written communication that could take days or weeks to be delivered. And along came this weird new medium that allowed us to talk to almost anyone, almost instantly. And for free. We could put images, sounds, and written words almost anonymously out there and access the same. And people did. The rise of Internet porn wasn’t a thing yet. But we could come home from church, go online, and find almost anything. And by anything, it could be porn. Today, we just assume we can find any old kind of porn anywhere, but that wasn’t always the case. In fact, we don’t even consider sex education materials or some forms of nudity porn any more. We’ve become desensitized to it. But that wasn’t always the case. And that represented a pretty substantial change. And all societal changes, whether good or bad, get a good old-fashioned backlash. Which is what Title V of the Telecommunications Act of 1996, better known as the Communications Decency Act, was. But the Electronic Frontier Foundation (or EFF) had been anticipating the backlash. The legislation could fine or even incarcerate people for distributing offensive or indecent content. Battle lines were forming between those who wanted to turn librarians into the arbiters of free speech and those who thought all content should be open. Then, as now, the politicians did not understand the technology. They can’t. It’s not what got them elected. I’ve never judged that. But they understood that the boundaries of free speech were again being tested, and they, as they have done for hundreds of years, wanted to try and limit the pushing of the boundaries. Because sometimes progress is uncomfortable. Enter the Blue Ribbon Online Free Speech Campaign, which the EFF organized along with the Center for Democracy and Technology. The Blue Ribbon campaign encouraged site owners to post images of ribbons on their sites in support. Now, at this point, no one argued these were paid actors.
They branded themselves as Netizens and planned to protest. A new breed of protests, online and in person. And protest they did. They did not want their Internet, or the Internet we have inherited 25 years later, to be censored. Works of art are free. Access to medical information that some might consider scandalous is free. And yes, porn is often free. We called people who ran websites webmasters back then. They were masters of zeros and ones in HTML. The webmasters thought people making laws didn’t understand what they were trying to regulate. They didn’t. But lawmakers get savvier every year. Just as the Internet becomes harder to understand. People like Shabir Safdar were unsung heroes. Patrick Leahy, the Democratic senator from Vermont, spoke out. As did Yahoo and Netscape. Lawmakers wanted to regulate the Internet like they had done television. But we weren’t having it. And then, surprisingly, Bill Clinton signed the CDA into law. The pioneers of the Internet jumped into action. From San Francisco to the CDT in Brussels, they planned to set backgrounds black. I remember it happening but was too young to understand what it meant at the time. I just thought they were cool looking. It was February 8, 1996. And backgrounds were changed for 48 hours. The protests were covered by CNN, Time Magazine, the New York Times, and Wired. It got enough attention that the ACLU jumped into the fight. And ultimately the Act’s indecency provisions were declared unconstitutional by the US Supreme Court in 1997. Justice John Paul Stevens wrote the opinion, with Sandra Day O’Connor, joined by Chief Justice William Rehnquist, concurring in the judgment in part. The Internet we have today, for better or worse, was free. As free for posting videos of police killing young black men as it is to post nudes, erotic fiction, or ads to buy viagra. Could it be done again some day? Yes. Will it? Probably. Every few years legislators try to implement another form of the act. SOPA, COPA, and the list goes on. But again and again, we find these laws struck down. 
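The "turn the Web black" protest required almost no technology - site owners just changed a couple of attributes on their pages. A hypothetical protest page of the era might have looked something like this (the markup style is period, pre-CSS HTML; the wording is invented for illustration):

```html
<!-- A 1996-style page "gone dark" for the 48-hour CDA protest -->
<html>
  <head>
    <title>Turn the Web Black</title>
  </head>
  <!-- bgcolor and text attributes were how pages set colors before CSS -->
  <body bgcolor="#000000" text="#ffffff">
    <p>This site has gone black for 48 hours to protest the
    Communications Decency Act.</p>
  </body>
</html>
```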
The thought police had been thwarted. As recently as 2012, Reddit and other sites protested SOPA and PIPA by repeating the blackout. The protests brought enough attention that the bills were shelved before they could become law. Because free speech. And there’s hate speech sprinkled in there as well. Because the Internet helps surface the best and worst of humanity. But you know what, we’re better off for having all of it out there in the open, as hurtful and wonderful and beautiful and ugly as it all can be, according to our perspectives. And that’s the way it should be. Because the knowledge of all of it helps us to grow and be better and address that which needs to be addressed. And society will always grapple with adapting to technological change. That’s been human nature since Prometheus stole fire and gave it to humanity. Just as we’ve been trying to protect intellectual property and combat piracy and everything else that can butt up against accelerating progress. It’s hard to know where the lines should be drawn. And globalism in the form of globally connected computers doesn’t make any of that any easier. So thank you to the heroes who forced this issue to prominence and got the backing to fight it back in the 90s. If it had been over-regulated we might not have the Internet as it is today. Just as it should be. Thank you for helping to protect free speech. Thank you for practicing your free speech. And last but not least, thank you for tuning in to this episode of the History of Computing Podcast. Now go protest something!
6/27/2020 • 11 minutes, 17 seconds
America Online (AOL)
Today we’re going to cover America Online, or AOL. The first exposure many people had to “going online” was to hear a modem connect. And the first exposure many had to electronic mail was the sound “you’ve got mail.” But how did AOL rise so meteorically to help mainstream first going online in walled gardens and then connecting to the Internet? It’s 1983. Steve Case joins a company called Control Video Corporation to bring online services to the now-iconic Atari 2600. CVC was bringing a service called GameLine to allow subscribers to rent games over a dialup connection. Case had grown up in Honolulu and then gone to Williams College in Massachusetts, which until the rise of the Internet culture had been a breeding ground for tech companies. Up to this point, the personal computer market had mostly been for hobbyists, but it was slowly starting to go mainstream. Case saw the power of pushing bits over modems. He saw the rise of ARPAnet and the merger of the nets that would create the Internet. The Internet had begun life as ARPAnet, a US Defense Department project, until 1981, when the National Science Foundation stepped in to start the process of networking non-defense-oriented computers. And by the time Case’s employer Control Video Corporation was trying to rent games for a dollar, something much larger than the video game market was starting to happen. From 1985 to 1993, the Internet, then mostly NSFNET, surged from 2,000 users to 2,000,000 users. In that time, Tim Berners-Lee created the World Wide Web in 1991 at CERN, and Mosaic came out of the National Center for Supercomputing Applications, or NCSA, at the University of Illinois, quickly becoming the browser everyone wanted to use until Marc Andreessen left to form Netscape. In 1993 NSFNET began the process of unloading the backbone and helped the world develop the Internet. 
And the AOL story in that time frame was similar to that of many other online services, which we think of today as Internet Service Providers. The difference is that today these are companies individuals pay to get them on the Internet, while back then they connected people to private networks. When AOL began life in 1985, they were called Quantum Computer Services. Case began as VP of Marketing but would transition to CEO in 1991. But Case had been charged with strategy early on and they focused on networking Commodore computers with a service they called Q-Link, or Quantum Link. Up until that point, most software that connected computers together had been terminal emulators. But the dialup service they built used the processing power of the Commodore to connect to services they offered, allowing it to be much more scalable. They kept thinking of things to add to the service, starting with online chat using a service called Habitat in 1986. And by 1988 they were adding serialized fiction with a series they called QuantumLink Serial. By 1988 they were able to add AppleLink for Apple users and PC Link for people with IBM computers and IBM clones. By 1989 they were growing far faster than Apple, the deal with Apple soured, and they changed their name to America Online. They had always included games with their product, but included a host of other services like news, chat, and mail. CompuServe changed everything when they focused on connecting people to the Internet in 1989, a model that AOL would eventually embrace. But AOL was all about community from the beginning. They connected groups, provided chat communities for specific interests, and always with the games. That focus on community was paying off. Neverwinter Nights, a Dungeons & Dragons game widely considered the first graphical Massively Multiplayer Online Role Playing Game, got huge. Sure, there had been communities and massively multiplayer games before. 
So most of the community initiatives weren’t new or innovative, just done better than others had done them before. They launched AOL for DOS in 1991 and AOL for Windows in 1992. At this point, you paid by the hour to access the network. People would dial in, access content, write back offline, then dial back in to send stuff. A lot of their revenue came from overages. But they were growing at a nice and steady pace. In 1993 they gave access to Usenet to users. In the early 90s, half of the CDs being pressed were for installing AOL on computers. By 1994 they hit a million subscribers. That’s when they killed off PC Link and Q-Link to focus on the AOL service and just kept growing. But there were challengers, and at the time, larger competitors in the market. CompuServe had been early to market connecting people to the Internet, but IBM and Sears had teamed up to bring Prodigy to market. The three providers were known as the big three when modems ran at 9,600 bits per second. As the mid-90s came around, AOL bought WebCrawler in 1995 and sold it to Excite shortly thereafter, inking a deal with Excite to provide search services. They were up to 3 million users. In 1996, with downward pressure on pricing, they went to a flat $19.95 pricing model. This led to a spike in usage that they weren’t prepared for and a lot of busy signals, which caused a lot of users to cancel after just a short time using the service. And yet, they continued to grow. They inked a deal with Microsoft for AOL to be bundled with Windows and the growth accelerated. 1997 was a big year. Case engineered a three-way deal where WorldCom bought CompuServe for $1.2 billion in stock and then sold the online service to AOL. This made way for a whole slew of competitors to grow, which is an often-unanticipated result of big acquisitions. This was also the year they released AIM, which gave us our first taste of a network-effect messaging service. Even after leaving AOL, many a subscriber hung on to AIM for a decade. 
That’s now been replaced by WhatsApp, Facebook Messenger, text messaging, Snapchat to some degree, and messaging features inside practically every tool, from Instagram and Twitter to more community-based solutions like Slack and Microsoft Teams. AIM caused people to stay online longer. Which was great in an hourly model but problematic in a flat pricing model. Yet it was explosive until Microsoft and others stepped in to compete with the free service. It lasted until it was shut down in 2017. By then, I was surprised it was still running, to be honest. In 1998 AOL spent $4.2 billion to buy Netscape. And Netscape would never be the same. Everyone thought the Internet would become a huge mall at that point. But instead, that would have to wait for Amazon to emerge as the behemoth they now are. In 1999, AOL launched AOL Search and hit 10 million users. AOL invested $800 million in Gateway, and those CompuServe users put another 2.2 million subscribers on the board. They also bought MapQuest for $1.1 billion. And here’s the thing: that playbook of owning the browser, community content, a shopping experience, content-content, maps, and everything else was really starting to become a playbook that others would follow in the dark ages after the collapse of AOL. And yes, that would be coming. All empires over-extend themselves eventually. In 2000 they made over $4 billion in subscriptions. 15 years of hard work was paying off. With over 23 million subscribers, their market valuation was at $224 billion in today’s money and, check this out, only half of the US was online. But they could sense the tides changing. We could all feel the broadband revolution in the air. Maybe to diversify or maybe to grow into areas they hadn’t, AOL merged with media conglomerate Time Warner in 2001, paying $165 billion for them in what was then the biggest merger (or reverse merger, maybe) in history. This was a defining moment for the history of the Internet. 
AOL was clearly leveraging their entry point into the internet as a means of pivoting to the online advertising market and Warner Cable brought them into broadband. But this is where the company became overextended. Yes, old media and new media were meeting but it was obvious almost immediately that this was a culture clash and the company never really met the growth targets. Not only because they were overextended but also because so much money was being pumped into Internet startups that there were barbarians at every gate. And of course, the dot com bubble burst. Oh, and while only 1% of homes had broadband, that market was clearly about to pop and leave companies like AOL in the dust. But, now Time Warner and Time Warner Cable would soften that blow as it came. 2002, over 26 million users. And that’s when the decline began. By then 12% of homes in the US were wired up to broadband, likely DSL, or Digital Subscriber Lines, at that time. Case left AOL in 2003 and the words AOL would get dropped from the name. The company was now just Time Warner again. 2004 brings a half billion dollar settlement with the SEC for securities fraud. Oops. More important than the cash crunch, it was a horrible PR problem at a time when subscribers were falling off and broadband had encroached with over a quarter of US homes embracing faster internet usage than anything dialup could offer. The advertising retooling continued as the number of subscribers fell. In 2007 AOL moved to New York to be closer to those Mad Men. By the way, the show Mad Men happened to start that year. This also came with layoffs. And by then, broadband had blanketed half of the US. And now, wireless Internet was being developed, although it would not start to encroach until about 2013. AOL and Time Warner get a divorce in 2009 when AOL gets spun back off into its own standalone company and Tim Armstrong is brought in from Google to run the place. 
They bought his old company Patch.com that year, to invest in more hyperlocal news. You know those little papers we all get for our little neighborhoods? They often don’t seem like too much more than a zine from the 90s. Hyperlocal is information for a smaller community, with a focus on the concerns and what matters to that cohort. In 2010 they bought TechCrunch, and in 2011, The Huffington Post. To raise cash they sold off a billion dollars in patents to Microsoft in 2012. Verizon bought AOL in 2015 for $4.4 billion. They would merge it with Yahoo! in 2017 as a company called Oath that is now called Verizon Media. And thus, AOL ceased to exist. Today some of those acquisitions are part of Verizon Media and others, like Tumblr, were ruined by mismanagement and corporate infighting. Many of the early ideas paved the way for future companies. AOL Local can be seen in companies like Yelp. AOL Video is similar to what became YouTube or TikTok. Or streaming media like Netflix and Hulu. AOL Instant Messenger in WhatsApp. XDrive in Google Drive. AOL News in CNN, Apple News, Fox News, etc. We now live in an app-driven world where each of these can be a new app coming around every year or two and then fading into the background as the services are acquired by an Amazon, Google, Apple, or Facebook and then fade off into the sunset, only to have others see the billions of dollars paid as a reason to put their own spin on the concept. Steve Case runs an investment firm now. He clearly had a vision for the future of the Internet and did well off that. And his book The Third Wave lays out the concept that rather than try to build all the stuff a company like AOL did, companies would partner with one another. While that sounds like a great strategy, we do keep seeing acquisitions over partnerships. Because otherwise it’s hard to communicate priorities through all the management layers of a larger company. 
He talked about perseverance, like how Uber and Airbnb would punch through the policies of regulators. I suspect what we are seeing by being sent home due to COVID will propel a lot of technology 5-10 years in adoption and force that issue. But I think the most interesting aspect of that book to me was when he talked about R&D spending in the US. He made a lot of money at AOL by riding the first wave of the Internet. And that began far before him, when the ARPANET was formed in 1969. R&D spending has dropped to the lowest point since 1950, due to a lot of factors, not least of which is the end of the Cold War. And we’re starting to see the well of ideas and innovations that came out of that period dry up as those fields transition into being heavily regulated. So think about this. AOL made a lot of money by making it really, really easy to get online and then on the Internet. They truly helped to change the world by taking R&D that the government instigated in the 70s and giving everyday people, not computer scientists, access to it. They built communities around it and later diversified when the tides were changing. What R&D from 5 to 20 years ago that could truly benefit humanity hasn’t made it into homes across the world - and what of it can we help to proliferate? Thank you for joining us for this episode of the History of Computing Podcast. We are so lucky to have you and we are so lucky to make use of the innovations you might be bringing us in the future. Whether those are net-new technologies, or just making that research available to all. Have a great day.
6/25/2020 • 19 minutes, 9 seconds
Bill Gates Essay: Content Is King
Today we’re going to cover an essay Bill Gates wrote in 1996, a year and change after his infamous Internet Tidal Wave memo, called Content is King, a term that has now become ubiquitous. It’s a bit long but perfectly explains the Internet business model until such time as there was so much content that the business model had to change. See, once anyone could produce content and host it for free, like in the era of Blogger, the model flipped. So here goes: “Content is where I expect much of the real money will be made on the Internet, just as it was in broadcasting. The television revolution that began half a century ago spawned a number of industries, including the manufacturing of TV sets, but the long-term winners were those who used the medium to deliver information and entertainment. When it comes to an interactive network such as the Internet, the definition of “content” becomes very wide. For example, computer software is a form of content-an extremely important one, and the one that for Microsoft will remain by far the most important. But the broad opportunities for most companies involve supplying information or entertainment. No company is too small to participate. One of the exciting things about the Internet is that anyone with a PC and a modem can publish whatever content they can create. In a sense, the Internet is the multimedia equivalent of the photocopier. It allows material to be duplicated at low cost, no matter the size of the audience. The Internet also allows information to be distributed worldwide at basically zero marginal cost to the publisher. Opportunities are remarkable, and many companies are laying plans to create content for the Internet. For example, the television network NBC and Microsoft recently agreed to enter the interactive news business together. Our companies will jointly own a cable news network, MSNBC, and an interactive news service on the Internet. NBC will maintain editorial control over the joint venture. 
I expect societies will see intense competition-and ample failure as well as success-in all categories of popular content-not just software and news, but also games, entertainment, sports programming, directories, classified advertising, and on-line communities devoted to major interests. Printed magazines have readerships that share common interests. It’s easy to imagine these communities being served by electronic online editions. But to be successful online, a magazine can’t just take what it has in print and move it to the electronic realm. There isn’t enough depth or interactivity in print content to overcome the drawbacks of the online medium. If people are to be expected to put up with turning on a computer to read a screen, they must be rewarded with deep and extremely up-to-date information that they can explore at will. They need to have audio, and possibly video. They need an opportunity for personal involvement that goes far beyond that offered through the letters-to-the-editor pages of print magazines. A question on many minds is how often the same company that serves an interest group in print will succeed in serving it online. Even the very future of certain printed magazines is called into question by the Internet. For example, the Internet is already revolutionizing the exchange of specialized scientific information. Printed scientific journals tend to have small circulations, making them high-priced. University libraries are a big part of the market. It’s been an awkward, slow, expensive way to distribute information to a specialized audience, but there hasn’t been an alternative. Now some researchers are beginning to use the Internet to publish scientific findings. The practice challenges the future of some venerable printed journals. Over time, the breadth of information on the Internet will be enormous, which will make it compelling. 
Although the gold rush atmosphere today is primarily confined to the United States, I expect it to sweep the world as communications costs come down and a critical mass of localized content becomes available in different countries. For the Internet to thrive, content providers must be paid for their work. The long-term prospects are good, but I expect a lot of disappointment in the short-term as content companies struggle to make money through advertising or subscriptions. It isn’t working yet, and it may not for some time. So far, at least, most of the money and effort put into interactive publishing is little more than a labor of love, or an effort to help promote products sold in the non-electronic world. Often these efforts are based on the belief that over time someone will figure out how to get revenue. In the long run, advertising is promising. An advantage of interactive advertising is that an initial message needs only to attract attention rather than convey much information. A user can click on the ad to get additional information-and an advertiser can measure whether people are doing so. But today the amount of subscription revenue or advertising revenue realized on the Internet is near zero-maybe $20 million or $30 million in total. Advertisers are always a little reluctant about a new medium, and the Internet is certainly new and different. Some reluctance on the part of advertisers may be justified, because many Internet users are less-than-thrilled about seeing advertising. One reason is that many advertisers use big images that take a long time to download across a telephone dial-up connection. A magazine ad takes up space too, but a reader can flip a printed page rapidly. As connections to the Internet get faster, the annoyance of waiting for an advertisement to load will diminish and then disappear. But that’s a few years off. Some content companies are experimenting with subscriptions, often with the lure of some free content. 
It’s tricky, though, because as soon as an electronic community charges a subscription, the number of people who visit the site drops dramatically, reducing the value proposition to advertisers. A major reason paying for content doesn’t work very well yet is that it’s not practical to charge small amounts. The cost and hassle of electronic transactions makes it impractical to charge less than a fairly high subscription rate. But within a year the mechanisms will be in place that allow content providers to charge just a cent or a few cents for information. If you decide to visit a page that costs a nickel, you won’t be writing a check or getting a bill in the mail for a nickel. You’ll just click on what you want, knowing you’ll be charged a nickel on an aggregated basis. This technology will liberate publishers to charge small amounts of money, in the hope of attracting wide audiences. Those who succeed will propel the Internet forward as a marketplace of ideas, experiences, and products-a marketplace of content.”
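The aggregated billing Gates describes - click on a nickel page now, get charged later once the pennies add up to a transaction worth processing - can be sketched in a few lines. This is just a toy model to make the idea concrete; the class name and threshold are invented for illustration, not anything Microsoft built:

```python
class MicropaymentLedger:
    """Toy model of aggregated micro-billing: per-click charges
    accumulate and are only settled once they pass a threshold,
    amortizing the fixed cost of an electronic transaction."""

    def __init__(self, settle_threshold_cents=100):
        self.settle_threshold = settle_threshold_cents
        self.pending = {}   # user -> cents accrued but not yet billed
        self.settled = []   # (user, cents) batches actually billed

    def charge(self, user, cents):
        # Record the charge; settle only when the batch is big enough.
        self.pending[user] = self.pending.get(user, 0) + cents
        if self.pending[user] >= self.settle_threshold:
            self.settled.append((user, self.pending[user]))
            self.pending[user] = 0

ledger = MicropaymentLedger(settle_threshold_cents=25)
for _ in range(6):          # six nickel page views
    ledger.charge("alice", 5)
# alice is billed once, in a single 25-cent batch, with 5 cents pending
```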
6/6/2020 • 9 minutes, 57 seconds
ALGOL
Today we’re going to cover a computer programming language many might not have heard of, ALGOL. ALGOL was written in 1958. It wasn’t like many of the other languages in that it was built by committee. The Association for Computing Machinery and the German Society of Applied Mathematics and Mechanics were floating around ideas for a universal computer programming language. Members from the ACM were a who’s who of people influential in the transition from custom computers that were the size of small homes to mainframes. John Backus of IBM had written a programming language called Speedcoding and then Fortran. Joseph Wegstein would also be involved in the development of COBOL. Alan Perlis had been involved in Whirlwind and was with the Carnegie Institute of Technology. Charles Katz had worked with Grace Hopper on UNIVAC and FLOW-MATIC. The Germans were equally as influential. Friedrich Bauer had brought us the stack method while at the Technical University of Munich. Hermann Bottenbruch from the Institute for Applied Mathematics had written a paper on constructing languages. Klaus Samelson had worked on a computer called PERM that was similar to the MIT Whirlwind project. He’d come into computing while studying eigenvalues. Heinz Rutishauser had written a number of papers on programming techniques and had codeveloped the language Superplan while at the Swiss Federal Institute of Technology, where the meeting would be hosted. They met from May 27th to June 2nd in 1958 and initially called the language they would develop IAL, or the International Algebraic Language. But they would expand the name to ALGOL, short for Algorithmic Language. They brought us code blocks: the concept that you have a pair of words or symbols that begin and end a stanza of code, like begin and end. They introduced nested scoped functions. They wrote the whole language right there. You would declare a variable by simply saying integer and assign to it with a := 1. 
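To make that concrete, the flavor of those features - begin/end blocks, declarations, := assignment, and the for loop - looks something like this ALGOL 60 style fragment (an illustrative reconstruction, not code from the original report):

```algol
begin
  integer i; real sum;
  sum := 0;
  comment sum the sines of the first ten integers;
  for i := 1 step 1 until 10 do
    sum := sum + sin(i)
end
```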
You would write a for statement and define the steps to perform until a condition was met - the root of what we would now call a for loop. You could read a variable in from a punch card. It had built-in SIN and COS. It was line based and fairly simple functional programming by today’s standards. They defined how to handle special characters, built boolean operators, floating point notation. It even had portable types. And by the end they had a compiler that would run on the Z22 computer from Konrad Zuse. While it was some of Backus’ best work, it effectively competed with FORTRAN and never really gained traction at IBM. But it influenced almost everything that happened afterwards. Languages were popping up all over the place and in order to bring in more programmers, they wanted a formalized way to allow languages to flourish, but with a standardized notation system so algorithms could be published and shared and developers could follow along with the logic. One outcome of the ALGOL project was the Backus–Naur form, which was the first such standardization. That would be expanded by the Danish computer scientist Peter Naur for ALGOL 60, thus the name. For ALGOL 60 they would meet in Paris, also adding John McCarthy, Julien Green, Bernard Vauquois, Adriaan van Wijngaarden, and Michael Woodger. It got refined, yet a bit more complicated. FORTRAN and COBOL use continued to rage on, but academics loved ALGOL. And the original implementation, now referred to as the ZMMD implementation, gave way to X1 ALGOL, Case ALGOL, ZAM in Poland, GOGOL, VALGOL, RegneCentralen ALGOL, Whetstone ALGOL for physics, Chinese ALGOL, ALGAMS, NU ALGOL out of Norway, ALGEK out of Russia, Dartmouth ALGOL, DG/L, USS 90 Algol, Elliot ALGOL, the ALGOL Translator, Kidsgrove Algol, JOVIAL, Burroughs ALGOL, Niklaus Wirth’s ALGOL W, which led to Pascal, MALGOL, and the last would be S-algol in 1979. But it got overly complicated and overly formal. Individual developers wanted more flexibility here and there. Some wanted simpler languages. 
Some needed more complicated languages. ALGOL didn’t disappear as much as it evolved into other languages. Those were coming out fast, and with a committee to approve changes to ALGOL, it was much slower to iterate. You see, ALGOL profoundly shaped how we think of programming languages. That formalization was critical to paving the way for generations of developers who brought us future languages. ALGOL would end up being the parent of CPL and through CPL, BCPL, C, C++, and through that Objective-C. From ALGOL also sprang Simula and through Simula, Smalltalk. And Pascal and from there, Modula and Delphi. It was only used for a few years but it spawned so much of what developers use to build software today. In fact, other languages evolved as anti-ALGOL derivatives, looking at how ALGOL did something and deciding to do it totally differently. And so we owe this crew our thanks. They helped to legitimize a new doctrine, a new career: computer programmer. They inspired. They coded. And in so doing, they helped bring us into the world of functional programming and set structures that allowed the next generation of great thinkers to go even further, directly influencing people like Adele Goldberg and Alan Kay. And it’s okay that the name of this massive contribution is mostly lost to the annals of history. Because ultimately, the impact is not. So think about this - what can we do to help shape the world we live in? Whether it be through raw creation, iteration, standardization, or formalization - we all have a role to play in this world. I look forward to hearing more about yours as it evolves!
5/26/2020 • 8 minutes, 37 seconds
The Homebrew Computer Club
Today we’re going to cover the Homebrew Computer Club. Gordon French and Fred Moore started the Homebrew Computer Club. French hosted the Homebrew Computer Club’s first meeting in his garage in Menlo Park, California on March 5th, 1975. I can’t help but wonder if they knew they were about to become the fuse that lit a powder keg? If they knew they would play a critical role in inspiring generations to go out and buy personal computers and automate everything. If they knew they would inspire the next generation of Silicon Valley hackers? Heck, it’s hard to imagine they didn’t with everything going on at the time. Hunter S Thompson rolling around deranged, Patty Hearst robbing banks in the area, the new 6800 and 8008 chips shipping… Within a couple of weeks they were printing a newsletter. I hear no leisure suits were damaged in the making of it. The club would meet in French’s garage three times until he moved to Baltimore to take a job with the Social Security Administration. The group would go on without him until late in 1986. By then, the club had played a substantial part in spawning companies like Cromemco, Osborne, and most famously, Apple. The members of the club traded parts, ideas, rumors, and hacks. The first meeting was really all about checking out the Altair 8800, by an Albuquerque calculator company called MITS, which would fan the flames of the personal computer revolution by inspiring hackers all over the world to build their own devices. It was the end of an era of free love and free information. Thompson described it as a high water mark. Apple would help to end the concept of free, making its founders rich beyond their working-class dreams. A newsletter called the People’s Computer Company had gotten an early Altair. Bob Albrecht would later change the name of the publication to Dr Dobbs. That first, fateful meeting inspired Steve Wozniak to start working on one of the most important computers of the PC revolution, the Apple I. 
They’d bounce around until they pretty much moved into Stanford for good. I love a classic swap meet, and after meetings, some members of the group would reconvene at a parking lot or a bar to trade parts. They traded ideas, concepts, stories, hacks, schematics, and even software. Which inspired Bill Gates to write his “Open Letter to Hobbyists” - which he sent to the club’s newsletter. Many of the best computer minds of the late 70s were members of this collective. George Morrow would make computers, mostly through his company Morrow Designs, for 30 years. Jerry Lawson invented cartridge-based gaming. Lee Felsenstein built the SOL, a computer based on the Intel 8080, the Pennywhistle Modem, and designed the Osborne 1, the first real portable computer. He did that with Adam Osborne, who he met at the club. Li-Chen Wang developed Palo Alto Tiny BASIC. Todd Fischer would help design the IMSAI. Paul Terrell would create the Byte Shop, a popular store for hobbyists that bought the first 50 Apple I computers to help launch the company. It was also the only place to buy the Altair in the area. Dan Werthimer founded the SETI@home project. Roger Melen would found Cromemco with Harry Garland. They named the company after Crothers Memorial, the graduate student engineering dorm at Stanford. They built computers and peripherals for the Z80 and S-100 bus. They gave us the Cyclops digital camera, the JS-1 joystick, and the Dazzler color graphics interface - all for the Altair. They would then build the Z-1 computer, using the same chassis as the IMSAI, iterating new computers until 1987 when they sold to Dynatech. John Draper, also known as Captain Crunch, had become a famous phreaker in 1971, having figured out that a whistle from a box of Cap’n Crunch would mimic the 2600 hertz frequency used to route calls. His Blue Box design was then shared with Steve Wozniak, who set up a business selling them with his buddy from high school, Steve Jobs. 
And of course, Steve Wozniak would design the Apple I using what he learned at the meetings and team up with his buddy Steve Jobs to create Apple Computer and launch the Apple I - Woz wanted to give his schematics away for free, while Jobs wanted to sell the boards. That led to the Apple II, which made both wealthy beyond their wildest imaginations and paved the way for the Mac and every innovation to come out of Apple since. Slowly the members left to pursue their various companies. When the club ended in 1986, the personal computing revolution had come and IBM was taking the industry over. A number of members continued to meet for decades under a new name, the 6800 Club, after the Motorola 6800 chip. This small band of pirates and innovators changed the world. Their meetings produced the concepts and designs that would be used in computers from Atari, Texas Instruments, Apple, and every other major player in the original personal computing hobbyist market. The members would found companies that went public and inspired IBM to enter what had been a hobbyist market and turn it into a full-fledged industry. They would democratize the computer, and their counter-culture personalities would humanize computing and even steer it to benefit humans, in an era when computers were considered part of the military-industrial complex and therefore evil. They were open with one another, leading to faster sharing of ideas, faster innovation. Until suddenly they weren’t. And the high-water mark of open ideas was replaced with innovation that was financially motivated. They capitalized on a recession in chips as war efforts spun down. And they changed the world. And for that, we thank them. And I thank you, listener, for tuning in to this episode of the History of Computing Podcast. We are so, so lucky to have you. Now tune in to innovation, drop out of binge watching, and go change the world.
5/23/2020 • 9 minutes, 36 seconds
Konrad Zuse
Today we’re going to cover the complicated legacy of Konrad Zuse. Konrad Zuse is one of the biggest pioneers of early computing that relatively few have heard about. We tend to celebrate those who lived and worked in Allied countries in the World War II era. But Zuse had been born in Berlin in 1910. He worked in isolation during those early days, building his historic Z1 computer at 26 years old in his parents’ living room. It was 1936. That computer was a mechanical computer, and he was really more of a guru when it came to mechanical and electromechanical computing. Mechanical computing was a lot like watchmaking, with gears and automations. There was art in it, and Zuse had been an artist early on in life. This was the first computer that really contained every part of what we would today think of as a modern computer. It had a central processing control unit. It had memory. It had input through punched tape that could be used to program it. It even had floating-point logic. It had an electric motor that ran at 1 hertz. This design would live inside future computers that he built, but the Z1 itself was destroyed in 1943 during air raids, and would be lost to history until Zuse built a replica in 1989. He started building the Z2 in 1940. This used the same memory as the Z1 (64 words) but had 600 relays that allowed him to get up to 5 hertz. He’d also speed up calculations with those relays, but the power required would jump up to a thousand watts. He would hand it over to the German DVL, now the German Aerospace Center. If there are Nazis on the moon, his computers likely put them there. And this is really where the German authorities stepped in and, as in the US, began funding efforts in technological advancement. They saw the value of modeling all the maths on these behemoths. They ponied up the cash to build the Z3. And this, ironically, turned out to be the first Turing-complete computer. He’d continue with 22-bit word lengths and run at 5 hertz. 
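Those 22-bit floating-point words can be sketched in a few lines of modern code. This is purely illustrative, not a reconstruction of Zuse’s circuitry: the field widths here (a sign bit, a 7-bit exponent, and a 14-bit mantissa) follow commonly cited descriptions of the Z3’s word layout, so treat them as an assumption.

```python
import math

# Assumed layout: 1 sign bit + 7-bit exponent + 14-bit mantissa = 22 bits.
EXP_BITS, MAN_BITS = 7, 14

def encode(x):
    """Split a nonzero number into (sign, exponent, mantissa_bits)."""
    sign = 0 if x >= 0 else 1
    x = abs(x)
    exp = math.floor(math.log2(x))       # normalize so the fraction is in [1, 2)
    frac = x / (2 ** exp)                # 1.xxxx...
    mant = round((frac - 1) * (1 << MAN_BITS))  # store the bits after the leading 1
    return sign, exp, mant

def decode(sign, exp, mant):
    """Rebuild the number: (-1)^sign * (1 + mantissa) * 2^exponent."""
    frac = 1 + mant / (1 << MAN_BITS)
    return (-1) ** sign * frac * (2 ** exp)

# 6.25 is 1.5625 x 2^2, which fits exactly in 14 mantissa bits
s, e, m = encode(6.25)
```

The same normalize-then-store idea is what IEEE 754 standardized decades later, which is part of why Zuse’s design looks so modern in hindsight.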
But this device would have 2,600 relays and would help to solve wing flutter problems and other complicated aerodynamic mathematical mysteries. The machine also used Boolean algebra, a concept brought into computing independently by Claude Shannon in the US. It was finished in 1941, two years before Tommy Flowers finished the Colossus, a year before the Atanasoff-Berry Computer was built, and five years before ENIAC. And this baby was fast. Those relays crunched multiplication problems in 3 seconds. Suddenly you could calculate square roots in no time. But the German war effort was more focused on mechanical computing, and this breakthrough was never considered critical to the war effort. Still, it was destroyed by Allied air raids, just as its older siblings had been. The war had gone from 1939 to 1945, the year he married Gisela and his first child was born. He would finish building the Z4 days before the end of the war and met Alan Turing in 1947. He founded Zuse KG in 1949. The Germans were emerging from a post-wartime depression and normalizing relations with the rest of Europe. The Z4 would finally go into production in Zurich in 1950. His team was now up to a couple dozen people and he was getting known. With electronics getting better, faster, and better known, he was able to bring in specialists, and with 2,500 relays - now including 21 stepwise relays - get up to 40 hertz. And to un-complicate something from a book I read: no, Apple was not the first company to hook a keyboard up to a computer; the Z machines did it in the 50s, as they were now using a typewriter to help program the computer. OK, fine, ENIAC did it in 1946… But can you imagine hooking a keyboard up to a device rather than just tapping on the screen?!?! Archaic! For two years, the Z4 was the only digital computer in all of Europe. But that was all about to change. They would refine the design and build the Z5, delivering it to Leitz GmbH in 1953. 
The Americans tried to recruit him to join their growing cache of computer scientists, sending Douglas Buck and others out. But he stayed on in Germany. They would tinker with the designs, and by 1955 came the Z11, shipping in 1957. This would be the first computer they produced multiples of, in an almost assembly-line fashion, building 48 of them, and it gave them enough money to build their next big success, the Z22. This was his seventh design and would use vacuum tubes. It even had an ALGOL 58 compiler. If you can believe it, the University of Applied Sciences, Karlsruhe still has one running! It added a rudimentary form of water cooling, teletype, drum memory, and core memory. They were now part of the computing mainstream. And in 1961 they would go transistorized with the Z23. Ferrite core memory. 150 kilohertz. ALGOL 60. This was on par with anything being built in the world. Transistors and diodes. They’d sell nearly 100 of them over the next few years. They would even have Z25 and Z26 variants. The Z31 would ship in 1963. They would make it to the Z43. But the company would run into financial problems and be sold to Siemens in 1967, who had gotten into computing in the 1950s. Being able to focus on something other than running a company prompted Zuse to write Calculating Space, effectively positing that the universe is a computational structure, a view now known as digital physics. He wasn’t weird, you’re weird. OK, he was… He was never a Nazi, but he did build machines that could have helped their effort. You can trace the history of the mainframe era from gears to relays to tubes to transistors in his machines. IBM and other companies licensed his patents. And many advances were almost validated by him independently discovering them, like the use of Boolean algebra in computing. But to some degree he was a German in a lost era of history, a fate that often falls to the losing side of a war. So Konrad Zuse, thank you for one of the few clean timelines. It was a fun romp. 
I hope you have a lovely place in history, however complicated it may be. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. We are so lucky to have you stop by. I hope you have a lovely and quite uncomplicated day!
5/19/2020 • 9 minutes, 34 seconds
The Atanasoff-Berry Computer
Today we’re going to cover the Atanasoff-Berry Computer (ABC), the first real automatic electronic digital computer. The Atanasoff-Berry Computer was the brainchild of John Vincent Atanasoff. He was a physics professor at Iowa State College at the time. And it’s like he was born to usher in the era of computers. His dad had emigrated to New York from Bulgaria, then a part of the Ottoman Empire, and moved to Florida after John was born. The fascination with electronics came early, as his dad Ivan was an electrical engineer. And seeking to solve math problems with electronics - well, his mom Iva was a math teacher. He would get his bachelor’s from the University of Florida and go to Iowa State College to get his master’s. He’d end up at the University of Wisconsin to get his PhD before returning to Iowa State College to become a physics professor. But there was a problem with teaching physics. The students in Atanasoff’s physics courses took weeks to calculate equations, getting in the way of learning bigger concepts. So in 1934 he started working on ideas. Ideas like using binary algebra to compute tasks. Using logic circuits to add and subtract. Clock control, keeping memory separate from compute tasks, and parallel processing. By 1937 he’d developed the concept of a computer. Apparently many of the concepts came to him while driving late at night in the winter early in 1938. You know, things like functions and using vacuum tubes. He spent the next year working out the mechanical elements required to compute his logic designs and wrote a grant proposal in early 1939 to get $5,330 of funding to build the machine. The Research Corporation of New York City funded the project, and by 1939 he pulled in a graduate student named Clifford Berry to help him build the computer. He had been impressed by Berry when introduced by another professor, Harold Anderson, from the electrical engineering department. 
They got started building a computer capable of solving linear equations in the basement of the physics building. By October of 1939 they demonstrated a prototype that had 11 tubes, and they sent their work off to patent attorneys at the behest of the university. One of the main contributions to computing was the concept of memory. Processing that data was done with vacuum tubes, 31 thyratrons, and a lot of wire. Separating processing from memory meant taking an almost record-player approach to storage. They employed a pair of rotating drums, each holding 1,600 capacitors arranged in 32 bands of 50 - so each number was 50 bits - and because a drum rotated once per second, the machine could add or subtract 30 numbers per second. The concept of storing data as binary bits, and regenerating each capacitor’s charge so it reliably held a zero or a one, was the second contribution to computing that persists today. The processing wasn’t a CPU as we’d think of it today but instead a number of logic gates that included inverters and gates with two and three inputs. Each of these had an inverting vacuum tube amplifier and a resistor that defined the logical function. The device took input as decimals on standard IBM 80-column punched cards. It stored results in memory when further tasks were required and the logic operations couldn’t be handled in memory. Much as Atanasoff had done using a Monroe calculator hooked to an IBM tabulating machine when he was working on his dissertation. In many ways, the computer he was building was the next evolution from that, just as ENIAC would be the next evolution after. Changing plugs or jumpers on the front panel was akin to programming the computer. Output was also decimal, provided on a display on the front panel. The previous computers had been electromechanical. Gears and wires and coils that would look steampunk to many of us today. 
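The kind of add-and-subtract logic those gates implemented can be sketched in a few lines of modern code. This is an illustrative sketch, not the ABC’s actual vacuum-tube circuit: a full adder built from the Boolean operations the inverters and two- and three-input gates provided, rippled across a word one bit at a time, much as the ABC worked through its 50-bit numbers serially.

```python
def full_adder(a, b, carry_in):
    """Add two bits plus a carry, returning (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    # carry_out is 1 whenever at least two of the three inputs are 1
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return sum_bit, carry_out

def add_words(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, least significant bit first."""
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 3 + 5 = 8 with 4-bit words, least significant bit first
bits, carry = add_words([1, 1, 0, 0], [1, 0, 1, 0])
```

Subtraction works the same way once one operand is complemented, which is part of why binary logic was such an economical choice for the hardware of the day.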
But in his paper Computing Machine for the Solution of Large Systems of Linear Algebraic Equations (http://jva.cs.iastate.edu/img/Computing%20machine.pdf), Atanasoff had proposed a fully digital device, which they successfully tested in 1942. By then the computer had a mile of wire in it, weighed 700 pounds, and had 280 vacuum tubes and 31 thyratrons. The head of the Iowa State College Statistics Department was happy to provide problems to get solved. And so George W. Snedecor became the first user of a computer to solve a real problem. We have been fighting for the users ever since. But then came World War II. Both Atanasoff and Berry got called away to wartime duties, and the work on the computer was abandoned. The first use of vacuum tubes to do digital computation was almost lost to history. But Mauchly, who built ENIAC, would come later. ENIAC would build on many of the concepts and be programmable, so many consider it to be the first real computer. But Atanasoff deserves credit for many of the concepts we still use today, albeit under the hood! Most of the technology we have today didn’t exist at the time. They gave us what evolved into DRAM. And alongside the ABC and before ENIAC came Konrad Zuse’s Z3 and Colossus. So the “first computer” is a debatable topic. With the pioneers off to help win the war, the computer would go into relative obscurity. At least, until the computer business started to get huge and people didn’t want to pay Mauchly and Eckert to use their patent for a computer. Mauchly certainly would have known about the ABC, since he saw it in 1941 and actually spent four days with Atanasoff. And there are too many parallels between the machines to say that some concepts weren’t borrowed. But that shouldn’t take anything away from any of the people involved. Because of Atanasoff, the patents were voided, and IBM and other companies saved millions in royalties. The ABC would be designated an official IEEE Milestone in 1990, five years before Atanasoff passed away. 
And so their contributions would be recognized eventually, and those we can’t know about due to their decades in the defense industry are surely recognized by those who enable our freedoms in the US today. But not by the general public. Still, we thank them for their step in the evolution that got us where we are today. Just as I thank you, dear listener, for tuning in to this episode of the History of Computing Podcast. We are so lucky to have you.
5/9/2020 • 9 minutes, 22 seconds
The Evolution Of Wearables
Mark Weiser was the Chief Technologist at the famed Xerox Palo Alto Research Center, or Xerox PARC, in 1988 when he coined the term “ubiquitous computing.” Technology hadn’t entered every aspect of our lives at the time like it has now. The concept of wearable technology probably kicks off way earlier than you might think. Humans have long sought to augment ourselves with technology. This includes eyeglasses, which came along in 1286, and wearable clocks, an era kicked off with the Nuremberg eggs in 1510. The technology got smaller and more precise as our capacity for precision grew. Not all wearable technology is meant to be worn by humans. We strapped cameras to pigeons in 1907. In the 15th century, Leonardo da Vinci would draw up plans for a pedometer, and that concept would go on the shelf until Thomas Jefferson picked it back up during his tinkering days. And we would get an abacus ring in 1600. But computers began by needing a lot of electricity to light up those vacuum tubes that replaced operations from an abacus, and so when the transistor came along in the 40s, we’d soon start looking for ways to augment our capabilities with those. Akio Morita and Masaru Ibuka began the wearable technology craze in 1953 when they started developing what would become the TR-55, released in 1955. It was the first transistor radio, and when they changed their company’s name to Sony, they would introduce the first of their disruptive technologies. We don’t think of radios as technology as much as we once did, but they were certainly an integral part of getting the world ready to accept other technological advances to come! Manfred Clynes gave us the term cyborg, in the article Cyborgs and Space he co-wrote with Nathan Kline in 1960. The next year, Edward Thorp and the mathematician and binary algebra guru Claude Shannon wanted to try their hands at cheating at roulette, so they built a small computer that timed when the ball would land. It went in a shoe. 
This would stay a secret until Thorp released his book Beat the Dealer, telling readers they got a 44 percent improvement in making bets. By 1969, though, Seiko gave us the first quartz watch. Other technologies were coming along at about the same time that would later revolutionize portable computing once they had time to percolate for awhile. Like in the 1960s, liquid crystal displays were being researched at RCA. The technology goes back further, but George H. Heilmeier from RCA laboratories gets credit for operationalizing the LCD in 1964. And Yoshiro Hatano developed a mechanical pedometer to track progress toward 10,000 steps a day, a number he would, by 1985, define as the number of steps a person should reach in a day. But back to electronics. Moore’s Law. The digital camera traces its roots to 1975, but Kodak didn’t really pursue it. It was 1975 and devices were getting smaller and smaller. Another device we don’t think of as a computer all that much anymore is the calculator. Kits were being sold by then, and suddenly components had gotten small enough that you could get a calculator in your watch, initially introduced by Pulsar. And those radios were cool, but what if you wanted to listen to what you wanted rather than the radio? Sony would again come along with another hit: the Walkman, in 1979, selling over 200 million over the ensuing decades. Akio Morita was a genius, also bringing us digital hearing aids and putting wearables into healthcare. Can you imagine the healthcare industry without wearable technology today? You could do more and more, and by 1981 Seiko would release the UC 2000 Wrist PC. By then portable computers were a thing. But not wearables. You could put 2 whopping kilobytes of data on your wrist and use a keyboard that got strapped to an arm. Computer watches continued to improve, and by 1984 you could play games on them, like on the Nelsonic Space Attacker watch. Flash memory arguably came along in 1984 and would iterate and get better, providing many, many more uses for tiny devices and flash media cards by 1997. But those calculator watches: Marty McFly would sport one in 1985’s Back to the Future, and by the time I was in high school they were so cheap you could get them for $10 at the local drug store. And a few years later, Nintendo would release the Power Glove in 1989, sparking the imagination of many a nerdy kid who would later build actually functional technology. Which, regrettably, the Power Glove was not. The first portable MP3 player came along in 1998. It was the MPMan. Prototypes had come along in 1979 with the IXI digital audio player. The Audible player and the Diamond Rio came along in 1998, the Personal Jukebox in 1999, and on the heels of their success Creative’s NOMAD Jukebox came in Y2K. But the Apple iPod exploded onto the scene in 2001 and suddenly the Walkman and Discman were dead and the era of mainstream humans carrying a library of music was upon us, prompting Creative to release the Zen in 2004, and Microsoft the Zune in 2006. And those watches. Garmin brought us their first portable GPS in 1990, and they continue to make some of the best such devices on the market. The webcam would come along in 1994 when Canadian researcher Steve Mann built the first wearable wireless webcam. That was the spark that led to the era of the Internet of Things. Suddenly we weren’t just wearing computers. We were wearing computers connected to the inter webs. All of these technologies brought to us over the years… They were converging. Bluetooth arrived around 2000. By 2006, it was time for the iPod and fitness tracking to converge. Nike+iPod was announced, and Nike would release a small transmitter that fit into a notch in certain shoes. I’ve always been a runner and jumped on that immediately! You needed a receiver at the time for an iPod Nano. Sign me up, said my 2006 self! 
I hadn’t been into the cost of the Garmin, but soon I was tracking everything. Later I’d get an iPhone and just have it connect. But it was always a little wonky. Then came the Nike+ FuelBand in 2012. I immediately jumped on that bandwagon as well. You had to plug it in at first, but eventually a model came out that synced over Bluetooth and life got better. I would sport that thing until it got killed off in 2014, and a little beyond… Turns out Nike knew about Apple coming into their market, and between Apple, Fitbit, and Android Wear, they just didn’t want to compete in a blue ocean, no matter how big the ocean would be. Speaking of Fitbit, they were founded in 2007 by James Park and Eric Friedman with a goal of bringing fitness trackers to market. And they capitalized on an exploding market for tracking fitness. But it wasn’t until the era of the app that they achieved massive success, and in 2014 they released apps for iOS, Android, and Windows Phone, which was still a thing. And the watch and mobile device came together in 2017 when they released their smartwatch. They are now the 5th largest wearables company. Android Wear had been announced at Google I/O in 2014. Now called Wear OS, it’s a fork of Android Lollipop that pairs with Android devices and integrates with the Google Assistant. It can connect over Bluetooth, Wi-Fi, and LTE and powers the Moto 360, the LG G Watch, and the Samsung Gear Live. And there are a dozen other manufacturers that leverage the OS in some way, now with over 50 million installations of the apps. It can use Hangouts, and it leverages voice to do everything from checking into Foursquare to dictating notes. But the crown jewel among the smartwatches is definitely the Apple Watch. That came out of hiring former Adobe CTO Kevin Lynch to bring a Siri-powered watch to market, which happened in 2015. With over 33 million sold and, as of this recording, on the Series 5 of the watch, it can now connect over LTE, Wi-Fi, or through a phone using Bluetooth. 
There are apps, complications, and a lot of sensors on these things, giving them almost limitless uses. Those glasses from 1286? Well, they got a boost in 2013 when Google put images on them. Long a desire of science fiction, Google Glass brought us into the era of the heads-up display. But Sega had announced their own virtual reality headset in 1991, and the technology actually dates back to the 70s at JPL and MIT. Nintendo experimented with the Virtual Boy, released in 1995. Apple released QuickTime VR shortly thereafter, but it wasn’t that great. I even remember some VGA “VR” headsets in the early 2000s, but they weren’t that great either. It wasn’t until the Oculus Rift came along in 2012 that VR seemed all that ready. These days, that’s become the gold standard in VR headsets. The sign to the market was when Facebook bought Oculus for $2.3 billion in 2014, and the market has steadily grown ever since. Given all of these things that came along in 2014, I guess it did deserve the moniker “The Year of Wearable Technology.” And with a few years to mature, now you can get wearable sensors that are built into yoga pants, like the Nadi X yoga pants; smartwatches ranging from just a few dollars to hundreds or thousands from a variety of vendors; sleep trackers; posture trackers; sensors in everything, bringing a convergence between the automated home and wearables in the Internet of Things; wearable cameras like the GoPro; smart glasses from dozens of vendors; VR headsets from dozens of vendors; smart gloves; wearable onesies; sports clothing to help measure and improve performance; smart shoes; and even an Alexa-enabled ring. Apple waited pretty late to come out with Bluetooth headphones, releasing AirPods in 2016. These bring sensors into the ear, the main reason I think of them as wearables where I didn’t think of a lot of devices that came before them in that way. Now on their second generation, they are some of the best headphones you can buy. 
And the market seems poised to just keep growing. Especially as we get more and more sensors and more and more transistors packed into the tiniest of spaces. It truly is ubiquitous computing.
5/4/2020 • 15 minutes, 59 seconds
The Rise of Netflix
Today we’re going to cover what many of you do with your evenings: Netflix. Now, the story of Netflix comes in a few stages that I like to call the founding and pivot, the Blockbuster killer, the streaming evolution, and where we are today: the new era of content. Today Netflix sits at more than a $187 billion market cap. And they have become one of the best known brands in the world. But this story has some pretty stellar layers to it. And one of the most important, in an era of eroding (or straight up excavated) consumer confidence, is this thought: the IPOs that the dot-com buildup created made fast millionaires. But those from the Web 2.0 era made billionaires. And you can see that in the successes of Netflix CEO Reed Hastings.
Prelude
Hastings founded Pure Software in 1991. They made software that helped other people make… software. They went public in 1995 and merged with Atria, and were acquired the next year by Rational Software - making him and future Netflix co-founder Marc Randolph, well, obsolete. Hastings made investors and himself a lot of money. Which at that point was millions and millions of dollars. So he went on to sit on the State Board of Education and get involved in education.
Act I: The Founding and Pivot
He and Marc Randolph had carpooled to work while at Pure Atria and had tossed around a lot of ideas for startups. Randolph landed on renting DVDs by mail, using the still somewhat new Internet. Randolph would become CEO and Hastings would invest the money to get started. Randolph brought in a talented team from Pure Atria and they got to work with an initial investment of two and a half million dollars in 1997. But taking the brick-and-mortar concept that video stores had been successfully using wasn’t working. They had figured out how to ship DVDs cheaply, how to sell them (until Amazon basically took that part of the business away), and even how to market the service by inking deals with DVD player manufacturers. 
The video stores had been slow to adopt DVDs after the disaster they’d had with LaserDisc, so the companies that made DVD players saw Netflix as a way to get more people to buy the players. And it was mostly working. But the retention numbers sucked and they were losing money. So they tinkered with the business model, relentlessly testing every idea. And Hastings came back to take the role of CEO, with Randolph stepping into the role of president. One of those tests had been to pivot from renting DVDs to a subscription model. And it worked. They gave customers a free month trial. The subscription and the trial are now all too common. But at the time it was a wildly innovative approach. And people loved it. Especially those who could get a DVD the next day. They also gave Netflix huge word of mouth. In 1999 they were at 110,000 subscribers. Which is how I first got introduced to them in 2000, when they were finally up to 300,000 subscribers. I had no clue, but they were already thinking about streaming all the way back then. But they had to survive this era. And as is often the case when there’s a free month that comes at a steep cost, Netflix was bleeding money. And running out of cash. They planned to go IPO. But because the dot-com bubble had burst, cash was becoming hard to come by. They had been well funded, taking a hundred million dollars by the time they got to a Series E. And they were poised for greatness. But there was that cash crunch. And a big company to contend with: Blockbuster. With 9,000 stores, $6 billion in revenue, tens of thousands of employees, and millions of rentals being processed a month, Blockbuster was the king of the video rental market. The story goes that Hastings got the Netflix idea from a late fee. So they would do subscriptions. But they had sold DVDs and done rentals first. And really, they found success because of the pivot, wherever that pivot came from. 
And in fact, Hastings and Randolph had flown to Texas to try and sell Netflix to Blockbuster. Pretty sure Blockbuster wishes they’d jumped on that. Which brings us to Act II: The Blockbuster Killer. Managing to keep enough cash to make it through the growth, they went public in 2002 and finally got profitable in 2003. Soon they would be shipping over a million DVDs every single day. They quickly rose through word of mouth. That one-day shipping was certainly a thing. They pumped money into advertising and marketing. And they continued a meteoric growth. They employed growth hacks and they researched a lot of options for the future, knowing that technology changes were afoot. Randolph investigated opening kiosks with Mitch Lowe. Netflix wouldn’t really be interested in doing so, and Randolph would leave the company in 2002 on good terms, wealthy after the company’s successful IPO. And Lowe took the Video Droid concept of a VHS rental vending machine to DVDs after Netflix abandoned it, and went to Redbox, which had been initially started by McDonald’s in 2003. Many of the ideas he and Randolph tested in Vegas as part of Netflix would be used, and by 2005 Redbox would try to sell to Netflix and Blockbuster. But again, Blockbuster failed to modernize. They didn’t have just one shot at buying Netflix; Reed Hastings flew out there four times to try and sell the company to Blockbuster. Blockbuster launched their own subscription service in 2004, but it was flawed, and there was bad press around late fees and other silly missteps. Meanwhile Netflix was growing fast. Netflix shipped its billionth DVD in 2007. By 2007, there were more Redboxes than Blockbusters, and by 2011 the kiosks accounted for half of the rental market. Blockbuster was finally forced to file for bankruptcy in 2010, after being a major name brand for 25 years. Netflix was modernizing though. Not with kiosks - they were already beginning to plan for streaming. 
And a key to their success, as in the early days, was relentless self-improvement and testing every little thing, all the time. They took their time and did it right. Broadband was on the rise. People had more bandwidth and were experimenting with streaming music at work. Netflix posted earnings of over a hundred million dollars in 2009. But they were about to do something special.
And so, Act III: The Streaming Revolution
The streaming world came online in the early days of the Internet, when Severe Tire Damage streamed the first song out of Xerox PARC in 1993. But it wasn’t really until YouTube came along in 2005 that streaming video got viable. By 2006 Google would acquire YouTube, which was struggling with over a million dollars a month in bandwidth fees and huge legal issues with copyrighted content. This was a signal to the world that streaming was ready. I mean, Saturday Night Live was in, so it must be real! Netflix first experimented with making their own content in 2006 with a film production division they called Red Envelope Entertainment. They made over a dozen movies but ultimately shut it down, giving Netflix a little focus on another initiative before they came back to making their own content. Netflix would finally launch streaming media in 2007, right around the time they shipped that billionth DVD. This was the same year Hulu launched, backed by NBCUniversal and News Corp, with distribution partners like AOL, Comcast, MSN, and Yahoo. But Netflix had a card up its sleeve. Or a House of Cards, the first show they produced, which launched in 2013. Suddenly, Netflix was much, much more than a DVD service. They were streaming movies and creating content. Wildly popular content. They’ve produced hundreds of shows now, in well over a dozen languages. 
But along the way we got The Crown, Narcos, and the now almost iconic Stranger Things. Not to mention BoJack Horseman, Voltron, and the list just goes on and on. That era of expansion would include more than just streaming. They would expand into Canada in 2010, finally going international. They would hit 20 million subscribers in 2011. By 2012 they would be over 25 million subscribers. By 2013 they would exceed 33 million. In 2014 they hit 50 million. By the end of 2015 they were at almost 70 million. 2016 was huge, as they announced an expansion into 130 new international territories at CES. And the growth continued. Explosively. At this point, despite competition popping up everywhere, Netflix does over $20 billion a year in revenue and has been as instrumental in revolutionizing the world as anyone. That competition now includes Disney Plus, Apple, Hulu, Google, and thousands upon thousands of podcasts and home-spun streamers, even on Twitch. All battling to produce the most polarizing, touching, beautiful, terrifying, or mesmerizing content. Oh, and there’s still regular TV, I guess… Epilogue. So Y2K. The dot-com bubble burst. And the overnight millionaires were about to give way to something new. Something different. Something on an entirely different scale. As with many of the pre-crash dot-com companies, Netflix had initially begun with a pretty simple idea. Take the video store concept, where you paid per rental. And take it out of brick and mortar and onto the internets. And if they had stuck with that, we probably wouldn’t know who they are today. We would probably be getting our content from a blue and yellow box called Blockbuster. But they went far beyond that, and in the process, they changed how we think of that model. And that subscription model is how you now pay for almost everything, including software like Microsoft Office. And Netflix continued to innovate. They made streaming media mainstream. 
They made producing content a natural adjacency to a streaming service. And they let millions cut the cord from cable and move away from traditional media. They became a poster child for the fact that out of the dot-com bubble and Great Recession, big tech companies would go from making fast millionaires to a different scale: fast billionaires! As we move into a new post-COVID-19 era, a new round of change is about to come. Nationalism is regrettably becoming more of a thing. Further automation and the adoption of new currencies may start to disrupt existing models even further. We have so much content we have to rethink how search works. And our interpersonal relationships will be forever changed by these months in isolation. Many companies are about to go the way of Blockbuster. Including plenty that have been around much, much longer than Blockbuster was. But luckily, companies like Netflix are there to remind us that any company can reinvent itself, act after act, like a multi-act play. And we owe them our thanks for that - and because what the heck else would we do stuck in quarantine, right?!?! So to the nearly 9,000 people that work at Netflix: we 167 million-plus subscribers thank you. For revolutionizing content distribution, revolutionizing business models, and for the machine learning and other technological advancements we didn’t even cover in this episode. You are lovely. And thank you listeners, for abandoning binge-watching Tiger King long enough to listen to this episode of the History of Computing Podcast. We are so lucky to have you. Now get back to it!
4/26/2020 • 16 minutes, 15 seconds
Piecing Together Microsoft Office
Today we’re going to cover the software that would become Microsoft Office. Microsoft Office was announced at COMDEX in 1988. The suite contained Word, Excel, and PowerPoint. These are still the core applications included in Microsoft Office. But the history of Office didn’t start there. Many of the innovations we use today began life at Xerox. And Word is no different. Microsoft Word began life as Multi-Tool Word in 1981, when Charles Simonyi was hired away from Xerox PARC, where he had worked on one of the earliest word processors, Bravo. He brought in Richard Brodie, and by 1983 they would release it for DOS, simplifying the name to just Microsoft Word. They would port it to the Mac in 1985, shortly after the release of the iconic 1984 Macintosh. Being far more feature-rich than MacWrite, it was an instant success. 2.0 would come along in 1987, and they would be up to 5 by 1992. But Word for Windows came along in 1989, just ahead of Windows 3.0. So Word went from DOS to Mac to Windows. Excel has a similar history. It began life as Multiplan in 1982, though. At the time, it was popular on CP/M and DOS, but when Lotus 1-2-3 came along, it knocked everything else out of the hearts and minds of users, and Microsoft regrouped. Doug Klunder would be the Excel lead developer and Jabe Blumenthal would act as program manager. They would meet with Bill Gates and Simonyi, hammer out the look and feel, and release Excel for the Mac in 1985. And Excel came to Windows in 1987. By Excel 5 in 1993, Microsoft had completely taken over the spreadsheet market, and suddenly Visual Basic for Applications (VBA) would play a huge role in automating tasks. Regrettably, then came macro viruses, but for more on those check out the episode on viruses. In fact, along the way, Microsoft would pick up a ton of talented developers, including Bob Frankston, a co-creator of the original spreadsheet, VisiCalc. PowerPoint was an acquisition. It began life as Presenter at Forethought, a startup, in 1983. 
And Robert Gaskins, a former research manager from Bell Northern Research, would be brought in to get the product running on Windows 1. It would become PowerPoint when it was released for the Mac in 1987, and it was wildly successful, selling out all of the copies from the first run. But then Jeff Raikes from Microsoft started getting ready to build a new presentation tool. Bill Gates had initially thought it was a bad idea but eventually gave Raikes the go-ahead to buy Forethought, and Microsoft PowerPoint was born. And that catches us up to that fateful day in 1988 when Bill Gates announced Office at COMDEX in Las Vegas, which at the time was a huge conference. Then came the Internet. Microsoft Mail was released for the Mac in 1988 and bundled with Windows from 1991 on. Microsoft also released a tool called Inbox. But then came Exchange, expanding beyond mail and into contacts, calendars, and eventually much more. Mail was really basic, so for Exchange, Microsoft released Outlook, which was added to Office 97; an installer was also bundled with Exchange Server. Office Professional in that era included a database utility called Access. We’ve always had databases. But desktop databases had been dominated by dBase and FoxPro up until 1992, when Microsoft Access began to chip away at their market share. Microsoft had been trying to get into that market since the mid-80s with R:Base and Omega, but when Access 2 dropped in 1994, people started to take notice, and by the release of Office 95 Professional it could be purchased as part of a suite and integrated cleanly. I can still remember those mdb files and setting up data access objects and later ActiveX controls! So the core Office components came together in 1988, and by 1995 the Office suite was the dominant productivity suite on the market. It got better in 97. Except the Office Assistant, designed by Kevan Atteberry and lovingly referred to as Clippy. By 2000 Office had become the de facto standard. 
Everything else had to integrate with Office. That continued in the major 2003 and 2007 releases. And the products just iterated to become better and better software. And they continue to do that. But another major shift was on the way: a response to Google Apps, which had been released in 2006. The cloud was becoming a thing. And so Office 365 went into beta in 2010 and was launched in 2011. It includes the original suite, OneDrive, SharePoint, Teams for chatting with coworkers, Yammer for social networking, Skype for Business (although video can now be done in Teams), and Outlook and Outlook online. As well as Publisher, InfoPath, and Access for Windows. This Software + Services approach turned out to be a masterstroke. Microsoft was able to finally raise prices and earned well over a 10% boost to the Office segment in just a few years. The pricing for subscriptions over the term of what would have been a perpetual license was often 30% more. Yet the Office 365 subscriptions kept getting more and more cool stuff. And by 2017 the subscriptions captured more revenue than the perpetual licenses. And a number of other services can be included with Office 365. Another huge impact is the rapid disappearing act of on-premises Exchange servers. Once upon a time, a small business would have an Exchange server and then, as it grew, move that to a colocation facility, hire MCSE engineers (like me) to run it, and face an amplified cost in providing groupware. Moving that to Microsoft means that Microsoft can charge more, and the customer still gets a net savings, even though the subscriptions cost more - because they don’t have to pay people to run those servers. OneDrive moves files off old filers, and so on. And the Office apps provided aren’t just for Windows and Mac. Pocket Office would come in 1996, for Windows CE. Microsoft would have Office apps for all of their mobile operating systems. And in 2009 we would get Office for Symbian. 
And then for iPhone in 2013, and iPad in 2014. Then for Android in 2015. Today, over one and a quarter billion people use Microsoft Office. In fact, not a lot of people have *not* used Office. Microsoft has undergone a resurgence in recent years and is more nimble and friendly than ever before. Many of the people that created these tools are still at Microsoft. Simonyi left Microsoft for a time, but Microsoft ended up buying his company later. During what we now refer to as the “lost decade” at Microsoft, I would always think of these humans. Microsoft would get dragged through the mud for this or that. But the engineers kept making software. And I’m really glad to see them back making world-class APIs that do what we need them to do. And building good software on top of that. But most importantly, they set the standard for what a word processor, spreadsheet, and presentation tool would look like for a generation. And the ubiquity the software obtained allowed for massive leaps in adoption and innovation. Until it didn’t. That’s when Google Apps came along, giving Microsoft a kick in the keister to put up or shut up. And boy, did Microsoft answer. So thank you to all of them. I probably never would have written my first book without their contributions to computing. And thank you, listener, for tuning in to this episode of the History of Computing Podcast. We are so lucky to have you. Have a great day.
4/21/2020 • 10 minutes, 52 seconds
500 Years Of Electricity
Today we’re going to review the innovations in electricity that led to the modern era of computing. As is often the case, things we knew as humans, once backed up with science, became much, much more. Electricity is a concept that has taken hundreds of years to really take shape and be harnessed. And whether having done so is a good thing for humanity, we can only hope. We’ll take this story back to 1600. Early scientists were studying positive and negative elements and forming an understanding that electricity flowed between them. Like the English natural scientist William Gilbert, who first established some of the basics of electricity and magnetism in his seminal work De Magnete, published in 1600, where he coined the Latin term electricus. There were others, but the next jump in understanding didn’t come until the time of Sir Thomas Browne, who along with other scientists of the day continued to refine theories. He was important because he documented where the scientific revolution was in his 1646 Pseudodoxia Epidemica. He gave us the English word electricity. And computer, by the way. And electricity would be debated for a hundred years and tinkered with in scientific societies before the next major innovations would come. Then a British scientist, Peter Collinson, sent Benjamin Franklin an electricity tube, which these previous experiments had begun to produce. Benjamin Franklin spent some time writing back and forth with Collinson, and famously flew a kite, showing that electrical current flowed down the kite string and charged a metal key - proof that lightning was electrical. Franklin had also linked Leyden jars, early capacitors, into a “battery” in 1749. The kite experiment was in 1752, and Thomas-François Dalibard proved the same hypothesis using a large metal pole struck by lightning. James Watt was another inventor and scientist, who studied steam engines from the 1760s to the late 1790s. The watt is now used to quantify the rate of energy transfer - a unit of power named in his honor. 
Today we often measure those watts in terms of megawatts. His work on engines would prove important for converting thermal into mechanical energy and producing electricity later. But not yet. In 1799, Alessandro Volta built a battery, the voltaic pile. We still call the potential that pushes a current of one amp through a resistance of one ohm a volt. Suddenly we were creating electricity from an electrochemical reaction. Humphry Davy took a battery and invented the “arc lamp” by attaching a piece of carbon to it with wires; the carbon glowed. Budding scientists continued to study electricity and refine the theories. And by the 1820s, Hans Christian Ørsted proved that an electrical current creates a circular magnetic field when flowing through a wire. Humans were able to create electrical current and harness it from nature. Inspired by Ørsted’s discoveries, André-Marie Ampère began to put math to what Ørsted had observed. Ampère observed that two parallel wires carrying electric currents attract or repel each other depending on the direction of the currents - the foundational principle of electrodynamics. He took electricity to an empirical place. He figured out how to measure electricity, and for that, the ampere is now the unit of measurement we use to track electric current. In 1826, Georg Ohm defined the relationship between current, voltage, and resistance. This is now called “Ohm’s law,” and we still measure electrical resistance in ohms. Michael Faraday was working in electricity as well, starting by replicating a voltaic pile, and he kinda’ got hooked. He got wind of Ørsted’s discovery as well, and he ended up building an electric motor. He studied electromagnetic rotation, and by 1831 was able to generate electricity using what we now call the Faraday disk. He was the one that realized the link between the various forms of electricity and experimented with various currents and voltages to change outcomes. 
He also gave us the Faraday cage, Faraday constant, Faraday cup, Faraday’s law of induction, Faraday’s laws of electrolysis, the Faraday effect, Faraday paradox, Faraday rotator, Faraday wave, and the Faraday wheel. It’s no surprise that Einstein kept a picture of Faraday in his study. By 1835, Joseph Henry had developed the electrical relay, and we could send current over long distances. Then, in the 1840s, a brewer named James Joule, who had been fascinated by electricity since he was a kid, discovered the relationship between mechanical work and heat. And so the law of conservation of energy was born. Today, we still call a joule a unit of energy. He would also study the relationship between the current flowing through a resistor and the heat it gives off, which we now call Joule’s first law. By the way, he also worked with Lord Kelvin to develop the Kelvin scale. In 1844, Samuel Morse gave us the electrical telegraph and Morse code. After a few years of coming to terms with all of this innovation, James Clerk Maxwell unified magnetism and electricity and gave us Maxwell’s equations, which gave way to electric power, radios, television, and much, much more. By 1878 we knew more and more about electricity. The boom of telegraphs had sparked many a young inventor into action, and by 1878 we saw the lightbulb and a lamp that could run off a generator. This led Thomas Edison to found the Edison Electric Light Company and continue to refine electric lighting. By 1882, Edison fired up the Pearl Street power station and could light up 5,000 lights using direct current power. A hydroelectric station opened in Wisconsin the same year. The next year, Edison observed what we now call the Edison effect - thermionic emission, the phenomenon behind the later vacuum tube. Tesla gave us the Tesla coil and championed alternating current, which made it more efficient to send electrical current to faraway places. Tesla would go on to develop polyphase AC power and patent the generator-to-transformer-to-motor-and-light system we use today, which was bought by George Westinghouse. 
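The units named for Volta, Ampère, Ohm, Watt, and Joule all fit together: Ohm’s law ties volts, amps, and ohms, and Joule’s first law gives the heat a resistor dissipates. A quick numerical sketch of those relationships (my own illustration, not from the episode):

```python
# Ohm's law: one volt pushes one ampere through one ohm (V = I * R).
# Joule's first law: the power dissipated as heat is P = I^2 * R watts.

def voltage(current_amps: float, resistance_ohms: float) -> float:
    """V = I * R (Ohm's law)."""
    return current_amps * resistance_ohms

def power(current_amps: float, resistance_ohms: float) -> float:
    """P = I^2 * R (Joule's first law)."""
    return current_amps ** 2 * resistance_ohms

# One amp through one ohm: one volt, dissipating one watt.
print(voltage(1.0, 1.0))    # 1.0
print(power(1.0, 1.0))      # 1.0

# A classic 60 W incandescent bulb on a 120 V circuit draws 0.5 A
# through an effective resistance of 240 ohms.
print(voltage(0.5, 240.0))  # 120.0
print(power(0.5, 240.0))    # 60.0
```

The bulb figures are just a worked example of the formulas, but they show why the units compose so cleanly: amps times ohms gives volts, and volts times amps gives watts.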
By 1893, Westinghouse would use AC power to light up the World’s Fair in Chicago, a turning point in the history of electricity. And from there, electricity spread fast. Humanity discovered all kinds of uses for it. 1908 gave us the vacuum cleaner and the washing machine. The air conditioner came in 1911, and 1913 brought the refrigerator. And it continued to spread. By 1920, electricity was so important that it needed to be regulated in the US, and the Federal Power Commission was created. By 1933, the Tennessee Valley Authority had established a plan to build dams to light cities. And in 1935, the Federal Power Act was enacted to regulate the impact of dams on waterways. And in the history of computing, the story of electricity kinda’ ends with the advent of the transistor, in 1947. Which gave us modern computing. The transmission lines for the telegraph put people all over the world in touch with one another. The time saved with all these innovations gave us even more time to think about the next wave of innovation. And the US and other countries began to ramp up defense spending, which led to the rise of the computer. But none of it would have been possible without the contributions of all these people over the years. So thank you to them. And thank you, listeners, for tuning in. We are so lucky to have you. Have a great day!
4/12/2020 • 10 minutes, 26 seconds
Y Combinator
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to look at Y Combinator. Here’s a fairly common startup story. After finishing his second book on Lisp, Paul Graham decides to found a company. He and Robert Morris start Viaweb in 1995, along with Trevor Blackwell. Some of the code came from Lisp - you know, like the books Graham had worked on. It was one of the earliest SaaS startups, which let users host online stores - similar to Shopify today. Viaweb had an investor named Julian Weber, who invested $10,000 in exchange for 10% of the company. Weber gave them invaluable advice. By 1998 they were acquired by Yahoo! for about $50 million in stock, which was a little shy of half a million shares. Viaweb would become the Yahoo Store. Both Graham and Morris have PhDs from Harvard. Here’s where the story gets different. Graham would write a number of essays, establishing himself as an influencer of sorts. 2005 rolls around and Graham decides to start doing seed funding for startups, following the model that Weber had established with Viaweb. He gets the gang back together, hooking up with his Viaweb co-founders Robert Morris (the guy that wrote the Morris worm) and Trevor Blackwell, and adding girlfriend and future wife Jessica Livingston - and they create Y Combinator. Graham would pony up $100,000, and Morris and Blackwell would each chip in $50,000, so they would start with $200,000 to invest in companies. Because the founders were Harvard alumni, it was initially called Cambridge Seed. And as is the case with many of the companies they invest in, the name would change quickly - to Y Combinator. They would hold their first session in Boston and call it the Summer Founders Program. And they got a great batch of startups! So they decided to do it again, this time in Mountain View, using space provided by Blackwell. 
This time, a lot more startups applied, and they decided to run two sessions a year, one in each location. And they had plenty of startups looking to attend. But why? There have always been venture capital firms. Well, not always, but ish. They invest in startups. And incubators had become more common in business since the 1950s. The incubators mostly focused on planning, launching, and growing a company. But accelerators were just starting to become a thing, with the first one maybe being Colorado Venture Centers in 2001. The concept of accelerators really took off because of Y Combinator, though. There have been incubators and accelerators for a long, long time. Y Combinator didn’t really create those categories. But they did change the investment philosophy of many. You see, Y Combinator is an investor and a school. But. They don’t provide office space to companies. They have an open application process. They invest in the ideas of founders they like. They don’t invest much. But they get equity in the company in return. They like hackers. People that know how to build software. People who have built companies and sold companies. People who can help budding entrepreneurs. Graham would launch Hacker News in 2007. Originally called Startup News, it’s a service like Reddit, developed in a language Graham co-wrote called Arc - a stripped-down dialect of Lisp, built on Racket. He’d release Arc in 2008. I wonder why he prefers technical founders… They look for technical founders. They look for doers. They look for great ideas, but they focus on the people behind the ideas. They coach on presentation skills, pitch decks, making products. They have a simple motto: “Make Something People Want.” And it works. By 2008 they were investing in 40 companies a year and running a program in Boston and another in Silicon Valley. 
It was getting to be a bit much, so they dropped the Boston program and required founders who wanted to attend the program to move to the Bay Area for a couple of months. They added office hours to help their founders, and by 2009 the word was out: Y Combinator was the thing every startup founder wanted to do. Sequoia Capital ponied up $2,000,000 and Y Combinator was able to grow to 60 investments a year. And it was working out really well. So Sequoia put in another $8,250,000 round. The program is a crash course in building a startup. They look to grow fast. They host weekly dinners, which Graham used to cook, often with guest speakers from the VC community or other entrepreneurs. They build towards Demo Day, where founders present to crowds of investors. It kept growing. It was an awesome idea but it took a lot of work. The more the word spread, the more investors like Yuri Milner wanted to help fund every company that graduated from Y Combinator. They added non-profits in 2013 and continued to grow. By 2014, Graham stepped down as President and handed the reins to Sam Altman. The amount they invested went up to $120,000. More investments required more leaders, and others would come in to run various programs. Altman would step down in 2019. They would experiment with some other ideas, but in the end, the original concept was perfect. Several alumni would come back and contribute to the success of future startups. People from companies like Justin.tv and Twitch. In fact, their co-founder Michael Seibel would recommend Y Combinator to the founders of Airbnb. He ran Y Combinator Core for a while. Many of the founders who had good exits have gone from starting companies to investing in companies. Y Combinator changed the way seed investments happen. By 2015, a third of startups got their Series A funding from accelerators. The combined valuation of the Y Combinator companies that could be surveyed is well over $150 billion in market capitalization. 
Graduates include Airbnb, Stripe, Dropbox, Coinbase, DoorDash, Instacart, and Reddit. Massive success has led to over 15,000 applicants for just a few spots. To better serve so many companies, they created a website called Startup School in 2017, and over 1,500 startups went through it in the first year alone. Y Combinator has been quite impactful for a lot of companies. More important than the valuations and name brands, graduates are building software people want. They’re accelerating societal change, spurring innovation at a faster pace. They’re zeroing in on helping founders build what people want, rather than just spinning their wheels and banging their heads against the wall trying to figure out why people aren’t buying what they’re selling. My favorite part of Y Combinator has been the types of founders they look for. They give $150,000 to mostly technical founders. And they get 7% of the company in exchange for that investment. And their message of finding the right product-market fit has provided them with massive returns on their investments. At this point they’ve helped over 2,000 companies by investing, and countless others with Startup School and by promoting them on Hacker News. Not a lot of people can say they changed the world. But this crew did. And there’s a chance Airbnb, DoorDash, Reddit, Stripe, Dropbox, and countless others would have launched and succeeded anyway - but we’re all better off for the thousands of companies who have gone through YC having done so. So thank you for helping us get there. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. We are so, so lucky to have you. Have a great day.
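Those standard deal terms imply a valuation you can back out with simple arithmetic. A minimal sketch using the $150,000-for-7% figures mentioned above (my own illustration, not from the episode):

```python
# The valuation implied by a fixed investment for a fixed equity stake:
# investment / equity_fraction = post-money valuation.

def post_money_valuation(investment: float, equity_fraction: float) -> float:
    """Post-money valuation implied by buying equity_fraction for investment."""
    return investment / equity_fraction

investment = 150_000   # dollars invested
equity = 0.07          # 7% of the company

valuation = post_money_valuation(investment, equity)
print(f"${valuation:,.0f}")  # roughly $2,142,857 post-money
```

In other words, accepting the standard deal prices every incoming company at a little over $2 million, regardless of what it might later be worth - which is where those massive returns come from.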
4/10/2020 • 10 minutes, 17 seconds
From The Palm Pilot To The Treo
Today we’re going to look at the history of the Palm. It might be hard to remember at this point, but once upon a time, we didn’t all have mobile devices connected to the Internet. There was no Facebook or Grubhub. But in the 80s, computer scientists were starting to think about what ubiquitous computing would look like. We got the Psion and the HP Jaguar (which ran on DOS). But these seemed much more like really small laptops. With tiny keyboards. General Magic spun out of Apple in 1990 but missed the mark. Other devices were continuing to hit the market, some running PenPoint from GO Corporation - but none really worked out. But former Intel, GRiD, and then Tandy employee Jeff Hawkins envisioned a personal digital assistant and created Palm Computing to build one in 1992. He had been interested in pen-based computing and had worked on pattern recognition for handwriting at UC Berkeley. He asked Ed Colligan of Radius and Donna Dubinsky of Claris to join him. She would become CEO. They worked with Casio and Tandy to release the Casio Zoomer in 1993. The Apple Newton came along in 1993, and partially due to processor speed and partially due to just immaturity in the market, both devices failed to resonate with buyers. The Newton did better, but the General Magic ideas that had caught the imagination of the world were alive and well. HP Jaguars were using Palm’s synchronization software, and so Palm was able to stay afloat. And so Hawkins got to work on new character-recognition software. He got a tour of Xerox PARC, as did everyone else in computing, and saw Unistrokes, which had been developed by David Goldberg. Unistrokes resembled shorthand and required users to learn a new way of writing, but proved much more effective. Hawkins went on to build Graffiti based on that same concept, and as Xerox had patented the technology, they would go through legal battles until Palm eventually settled for $22.5 million. 
More devices were coming every year, and by 1995 Palm Computing was getting close to releasing a device. They had about $3 million to play with. They would produce a device that had fewer buttons, and so a larger screen, than other devices. It had the best handwriting technology on the market. It was the perfect size - which Hawkins had made sure of by carrying around a block of wood in his pocket and to meetings to test it. The only problem was that they ran out of cash during the R&D and couldn’t take it to market. But they knew they had hit the mark. The industry had been planning for a pen-based computing device for some time, and US Robotics saw an opening. Palm ended up selling to US Robotics, who had made a bundle selling modems, for $44 million. And US Robotics then got folded into another acquisition, 3Com, which had been built by Bob Metcalfe, who co-invented Ethernet. 3Com banked on Ethernet being the next wave. And they were right. But they also banked on pen computing. And were right again! US Robotics launched the Palm Pilot 1000 with 128k of RAM and the Palm Pilot 5000 with 512k of RAM in 1996. This was the first device that actually hit the mark. People became obsessed with Graffiti. You connected it to the computer using a serial port to synchronize notes, contacts, and calendars. It seems like such a small thing now, but it was huge then. They were an instant success. Everyone in computing knew something would come along, but they didn’t realize this was it. Until it was! HP, Ericsson, Sharp, NEC, Casio, Compaq, and Philips would all release handhelds, but the Palm was the thing. By 1998 the three founders were done getting moved around and left, creating a new company to make a similar device, called Handspring. Apple continued to flounder in the space, releasing the eMate and then the MessagePad. But the Handspring devices were eerily similar to the Palms. Both would get infrared and USB, and the Handspring Visor would even run Palm OS 3. 
But the founders had a vision for something more. They would take Handspring public in 2000. 3Com would take Palm public in 2000. The only problem was the dot-com bubble. Well, that, and Research In Motion began to ship the BlackBerry OS in 1999, and the next wave of devices began to chip away at the market share. Shares dropped over 90%, and by 2002 Palm had to set up a subsidiary for the Palm OS. But again, the crew at Handspring had something more in mind. They released the Treo in 2002. The Handspring Treo was, check this out, a smartphone. It could do email, SMS, and voice calls. Over the years they would add a camera, GPS, MP3, and Wi-Fi. Basically what we all expect from a smartphone today. Handspring merged with Palm in 2003, and they released the Palm Treo 600. They merged back in the company the OS had been spun out into, finally all back together in 2005. Meanwhile, Pilot pens had sued Palm, and the devices were then just called Palm. We got a few more, with the Palm V probably being the best, got a few new features, and lots and lots of syncing problems when new sync tools were added. Now that all of the parts of the company were back together, they started planning for a new OS, which they announced in 2009. And webOS was supposed to be huge. And they announced the Palm Pre, the killer next smartphone. The only problem is that the iPhone had come along in 2007. And Android was released in 2008. Palm had the right idea. They just got sideswiped by Apple and Google. And they ran out of money. They were bought by Hewlett-Packard in 2010 for $1.2 billion. Under new management the company was again split into parts, with webOS never really taking off, the Pre 3 never really shipping, and TouchPads not actually being any good, ultimately ending in the CEO of HP getting fired (along with other things). Once Meg Whitman stepped in as CEO, webOS was open sourced and the remaining assets sold off to LG Electronics to be used in smart TVs. 
The Palm Pilot was the first successful handheld device. It gave us permission to think about more. The iPod came along in 2001, in a red ocean of crappy MP3 handheld devices. And over time it would get some of the features of the Palm. But I can still remember the day the iPhone came out, and the few dozen people I knew with Treos cursing because they knew it was time to replace them. In the meantime, Windows CE and other mobile operating systems had slowly pilfered market share away from Palm. The founders invented something people truly loved. For a while. And they had the right vision for the next thing that people would love. They just couldn’t keep up with the swell that would become the iPhone and Android, which now own pretty much the entire market. And so Palm is no more. But they certainly left a dent in the universe. And we owe them our thanks for that. Just as I owe you my thanks for tuning in to this episode of the History of Computing Podcast. We are so lucky you decided to listen in - you’re welcome back any time! Have a great day!
4/3/2020 • 10 minutes, 4 seconds
The History Of The Computer Modem
Today we’re going to look at the history of the dial-up computer modem. Modem stands for modulator/demodulator. Modulation is carrying a property (like voice or computer bits) over a waveform. Modems originally encoded voice data with frequency-shift keying, developed during World War II. The voices were encoded into digital tones. That system was called SIGSALY. But they called them vocoders at the time. They matured over the next 17 years. And then came the SAGE air defense system in 1958. Here, the modem was employed to connect bases, missile silos, and radars back to the central SAGE system. These were Bell 101 modems and ran at an amazing 110 baud. Bell Labs, as in AT&T. A baud is a unit of transmission equal to how many times a signal changes state per second. In those early modems each baud carried one bit, so that first modem was able to process data at 110 bits per second. This isn’t to say that baud is the same as bit rate. Early on they matched, but later encoding schemes packed multiple bits into each signal change, so bit rates climbed higher than baud rates. So AT&T had developed the modem and after a few years they began to see commercial uses for it. So in 1962, they revved that 101 to become the Bell 103. Actually, the 103A. This thing used newer technology and better encoding, so it could run at 300 bits per second. Suddenly teletypes - or terminals - could connect to computers remotely. But Ma Bell kept a tight leash on how they were used for those first few years. That is, until 1968. In 1968 came what is known as the Carterfone Decision. We owe a lot to the Carterfone. It bridged radio systems to telephone systems. And Ma Bell had been controlling what lived on their lines for a long time. The decision opened up what devices could be plugged into the phone system. And suddenly new innovations like fax machines and answering machines showed up in the world. And so in 1968, any device with an acoustic coupler could be hooked up to the phone system. And that Bell 103A would lead to others.
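The relationship between baud and bits per second described above can be sketched as a quick calculation (a hypothetical illustration, not something from the episode):

```python
# Baud counts signal changes (symbols) per second; the bit rate is
# baud times the number of bits encoded into each symbol.
def bit_rate(baud, bits_per_symbol):
    return baud * bits_per_symbol

# Bell 101 (1958): 110 baud, one bit per symbol, so 110 bits per second.
print(bit_rate(110, 1))  # 110

# Later modems packed multiple bits per symbol: a 2400-baud signal
# carrying 4 bits per symbol yields 9600 bits per second.
print(bit_rate(2400, 4))  # 9600
```

This is why a late-80s 9600 bit/s modem did not need a 9600-baud signal - the phone line's bandwidth capped the symbol rate, so the gains came from smarter encoding.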
By 1972, Stanford Research had spun out devices, as had Novation and others. And Vadic added full duplex and got speeds four times what the 103A worked at by employing duplexing and new frequencies. We were up to 1200 bits per second. The bit rate had jumped four-fold because, well, competition. Prices dropped and by the late 1970s microcomputers were showing up in homes. There was a modem for the S-100 Altair bus, the Apple II through a Z-80 SoftCard, and even for the Commodore PET. And people wanted to talk to one another. TCP had been developed in 1974, but at this point the most common way to communicate was to dial directly into bulletin board services. 1981 was a pivotal year. A few things happened that were not yet connected at the time. The National Science Foundation created the Computer Science Network, or CSNET, which would result in NSFNET later, and when combined with the other nets, the Internet, replacing ARPANET. 1981 also saw the Commodore VIC-20 hit the US market, joining machines like the TRS-80 in homes. This led to more and more computers in homes and more people wanting to connect with those online services. Later models would have modems. 1981 also saw the release of the Hayes Smartmodem. This was a physical box that connected to the computer over a serial port. The Smartmodem had a controller that recognized commands. And it established the Hayes command set standard that would be used to connect to phone lines, allowing you to initiate a call, dial a number, answer a call, and hang up - without lifting a handset and placing it on a coupler. On the inside it was still 300 baud, but the progress and innovations were speeding up. And it didn’t seem like a huge deal. The online services were starting to grow. The French Minitel service was released commercially in 1982. The first BBS that would become FidoNet showed up in 1983. Various encoding techniques started to come along and by 1984 you had the Trailblazer modem, at over 18,000 bits a second.
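For a flavor of that Hayes command set, here are a few of the classic commands (a representative sample, not the full set):

```
ATZ             Reset the modem to its stored profile
ATDT 5551234    Dial 555-1234 using tone dialing (ATDP for pulse)
ATA             Answer an incoming call
+++             Escape from data mode back to command mode
ATH0            Hang up the line
```

Variants of these AT commands survive to this day in cellular modems, which is a remarkable run for a convention set by a 300-baud box in 1981.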
But this was for specific uses and combined many 36 bit/second channels. The use of email started to increase, and with it the need for even more speed. We got the ability to connect two USRobotics modems in the mid-80s to run at 2400 bits per second. But Gottfried Ungerboeck would publish a paper defining a theory of information coding and add parity checking at about the time we got echo suppression. This allowed us to jump to 9600 bits in the late 80s. All of these vendors releasing all of this resulted in a series of v-dot standards from the ITU Telecommunication Standardization Sector (ITU-T). They’re the ones that ratify a lot of standards, like X.509 or MP4. The next jump came with the SupraFAXModem with Rockwell chips, which was released in 1992. And USRobotics brought us to 16,800 bits per second, but with errors. But we got v.32bis in 1991 to get to 14.4 - now we were talking in kilobits! Then 19.2 in 1993, 28.8 in 1994, 33.6 in 1996. By 1999 we got the last of the major updates, v.90, which got us to 56k. At this point, most homes in the US at least had computers and were going online. The same year, ANSI ratified ADSL, or Asymmetric Digital Subscriber Line. Suddenly we were communicating in the megabits. And the dial-up modem began to be used less and less. In 2004 the Multimedia over Coax Alliance was formed and cable modems became standard. The combination of DSL and cable modems has now all but removed the need for dial-up modems. Given the pervasiveness of cell phones today, as few as 20% of homes in the US have a phone line any more. We’ve moved on. But the journey of the dial-up modem was a key contributor to us getting from a lot of disconnected computers to… the Internet as we know it today. So thank you to everyone involved, from Ma Bell, to Rockwell, to USRobotics, to Hayes, and so on. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast.
We are so lucky to have you. Have a great day.
4/1/2020 • 9 minutes, 39 seconds
Cray Supercomputers
Today we’re going to talk through the history of Cray Computers. And really, this is also a history of supercomputers during Seymour Cray’s life. If it’s not obvious by his name, he was the founder of Cray. But before we go there, let’s back up a bit and talk about some things that were classified for a long time. The post-World War II spending by the US government definitely leveled up the US computer industry. And defense was the name of the game in those early years. Once upon a time, the computer science community referred to the Minneapolis/St. Paul area as the Land of 10,000 Top Secret Projects. And a lot of things ended up coming out of that. One of the most important in the history of computing, though, was Engineering Research Associates, or ERA. They built highly specialized computers - those made for breaking Soviet codes. Honeywell had been founded in Minneapolis and, as with Vannevar Bush, had gone from thermostats to computers. Honeywell started pumping out the DATAmatic 1000 in 1957. There was a computer shipping, and Honeywell was well situated to capitalize on the growing mainframe computer market. ERA had some problems because the owners were embroiled in Washington politics, and so they were acquired by Sperry Rand, today’s Unisys, but at the time one of the larger mainframe developers and the progeny of both the Harvard Mark series and ENIAC series of mainframes. The only problem was that the Sperry Rand crew were making a bundle off Univacs and so didn’t put money into forward-looking projects. The engineers knew that there were big changes coming in computing. And they wanted to be at the forefront. Who wouldn’t? But with Sperry Rand barely keeping up with orders, they couldn’t focus on R&D the way many former ERA engineers wanted to. So many of the best and brightest minds from ERA founded Control Data Corporation, or CDC. And CDC built some serious computers that competed with everyone at the time. Because they had some seriously talented engineers.
One, who had come over from ERA, was Seymour Cray. And he was a true visionary. And so you had IBM and their seven biggest competitors, known as Snow White and the Seven Dwarfs. Three of those dwarfs were doing a lot of R&D in Minneapolis (or at least the Minneapolis area). None are still based in the Twin Cities. But all three built ruggedized computers that could withstand nuclear blasts, corrosive elements, and anything you could throw at them. But old Seymour. He wanted to do something great. Cray had a vision of building the fastest computer in the world. And as luck would have it, transistors were getting cheaper by the day. They had initially been designed to use germanium, but Seymour Cray worked to repackage those at CDC to be silicon and was able to pack enough in to make the CDC 6600 the fastest computer in the world in 1964. They had leapfrogged the industry and went to market, selling the machines like hotcakes. Now CDC would build one of the first real supercomputers in that 6600. And supercomputers are what Cray is known for today. But there’s a little more drama to get from CDC to Cray and then honestly from Cray to the other Crays that Seymour founded. CDC went into a bit of a buying tornado as well. As with the Univacs, they couldn’t keep up with demand and so suddenly were focused too much on Development to look beyond fulfillment and shipping and into the Research part of R&D. Additionally, shipping all those computers and competing with IBM was rough, and CDC was having financial problems, so CEO William Norris wouldn’t let them redesign the 6600 from the ground up. But Cray saw massively parallel processing as the future, which is kinda’ what supercomputing really is at the end of the day, and was bitten by that bug. He wanted to keep building the fastest computers in the world. And he would get his wish. He finally left CDC in 1972 and founded Cray Research along with cofounding engineer Lester Davis. They went to Chippewa Falls, Wisconsin.
It took him four years, but Cray shipped the Cray-1 in 1976, which became the best-selling supercomputer in history (which means they sold more than 80 and less than a hundred). It ran at 80 MHz and could hit around 160 megaFLOPS. And that was vector processing. They could do math faster by rearranging the memory and registers to more intelligently process big amounts of data. He used Maxwell’s equations on his boards. He designed it all on paper. The first Cray-1 would ship to Los Alamos National Laboratory. The Cray-1 was five and a half tons, cost around $8 million in 1976 money, and the fact that it was the fastest computer in the world, combined with the fact that it was space-age looking, gave Seymour Cray instant star status. The Cray-1 would soon get competition from the ILLIAC IV out of the University of Illinois, an ARPA project. So Cray got to work thinkin’. He liked to dig when he thought, and he tried to dig a tunnel under his house. This kinda’ sums up what I think of Wisconsin. The Cray-2 would come in 1985, which was the first multiple-CPU design by Cray. It came in at 1.9 gigaFLOPS. They rearranged memory to allow for more parallelization and used two sets of memory registers. It effectively set the stage for modern processing architectures in a lot of ways, offloading tasks from a dedicated foreground processor to main memory connected over the fastest channels possible to each CPU. But IBM wouldn’t release the first real multi-core processor until 2001. And we see this with supercomputers: the techniques used in them come downmarket over time. But some of the biggest problems were how to keep the wires close together. The soldering of connectors at that level was nearly impossible. And the thing was hot. So they added, get this, liquid coolant, leading some people to call the Cray-2 “Bubbles.” By now, Seymour Cray had let other people run the company and there were competing projects like the Cray X-MP underway.
Almost immediately after the release of the Cray-2, Seymour moved to working on the Cray-3, but the project was abandoned and again, Cray found himself wanting to just go do research without shifting priorities dictating what he could do. But Seymour always knew best. Again, he’s from Wisconsin. So he left the company with his name and started another company, this one called Cray Computer, where he did manage to finish the Cray-3. But that Cold War defense spending dried up. And while he thought of designs for a Cray-4, the company would go bankrupt in 1995. He was athletic and healthy, so in his 70s, why not keep at it? His next company would focus on massively parallel processing, which would be the trend of the future, but Seymour Cray died from complications from a car accident in 1996. He was one of the great pioneers of the computing industry. He set a standard that computers like IBM’s Blue Gene, then Summit, or China’s Sunway TaihuLight, or Dell’s Frontera, or HPE’s Cray systems, or Fujitsu’s ABCI, or Lenovo’s SuperMUC-NG carry on. Those run at between 20 petaflops and close to 150 petaflops. Today, the Cray X1E pays homage to its ancestor, the great Cray-1. But no one does it with style the way the Cray-1 did - and think about this: Moore’s Law says transistors will double every two years. Not to oversimplify things, but that means that since the Cray-2 we should have had a machine around 262 teraflops by now. But I guess he’s not here to break down the newer barriers like he did with the von Neumann bottleneck. Also, think about this: those early supercomputers were funded by the departments that became the NSA. They even helped fund the development of Crays throughout history. So maybe we have hit 262 and it’s just classified. I swoon at that thought. But maybe it’s just that this is where the move from bits to qubits and quantum computing becomes the next significant jump. Who knows? But hey, thanks for joining me on this episode of the History of Computing Podcast.
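That Moore’s Law back-of-the-envelope figure can be checked with a quick calculation (a rough sketch; the starting figure and years are approximations, and doubling transistor counts doesn’t map perfectly onto doubling FLOPS):

```python
# Project performance forward from the Cray-2's roughly 2 gigaflops
# in 1985, doubling every two years per Moore's Law.
def projected_gflops(start_gflops, start_year, end_year):
    doublings = (end_year - start_year) / 2
    return start_gflops * 2 ** doublings

# 17 doublings over 34 years: 2 * 2**17 = 262,144 gigaflops,
# or roughly 262 teraflops.
print(projected_gflops(2, 1985, 2019))  # 262144.0
```

Real supercomputers blew well past that projection - vendors got there by scaling out to tens of thousands of processors, not just by riding transistor density.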
Do you have a story you want to tell? I plan to run more interviews soon and while we have a cast of innovators that we’re talking to, we’d love even more weird and amazing humans. Hit us up if you want to! And in the meantime, thanks again for listening, we are so lucky to have you.
3/28/2020 • 12 minutes, 2 seconds
Radio Shack: Over 100 Years Of Trends In Technology
Today we’re going to talk about a company that doesn’t get a ton of credit for bringing computing to homes across the world, but should: Radio Shack. Radio Shack was founded by Theodore and Milton Deutschmann in 1921 in downtown Boston. The brothers were all about ham radio. A radio shack was a small structure on a ship that housed the radio equipment at the time. The name was derived from that slightly more generic term, given that one group of customers were radio officers outfitting ships. By 1939 they would print a catalog and ship equipment over mail as well. They again expanded operations in 1954 and would make their own equipment and sell it as well. But after too much expansion they ran into financial troubles and had to sell the company. When Charles Tandy bought the company for $300,000 in 1962, they had nine large retail stores. Tandy had done well selling leather goods and knew how to appeal to hobbyists. He slashed management and cut the amount of stock from 40,000 items to 2,500. The 80/20 rule is a great way to control costs. Given the smaller amount of stock, they were able to move to smaller stores. They also started to buy generic equipment and sell it under the Realistic brand, and started selling various types of consumer electronics. They used the locations where people bought electronics over the mail to plan new, small store openings. They gave ownership to store managers. And it worked. The growth was meteoric for the next 16 years. They had some great growth hacks. They did free tube testing. They gave a battery away for free to everyone who came in. They hired electronics enthusiasts. And people loved them. They bought Allied Radio in 1970 and continued to grow their manufacturing abilities. Tandy would pass away in 1978, leaving behind a legacy of a healthy company, primed for even more growth.
Electronics continued to be more pervasive in the lives of Americans and the company continued its rapid growth, looking for opportunities to bring crazy new electronics products into people’s homes. One was the TRS-80. Radio Shack had introduced the computer in 1977 using an operating system from Microsoft. It sold really well and they would sell more than 100,000 of them before 1980. After that, sales would slowly decline with competition from Apple and IBM, until they finally sold the business off in the early 90s. But they stayed in computing. They bought Grid Systems Corporation to bring laptops to the masses in 1988. They would buy Computer City in 1991 and its 200 locations would become the Radio Shack Computer Centers. They would then focus on IBM-compatible computers under the Tandy brand name rather than the TRS line. Computers were on the rise and clearly part of the Radio Shack strategy. I know I’ll never forget the Tandy Computer Whiz Kids that I’d come across throughout my adolescence. In the early 90s, Radio Shack was actually the largest personal computer manufacturer in the world, building computers for a variety of vendors, including Digital Equipment Corporation and, of course, themselves. Their expertise in acting as an OEM electronics factory turned out to be profitable in a number of ways. They also made cables, video tapes, even antennas - primarily under the Tandy brand. This is also when they started selling IBM computers in Radio Shack stores. They also tried to launch their own big-box retail stores. They sold the Radio Shack Computer Centers to a number of vendors, including CompUSA and Fry’s, during their explosive growth in 1998. They would move from selling IBM to selling Compaq in Radio Shacks at that point. Radio Shack hit its peak in 1999. It was operating in a number of countries and had basically licensed the name globally. This was a big year of change, though.
This was around the time they sold the Tandy leather side of the business to The Leather Factory, which continues on. They also got rid of the Realistic brand and inked a deal to sell RCA equipment instead. They were restructuring. And it would continue on for a long time, and rarely for the better. Radio Shack began a slow decline in the new millennium. The move into adjacencies alienated the hobbyists, who had always been the core Radio Shack shopper. And Radio Shack tried to move into other markets, cluing other companies into what their market was worth. They had forgotten the lessons learned when Tandy took over the company: more and more parts in the warehouses, more and more complex sales, more and bigger stores. Again, the hobbyists were abandoning Radio Shack. By 2004 sales were down. The company started a high-pressure plan and started hammering on the managers at the stores, constantly pushing them, and by 2004 they rebelled, with thousands of managers filing a class action suit. And it wasn’t just internal employees. They were voted the worst overall customer experience among any retailer for six years in a row. Happy cows make happy milk. And it wasn’t just about store managers. They went through six CEOs from 2006 to 2016. And 2006 was a tough year to kick such things off. They had to close 500 stores that year. And the computer business was drying up. Dell, Amazon, Best Buy, Circuit City, and others were eating their lunch. By 2009, they would rebrand as just The Shack and started to focus on mobile devices. Hobbyists were confused and there was less equipment on the shelves, driving even more of them online and to other locations. Seeing profit somewhere, they started to sell subscriptions to other services, like Dish Network. They would kick off Amazon Locker services in 2012, but that wouldn’t last but a year. They were looking for relevance. Radio Shack filed Chapter 11 in 2015 after nearly three years of straight losses. And big ones.
That’s when they were acquired by General Wireless Inc. for just over $26 million. The plan was to make money by selling mobile phones and mobile phone plans at Radio Shacks. They would go into a big deal with Sprint, who would take over leases to half the stores, which would become Sprint stores, and sell mobile devices through Sprint, along with cell plans of course! And there were lawsuits - from former debtors, lessors, and even people with gift cards. The only problem is, General Wireless couldn’t capitalize on the Sprint partnership in quite the way they planned, and they went bankrupt in 2017 as well! I don’t envy Radio Shack CEO Steve Moroneso. Radio Shack was once the largest electronics chain in the world. But a variety of factors came into play. Big-box retailers started to carry electronics. The Flavoradio was almost a perfect example of the rise and fall. They made it from the 70s up until 2001, when the decline began, and it was unchanged throughout all of that growth. But after they got out of the radio business, things just… weren’t right. With 500 stores left, he inherits a storied history: a 100-plus year old company, one that grew through multiple waves of technology, from ham radios to CB radios to personal computers in the 70s and 80s to cell phones. But they never really found the next thing once the cell phone market for Radio Shack started to dry up. They went from the store of the tinkerer with employees who cared to a brand kinda’ without an identity. If that identity is to succeed, they need the next wave. Unless it’s too late. But we owe them our gratitude for helping the world by distributing many waves of technology. Just as I owe you, dear listeners, for tuning in to yet another episode of the History of Computing Podcast.
3/24/2020 • 10 minutes, 35 seconds
As We May Think and the Legacy of Vannevar Bush
Today we’re going to celebrate an article called As We May Think and its author, Vannevar Bush. Imagine it’s 1945. You see the future and prognosticate instant access to all of the information in the world from a device that sits on every person’s desk at their office. Microfiche wouldn’t come along for another 14 years. But you see the future. And the modern interpretations of this future would be the Internet and personal computing. But it’s 1945. There is no transistor and no miniaturization that led to microchips. But you’ve seen ENIAC and you see a path ahead and know where the world is going. And you share it. That is exactly what happened in “As We May Think,” an article published by Vannevar Bush in The Atlantic. Vannevar Bush was one of the great minds in early computing. He got his doctorate from MIT and Harvard in 1916 and went into the private sector. During World War I he built a submarine detector and went back to MIT, splitting his time between academic pursuits, inventing, and taking inventions to market. He worked with the American Radio and Research Corporation (AMRAD), made millions off an early thermostat company, and founded the American Appliance Company, now known as the defense contracting powerhouse Raytheon. By 1927 computing began to tickle his fancy and he built a differential analyzer, a mechanical computer to do all the maths! He would teach at MIT, penning texts on circuit design; his work would influence the great Claude Shannon, and his designs would be used in early codebreaking computers. He would become a Vice President of MIT as well as the Dean of the MIT School of Engineering. Then came World War II. He went to work at the Carnegie Institution for Science, where he was exposed to even more basic research than during his time with MIT. Then he sat on and chaired the National Advisory Committee for Aeronautics, which would later become NASA - helping get the Ames Research Center and Glenn Research Center started.
Seems like a full career? Nah, just getting started! He went to President Roosevelt and got the National Defense Research Committee approved. There, they developed antiaircraft guns, radar, and funded the development of ENIAC. Roosevelt then made him head of the Office of Scientific Research and Development, which worked on developing the proximity fuse. There he also recruited Robert Oppenheimer to run the Manhattan Project and was there in 1945 for the Trinity test, to see the first nuclear bomb detonated. And that is when he lost a major argument. Rather than treat nuclear weapons like the international community had treated biological weapons, the world would enter into a nuclear arms race. We still struggle with that fallout today. He would publish As We May Think in The Atlantic that year and inspire the post-World War II era of computing in a few ways. The first is funding. He was the one behind the National Science Foundation. And he advised a lot of companies and US government agencies on R&D through his remaining years, sitting on boards, acting as a trustee, and even serving as a regent of the Smithsonian. Another was inspiration. As We May Think laid out a vision. Based on all of the basic and applied research he had been exposed to, he was able to see the convergence that would come decades later. ENIAC would usher in the era of mainframes. But things would get smaller. Cameras and microfilm and the parsing of data would put more information at our fingertips than ever. An explosion of new information out of all of this research would follow, and we would need to parse it using those computers - a device he called a memex, the collective memory of the world. But he warned of an arms race leading to us destroying the world first. Ironically, it was the arms race that in many ways caused Bush’s predictions to come true. The advances made in computing during the Cold War were substantial.
The arms race wasn’t just about building bigger and more deadly nuclear weapons; it brought us into the era of transistorized computing, then minicomputers, and of course ARPANET. And then, around the time that basic research was getting defunded by the government due to Vietnam, the costs had come down enough to allow Commodore, Apple, and Radio Shack to flood the market with inexpensive computers and for the nets to be merged into the Internet. And the course we are on today was set. I can almost imagine Bush sitting in a leather chair in 1945 trying to figure out if the powers of creation or the powers of destruction would win the race to better technology. And I’m still a little curious to see how it all turns out. The part of his story that is so compelling is information. He predicted that machines would help unlock even faster research, let us make better decisions, and ultimately elevate the human consciousness. Doug Engelbart saw it. The engineers at Xerox saw it. Steve Jobs made it accessible to all of us. And we should all look to further that cause. Thank you for tuning in to yet another episode of the History of Computing Podcast. We are so very lucky to have you.
3/21/2020 • 7 minutes, 14 seconds
CP/M
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to look at an often forgotten period in the history of computers: the world before DOS. I’ve been putting off telling the story of CP/M. But it’s time. Picture this: It’s 1974. It’s the end of the Watergate scandal. The oil crisis. The energy crisis. Stephen King’s first book, Carrie, is released. The Dolphins demolish my Minnesota Vikings 24-7 in the Super Bowl. Patty Hearst is kidnapped. The Oakland A’s win the World Series. Muhammad Ali pops George Foreman in the grill to win the heavyweight title. Charles de Gaulle Airport opens in Paris. The Terracotta Army is discovered in China. And in one of the most telling shifts that we were moving from the 60s into the mid-70s, the Volkswagen Golf replaces the Beetle. I mean, the hippies shifted to Paul Anka, Paper Lace, and John Denver. The world was settling down. And the world was getting ready for something to happen. A lot of people might not have known it yet, but the Intel 8080 series of chips was about to change the world. Gary Kildall could see it. He’d bought the first commercial microprocessor, the Intel 4004, when it came out in 1971. He’d been enamored and consulted with Intel. He finished his doctorate in computer science and went to the Naval Postgraduate School in Monterey to teach, and developed Kildall’s Method to optimize compilers. But then he met the 8080 chip. The Intel Intellec-8 was an early computer that he wanted to get an operating system running on. He’d written PL/M, or the Programming Language for Microcomputers, and he would write the CP/M operating system, short for Control Program/Monitor, loosely based on TOPS-10, the OS that ran on his DECsystem-10 mainframe. He would license PL/M through Intel, but operating systems weren’t really a thing just yet.
By 1977, personal computers were on the rise and he would take it to market, calling the company Digital Research, Inc. His wife Dorothy ran the company. And they saw a nice rise in sales: 250,000 licenses in three years. This was the first time consumers could interact with computer hardware in a standardized fashion across multiple systems. They would port the code to the Z80 processors, and people would run CP/M on Apple IIs, Altairs, IMSAIs, Kaypros, Epsons, Osbornes, Commodores, and even the Trash-80, or TRS-80. The world was hectic and not that standard, but there were really three main chips, so the software actually ran on 3,000 models during an explosion in personal computer hobbyism. CP/M quickly rose and became the top operating system on the market. We would get WordStar, dBase, VisiCalc, MultiPlan, SuperCalc, Delphi, and Turbo Pascal for the office. And for fun, we’d get Colossal Cave Adventure, Gorillas, and Zork. It bootstrapped from floppy disks. They made $5 million bucks in 1981. Almost like cocaine money at the time. Gary got a private airplane. And John Opel from IBM called. Bill Gates told him to. IBM wanted to buy the rights to CP/M. Digital Research and IBM couldn’t come to terms. And this is where it gets tricky. IBM was going to make CP/M the standard operating system for the IBM PC. Microsoft jumped on the opportunity and found a tool called 86-DOS from a company called Seattle Computer Products. The cool thing there is that it used the CP/M API and so would make it easy to have compatible software. Paul Allen worked with them to license the software, then compiled it for the IBM. This was the first MS-DOS, and it became the standard, branded as PC DOS for IBM. Later, Kildall agreed to sell CP/M for $240 on the IBM PCs. The problem was that PC DOS came in at $40. If you knew nothing about operating systems, which would you buy? And so even though it had compatibility with the CP/M API, PC DOS really became the standard.
So much so that Digital Research would clone Microsoft DOS and release their own DR DOS. Kildall would later describe Bill Gates using the following quote: “He is divisive. He is manipulative. He is a user. He has taken much from me and the industry.” While Kildall considered DOS theft, he was told not to sue because the laws simply weren’t yet clear. At first, though, it didn’t seem to hurt. Digital Research continued to grow. By 1983 computers were booming. Digital Research would hit $45 million in sales. They had gone from just Gary to 530 employees by then. Gangbusters. Although they did notice that they missed the mark on the 8088 chips from Intel, and even with massive rises in sales had lost market share to Unix System V and all the variants that would come from that. CP/M would add DOS emulation. But sales began to slip. The IBM 5150 and subsequent machines just took over the market. And CP/M, once a dominant player, would be left behind. Gary would move more into research and development, but by 1985 he resigned as the CEO of Digital Research, in a year when they laid off 200 employees. He helped start a show called Computer Chronicles in 1983. It has been something I’ve been watching a lot recently while researching these episodes, and it’s awesome! He was a kind and wickedly smart man, even to people who had screwed him over. As many would after them, Digital Research went into long-term legal drama involving the US Department of Justice. But none of that saved them. And it wouldn’t save any of the other companies that went there either. Digital Research would sell to Novell for $80 million in 1991, and various parts of the intellectual property would live on with compilers, interpreters, and DR DOS - for example, as Caldera OpenDOS. But CP/M itself would be done. Kildall died in 1994, days after an injury sustained in a bar in Monterey, California. One of the pioneers of the personal computer market.
From CP/M to the disk buffering data structures that helped make the CD-ROM possible, he was all over the place in personal computers. And CP/M was the gold standard of operating systems for a few years. One of the reasons I put this episode off is because I didn’t know how I would end it. Like, what’s the story here? I think it’s mostly that I’ve heard it said that he could have been Bill Gates. I think that’s a drastic oversimplification. CP/M could have been the operating system on the PC. But a lot of other things could have happened as well. He was wealthy, just not Bill Gates level wealthy. And rather than go into a downward spiral over what we don’t have, maybe we should all be happy with what we have. And much of his technology survived for decades to come. So he left behind a family and a legacy. In uncertain times, focus on the good and do well with it. And thank you for being you. And for tuning in to this episode of the History of Computing Podcast.
3/18/2020 • 9 minutes, 40 seconds
The Days Of Our Twitters
Today we’re going to celebrate the explosion and soap-opera-esque management of Twitter. As with many things, it started with an idea. Some people get one idea. Some of these Twitter founders got multiple ideas, which is one of the more impressive parts of this story. And the story of Twitter goes back to 1999. Evan Williams created a tool that gave “push-button publishing for the people.” That tool was called blogger.com and it ignited a fire in people publishing articles about whatever they were thinking or feeling or working on or doing. Today, we just call it blogging. The service jumped in use and Evan sold the company to Google, where he worked for a bit and then left in 2004 in search of a new opportunity. Seeing the rise of podcasting, Williams founded another company called Odeo, to build a tool for podcasters. They worked away at that, being joined by Noah Glass, Biz Stone, Jack Dorsey, Crystal Taylor, Florian Weber, Blaine Cook, Ray McClure, Rim Roberts, Rabble, Dom, @Jeremy and others. And some investors of course. Apple added podcasts to iTunes and they knew they had to pivot. They’d had these full day sessions brainstorming new ideas. Evan was thinking more and more about this whole incubator kind of thing. Noah was going through a divorce, and one night he and Jack Dorsey were going through some ideas for new pivots or companies. Jack had just been turned on to text messaging and mentioned this one idea about sharing texts with groups. The company was young and full of raver kids at the time, and the thought was you could share where you were and what you were doing. Noah thought you could share your feelings as well. Since it went over text messages, you had a maximum of 140 characters - enough to leave room for a username within the 160-character SMS limit. It started as a side project. Jack and Florian Weber built a prototype. It slowly grew into a real product. They sold off the remaining assets of Odeo, and Twitter was finally spun off into its own company in 2007. Noah was the first CEO. 
But he was ousted in 2007 when Jack Dorsey took over. They grew slowly during the year but jumped into the limelight at South By Southwest, taking home the Web Award. I joined Twitter in October of 2007. To be honest, I didn’t really get it yet. But they started to grow. And rapidly. They were becoming a news source. People were tweeting to their friends. They added the @ symbol to mention people in posts. They added the ability to retweet, or repost a tweet from someone else. And of course hashtags. Servers crashed all the time. The developers worked on anything they wanted. And after a time, the board of Twitter, which primarily consisted of investors, got tired of the company not being run well and ousted Jack in 2008, letting Evan run the company. And I do like to think of the history of Twitter in stages. Noah was the incubator. He and Jack worked hard and provided a vision. Noah came up with the name, Jack helped code the site and keep it on track. Once Noah was gone they were a cool hacker collective that went into hyper growth. There wasn’t a ton of structure, and the company reflected the way people used the service, a bit chaotic. But with Evan in, the hyper growth accelerated. Twitter added lists in 2009, allowing you to see updates from people you weren’t following. They were still growing fast. By 2010 there were 50 million tweets a day. Months later there were 65 million. And Jack Dorsey, while no longer running Twitter, was the media darling face of Twitter. He founded Square in 2009. And Square would make a dent in the multi-verse by allowing pretty much anyone to take a credit card using their phone, pretty much any time. That would indirectly lead to coffee shops, yoga studios, and any number of kinds of businesses popping up all over the world. Twitter bought an app called Tweetie, which became the Twitter app many of us use today. But servers could still crash. There was still no revenue. 
So Evan brought in Dick Costolo, founder of FeedBurner, to become the Chief Operating Officer. Dick would then be named CEO. Dorsey, fuming ever since his ousting, had been behind the switch. This is where Twitter kinda’ grew up. Under Dick the site finally got stable. The users continued to grow. They started to make money. Lots of money. By 2011 they added URL shortening using the t.co domain, because many of us would use a URL shortening service to conserve characters. Twitter would continue to grow and go public in 2013. By then, they’d had offers to buy equity from musicians, actors, sports stars, and even former Vice Presidents. And Twitter would continue to grow. Jack Dorsey would lead Square to an IPO in 2015. Obama would send his first tweet that same year. Shortly afterwards, Dick stepped down as the CEO of Twitter and Jack came back. Grand plans work out I suppose. Usually people don’t get back together after the breakup. But Jack did. In 2016, Donald Trump was elected president of the United States. While Obama had used Twitter, Trump took it to a whole new level, announcing public policy there sometimes before other politicians knew. And this is where Twitter just gets silly. Hundreds of millions of people log on and argue. Not my thing. I mostly just post links to these episodes these days. Jack Dorsey is now the CEO of both Square and Twitter. He catches flack for it every now and then - but it’s mostly working. He co-founded two great companies and he likely doesn’t want to risk losing control of either. Evan Williams founded Medium in 2012, another blogging service. Blogging, micro-blogging, then back to blogging. That’s three great companies he co-founded. And he continues helping startups. Biz Stone, often the heart of Twitter, would found Jelly, which was sold to Pinterest. The fourth co-founder, Noah Glass, took some time away from startups. His part in the founding of Twitter was often underestimated. 
But today, he’s the CEO of olo.com and serves on the board of a number of non-profits. The post-PC era, the social media era, the instant everything era. Twitter symbolizes all of it, kicked off when Jack sent the first message on March 21, 2006, at 9:50 p.m. It read, "just setting up my twttr." From a rag-tag group of kids who went to clubs to a multi-billion dollar social media behemoth, they also show the growth stages of network effect companies. The incubation period, led by a passionate Noah. The release and rise period, full of doing everything it takes and people working 20 hour days, symbolized by Jack, part one. The meteoric rise and beginnings of getting their ducks in order during the tenure of Evan. The growing up phase where they got profitable and stable with Dick. And then the Steve Jobs-esque reinvention of Jack on his return, slowing growth and reducing risk. The founders all felt like Twitter was theirs. And it was. A lot of founders think they’re going to change the world. And some actually do. And for the effort they put into putting a dent in the universe, we thank them. And you, dear listeners, we thank you too, for giving us the opportunity to share these stories of betrayal and shame and rebirth. We are so lucky to have you. Have a great day!
3/15/2020 • 10 minutes, 38 seconds
Commodore Computers
Today we’re going to talk through the history of Commodore. That history starts with Idek Trzmiel, who became Jack Tramiel when he immigrated to the United States. Tramiel was an Auschwitz survivor and, like many immigrants throughout history, he was a hard worker. He would buy a small office equipment repair company in the Bronx with money he had saved up driving taxis in New York, and got a loan through the US Army to help buy the company. He wanted a name that reflected the military that had rescued him from the camp, so he picked Commodore and incorporated the company in Toronto. He would import Czech typewriters through Toronto and assemble them, moving to adding machines when lower-cost Japanese typewriters started to enter the market. By 1962, Commodore got big enough to go public on the New York Stock Exchange. Those adding machines would soon be called calculators when they went from electromechanical devices to digital, with Commodore making a bundle off the Minuteman calculators. Tramiel and Commodore investor Irving Gould flew to Japan to see how to better compete with manufacturers in the market. They got the chips to build the calculators from MOS Technology, whose MOS 6502 chip took off, quickly becoming one of the most popular chips in early computing. When Texas Instruments, who also designed calculator chips, entered the calculator market itself, everyone knew calculators were a dead end. The Altair had been released in 1975. But it used Intel chips. Tramiel would get a loan to buy MOS for $3 million and it would become the Commodore Semiconductor Group. The PC revolution was on the way, and this is where Chuck Peddle, who came to Commodore with the acquisition, comes in. Seeing the 6502 chips that MOS started building in 1975 and the 6507 that had been used in the Atari 2600, Peddle pushed to start building computers. Commodore had gotten to $60 million in revenues, but the Japanese exports of calculators and typewriters left them needing a new product. 
Peddle proposed they build a computer and developed one called the Commodore PET. Starting at $800, the PET would come with a MOS 6502 chip - the same chip that shipped in the Apple I. It came with an integrated keyboard and monitor. And Commodore BASIC in a ROM. And, as with many in that era, a cassette deck to load and save data. Commodore was now a real personal computer company. And one of the first. Along with the TRS-80, or Trash 80, and Apple when the Apple II was released, they would be known as the Trinity of Personal Computers. By 1980 they would be a top 3 company in the market, which was growing rapidly. Unlike Apple, they didn’t focus on great products or software, and share was dropping. So in 1981 they would release the VIC-20. This machine came with Commodore BASIC 2.0 and still used a 6502 chip. But by now prices had dropped to a level where the computer could sell for $299. The VIC-20 was a computer integrated into a keyboard, so you brought your own monitor, which could be composite, similar to what shipped in the Apple IIc. And it would be marketed in retail outlets, like K-Mart, where it was the first computer sold. They would outsource the development of the VICModem and did deals with The Source, CompuServe, and others to give out free services to get people connected to the fledgling online world. The market was getting big. Over 800 software titles were available. Today you can use VICE, a VIC-20 emulator, to run many of them! But the list of machines they were competing with would grow, including the Apple II, the TRS-80, and the Atari 800. They would sell over a million in that first year, but the VIC-20’s successor emerged in the Commodore 64. Initially referred to as the VIC-40, the Commodore 64 showed up in 1982, started at around $600, and came with the improved 6510 or 8500 MOS chip and the 64k of RAM that gave it its name. It is easily one of the most recognizable computer names in history. 
It could double as a video game console. Sales were initially slow as software developers caught up to the new chips - and they kinda’ had to work through some early problems with units failing. They still sold millions and millions by the mid 1980s. But they would need to go into a price war with Texas Instruments, Atari, and other big names of the time. Commodore would win that war but lost Tramiel along the way. He quit after disagreements with Gould, who brought in a former executive from a steel company with no experience in computers. Ironically, Tramiel bought Atari after he left. A number of models would come out over the next few years: the Commodore MAX, the Communicator 64, the SX-64, the C128, the Commodore 64 Game System, and the 65, which was killed off by Irving Gould in 1991. And by 1993, Gould had mismanaged the company. Commodore had bought Amiga for $25 million in 1984, but even a 32-bit computer wouldn’t rescue the company. After the Mac arrived in 1984, with the IBM PC already taking over offices, and after the downward pressure that had been put on prices, Commodore never fully recovered. Yes, they released systems, like the Amiga 500, which competed with Tramiel’s Atari ST. But they were never as dominant and couldn’t shake the low-priced image, even for later Amiga models like the Amiga 1000, one of the best machines made for its time, or the Amiga 2000, meant to compete with the Mac, or their entries in the PC clone market meant to compete with the deluge of vendors there. They even tried a Microsoft BASIC interpreter and their own Amiga Unix System V Release variant. But ultimately, by 1994 the company would go into bankruptcy, with surviving subsidiaries going through that demise that happens where you end up with your intellectual property somehow being held by Gateway Computers. More on them in a later episode. I do think the story here is a great one. A person manages to survive Auschwitz, move to the United States, and build a publicly traded empire with easily one of the most recognizable names in computing. 
That survival and perseverance should be applauded. Tramiel would run Atari until he sold it in the mid-90s and would co-found the United States Holocaust Memorial Museum. He was a hard negotiator and a competent business person. Today in tech we say that competing on price is a race to the bottom. He had to live that. But he and his exceptional team at Commodore certainly deserve our thanks for helping to truly democratize computing, putting low-cost single-board machines on the shelves at Toys-R-Us and K-Mart and giving me exposure to BASIC at a young age. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. We are so lucky you listen to these stories. Have a great day. https://www.youtube.com/watch?v=AMD2nF7meDI
3/12/2020 • 9 minutes, 27 seconds
The Brief History Of The Battery
Most computers today have multiple batteries. Going way, way back, most had a CMOS or BIOS battery used to run the clock and keep BIOS configurations when the computer was powered down. These have mostly centered around the CR2032 lithium button cell battery, also common in things like garage door openers and many of my kids’ toys! Given the transition to laptops for a lot of people, now that families, schools, and companies mostly deploy one computer per person, there’s a larger battery in a good percentage of machines made. Laptops mostly use lithium-ion batteries. The oldest known batteries are the “Baghdad batteries,” dating back to about 200 BCE. They could have been used for a number of things, like electroplating. But it would take 2,000 years to get back to the idea. As is often the case, things we knew as humans, once backed up with science, became much, much more. First, scientists were studying positive and negative elements and forming an understanding that electricity flowed between them. Like the English natural scientist William Gilbert, who first established some of the basics of electricity and magnetism. And Sir Thomas Browne, who continued to refine theories and was the first to call it “electricity.” Then another British scientist, Peter Collinson, sent Benjamin Franklin an electricity tube, which these previous experiments had begun to produce. Franklin spent some time writing back and forth with Collinson, and famously flew a kite to show that electrical current flowed through a wet kite string and could be conducted through a metal key. This supported the theory that electricity was a fluid. Linked capacitors came along in 1749. The kite experiment was in 1752, and Thomas-Francois Dalibard also proved the hypothesis that same year using a large metal pole struck by lightning. Budding scientists continued to study electricity and refine the theories. In 1799, Alessandro Volta built a battery by alternating zinc, cloth soaked in brine, and silver, stacking them into a pile. 
This was known as a voltaic pile, and it would release a steady current. The batteries corroded fast, but today we still call the potential that drives a current of one amp through a resistance of one ohm a volt. Suddenly we were creating electricity from an electrochemical reaction. People continued to experiment with batteries and electricity in general. Giuseppe Zamboni, another Italian physicist, invented the Zamboni pile in 1812. Here, he switched to zinc foil and manganese oxide. Completely unconnected, Swedish chemist Johan August Arfwedson discovered lithium in 1817. Lithium. Atomic number 3. Lithium is an alkali metal found all over the world. It can be used to treat manic depression and bipolar disorder. And it powers today’s smart-everything and Internet of thingsy world. But no one knew that yet. The English chemist John Frederic Daniell invented the Daniell cell in 1836, building on the concept but placing a copper plate in a copper sulfate solution and hanging a zinc plate in the same jar or beaker. Each plate had a wire; the zinc plate became the negative terminal, the copper plate the positive terminal, and suddenly we were able to reliably produce electricity. Robert Anderson would build the first electric car using a battery at around the same time, but Gaston Plante would build the first rechargeable battery in 1859, one that very much resembles the ones in our cars today. He gave us the lead-acid battery, switching to lead oxide in sulfuric acid. In the 1860s the Daniell cell would be improved by Callaud, and a lot of different experiments continued on. The Gassner dry cell came from Germany in 1886, mixing ammonium chloride with plaster of Paris and adding zinc chloride. Shelf life shot up. The National Carbon Company would swap out the plaster of Paris for coiled cardboard. 
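That volt-ohm-amp relationship is just Ohm’s law, V = I × R. A minimal sketch, purely for illustration (the function name and values are mine, not from the episode):

```python
def volts(current_amps: float, resistance_ohms: float) -> float:
    """Ohm's law: potential difference V = I * R."""
    return current_amps * resistance_ohms

# One amp flowing through one ohm gives one volt.
print(volts(1.0, 1.0))    # 1.0

# 20 mA through a 450-ohm resistance gives 9 volts.
print(volts(0.02, 450.0)) # 9.0
```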
That Columbia Dry Cell would be commercially sold throughout the United States by the National Carbon Company, which would become Eveready, who makes the Energizer batteries that power the weird bunny with the drum. Swedish scientist Jungner would give us nickel-cadmium, or NiCd, in 1899, but those were a bit too leaky. So Thomas Edison would patent a new model in 1901, and iterations of it are pretty much common through to today. Lithium would start being used shortly after, thanks to G.N. Lewis, but would not become standard until the 1970s, when push button cells started to be put in cameras. Asahi Chemical out of Japan would then give us the lithium-ion battery in 1985, brought to market by Sony in 1991, leading to John B. Goodenough, M. Stanley Whittingham, and Akira Yoshino winning the Nobel Prize in Chemistry in 2019. Those lithium-ion batteries are used in most computers and smartphones today. The Osborne 1 came in 1981. It was what we now look back on as a luggable computer: a 25 pound machine that could be taken on the road, though you plugged it directly into the wall. But the Epson HX-20 would ship the same year with a battery, opening the door to batteries powering computers. Solar power storage and other larger batteries require much larger amounts of lithium. This causes an exponential increase in demand and thus a jump in the price, making it more lucrative to mine. Mining lithium to create these batteries is, as with all other large scale operations taken on by humans, destroying entire ecosystems, such as those in Argentina, Bolivia, Chile, and the Tibetan plateau. Each ton of lithium takes half a million gallons of water, another resource that’s becoming more precious. And the waste is usually filtered back into the ecosystem. Most other areas mine lithium out of rock using traditional methods, but there’s certainly still an environmental impact. There are similar impacts to mining cobalt and nickel, the other two metals used in most batteries. So I think we’re glad we have batteries. 
Thank you to all these pioneers who brought us to the point that we have batteries in pretty much everything. And thank you, listeners, for sticking through to the end of this episode of the History of Computing Podcast. We’re lucky to have you.
3/9/2020 • 8 minutes, 14 seconds
The Data General Nova
Today we’re going to talk through the history of the Data General Nova. Digital Equipment was founded in 1957 and released a game changing computer, the PDP-8, in 1965. We covered Digital in a previous episode, but to understand the Data General Nova, you kinda’ need to understand the PDP. It was a fully transistorized computer, and it was revolutionary in the sense that it brought interactive computing to the masses. Based in part on work done at MIT in the TX-0 era, the PDP made computing more accessible to companies that couldn’t spend millions on computers, and it was easier to program - the PDP-1 could be obtained for less than a hundred thousand dollars. You could use a screen and type commands on a keyboard for the first time, and it would actually output to the screen rather than you reading teletypes or punch cards. That interactivity unlocked so much. The PDP began the minicomputer revolution. The first real computer game, Spacewar!, was played on it, and adoption increased. The computers got faster. They could do as much as large mainframes. The thousands of transistors were faster and less error-prone than the old tubes. In fact, those transistors signaled that the third generation of computers was upon us. And people who liked the PDP were life-long converts. Fanatical even. The PDP line evolved until 1965, when the PDP-8 was released. This is where Edson de Castro comes in, having acted as the project manager for the PDP-8 development at Digital. 3 years later, he, Henry Burkhardt, and Richard Sogge of Digital would be joined by Herbert Richman, a salesperson from Fairchild Semiconductor. They were proud of the PDP-8. It was a beautiful machine. But they wanted to go even further. And they didn’t feel like they could do so at Digital. They would build a less expensive minicomputer that opened up even more markets. They saw new circuit board manufacturing techniques, new automation techniques, new reasons to abandon 12-bit CPU designs. 
Edson had wanted to build a PDP with all of this and the ability to use 8-bit, 16-bit, or 32-bit architectures, but the idea got shut down at Digital. So they raised two rounds of venture capital at $400,000 each and struck out on their own. They wanted the computer to fit into a 19-inch rack mount, a choice that would basically make the 19-inch rack the standard from then on. They wanted the machines to be 16-bit, moving past the 8 or 12-bit computers common in minicomputing at the time. They used an accumulator-based architecture, which is to say the CPU had a register that held the intermediate results of computations. This way you weren’t writing every result out to memory and then reading it right back into the CPU. Suddenly, you could do infinitely more math! Having someone from Fairchild really unlocked a lot of knowledge about what was happening in the integrated circuit market. They were able to get the price down into the thousands, not tens of thousands. You could actually buy a computer for less than 4 thousand dollars. The Nova would ship in 1969 and be an instant success with a lot of organizations, especially smaller science labs, like the one at the University of Texas that was their first real paying customer. Within 6 months they sold 100 units, and within the first few years they were over $100 million in sales. They were eating into Digital’s profits. No one would have invested in Digital had they tried to compete head-on with IBM. Digital had become the leader in the minicomputer market, effectively owning the category. But the Nova posed a threat. At least until Data General decided to get into a horse race with Digital and release the SuperNOVA to compete with the PDP-11. They used space age designs. They were great computers. But Digital was moving faster. And Data General started to have production and supply chain problems, which led to lawsuits and angry customers. Never good. 
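The accumulator idea can be sketched in a few lines: intermediate results stay in one CPU register instead of bouncing through memory on every operation. This is a toy illustration I’ve made up for the episode notes, not actual Nova machine code:

```python
# Toy accumulator machine: one register (the accumulator) holds
# intermediate results; memory is only touched by LOAD and STORE.
def run(program, memory):
    acc = 0
    for op, arg in program:
        if op == "LOAD":     # memory cell -> accumulator
            acc = memory[arg]
        elif op == "ADD":    # add a memory cell into the accumulator
            acc += memory[arg]
        elif op == "STORE":  # accumulator -> memory cell
            memory[arg] = acc
    return memory

# Compute memory[2] = memory[0] + memory[1]; the intermediate sum
# lives only in the accumulator, never in a scratch memory cell.
mem = run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], {0: 2, 1: 3, 2: 0})
print(mem[2])  # 5
```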
By 1977, Digital came out with the VAX line, setting the standard at 32-bit. Data General was late to that party and honestly, after being a market leader in low-cost computing, they started to slip. By the end of the 70s, microchips and personal computers would basically kill minicomputers, and while transitioning from minicomputers to servers, Data General never made quite the same inroads that Digital Equipment did. Data General would end up with their own DOS, their own UNIX System V variant like everyone else, and one of the first portable computers. But by the mid-80s, IBM had shown up on the market, and Data General would move into databases and a number of other areas to justify what was becoming a server market. In fact, the eventual home for Data General would be to get acquired by EMC and become CLARiiON under the EMC imprint. It was an amazing rise. Hardware that often looked like it came straight out of Buck Rogers. Beautiful engineering. But you just can’t compete on price and stay in business forever. Especially when you’re competing with your former bosses, who have much, much deeper pockets. EMC benefited from a lot of these types of acquisitions over the years, becoming a colossus by the end of the 2010s. We can thank Data General, and specifically the space age Nova, for helping set many standards we use today. We can thank them for helping democratize computing in general. And if you’re a heavy user of EMC appliances, you can probably thank them for plenty of the underlying bits of what you do, even through to today. But the minicomputer market required companies to make their own chips in that era, and that was destroyed by the dominance of Intel in the microchip industry. It’s too bad. So many good ideas. But the costs to keep up turned out to be too much for them, as with many other vendors. One way to think about this story: you can pick up on new manufacturing and design techniques and compete with some pretty large players, especially on price. 
But when the realities of scaling an operation come, you can’t stumble, or customer confidence will erode and there’s a chance you won’t get to compete for deals again in the future. But try telling that to your growing sales team. I hear people say you have to outgrow the growth rate of your category. You don’t. But you do have to do what you say you will, and deliver. And when changes in the industry come, you can’t be all over the place. A cohesive strategy will help you weather the storm. So thank you for tuning into this episode of the History of Computing Podcast. We are so lucky you chose to join us and we hope to see you next time! Have a great day!
3/5/2020 • 9 minutes, 42 seconds
Airbnb: The Rise and Rise of the Hospitality Industry
Today we’re going to talk through the history of Airbnb. But more importantly, we’re going to look at what brought the hospitality industry to a place so ripe to be disrupted. The ancient Greeks, Romans, Persians, and many other cultures provided for putting travelers up while visiting other cities in one way or another. Then inns began to rise along the roads connecting medieval Europe, complete with stables and supplies to get to your next town. The rise of stagecoaches gave way to a steady flow of mail, and a rise in travel over longer distances for business gave way to much larger and fancier hotels in the later 1700s and 1800s. In 1888, César Ritz became the first manager of the Savoy hotel in London, after time at the Hotel Splendide in Paris and other hotels. He would open the Paris Ritz in 1898 and expand with properties in Rome, Frankfurt, Palermo, Madrid, Cairo, Johannesburg, Monte Carlo, and of course London. His hotels were in fact so fancy that he gave us the term ritzy. Ritz is one of the most lasting names, but this era was the first boom in the hotel industry, with luxury hotels popping up all over the world. Like the Astor, the Waldorf Astoria, the Plaza, the Taj Mahal, and the list goes on. The rise of the hotel industry was well on its way when Conrad Hilton bought the Mobley Hotel in Cisco, Texas in 1919. By 1925 he would open the Dallas Hilton, and while opening further hotels nearly ruined him in the Great Depression, he emerged into the post-World War II boom times establishing a juggernaut now boasting 568 hotels. Best Western would start in 1946 and now has 4,200 locations. After World War II we saw the rise of the American middle class and the great American road trip. Chains exploded. Choice Hotels, established in 1939 and acting as more of a franchisor, sits at 7,000 locations, but that’s spread across Extended Stay, MainStay, Quality Inn, Cambria Hotels, Comfort Inn, and other brands. 
Holiday Inn was founded in 1952, in the growing post-war boom time, by Kemmons Wilson and named after the movie of the same name. The chain began with that first hotel in 1952 and within 20 years hit 1,400 Holiday Inns, landing Wilson on the cover of Time as “The Nation’s Innkeeper.” They would end up owning Harrah’s Entertainment, Embassy Suites Hotels, Crowne Plaza, Homewood Suites, and Hampton Inn, now sitting with 1,173 hotels. Ramada would be started the next year by Marion Isbell and has now grown to 811 locations. Both of them started their companies due to the crappy hotels found on the sides of roads, barely a step above those of the medieval days. Howard Johnson took a different path, starting with soda shops, then restaurants, opening his first hotel in 1954 and expanding to 338 at this point; the brand is now owned by Wyndham Hotels, a much later entrant into the hotel business. Wyndham also now owns Ramada. The 1980s led to a third boom in hotels with globalization, much as it was the age of globalization for other brands and industries. The oil boom in the Middle East, the rising European Union, the opening up of Asian markets. And as they grew, the chains used computers and built software to help cut costs and enable loyalty programs. It was an explosion of money and profits, and as the 80s gave way to the 90s, the Internet gave customers the ability to comparison shop. Sites that aggregated hotel information rose up - Expedia, Travelocity, American Express, even Concur - and came and went quickly, making it easy for AccorHotels to research and then buy Raffles, Sofitel, and Novotel, and for InterContinental and others to usher in the era of acquisitions and mergers. Meanwhile, the Internet wasn’t just about booking hotels at chains easily. VRBO began in 1995, when David Clouse wanted to rent out his condo in Breckenridge and got sick of classifieds. 
Seeing the web on the rise, he built a website and offered subscriptions to list properties for vacation rentals, letting owners and renters deal directly with one another to process payments. Vacation Rentals By Owner, or VRBO, would expand through the 90s. And then Paris Hilton happened. Her show The Simple Life in 2003 led to a 5 year career that seemed to fizzle at the Toronto International Film Festival in 2008 with the release of a critical documentary about her called Paris, Not France. The mergers and acquisitions and globalization, and being packed into stale smokey rooms like sardines, seemed to have run their course. Boutique hotels were opening, a trend that started in the 90s, and by 2008 W Hotels was expanding into Europe, now with 55 properties around the world. And that exemplifies the backlash against big chains that was starting to brew. In 2004, CEH Holdings bought a few websites to start HomeAway.com and in 2006 raised $160 million in capital to buy VRBO and gain access to their then 65,000 properties. HomeAway.com would be acquired by Expedia in 2015 for $3.9 billion, but not before a revolution in the hospitality industry began. That revolution started with 2 industrial design students. Brian Chesky and Joe Gebbia had come from the Rhode Island School of Design. After graduation, Gebbia would move to San Francisco and Chesky would move to Los Angeles. They had worked on projects together in college, and Gebbia bugged Chesky for a few years about moving to San Francisco to start a company together. By 2007, Chesky gave in and made the move, becoming one of Gebbia’s two roommates. It was the beginning of the Great Recession. They were having trouble making rent. In the fall of 2007, the Industrial Designers Society of America’s Industrial Design Conference came to San Francisco. They had the idea to take a few air beds from a recent camping trip and rent them out in their apartment. Paris Hilton would never have done that. 
They reached out to a former roommate of theirs, Nathan Blecharczyk. He’s a Harvard alum and a pretty rock-solid programmer, and he signed on as a co-founder, building them a website in Ruby on Rails. They rented those three air beds out and called their little business airbedandbreakfast.com. They thought they were on to something. I mean, who wouldn’t want to rent an air bed and crash on someone’s kitchen floor?!?! But reality was about to come calling. Venture capital was drying up due to the deepening recession. They tried to raise funding and failed. So far their story seems pretty standard. But this is where I start really liking them. They bought a few hundred boxes of cereal and made “Obama O's” and “Cap'n McCain's” to sell at the Democratic National Convention in 2008 for $40 per box. They sold $30,000 worth, enough to bootstrap the company. They would go to South by Southwest and visit events, growing slowly in New York and San Francisco. The money would last them long enough to make it into Y Combinator in 2009. Paul Graham and the others at Y Combinator have helped launch 2,000 companies, including Docker, DoorDash, Dropbox, GitLab, Gusto, Instacart, Reddit, Stripe, Twitch, and Zapier. They got $20,000 from Y Combinator. They changed the site to airbnb.com, and people started to book more and more stays - and not just on air beds, but renting full homes out. They charged 3% of the booking as a fee - a number that hasn’t really changed in all these years. They would get $600,000 in funding from Sequoia Capital in 2009 when they finally got up to 2,500 listings and had 10,000 users. Nothing close to what HomeAway.com had, but they would get more funding from Sequoia, add Greylock to the investors, and by the close of 2010 they were approaching a million nights booked. From here, the growth got meteoric. 
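That 3% host-side fee is simple arithmetic. Here’s a minimal sketch of the split on a booking; the function name and the flat single-rate model are illustrative assumptions on my part (Airbnb’s real fee structure also includes guest-side fees and other details not modeled here):

```python
def host_payout(booking_total: float, fee_rate: float = 0.03) -> tuple:
    """Split a booking into (service_fee, host_payout), assuming a flat
    percentage service fee deducted from the host's side."""
    fee = round(booking_total * fee_rate, 2)
    return fee, round(booking_total - fee, 2)

# A hypothetical $600 booking: the platform keeps $18, the host nets $582.
fee, payout = host_payout(600.00)
print(fee, payout)  # 18.0 582.0
```

The point of the low, flat rate was friction: hosts could predict their payout at a glance, which mattered for a marketplace trying to win listings away from classifieds.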
They won the app award during a triumphant return to South by Southwest in 2011 and went international, opening an office in London and expanding bookings to 89 countries. The investments, the advertising, the word of mouth, the media coverage. So much buzz and so much talk about innovation and disruption. The growth was explosive. They iterated on the website and raised another $112 million in venture capital. By 2012 they hit 10 million nights booked. And that international expansion paid off, with well over half of those nights outside of the United States. Growth of course led to problems. A few guests trashed their lodgings, and Airbnb responded with a million-dollar policy to help react to those kinds of things in the future. Some of the worst aspects of humanity can be seen on the web. They also encountered hosts trying to discriminate based on race. So they updated their policies and took a zero-tolerance approach. More importantly, they acknowledged that, given the privilege of being a company founded by three white guys, they hadn’t had to think of such things before. They didn’t react with anger or deflection. They said we need to be better, with every problem that came up. And the growth continued. Doubling every year. They released a new logo and branding in 2014, and by 2016 they were valued at $30 billion. They added Trips, which seems to still be trying to catch up to what Groupon started doing for booking excursions years ago. During the rise of Airbnb we saw an actual increase in hotel profits. Customers are often millennials who are traveling more and more, given the way that the friction and some of the cost has been taken out of travel. The average age of a host is 43. And of the hosts I know, I can wager that Airbnb rentals have pumped plenty of cash back into local economies, based on people taking better care of their homes, keeping fresh paint, and the added tourism spend when customers are exploring new cities. 
And not just visiting chains. After all, you stay at an Airbnb for the adventure, not to shop for the same stuff at Forever 21. Even if you set aside the issues with guests trashing places and racism, it still hasn’t all been sunshine and unicorns. Airbnb has been in legal battles with New York and a few other cities for years. Turns out that speculators and investors cause extra strain on an already over-burdened housing market. If you want to see the future of living in any dense population center, just look to New York. The largest city in the US, it’s also home to the country’s largest public landlord, with over 400,000 tenants. And rent is rising almost twice as fast as incomes, with lower-income rents going up faster than those of the wealthy. Independent auditors claim that Airbnb is actually accountable for 9.2 percent of that. But 79 percent of hosts use their Airbnb earnings to afford their apartments. And many of the people that count on Airbnb to make their rent couldn’t afford their apartments without it. Airbnb argues their goal is to have “one host, one home,” which is to say they don’t want a lot of investors. After all, will most investors want to sit around the kitchen table and talk about the history of the city or cool tidbits about neighborhoods? Probably not. Airbnb was started to offer networking opportunities and a cool place to stay that isn’t quite so… sterile. Almost the opposite of Paris Hilton’s life, at least according to TMZ and MTV shows. San Francisco and a number of other cities have passed ordinances as well, requiring permits to rent homes through Airbnb and capping the number of days a home can be rented through the service, often at about two thirds of a year. But remember, Airbnb is just the most visible, not the only game in town. Most category leaders have pre-existing competition, like VRBO and HomeAway. And given the valuation and insane growth of Airbnb, it’s also got a slew of specialized competitors. 
This isn’t to say that they don’t contribute to the problems with skyrocketing housing costs. They certainly do. As is often the case with true disruptors, Pandora’s box is open and can’t be closed again. Regulation will help limit the negative impacts of the disruption, but local governments will alienate a generation that grew up with that disruption if they are overly punitive. And most of the limits in place are easily subverted anyway. For example, if there’s a limit on the number of nights you can rent, just do half on VRBO and the other half on Airbnb. But no matter the problems, Airbnb continues to grow. They react well. Chesky, the CEO, has a deep pipeline of advisors he can call on in times of crisis. Whether corporate finance, issues with corporate infighting, crisis management, or whatever the world throws at them, the founders and the team they’ve surrounded themselves with have proven capable of doing almost anything. Today, Airbnb handles over half a million transactions per night. They are strongest with millennials, but get better and better at expanding out of their core market. One adjacency would be corporate bookings through a partnership with Concur and others, something we saw with Uber as well. Another adjacency will surely follow. They now make more money than Hilton and Hilton’s subsidiaries. Having said that, the major hotel chains are all doing better financially today than ever before and continue to thrive maybe despite, or maybe because of, Airbnb. That might be misleading though: revenue per room is actually decreasing in correlation with the rise of Airbnb. And of course that’s amplified at the bottom tier of hotels. Just think of what would have happened had they not noticed that rooms were selling out for a conference in 2007. Would what we now call the “sharing” economy be as much of a thing? Probably. Would someone else have seized the opportunity? Probably. But maybe not. 
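That “revenue per room” measure is the hotel industry’s standard RevPAR metric (revenue per available room), which can be computed two equivalent ways. A minimal sketch with made-up numbers for a hypothetical hotel:

```python
def revpar(room_revenue: float, rooms_available: int) -> float:
    """RevPAR = total room revenue divided by rooms available to sell."""
    return room_revenue / rooms_available

def revpar_from_adr(adr: float, occupancy: float) -> float:
    """Equivalently, RevPAR = average daily rate (ADR) times occupancy."""
    return adr * occupancy

# A hypothetical 100-room hotel sells 50 rooms at an average of $150/night:
print(revpar(50 * 150, 100))      # 75.0
print(revpar_from_adr(150, 0.5))  # 75.0
```

Falling RevPAR alongside healthy occupancy is exactly the squeeze described here: rooms still sell, but at lower rates, and that bites hardest at the bottom tier of hotels.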
And hopefully the future will net a more understanding and better connected society once we’ve all gotten such intimate perspectives on different neighborhoods and the amazing little ecosystems that humanity has constructed all over the world. That is the true disruption: in an age of global sterility, offering the most human of connections. As someone who loves staying in quirky homes on Airbnb, a very special thanks to Chesky, Gebbia, Blecharczyk, and the many, many amazing people at Airbnb. Thank you for reacting the way you do to problems when they arise. Thank you for caring. Thank you for further democratizing and innovating hospitality and experiences. And most importantly, thank you for that cabin by the lake a few months ago. That was awesome! And thanks to the listeners who tuned in to this episode of the History of Computing Podcast. Have a great day!
3/2/2020 • 20 minutes, 43 seconds
The Evolution (and De-Evolution) of the Mac Server
Today’s episode is on one of the topics I’m probably the most intimate with of any we’ll cover: the evolution of the Apple servers and then the rapid pivot towards a much more mobility-focused offering. Early Macs in 1984 shipped with AppleTalk. These could act as a server or a workstation. But after a few years, engineers realized that Apple needed a dedicated server platform, and Apple has had a server product since 1987 that lives on, in some form, to today. At Ease had some file and print sharing options, but the old AppleShare (later called AppleShare IP) server was primarily what provided network resources to the Mac from 1986 to 2000, with file sharing being the main service offered. There were basically two options: At Ease, which ran on the early Mac operating systems, and A/UX, or Apple Unix. A/UX brought paged memory management and could run on the Macintosh II through the Centris Macs. It shipped from 1988 to 1995 and had been based on System V. It was a solidly performing TCP/IP machine and introduced many Mac users to the world of POSIX. Apple Unix could emulate Mac apps, and once you were under the hood, you could do pretty much anything you might do in another Unix environment. Apple also took a stab at early server hardware: first a server edition of the Quadra 950, and then the Apple Network Server, announced in 1995 when Apple Unix went away. That was a PowerPC server sold from 1996 to 1997, although the name was used all the way until 2003. While these machines were much more powerful and came with modern hardware, they didn’t run the Mac OS but another Unix variant, AIX, which had begun life at about the same time as Apple Unix and was another System V derivative that had seen much more development. Given the financial issues at Apple and the Taligent relationship between Apple and IBM to build a successor to Mac OS and OS/2, it made sense to work together on the project. 
Meanwhile, At Ease continued to evolve, and Apple eventually shipped a new offering in the form of AppleShare IP, which worked up until Mac OS 9.2.2. In an era before you needed to, as an example, require SMTP authentication, AppleShare IP was easily used for everything from file sharing services to mail services. An older Quadra made for a great mail server, so your company could stop paying an ISP for some weird email address like that AOL address you got in college and get your own domain in 1999! And if you needed more, you could easily slap some third-party software on the hosts. If you actually wanted SMTP authentication, so your server didn’t get used to route this weird thing called spam, you could install CommuniGate or later CommuniGate Pro. Keep in mind that many of the engineers from NeXT had remained friends with engineers from Apple after Steve Jobs left; some still actually work at Apple. Serving network services was a central need for NEXTSTEP and OPENSTEP systems. The UNIX underpinnings made it possible to compile a number of open source software packages, and the first web server was hosted by Tim Berners-Lee on a NeXTcube. During the transition over to Apple, AppleShare IP and services from NeXT were made to look and feel similar, and they turned into Rhapsody around 1997 and then Mac OS X Server around 1999. The first few releases of Mac OS X Server represented a learning curve for many classic Apple admins, and in fact caused a generational shift in who administered the systems. John Welch wrote books in 2000 and 2002 that helped administrators get up to speed. The Xserve was released in 2002 and the Xserve RAID was released in 2003. It took time, but a community began to form around these products. The Xserve would go from a G4 to a G5. The late Michael Bartosh compiled a seminal work, “Essential Mac OS X Panther Server Administration,” for O’Reilly Media in 2005. I released my first book, The Mac Tiger Server Black Book, in 2006. 
The server was enjoying a huge upswing in use. Schoun Regan and Kevin White wrote a Visual QuickStart for Panther Server. Schoun wrote one for Tiger Server. The platform was growing. People were interested. Small businesses, schools, universities, art departments in bigger companies. The Xserve would go from a G5 to an Intel processor, and we got cluster nodes to offload processing power from more expensive servers. Up until this point, Apple had never publicly acknowledged that businesses or enterprises used their devices, so the rise of Xserve advertising was the first time we saw that acknowledgement. Apple continued to improve the product with new services up until 2009 with Mac OS X Server 10.6. At this point, Apple included in the product most services necessary for running a standard IT department for a small or medium-sized business, including web (in the form of Apache), mail, groupware, DHCP, DNS, directory services, file sharing, and even wiki services. There were also edge-case services such as Podcast Producer, for automating video and content workflows, and Xsan, a clustered file system, and in 2009 Apple even purchased a company whose Artbox product was rebranded as Final Cut Server. Apple now had multiple awesome, stable products. Dozens of books and websites were helping build a community and growing knowledge of the platform. But that was a turning point. Around that same time Apple had been working towards the iPad, released in 2010 (although arguably the Knowledge Navigator was the first iteration, conceptualized in 1987). The skyrocketing sales of the iPhone led to some tough decisions. Apple no longer needed to control the whole ecosystem with their server product and instead began transitioning as many teams as possible to work on higher-profit-margin areas, reducing focus on areas that took attention away from valuable software developers who were trying to solve problems many other vendors had already solved better. 
In 2009 the Xserve RAID was discontinued, and the Xserve went away the following year. By then, the Xserve RAID was lagging, and for the use cases it served there were other vendors whose sole focus was storage - vendors Apple actively helped point customers towards, namely the Promise arrays for Xsan. A few things were happening around the same time. Apple could have bought Sun for less than 10% of their cash reserves in 2010 but allowed Oracle to buy the tech giant. Instead, Apple released the iPad. Solid move. They also released the Mac mini server, which, while it lacked rack-and-stack options like an IPMI interface to remotely reboot the server and dual power supplies, was actually more powerful. The next few years saw services slowly peeled off the server. Today, the Mac OS X Server product has been migrated to just an app on the App Store. macOS Server is now meant to run Profile Manager and act as a metadata controller for Xsan, Apple’s clustered file system. Products that used to compete with the platform are now embraced by most in the community. For the most part, this is because Apple let Microsoft and Linux-based systems own the market for providing features that are often unique to each enterprise and not about delighting end users. Today, building server products that try to do everything for everyone seems like a distant memory for many at Apple. But there is still a keen eye towards making the lives of the humans that use Apple devices better, as has been the case since Steve Jobs mainstreamed the GUI and Apple made the great user experience advocate Larry Tesler their Chief Scientist. How services make a better experience for end users can be seen in the Caching service built into macOS (moved there from macOS Server) and in products, such as Apple Remote Desktop, that are still very much alive and kicking. 
But the focus on profile management, and the desire to open up everything Profile Manager can do to third-party developers who serve often-niche markets or look more to scalability, is certainly front and center. I think this story of the Apple server offering is really much more about Apple branching into areas they needed to be in at various points in time. Then having a constant focus on iterating to a better, newer offering. Growing with the market. Helping the market get to where it needed to be. Serving the market, and then, when the needs of the market could be better served elsewhere, pulling back so other vendors could serve it. Not looking to grow a billion-dollar business unit in servers - but instead looking to provide them just until they didn’t need to. In many ways Apple paved the way for billion-dollar businesses to host services. And the SaaS ecosystem is as vibrant for the Apple platform as ever. My perspective on this has changed a lot over the years. As someone who wrote a lot of books about the topic, I might have been harsh at times. But that’s one great reason not to be judgmental. You don’t always know the full picture, and it’s super easy to miss big strategies like that when you’re in the middle of it. So thank you to Apple for putting user experience into servers, as with everything you do. And thank you, listeners, for tuning into this episode of the History of Computing Podcast. We’re certainly lucky to have you and hope you join us next time!
2/28/2020 • 13 minutes, 27 seconds
Saying Farewell to Larry Tesler
Today we’re going to honor Larry Tesler, who died on February 17th, 2020. Larry Tesler is probably best known for early pioneering work on graphical user interfaces. He was the person who coined cut, copy, and paste as a term. Every time you say “just paste that in there,” you’re honoring his memory. I’ve struggled with how to write the episode or episodes about Xerox PARC. It was an amazing crucible of technical innovation, but it didn’t materialize huge commercial success for Xerox. Tesler was one of the dozens of people who contributed to that innovation. He studied with John McCarthy and other great pioneers at the Stanford Artificial Intelligence Laboratory in the 60s. What they called artificial intelligence back then we might call computer science today. Being in the Bay Area in the 60s, Tesler got active in war demonstrations and disappeared off to a commune in Oregon until he got offered a job by Alan Kay. You might remember Kay from earlier episodes as the one behind Smalltalk and the Dynabook. They’d both been at The Mother of All Demos, where Doug Engelbart showed the mouse, the first hyperlinks, and the graphical user interface, and they’d been similarly inspired about the future of computing. So Tesler moved back down in 1970. I can almost hear Three Dog Night’s Mama Told Me Not To Come booming out of the 8-track of his car stereo on the drive. Or hear Nixon and Kissinger on the radio talking about why they invaded Cambodia. So he gets to PARC, and there’s a hiring freeze at Xerox, which after monster growth was starting to get crushed by bureaucracy. Les Earnest from back at Stanford had him write one of the first markup language implementations, which he called Pub. That became the inspiration for Don Knuth’s TeX and Brian Reid’s Scribe, and an ancestor of JavaScript and PHP. PARC found a way to pay him, basically bringing him on as a contractor, and he got to work on Gypsy, the first real word processor. 
At the time, they’d figured out a way of using keystrokes to switch modes for documents. Think of how in vi or pico you switch to a mode in order to insert or move; here, though, they were applying metadata to an object, like making text bold or copying text from one part of a document to another. Those modes were terribly cumbersome, and due to very simple mistakes, people would delete their documents. So he and Tim Mott started looking at ways to get rid of modes. That’s when they came up with the idea of a copy and paste function, and the terms cut, copy, and paste. These are now available in all “what you see is what you get,” or WYSIWYG, interfaces. Oh, he also coined that term while at PARC, although maybe not the acronym. And he became one of the biggest proponents of making software “user-friendly” while at PARC. By the way, that’s another term he coined, with relation to computing at least. He also seems to be the first to have used the term browser, after building a browser for a friend to more easily write code. He’d go on to work on the Xerox Alto and NoteTaker. That team, led by Adele Goldberg after Bob Taylor and then Alan Kay left PARC, got a weird call to show these kids from Apple around. The scientists from PARC didn’t think much of these hobbyists, but in 1979, despite Goldberg’s objections, Xerox management let the fox into the chicken coop when they let Steve Jobs and some other early Apple employees get a tour of PARC. Tesler was one of the people giving Jobs a demo. And it’s no surprise that after watching Xerox not ship the Alto, Tesler would end up at Apple 6 months later. After Xerox bonuses were distributed, of course. At Apple, he’d help finish the Lisa. It cost far less than the Xerox Star, but it wouldn’t be until the work went even further down-market to become the Macintosh that all of their hard work at Xerox and then Apple would find real success. 
Kay would become a fellow at Apple in 1984, as many of the early great pioneers left PARC. Tesler was the one who added object-oriented programming to Pascal, used to create the Lisa Toolkit, and then he helped bring those ideas into MacApp as class libraries for developing the Mac GUI. By 1990, Jobs had been out of Apple for 5 years and Tesler became the Vice President of the Newton project at Apple. He’d see Alan Kay’s concept of the digital assistant made into a reality. He would move into the role of Chief Scientist at Apple once the project was complete. There, he made his own mini-PARC, but would shut down the group and leave after Apple entered its darkest age in 1997. Tesler had been a strong networking proponent, acting as the VP of AppleNet and pushing more advanced networking options prior to his departure. He would strike out and build Stagecast, a visual programming language that began life as an object-oriented teaching language called Cocoa. Apple would reuse the name Cocoa when they ported in OpenStep, so it’s not the Cocoa many developers will remember or maybe even still use. Stagecast would run until Larry decided to join the executive team at Amazon. At Amazon, Larry was the VP of Shopping Experience and would start a group on usability, doing market research, usability research, and lots of data mining. He would stay there for 4 years before moving on to Yahoo!, spreading the gospel about user experience and design, managing up to 200 people at a time and embedding designers and researchers into product teams, a practice that’s become pretty common in UX. He would also be a fellow at Yahoo! before taking a role at 23andMe and ending his long and distinguished career as a consultant, helping make the world a better place. He conceptualized the Law of Conservation of Complexity, or Tesler’s Law, in 1984, which states that “Every application has an inherent amount of irreducible complexity. 
The only question is: Who will have to deal with it - the user, the application developer, or the platform developer?” But one of my favorite quotes of his is: “I have been mistakenly identified as ‘the father of the graphical user interface for the Macintosh’. I was not. However, a paternity test might expose me as one of its many grandparents.” The first time I got to speak with him, he was quick to point out that he didn’t come up with much; he was simply carrying on the work started by Engelbart. He was kind and patient with me. When Larry passed, we lost one of the founders of the computing world as we know it today. He lived and breathed user experience and making computers more accessible. That laser focus on augmenting human capabilities by making the inventions easier to use and more functional is probably what he’d want to be known for above all else. He was a good programmer, but almost too empathetic not to end up focused on the experience of the devices. I’ll include a link in the show notes to an episode of 99% Invisible he appeared on, if you want to hear more from him directly ( https://99percentinvisible.org/episode/of-mice-and-men ). Everyone except the people who get royalties from White Out loved what he did for computing. He was a visionary and one of the people who ended up putting the counterculture into computing culture. He was a pioneer in user experience and a great human. Thank you, Larry, for all you did for us. And thank you, listeners, in advance or in retrospect, for your contributions.
2/24/2020 • 11 minutes, 10 seconds
OS/2
Today we’re going to look at an operating system from the 80s and 90s called OS/2. OS/2 was a bright shining light for a bit. IBM had a task force that wanted to build a personal computer. They’d been watching the hobbyists for some time and felt they could take off-the-shelf parts and build a PC. So they did. But they needed an operating system. They reached out to Microsoft in 1980, who’d been successful with the Altair and so seemed a safe choice. By then, IBM had the IBM Entry Systems Division, based out of their Boca Raton, Florida offices. The open architecture allowed them to ship fast. And it afforded them the chance to ship a computer with, check this out, options for an operating system. Wild idea, right? The options initially provided were CP/M and PC DOS, which was MS-DOS ported to the IBM open architecture. CP/M sold for $240 and PC DOS sold for $40. PC DOS had come from Microsoft’s acquisition of 86-DOS from Seattle Computer Products. The PC shipped in 1981, lightning fast for an IBM product. At the time Apple, Atari, and Commodore were in control of the personal computer market. IBM had dominated the mainframe market for decades, and once the personal computer market reached $100 million in sales, it was time to go get some of that. And so the IBM PC would come to be an astounding success and make it not uncommon to see PCs on people’s desks at work or even at home. And being that most people didn’t know the difference, PC DOS would ship on most. By 1985 it was clear that Microsoft had entered and subsequently dominated the PC market. And it was clear that, due to the open architecture, other vendors were starting to compete. So after 5 years of working together on PC DOS and 3 versions later, Microsoft and IBM signed a Joint Development Agreement and got to work on the next operating system. One they thought would change everything and set IBM PCs up to dominate the market for decades to come. 
Over that time, they’d noticed some gaps in DOS. One of the most substantial was that projects and files became unwieldy once they got too big. They wanted an object-oriented operating system. Another was protected mode. The 286 chips from Intel had offered protected mode dating back to 1982, and IBM engineers felt they needed to harness it in order to get multitasking safely and to use virtual memory to provide better support for all these crazy new windowing things they’d learned about with their GUI overlay to DOS called TopView. So after the Joint Development Agreement was signed, IBM let Ed Iacobucci lead the charge on their side, and Microsoft had learned a lot from their attempts at a windowing operating system. The two organizations borrowed ideas from all the literature and Unix and, of course, the Mac. And they really built a much better operating system than anything available at the time. Microsoft had been releasing Windows the whole time. Windows 1 came in 1985 and Windows 2 came in 1987, the same year OS/2 1.0 was released. In fact, one of the most dominant PC models to ever ship, the PS/2 computer, would ship that year as well. The initial release didn’t have a GUI. That wouldn’t come until version 1.1 nearly a year later, in 1988. SNA shipped to interface with IBM mainframes in that release as well. And TCP/IP and Ethernet would come in version 1.2 in 1989. During this time, Microsoft steadily introduced new options in Windows and claimed, both publicly and privately in meetings with IBM, that OS/2 was the OS of the future and Windows would some day go away. They would release an extended edition that included a built-in database. Thanks to protected mode, developers didn’t have to call the BIOS anymore and could just use provided APIs. You could switch the foreground application using Control-Escape. In Windows that would become Alt-Tab. 
1.2 brought the HPFS file system, bringing longer file names, a journaled file system to protect against data loss during crashes, and extended attributes, similar to how those worked on the Mac. But many of the features would ship in a version of Windows released just a few months before. Like that GUI: Windows 2.1 shipped just a few months before OS/2 1.1 brought Presentation Manager. Microsoft had an independent sales team, and every manufacturer that bundled Windows meant there were more drivers for Windows, so a wider variety of hardware could be used. Microsoft realized that DOS was old and that building on top of DOS was going to some day be a big, big problem. They started something similar to what we’d call a fork today of OS/2. And in 1988 they lured Dave Cutler from Digital, who had been the architect of the VMS operating system. That moment began the march towards a new operating system called NT, which borrowed much of the best from VMS, Microsoft Windows, and OS/2 - and had little baggage. Microsoft was supposed to make version 3 of OS/2, but NT OS/2 3.0 would become just Windows NT when Microsoft stopped developing on OS/2. It took 12 years, because um, they had a loooooot of customers after the wild success of first Windows 3 and then Windows 95, but eventually Cutler’s NT would replace all other operating systems in the family with the release of Windows XP. But by 1990, when Microsoft released Windows 3, they sold millions of copies. Due to great OEM agreements they were on a lot of computers that people bought. The Joint Development Agreement would finally end. IBM had had enough of what they assumed meant getting snowed by Microsoft. It took a couple of years for Microsoft to recover. In 1992, the war was on. Microsoft released Windows 3.1, and it was clear that they were moving ideas and people between the OS/2 and Windows teams. I mean, the operating systems actually looked a lot alike. 
TCP/IP finally shipped in Windows in 1992, 3 years after the companies had co-developed the feature for OS/2. But both would go 32-bit in 1992. OS/2 version 2.0 would also ship, bringing a lot of features. And both took off the blinders, thinking about what the future would hold. Microsoft put Windows 95 and NT on parallel development tracks, and IBM launched multiple projects to find a replacement operating system. They tried an internal project, Workstation OS, which fizzled. Then IBM did the unthinkable for Workplace OS. They entered into an alliance with Apple, taking on a number of Apple developers who formed what would be known as the Pink team. The Pinks moved into separate quarters and formed a new company called Taligent, with Apple and IBM backing. Taligent planned to bring a new operating system to market in the mid-1990s. They would laser-focus on PowerPC chips, thus abandoning what was fast becoming the Wintel world. They did show Workplace OS at Comdex one year, but by then Bill Gates was all too happy to swing by the booth, knowing he’d won the battle. But they never shipped. By the mid-90s, Taligent would be rolled into IBM and focus on Java projects. Raw research that came out of the project is pretty pervasive today, though. That was an example of a forward-looking project - and OS/2 continued to be developed, with OS/2 Warp (or 3) getting released in 1994. It included IBM Works, which came with a word processor that wasn’t Microsoft Word, a spreadsheet that wasn’t Microsoft Excel, and a database that wasn’t Microsoft Access. Works wouldn’t last past 1996. After all, Microsoft had Charles Simonyi by then. He’d invented the GUI word processor at Xerox PARC and was light years ahead of the Warp options. And the Office suite in general was gaining adoption fast. Warp was faster than previous releases, had way more options, and even had browser support for early Internet adopters. 
But by then Windows 95 had taken the market by storm and OS/2 would see a rapidly declining customer base. After spending nearly a billion dollars a year on OS development, IBM would begin downsizing once the battle with Microsoft was lost. Over 1,300 people. And as the number of people dropped, defects in the code grew and adoption dropped even faster. OS/2 would end in 2001. By then it was clear that IBM had lost the exploding PC market and that Windows was the dominant operating system in use. IBM’s control of the PC had slowly eroded, and while they eked out a little more profit from the PC, they would ultimately sell the division that built and marketed computers to Lenovo in 2005. Lenovo would then enjoy the number one spot in the market for a long time. The blue ocean had resulted in lower margins though, and IBM had taken a different, more services-oriented direction. OS/2 would live on. IBM discontinued support in 2006. It should probably have gone fully open source in 2005. It had already been renamed and rebranded as eComStation, first by an IBM Business Partner called Serenity. It would go open source(ish), and OpenOffice.org would be included in version two in 2010. Betas of 2.2 have been floating around since 2013, but as with many other open source compilations of projects, it seems to have mostly fizzled out. Ed Iacobucci would go on to found or co-found other companies, including Citrix, which flourishes to this day. So what really happened here? It would be easy, but an over-simplification, to say that Microsoft just kinda’ took the operating system. IBM had a vision of an operating system that, similar to the Mac OS, would work with a given set of hardware. Microsoft, being an independent software developer with no hardware, would obviously have a different vision, wanting an operating system that could work with any hardware - you know, the original open architecture that allowed early IBM PCs to flourish. 
IBM had a big business, suit-and-tie corporate culture. Microsoft did not. IBM employed a lot of computer scientists. Microsoft employed a lot of hackers. IBM had a large bureaucracy; Microsoft could build an operating system like NT mostly by hiring a single brilliant person and rapidly building an elite team around them. IBM was a matrixed organization. I’ve been told you aren’t an enterprise unless you’re fully matrixed. Microsoft didn’t care about all that. They just wanted the market share. When Microsoft abandoned OS/2, IBM could have taken the entire PC market from them. But I think Microsoft knew that the IBM bureaucracy couldn’t react quickly enough at an extremely pivotal time. Things were moving so fast. And some of the first real buying tornados just had to be reacted to at lightning speeds. These days we have literature, and those going through such things can bring in advisors or board members to help them. Like the roles Marc Andreessen plays with Airbnb and others. But this was uncharted territory, and due to some good, shrewd, and maybe sometimes downright bastardly decisions, Microsoft ended up leap-frogging everyone by moving fast, sometimes incurring technical debt that would take years to pay down, and grabbing the market at just the right time. I’ve heard this story oversimplified in one word: subterfuge. But that’s not entirely fair. When he was hired in 1993, Louis Gerstner pivoted IBM from a hardware and software giant into a leaner services organization. One that still thrives today. A lot of PC companies came and went. And the PC business infused IBM with the capital to allow the company to shoot from $29 billion in revenues to $168 billion just 9 years later. From the top down, IBM was ready to leave red oceans and focus on markets with fewer competitors. Microsoft was hiring the talent. Picking up many of the top engineers from the advent of interactive computing. 
And they learned from the failures of the Xeroxes and Digital Equipments and IBMs of the world and decided to do things a little differently. When I think of a few Microsoft engineers that just wanted to build a better DOS sitting in front of a 60 page refinement of how a feature should look, I think maybe I’d have a hard time trying to play that game as well. I’m all for relentless prioritization. And user testing features and being deliberate about what you build. But when you see a limited window, I’m OK acting as well. That’s the real lesson here. When the day needs seizing, good leaders will find a way to blow up the establishment and release the team to go out and build something special. And so yah, Microsoft took the operating system market once dominated by CP/M and with IBM’s help, established themselves as the dominant player. And then took it from IBM. But maybe they did what they had to do… Just like IBM did what they had to do, which was move on to more fertile hunting grounds for their best in the world sales teams. So tomorrow, think of bureaucracies you’ve created or had created to constrain you. And think of where they are making the world better vs where they are just giving some controlling jackrabbit a feeling of power. And then go change the world. Because that is what you were put on this planet to do. Thank you so much for listening in to this episode of the history of computing podcast. We are so lucky to have you.
2/21/2020 • 18 minutes, 38 seconds
The Mouse
In a world of rapidly changing technologies, few have lasted as long in as unaltered a fashion as the mouse. The party line is that the computer mouse was invented by Douglas Engelbart in 1964 and that it was a one-button wooden device that had two metal wheels. Those used an analog to digital conversion to input a location to a computer. But there’s a lot more to tell. Engelbart had read an article in 1945 called “As We May Think” by Vannevar Bush. He was in the Philippines working as a radio and radar tech. He’d return home, get his degree in electrical engineering, then go to Berkeley and get first his master’s and then a PhD, still in electrical engineering. At the time there were a lot of military grants in computing floating around, and a Navy grant saw him work on a computer called CALDIC, short for the California Digital Computer. By the time he completed his PhD he was ready to start a computer storage company but ended up at the Stanford Research Institute in 1957. He published a paper in 1962 called Augmenting Human Intellect: A Conceptual Framework. That paper would guide the next decade of his life and help shape nearly everything in computing that came after. Keeping with the theme of “As We May Think,” Engelbart was all about supplementing what humans could do. The world of computer science had been interested in selecting things on a computer graphically for some time. And Engelbart would have a number of devices that he wanted to test in order to find the best possible device for humans to augment their capabilities using a computer. He knew he wanted a graphical system and wanted to be deliberate about every aspect in a very academic fashion. And a key aspect was how people that used the system would interact with it. The keyboard was already a mainstay, but he wanted people pointing at things on a screen. While Engelbart would invent the mouse, pointing devices certainly weren’t new. 
Pilots had been using the joystick for some time, but an electrical joystick had been developed at the US Naval Research Laboratory in 1926, with the concept of unmanned aircraft in mind. The Germans would end up building one in 1944 as well. But it was Alan Kotok who brought the joystick to computer games in the early 1960s to play Spacewar! on minicomputers. And Ralph Baer brought it into homes in 1967 with an early video game system that would become the Magnavox Odyssey. Another input device that had come along was the trackball. Ralph Benjamin of the British Royal Navy’s Scientific Service invented the trackball, or ball tracker, for radar plotting on the Comprehensive Display System, or CDS. The computers were analog at the time but they could still use the X-Y coordinates from the trackball, which they patented in 1947. Tom Cranston, Fred Longstaff and Kenyon Taylor had seen the CDS trackball and used that as the primary input for DATAR, a radar-driven battlefield visualization computer. The trackball stayed in radar systems into the 60s, when Orbit Instrument Corporation made the X-Y Ball Tracker and then Telefunken turned it upside down to control the TR 440, making an early mouse type of device. The last of the options Engelbart decided against was the light pen. Light guns had shown up in the 1930s when engineers realized that a vacuum tube was light-sensitive. You could shoot a beam of light at a tube and it could react. Robert Everett worked with Jay Forrester to develop the light pen, which would allow people to interact with a CRT using light sensing to cause an interrupt on a computer. This would move to the SAGE computer system from there and make its way into the IBM mainframes in the 60s. While the technology used to track the coordinates is not even remotely similar, think of this as conceptually similar to the styluses used with tablets and on Wacom tablets today. 
Paul Morris Fitts had built a model in 1954, now known as Fitts’s Law, to predict the time required to move to a target on a screen. He modeled movement time as a function of the ratio between the distance to the target and the width of the target. If you listen to enough episodes of this podcast, you’ll hear a few names repeatedly. One of those is Claude Shannon. He brought a lot of the math to computing in the 40s and 50s and helped with the Shannon-Hartley Theorem, which defined information transmission rates over a given medium. So these were the main options at Engelbart’s disposal to test when he started ARC. But in looking at them, he had another idea. He’d sketched out the mouse in 1961 while sitting in a conference session about computer graphics. Once he had funding he brought in Bill English to build a prototype in 1963. The first model used two perpendicular wheels attached to potentiometers that tracked movement. It had one button to select things on a screen. It tracked x,y coordinates as had previous devices. NASA funded a study to really dig in and decide which was the best device. He, Bill English, and an extremely talented team spent two years researching the question, publishing a report in 1965. They really had the blinders off, too. They looked at the DEC Grafacon, joysticks, light pens and even what amounts to a mouse that was knee operated. Two years of what we’d call UX research or User Research today. Few organizations would dedicate that much time to study something. But the result would be patenting the mouse in 1967, an innovation that would last for over 50 years. I’ve heard Engelbart criticized for taking so long to build the oNLine System, or NLS, which he showcased at the Mother of All Demos. But it’s worth thinking of his research as academic in nature. It was government funded. And it changed the world. His paper on Computer-Aided Display Controls was seminal. 
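That model of Fitts’s can be sketched in a few lines. This is an illustrative sketch using the classic log2(2D/W) formulation; the constants a and b are placeholders that would normally be fitted empirically to a particular pointing device, not values from Fitts’s paper:

```python
import math

def fitts_time(a: float, b: float, distance: float, width: float) -> float:
    """Predicted movement time under Fitts's law.

    a and b are device-specific constants found by regression; the log term
    is the "index of difficulty" in bits, driven by the ratio of the
    distance to the target and the width of the target.
    """
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# With a=0 and b=1, a target 4 units away and 1 unit wide has
# an index of difficulty of log2(8) = 3 bits.
print(fitts_time(0.0, 1.0, 4.0, 1.0))  # → 3.0
```

The intuition the mouse studies leaned on: doubling the distance to a target, or halving its width, adds a constant amount of predicted movement time.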
Vietnam caused a lot of those government funded contracts to dry up. From there, Bill English and a number of others from the Stanford Research Institute, which ARC was a part of, moved to Xerox PARC. English and Jack Hawley iterated and improved the technology of the mouse, ditching the analog to digital converters, and over the next few years we’d see some of the most substantial advancements in computing. By 1981, Xerox had shipped the Alto and the Star. But while Xerox would be profitable with their basic research, they would miss something that a sandal-clad hippy wouldn’t. In 1979, Xerox let Steve Jobs make three trips to PARC in exchange for the opportunity to buy 100,000 shares of Apple stock pre-IPO. The mouse by then had evolved to a three button mouse that cost $300. It didn’t roll well and had to be used on pretty specific surfaces. Jobs would call Dean Hovey, a co-founder of IDEO, and demand they design one that would work on anything including quote “blue jeans.” Oh, and he wanted it to cost $15. And he wanted it to have just one button, which would be an Apple hallmark for the next 30ish years. Hovey-Kelley would move to optical encoder wheels, freeing the tracking ball to move however it needed to, and then use injection molded frames. And thus make the mouse affordable. It’s amazing what can happen when you combine all that user research and academic rigor from Engelbart’s team and engineering advancements documented at Xerox PARC with world-class industrial design. You see this trend played out over and over with the innovations in computing that are built to last. The mouse would ship with the Lisa and then with the 1984 Mac. Logitech had shipped a mouse in 1982 for $300. After leaving Xerox, Jack Hawley founded a company to sell a mouse for $400 the same year. Microsoft released a mouse for $200 in 1983. But Apple changed the world when Steve Jobs demanded the mouse ship with all Macs. 
The IBM PC would use a mouse, and from there it would become ubiquitous in personal computing. Desktops would ship with a mouse. Laptops would have a funny little button that could be used as a mouse when the actual mouse was unavailable. The mouse would ship with extra buttons that could be mapped to additional workflows or macros. And even servers were then outfitted with switches that allowed using a device that switched the keyboard, video, and mouse between them during the rise of large server farms to run the upcoming dot com revolution. Trays would be put into most racks, with a single U, or unit of the rack, being used to see what you’re working on; especially after Windows or windowing servers started to ship. As various technologies matured, other innovations came along to input devices. The mouse would go optical in 1980 and ship with early Xerox Star computers, but what we think of as an optical mouse wouldn’t really ship until 1999 when Microsoft released the IntelliMouse. Some of that tech came to them via Hewlett-Packard’s acquisition of Compaq, which had bought DEC, and some of those same engineers had been brought in from the original mainstreamer of the mouse, PARC, when Bob Taylor started DEC’s Systems Research Center. The LED sensor on the mouse stuck around. And thus ended the era of the mouse pad, once a hallmark of many a marketing give-away. Finger tracking devices came along in 1969 but were far too expensive to produce at the time. As capacitive sensitive pads, or trackpads, came down in price and the technology matured, those began to replace the previous mouse-types of devices. The 1982 Apollo computers were the first to ship with a touchpad, but it wasn’t until Synaptics launched the TouchPad in 1992 that they began to become common, showing up in 1995 on Apple laptops and then becoming ubiquitous over the coming years. 
In fact, the IBM Thinkpad and many others shipped laptops with little red nubs in the keyboard for people that didn’t want to use the TouchPad for a while as well. Some advancements in the mouse didn’t work out. Apple released the hockey-puck-shaped mouse in 1998, when they released the iMac. It was USB, which replaced the ADB interface. USB lasted. The shape of the mouse didn’t. Apple would go to the monolithic surface mouse in 2000, go wireless in 2003, and then release the Mighty Mouse in 2005. The Mighty Mouse would have a capacitive touch sensor and, since people wanted to hear a click, would produce one with a little speaker. This also signified the beginning of Bluetooth as a means of connecting a mouse. Laptops began to replace desktops for many, and so the mouse itself isn’t as dominant today. And with mobile and tablet computing, capacitive touchscreens rose to replace many uses for the mouse. But even today, when I edit these podcasts, I often switch over to a mouse simply because other means of dragging around timelines simply aren’t as graceful. And using a pen, as Engelbart’s research from the 60s indicated, simply gets fatiguing. Whether it’s always obvious, we have an underlying story we’re often trying to tell with each of these episodes. We obviously love unbridled innovation and a relentless drive towards a technologically utopian multiverse. But taking a step back during that process and researching what people want means less work and faster adoption. Doug Engelbart was a lot of things, but one net-new point we’d like to make is that he was possibly the most innovative in harnessing user research to make sure that his innovations would last for decades to come. Today, we’d love to research every button and heat map and track eyeballs. 
But remembering, as he did, that our job is to augment human intellect is best done when we make our advances useful, and it helps keep us, and the forks that branch off from our technology, from having to backtrack decades of work in order to take the next jump forward. We believe in the reach of your innovations. So next time you’re working on a project, save yourself time, save your code a little cyclomatic complexity, and save users frustration from having to relearn a whole new thing. And research what you’re going to do first. Because you never know. Something you engineer might end up being touched by nearly every human on the planet the way the mouse has. Thank you, Engelbart. And thank you to NASA and Bob Taylor at ARPA for funding such important research. And thank you to Xerox PARC, for carrying the torch. And to Steve Jobs for making the mouse accessible to every day humans. As with many an advance in computing, there are a lot of people that deserve a little bit of the credit. And thank you listeners, for joining us for another episode of the history of computing podcast. We’re so lucky to have you. Now stop consuming content and go change the world.
2/18/2020 • 18 minutes, 26 seconds
Happy Birthday ENIAC
Today we’re going to celebrate the birthday of the first real multi-purpose computer: the gargantuan ENIAC, which would have turned 74 years old today, on February 15th. Many generations ago in computing. The year is 1946. World War II raged from 1939 to 1945. We’d cracked Enigma with computers and scientists were thinking of more and more ways to use them. The press is now running articles about a “giant brain” built in Philadelphia. The Electronic Numerical Integrator and Computer was a mouthful, so they called it ENIAC. It was the first true electronic computer. Before that there were electromechanical monstrosities. Those had to physically move a part in order to process a mathematical formula. That took time. ENIAC used vacuum tubes instead. A lot of them. To put things in perspective: every hour of processing by the ENIAC was worth 2,400 hours of work calculating formulas by hand. And it’s not like you can do 2,400 hours in parallel between people or in a row, of course. So it made the previously almost impossible, possible. Sure, you could figure out the settings to land a shell where you wanted it to go in a minute rather than about a full day of running calculations. But math itself, for the purposes of math, was about to get really, really cool. The Bush Differential Analyzer, a later mechanical computer, had been built in the basement of the building that is now the ENIAC museum. The University of Pennsylvania ran a class on wartime electronics, based on their experience with the Differential Analyzer. John Mauchly and J. Presper Eckert met in 1941 while taking that class, a topic that had included lots of shiny new or newish things like radar and cryptanalysis. That class was mostly on ballistics, a core focus at the Moore School of Electrical Engineering at the University of Pennsylvania. More accurate ballistics would be a huge contribution to the war effort. 
But Eckert and Mauchly wanted to go further, building a multi-purpose computer that could analyze weather and calculate ballistics. Mauchly got all fired up and wrote a memo about building a general purpose computer. But the University shot it down. And so ENIAC began life as Project PX, with Herman Goldstine acting as the main sponsor after seeing their proposal and digging it back up. Mauchly would team up with Eckert to design the computer, and the effort was overseen and orchestrated by Major General Gladeon Barnes of the US Army Ordnance Corps. Thomas Sharpless was the master programmer. Arthur Burks built the multiplier. Robert Shaw designed the function tables. Harry Huskey designed the reader and the printer. Jeffrey Chu built the dividers. And Jack Davis built the accumulators. Ultimately it was just a really big calculator and not a computer that ran stored programs in the same way we do today. Although ENIAC did get an early version of stored programming that used a function table for read-only memory. The project was supposed to cost $61,700. The University of Pennsylvania Department of Computer and Information Science in Philadelphia actually spent half a million dollars worth of metal, tubes and wires. And of course the scientists weren’t free. That’s around six and a half million dollars worth of cash today. And of course it was paid for by the US Army. Specifically the Ballistic Research Laboratory. It was designed to calculate firing tables to make blowing things up a little more accurate. Herman Goldstine chose a team of programmers that included Betty Jennings, Betty Snyder, Kay McNulty, Fran Bilas, Marlyn Meltzer, and Ruth Lichterman. They were chosen from a pool of 200 and set about writing the necessary formulas for the machine to process the requirements provided from people using time on the machine. In fact, Kay McNulty invented the concept of subroutines while working on the project. 
They would flip switches and plug in cables as a means of programming the computer. And programming took weeks of figuring up complex calculations on paper. Then it took days of fiddling with cables, switches, tubes, and panels to input the program. Debugging was done step by step, similar to how we use break points today. They would feed ENIAC input using IBM punch cards and readers. The output was punch cards as well, and these punch cards acted as persistent storage. The machine then used standard octal-base radio tubes. 18,000 tubes, and they ran at a lower voltage than they could in order to minimize them blowing out and creating heat. Each decimal digit used in calculations took 36 of those vacuum tubes, and there were 20 accumulators that could run 5,000 operations per second. The accumulators used two of those tubes to form a flip-flop, and they got them from the Kentucky Electrical Lamp Company. Given the number that blew every day, they must have loved life once engineers got it down to blowing a tube only every couple of days. ENIAC was a modular computer and used different panels to perform different tasks, or functions. It used ring counters with 10 positions for a lot of operations, making it a decimal computer as opposed to the modern binary computational devices we have today. The pulses between the rings were used to count. Suddenly computers were big money. A lot of research had happened in a short amount of time. Some had been government funded and some had been part of corporations, and it became impossible to untangle the two. This was pretty common with technical advances during World War II and the early Cold War years. John Atanasoff and Cliff Berry had ushered in the era of the digital computer in 1939 but hadn’t finished. Mauchly had seen that in 1941. It was used to run a number of calculations for the Manhattan Project, allowing us to blow more things up than ever. That project took over a million punch cards and took precedence over artillery tables. 
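Those ten-position ring counters are easy to picture with a toy model. This is a modern sketch of the counting idea only, not ENIAC’s actual circuitry, and the class name is mine: each decade has exactly one active position, a pulse advances it, and wrapping past 9 carries into the next decade.

```python
class DecadeRingCounter:
    """Toy model of one ENIAC-style decade ring counter.

    Ten positions, exactly one active at a time. An incoming pulse
    advances the ring; wrapping past 9 produces a carry pulse for
    the next decade, which is how decimal digits chained together.
    """

    def __init__(self) -> None:
        self.position = 0  # which of the 10 stages is currently "on"

    def pulse(self) -> bool:
        """Advance one position; return True when a carry is produced."""
        self.position = (self.position + 1) % 10
        return self.position == 0

# Two chained decades count 0-99, one pulse at a time.
ones, tens = DecadeRingCounter(), DecadeRingCounter()
for _ in range(42):
    if ones.pulse():
        tens.pulse()
print(tens.position, ones.position)  # → 4 2
```

Chaining more decades gives more digits, which is the decimal counting scheme the accumulators were built around.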
John von Neumann worked with a number of mathematicians and physicists, including Stanislaw Ulam, who developed the Monte Carlo method. That led to a massive reduction in programming time. Suddenly programming became more about I/O than anything else. To promote the emerging computing industry, the Pentagon had the Moore School of Electrical Engineering at The University of Pennsylvania launch a series of lectures to further computing at large. These were called the Theory and Techniques for Design of Electronic Digital Computers, or just the Moore School Lectures for short. The lectures focused on the various types of circuits and the findings from Eckert and Mauchly on building and architecting computers. Goldstine would talk at length about math and other developers would give talks, looking forward to the development of the EDVAC and back at how they got where they were with ENIAC. As the University began to realize the potential business impact and monetization, they decided to bring a focus to University-owned patents. That drove the original designers out of the University of Pennsylvania, and they started the Eckert-Mauchly Computer Corporation in 1946. Eckert and Mauchly would then build the EDVAC, making use of the progress the industry had made since the ENIAC construction had begun. EDVAC would effectively represent the wholesale move away from decimal and into binary computing and, while it weighed tons, it would become the precursor to the microchip. After the ENIAC was finished, Mauchly filed for a patent in 1947. While a patent was granted, you could still count on your fingers the number of machines that were built at about the same time, including the Atanasoff Berry Computer, Colossus, the Harvard Mark I and the Z3. So luckily the patent was voided and digital computers are a part of the public domain. That patent was voided in 1973. By then, the Eckert-Mauchly Computer Corporation had been acquired by Remington Rand, which merged with Sperry and is now called Unisys. 
The next wave of computers would be mainframes built by GE, Honeywell, IBM, and a number of other vendors, and so the era of batch processing mainframes began. The EDVAC begat the UNIVAC, with Grace Hopper brought in to write an assembler for it. Computers would become the big mathematical number crunchers and slowly spread into being data processors from there. Following decades of batch processing mainframes we would get minicomputers and interactivity, then time sharing, and then the PC revolution. Distinct eras in computing. Today, computers do far more than just the types of math the ENIAC did. In fact, the functionality of ENIAC was duplicated onto a 20 megahertz microchip in 1996. You know, ‘cause the University of Pennsylvania wanted to do something to celebrate the 50th birthday. And a birthday party seemed underwhelming at the time. And so the date of release for this episode is February 15th, now ENIAC Day in Philadelphia, dedicated as a way to thank the university, creators, and programmers. And we should all reiterate their thanks. They helped put computers front and center into the thoughts of the next generation of physicists, mathematicians, and engineers, who built the mainframe era. And I should thank you - for listening to this episode. I’m pretty lucky to have ya’. Have a great day!
2/15/2020 • 12 minutes, 58 seconds
VMS
VMS and OpenVMS. Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to talk through the history of VMS. Digital Equipment Corporation gave us many things. Once upon a time, I used a DEC Alpha running OpenVMS. The PDP-11 had changed the world, introducing us to a number of modern concepts in computers such as time sharing. The PDP was a minicomputer, smaller and more modern than mainframes. But by 1977 it was time for the next generation, and the VAX ushered in the 32-bit era of computers and through its evolutions would become the VAXserver, helping to usher in the modern era of client-server architectures. It supported branch delay slots and suppressed instructions. The VAX adopted virtual memory, privilege modes, and needed an operating system capable of harnessing all the new innovations packed into the VAX-11 and on. That OS would be the Virtual Memory System, or VMS. The PDP had an operating system called RSX-11, which had been released in 1972. The architect was Dan Brevik, who had originally called it DEX as a homonym with DEC. But that was trademarked, so he and Bob Decker over in marketing wrote down a bunch of acronyms and then found one that wasn’t trademarked. Then they had to reverse engineer a meaning out of the acronym to be Real-Time System Executive, or RSX. But for the VAX they needed more, and so Dave Cutler from the RSX team, then in his early 30s, did much of the design work. Dick Hustvedt and Peter Lipman would join him, and they would roll up to Roger Gourd, who worked with DEC’s VP of engineering Gordon Bell to build the environment. The project began as Starlet, named because it was meant to support the Starlet family of processors. A name that still lives on in various files in the operating system. 
The VMS operating system would eventually support RISC processors, would support the 32-bit virtual address extension, would work with DECnet, and would have virtual memory of course, as the name implies. VMS would bring a number of innovations in the world of clustering. VMS would use a Modified Julian Day system to keep track of system time, which subtracts 2,400,000.5 from the Julian Date. Why? Because it begins on November 17th, 1858. That’s not why, that’s just the day it starts. Why? Because it’s not Y10,000 compliant, only having 4 slots for dates. Wait, that’s not a thing. Anyway, how did VMS come to be? One of the killer apps for the system, though, was that DECnet was built on DIGITAL Network Architecture, or DNA. It first showed up in RSX, where you could link two PDPs, but you could have 32 nodes by the time the VAX showed up and 255 with VMS 2. Suddenly there was a simple way to network these machines, built into the OS. Version 1 was released in 1977 in support of the VAX-11/780. Version 2 would come along in 1980 for the 750, and Version 3 would come in 1982 for the 730. The VAX 8600 would ship in 84 with version 4. And here’s where it gets interesting. The advent of what were originally called microcomputers but are now called personal computers had come in the late 70s and early 80s. By 1984, MicroVMS was released as a port for running on the MicroVAX, Digital’s attempt to go down-market. Much as IBM had initially missed minicomputers, Digital had missed the advent of microcomputers, and the platform never took off. Bill Gates would adorn the cover of Time that year. Of course, by 84, Apple had AppleTalk and DOS was ready to plug in as well. Bill Joy moved BSD away from the VAX in 1986, after having been with the PDP and then VAX for years, before leaving for Sun. At this point the platform was getting a bit long in the tooth. 
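Back to that Modified Julian Date scheme: the arithmetic is simple enough to sketch. Here’s a minimal illustration in Python (not a VMS API, and the function name is mine) of converting a date to a Modified Julian Date, with November 17th, 1858 as day zero:

```python
from datetime import datetime, timezone

# The Modified Julian Date epoch: MJD = JD - 2,400,000.5,
# which puts day 0.0 at midnight on November 17th, 1858.
MJD_EPOCH = datetime(1858, 11, 17, tzinfo=timezone.utc)

def modified_julian_date(dt: datetime) -> float:
    """Return the Modified Julian Date for a UTC datetime."""
    delta = dt - MJD_EPOCH
    return delta.days + delta.seconds / 86400  # whole days + day fraction

# The Unix epoch lands exactly 40,587 days after the MJD epoch.
print(modified_julian_date(datetime(1970, 1, 1, tzinfo=timezone.utc)))  # → 40587.0
```

Subtracting the 2,400,000.5 offset is what trades the noon-based Julian Day for a midnight-based count that fits in fewer digits, which is the appeal for a system clock.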
Intel and Microsoft were just starting to emerge as dominant players in computing, and DEC was the number two software company in the world, with a dominant sales team and world class research scientists. They released ULTRIX the same year though, as well as the DECstation with a desktop environment called UW, for ULTRIX Workstation. Ultrix was based on BSD 4 and, given that most Unixes had been written on PDPs, Bill Joy knew many of the group launched by Bill Munson, Jerry Brenner, Fred Canter and Bill Shannon. Cutler, from that OpenVMS team, hated Unix. Rather than have a unified approach, the strategy was fragmented. You see a number of times in the history of computing where a company begins to fail not because team members are releasing things that don’t fit within the strategy, but because they release things that compete directly with a core product without informing their customers why. Thus bogging down the sales process and subsequent adoption in confusion. This led to brain drain. Cutler ended up going to the Windows NT team and bringing all of his knowledge about security and his sincere midwestern charm to Microsoft, managing the initial development after relations with IBM in the OS/2 world soured. He helped make NT available for the Alpha while also helping NT dominate the operating system market from his old home. Cutler would end up working on XP, Server operating systems, Azure, and getting the Xbox to run as a host for Hyper-V. He’s just that rad, and his experience goes back to the mid 60s, working on IBM 7044 mainframes. Generational changes in software development, like the move to object oriented programming or microservices, can force a lot of people into new career trajectories. But he was never one of those. That’s the kind of talent you just really, really, really hate to watch leave an organization - someone that even Microsoft name-drops in developer conference sessions to get ooohs and aaahs. 
And there were a lot of them leaving as DEC shifted into more of a sales and marketing company and less of the product and research company it had been founded to be back when Ken Olsen was at MIT. We saw the same thing happen in other areas of DEC - competing chips coming out of different groups. But still they continued on. And the lack of centralized resources, slowing innovation, and new technical debt caused the release of version 5 to slip from a 2-year horizon to a 4-year horizon, shipping in 1988 with Easynet, so you could connect 2,000 computers together. Version 6 took 5 years to get out the door, in 1993. In a sign of the times, 1991 saw VMS become OpenVMS and become POSIX compliant. 1992 saw the release of the DEC Alpha, and OpenVMS would quickly get support for the RISC processor, which it would carry through the transition from Alpha to Itanium after Intel bought the rights to the Alpha architecture. Version 7 of OpenVMS shipped in 1996, but by then the company was in a serious period of decline, and corporate infighting and politics killed them. 1998 came along and they practically bankrupted Compaq by being acquired, and then HP swooped in and got both for a steal. Enterprise computing has never been the same. HP made some smart decisions though. They inked a deal with Intel: Alpha gave way to the Itanium that HP and Intel built together, and Intel got a RISC processor and all the IP that goes along with that. Version 8 would not be released until 2003. 7 years without an OS update while the companies were merged and re-merged had been too long. Market share had all but disappeared. DECnet would go on to live in the Linux kernel until 2010, its use replaced by TCP/IP much the same way most of the other protocols got replaced. OpenVMS development has since been licensed to VSI, or VMS Software Inc., which employs many former DEC and HP employees. There are a lot of great, innovative, unique features of OpenVMS. 
There’s a common language environment that allows for calling functions easily and independently across various languages. You can basically mix Fortran, C, BASIC, and other languages. It’s kinda’ like my grandma’s okra. She said I’d like it but I didn’t. VMS is built much the same way. They built it one piece at a time. To quote Johnny Cash: “The transmission was a fifty three, And the motor turned out to be a seventy three, And when we tried to put in the bolts all the holes were gone.” You can of course install PHP, Ruby, Java, and other more modern languages if you want. And the System Services, Run-Time Libraries, and language support make it easy to use whatever works for a task pretty much equally across them, and provide a number of helpful debugging tools along the way. And beyond debugging, OpenVMS pretty much supports anything you find required by the National Computer Security Center and the DoD. And after giving the middle finger to Intel for decades… as with most operating systems, VMS is finally being ported to the x86 architecture, signaling the end of one of the few holdouts to the dominance of x86 in some ways. The Itaniums have shipped fewer and fewer chips every year, so maybe we’re finally at that point. Once OpenVMS has been ported to x86 we may see the final end of the chip line, as the last Windows versions to support Itanium stopped actually being supported by Microsoft about a month before this recording. The end of an era. I hope Dave Cutler looks back on his time on the VMS project fondly. Sometimes a few decades of crushing an old employer can help heal some old wounds. His contributions to computing are immense, as are those of Digital. And we owe them all a huge thanks for the techniques and lessons learned in the development of VMS in the early days, as with the early days of BSD, the Mac, Windows 1, and others. It all helped build a massive body of knowledge that we continue to iterate off of to this day. 
I also owe them a thank you for the time I got to spend on my first DEC Alpha. I didn’t get to touch another 64-bit machine for over a decade. And I owe them a thanks for everything I learned using OpenVMS on that machine! And to you, wonderful listeners. Thank you for listening. And especially Derek, for reaching out to tell me I should move OpenVMS up in the queue. I guess it goes without saying… I did! Hope you all have a great day!
2/11/2020 • 15 minutes, 55 seconds
Boolean Algebra
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to talk a little about math. Or logic. Computers are just a bunch of zeroes and ones, right? Binary. They make shirts about it. You know, there are 10 types of people in the world. But where did that come from? After centuries of trying to build computing devices that could help with math using gears that had lots of slots in them, armed with tubes and then transistors, we had to come up with a simpler form of logic. And why write your own complicated math when you can borrow it and have instant converts to your cause? Technical innovations are often comprised of a lot of building blocks from different fields of scientific or scholastic study. The 0s and 1s, which make up the flip-flop circuits computers are so famous for, are made possible by the concept that all logic can be broken down into either true or false. And so the mathematical logic that we have built trillions of dollars of industry on top of began in 1847, in a book called The Mathematical Analysis of Logic, by George Boole. He would follow that up with a book called An Investigation of the Laws of Thought in 1854. He was the father of what we would later call Boolean Algebra, once the science of an entire mathematical language built on true and false had matured enough that Charles Sanders Peirce could write a work called The Simplest Mathematics, with a chapter titled Boolian Algebra with One Constant. By 1913, there were many more works with the name, and it became Boolean algebra. This was right around the time that the electronic research community had first started experimenting with using vacuum tubes as flip-flop switches. So there’s elementary algebra, where you can have any old number with any old logical operation. 
Those operators can be addition, subtraction, multiplication, division, etc. But in Boolean algebra the only values available are a 0 or a 1. Later we would get abstract algebra as well, but for computing it was way simpler to just stick with those 0s and 1s, and in fact, ditching the gears from the old electromechanical computing paved the way for tubes to act as flip-flop switches, and for transistors to replace those. And the evolutions came. Both to the efficiency of flip-flop switches and to the increasingly complex uses for mechanical computing devices. But they hadn’t all been mashed up together yet. So set theory and statistics were evolving. And Huntington, Jevons, and Schröder basically perfected Boolean logic, paving the way for M. H. Stone to prove that Boolean algebra is isomorphic to a field of sets by 1936. And so it should come as no surprise that Boolean algebra would be key to the development of the basic mathematical functions used on the Atanasoff-Berry computer. Remember that back then, all computing was basically used for math. Claude Shannon would help apply Boolean algebra to switching circuits. This involved binary decision diagrams for synthesizing and verifying the design of logic circuits. And so we could analyze and design circuits using algebra to define logic gates. Those gates would get smaller and faster and combined using combinational logic until we got LSI circuits, and later, with the automation of chip design, VLSI. So to put it super-simple, let’s say you are trying to do some maths. First up, you convert values to bits, which are binary digits. Each binary digit is represented as a 0 or a 1. There’s a substantial amount of information you can pack into those bits, with all major characters easily allowed for in a byte, which is 8 of those bits. So let’s say you also map your algebraic operators using those 0s and 1s, another byte. Now you can add the number in the first byte. 
To do so though, you would need to basically translate the notations from classical propositional calculus to their expression in Boolean algebra, typically done in an assembler. Much, much more logic is required to apply quantifiers. The simple truth values are 0 and 1, and short truth tables define AND (also known as a conjunction), OR (also known as a disjunction), NOT, and XOR (also known as an exclusive-or). This allows for an exponential increase in the amount of logic you can apply to a problem. Deciding whether a Boolean formula can be satisfied - whether some assignment of trues and falses makes it true - is known as the Boolean satisfiability problem, or SAT. At this point though, all problems really seem solvable using some model of computation, given the amount of complex circuitry we now have. So the computer interprets the information and the functions and sets the state of a switch based on the input. The computer then combines all those trues and falses into the necessary logic and outputs an answer. Because the 0s and 1s took too long to enter by hand, input got moved to punch cards, and modern programming was born. These days we can also add Boolean logic into higher functions, such as running AND in Google searches. So ultimately the point of this episode is to explore what exactly all those 0s and 1s are. They’re complex thoughts and formulas expressed as true and false, using complicated Boolean algebra to construct them. Now, there’s a chance that some day we’ll find something beyond a transistor. And then we can bring a much more complicated expression of thought broken down into different forms of algebra. But there’s also the chance that Boolean algebra sitting on transistors, or whatever the next evolution of Boolean gates or transistors turns out to be, is really, well, kinda’ it. So from the Atanasoff-Berry computer comes Colossus, and then ENIAC in 1945. 
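Those AND, OR, and XOR truth tables are small enough to print outright. A quick sketch in Python, using its bitwise operators on the values 0 and 1:

```python
# Truth tables for the basic Boolean operations over the values 0 and 1,
# using Python's bitwise operators.
ops = {
    "AND (conjunction)":  lambda a, b: a & b,
    "OR (disjunction)":   lambda a, b: a | b,
    "XOR (exclusive-or)": lambda a, b: a ^ b,
}

for name, op in ops.items():
    print(name)
    for a in (0, 1):
        for b in (0, 1):
            print(f"  {a}, {b} -> {op(a, b)}")

# NOT is unary: it flips a single bit.
print("NOT:", [(a, 1 - a) for a in (0, 1)])  # [(0, 1), (1, 0)]
```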
It wasn’t obvious yet, but nearly 100 years after the development of Boolean algebra, it had been combined with several other technologies to usher in the computing revolution, setting up the evolution to microprocessors and the modern computer. These days, few programmers are constrained to programming in Boolean logic. Instead, we have many more options. Although I happen to believe that understanding this fundamental building block was one of the most important aspects of studying computer science, and provided an important foundation for computing in general. So thank you for listening to this episode. I’m sure algebra got ya’ totally interested and that you’re super-into math. But thanks for listening anyways. I’m pretty lucky to have ya’. Have a great day!
2/8/2020 • 9 minutes, 24 seconds
The Punch Card
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to cover the history of punch cards. A punch card is a piece of paper, or card stock, or card, that holds data. They look like two index cards next to each other with a bunch of holes in them. The data they hold is in those holes. It’s boolean, with a true or false represented by a hole in a predefined location, or the absence of a hole - simple as that. The logic is then interpreted by a language, often one specific to each machine. My grandma used to configure punch cards, and I remember seeing some of these when I was a kid and being awestruck. So they’ve held a fascination for me since what seems like the beginning of time. But those punched cards didn’t start out being used for processing data. Or did they? The weaver Basile Bouchon built a loom that could be controlled by holes punched into a paper tape in 1725. He was storing the positions for colors and patterns on the loom with cards, and so saving time for humans by using the positions of those holes. You can call this computational memory. The holes controlled how rods could move, and the positions were stored in the cards. And so the first memory came in the form of cards of paper that stored data. Much as there was already data stored on paper, in books. And before that, tablets and papyrus. The design was improved by his assistant Jean-Baptiste Falcon and by Jacques Vaucanson. And ultimately, isn’t programming just putting data into storage? So let’s say the first programmers hacked language by putting data into temporary storage called our brains. And then written languages. But now we were putting data into storage using a machine. Not just moving gears to calculate. 
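That hole-or-no-hole idea maps directly onto booleans. Here’s a toy model in Python, with dimensions borrowed from the classic 80-column IBM card; the encoding is purely illustrative, not a real Hollerith code:

```python
# A toy model of a punched card: the data lives in the holes.
ROWS, COLUMNS = 12, 80  # the classic IBM card: 80 columns, 12 punch rows

def blank_card():
    # No holes anywhere: every position is False.
    return [[False] * ROWS for _ in range(COLUMNS)]

def punch(card, column, row):
    card[column][row] = True  # a hole is a "true"

card = blank_card()
punch(card, 0, 3)  # punch one hole: column 0, row 3
print(card[0][3])  # True  - a hole
print(card[0][4])  # False - no hole
```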
The merger of calculation and memory would some day prove to be a pretty fantastic combination. But we weren’t there yet. Although these improvements controlled the patterns woven, they still required a human to operate the loom. In 1804, Joseph Marie Jacquard took the next step from stored memory, adding a mechanism capable of automating the loom operation. And so the Jacquard Loom was born: a bunch of punched cards, 9 inches by 1.25 inches by 1/16 of an inch, in stacks. They were linked into a chain and read to build patterns. Each card held the instructions for shedding (which is moving the warp up and down) and setting the shuttle for each pass. You recorded a set of instructions onto a card and the loom performed them. Then comes the computer. The originals had gears we set to run calculations, but as they became capable of more and more complex tasks, we needed a better mechanism for bringing data into and getting data out of a computer. We could write programs, and these cards became the way we input the data into the computers. We loaded a set of commands and the device printed the output. And then Semyon Korsakov comes along and brings punched cards to machines in 1832. Charles Babbage expanded on the ideas of Pascal and Leibniz and added to mechanical computing, making the difference engine, the inspiration of many a steampunk. Babbage had multiple engineers building components for the engine, and after he scrapped his first, he moved on to the analytical engine, adding conditional branching, loops, and memory - and further complicating the machine. The engine borrowed the punched card tech from the Jacquard loom and applied that same logic to math, possibly with a little inspiration from Korsakov. He called them “Number Cards.” Carl Engel added to the concept around 1860. Come 1881, Jules Carpentier brought us a punch-card harmonium, converting those little grooves to sound. And then. Well, then comes Herman Hollerith and the 1890 US census project. 
He’d tried out a number of ways to bring in all the data from the census to a tabulating machine, and I think he knew that he was on to something. He then went on to work with the New York Central and Hudson River Railroad and the Pennsylvania Railroad to help automate their data processing needs using these punched cards. At this point, those cards were 12 rows by 36 punch positions, and 7 3/8 inches wide by 3 1/4 inches high by .007 inches thick. He chose the size based on the standard size of banknotes in the US, so that he could store his cards in boxes that had been made for the Treasury Department. Based on these early successes, he was able to successfully read the data on those cards into his tabulating machine, and so he founded the Tabulating Machine Company in 1896. The most important aspect of Hollerith’s contributions was actually bringing us machinery that could process the data stored on those cards. That company merged with a few other similar companies, joining forces and bringing in Thomas Watson to run the company, and in 1924 they became International Business Machines, or IBM. And so the era of unit record machines began. G. W. Baehne’s Practical Applications of the Punched Card Method in Colleges and Universities, published in 1935, showed plenty of programming techniques and went through a variety of applications for the cards. As with any real industry, there was competition. Remington Rand also began building punch cards and readers, along with others in the industry. And by 1937, IBM was running 32 presses at their Endicott plant, where they printed and sorted 10 million cards a day. By World War II, English cryptanalysts at Bletchley Park ended up with over 2 million cards used to store decrypted messages, including those out of the Enigma. We were trying to automate as much as possible. Contracts, checks, bonds, orders out of the Sears catalog, airline ticket entry. 
And suddenly, loading computers with punch card data for further processing was becoming critical to the upcoming need to automate the world. Punch cards had become a standard by 1950. Those IBM cards said “Do not fold, spindle or mutilate,” and many a bill would come with a card, potentially used for processing when the bill was returned with a check. Now, back in the 30s, Remington Rand and IBM had gotten in trouble for anti-trust by tying their cards to their machines, and by 1955, IBM owned the market - and you know, the innovation and automation of the country couldn’t be left to just one company. So Thomas Watson Jr. was forced to sign a deal that IBM would drop to no more than half of the punched card manufacturing capacity in the US. But we were already ready to go past the punch card. Computers couldn’t be programmed using jumper cables forever, so we had started using punch cards, and there were a lot of file formats and other conventions set in that era that still trace their origins to those 80 columns of text. The programmers of those cards began to ask for cards to be printed that could support functions, to make their jobs easier. These were used for the GE 600 and other vendors, and Univac had a format, and with languages like FORTRAN and COBOL having come along, generic punched cards became popular. And the UNITYPER came along, giving us magnetic tape in the 50s. Then in the 60s we got an easy magnetic tape encoder, and it wouldn’t be long until we got computer terminals, light pens, and minicomputers. Even then, it would take years for the older tech to become unnecessary. The dimensions would be set and standardized for the RS-292 punched card, but the uses would be less and less and less and less. And so punch cards had survived the transistorization of computers. But not newer and better forms of input and output. Tape ribbons would sit in drawers in places like MIT and Stanford. 
In fact, some of the first traffic to run over the Internet precursor, ARPAnet, would use those tape ribbons to write output. The last bastion of the punched card was electronic voting, which had begun in the 60s. But then the State of Iowa basically banned punched cards in 1984. Their use had been shrinking over time, but at that point you could say punched cards were obsolete. That doesn’t mean they weren’t being used, more that they just weren’t being used to build much new tech. I suppose that’s how I ended up getting to play with some that my grandma brought home. I don’t think I had a clue what they were actually for at that point. The punch card then gave way to programming with paper where you filled in bubbles with a pencil, but that was a stop-gap for an era when computers were starting to become common, yet there weren’t enough for entire classes to learn programming. So the punch cards gave us what we needed to get input and output to these early computing devices, in a time before truly interactive computing. And they were useful for a time. But once keyboards became commonplace, they just… weren’t needed as much. And it’s good, because otherwise we might never have gotten object-oriented programming. And loading large programs with cards was never very fun. But they had their uses and got us to a time when we didn’t need them any more. And so we owe them our thanks. Just as I owe you a thank you, listeners, for joining me on this episode of the History of Computing Podcast, to chat alllllll about punch cards. I hope you have a great day!
2/5/2020 • 12 minutes, 50 seconds
Anyone Else Remember Y2K?
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to cover the Y2K bug, the millennium bug, or the Y2K scare, as they called it in retrospect. Once upon a time, computers were young, and the industry didn’t consider that software might be used beyond the next few years. Things were changing so fast. Memory was precious, and so was the amount of time it took to commit a date to a database as a transaction. Many of the original big, complex software titles were written in the 1960s and 70s, as mainframes began to be used for everything from controlling subways to coordinating flights. The Millennium Bug was a result of the year being stored as two digits, not worrying about the 19 in the year. As the year 2000 got closer and closer, the crazy began, because some systems wouldn’t interpret the year “00” properly and basically the world would end. All because programmers didn’t plan on dates that spanned the century. And I had a job that had me on a flight to San Francisco so I could be onsite the day after the clock struck 1/1/2000. You know, just in case. The fix was to test the rollover of the year, apply a patch if there was a problem, and then test, test, test. We had feverishly been applying patches for months by Y2K and figured we were good, but you know, just in case, I needed to be on a flight. By then the electric grid and nuclear power plants and flight control and controls for buildings and anything else you could think of were hooked up to computers. There were computers running practically every business, and a fervor had erupted that the world might end because we didn’t know what might crash that morning. I still remember the clock striking midnight. I was at a rave in Los Angeles that night, and it was apparent within minutes that the lights hadn’t gone off at the Electric Daisy Carnival. We were all alive. 
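The bug itself was almost comically small. Here’s a sketch in Python of the failure mode, alongside the “windowing” remediation many teams applied; the pivot year of 50 is an arbitrary choice for illustration:

```python
def buggy_year(yy: int) -> int:
    # The bug: only "YY" was stored, and the century was assumed to be 19.
    return 1900 + yy

def windowed_year(yy: int, pivot: int = 50) -> int:
    # A common Y2K fix: interpret two-digit years below a pivot as 20xx
    # and the rest as 19xx.
    return 2000 + yy if yy < pivot else 1900 + yy

print(buggy_year(0))      # 1900 - the year 2000, misread
print(windowed_year(0))   # 2000
print(windowed_year(99))  # 1999
```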
The reports on the news were just silly headlines to grab attention. Or was it that we had worked our butts off to execute well-planned tactics to analyze, patch, or replace every single piece of software for customers? A lot of both, I suspect. It was my first big project in the world of IT, and I learned a lot from seasoned project managers who ran it like a well-oiled machine. The first phase was capturing a list of all software run, some 500 applications at the time. The spreadsheet was detailed, including every component in computers, device drivers, and any systems those devices needed to communicate with. By the time it was done there were 5,000 rows and a hundred columns, and we started researching which were Y2K compliant and which weren’t. My team had been focused on Microsoft Exchange prior to the big push for Y2K compliance, so we got pulled in to cover mail clients, Office, and then network drivers, since those went quickly. Next thing we knew, we were getting a crash course in Cisco networking. I can still remember the big Gantt chart that ran the project. While most of my tasks are managed in Jira these days, I still fall back on Gantt charts in times of need. We had weekly calls tracking our progress, and over the course of a year watched a lot of boxes get checked. I got sent all over the world to “touch” computers in every little office, meeting people who did all kinds of jobs and so used all kinds of software. By the time the final analysis tasks were done, we had a list of patches each computer needed, and while other projects were delayed, we got them applied or migrated people to other software. It was the first time I saw how disruptive switching productivity software is to people without training. We would cover that topic in a post-mortem after the project wrapped. And it all happened as we watched the local news in each city we visited having a field day with everything from conspiracy theories to doomsday reports. 
It was a great time to be young, hungry, and on the road. And we nailed that Gantt chart two months early. We got back to work on our core projects and waited. Then the big day came. The clock struck midnight as I was dancing to what we called techno at the time, and I pulled an all-nighter, making it to the airport just in time for my flight. You could see the apprehension about flying on the faces of the passengers, and you could feel the mood relax when we landed. I took the train into the city and was there when everyone started showing up for work. Their computers all booted up and they got to work. No interruptions. Nothing unexpected. We knew, though. We’d run our simulations. We’d flashed many a BIOS, watched many a patch install, status bars crawling, waiting to see what kind of mayhem awaited us after a reboot. And we learned the value of preparation, just as the media downplayed the severity, saying it was all a bunch of nothing. They called it a scare. I called it a crisis averted. And an education. So thank you to the excellent project managers for teaching me to be detail-oriented. For teaching me to measure twice and cut once. And for teaching me that no project is too big to land ahead of time and on budget. And thank you, listeners, for joining me on this episode of the History of Computing Podcast. I hope you have a great day!
2/2/2020 • 7 minutes, 37 seconds
A Brief History Of Cisco
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to talk about the history of Cisco. They have defined the routing and switching world for decades. Practically since the beginning of the modern era. They’ve bought companies, they’ve grown and shrunk and grown again. And their story feels similar in many ways to the organizations that came out of the tail end of the grants tossed around by DARPA. These companies harnessed incredibly innovative ideas and technology to commercialize all of that amazing research, and they changed the world. These companies ushered in a globally connected network, almost instantaneously transmitting thoughts and hopes and dreams and failures and atrocities. They made money. Massive, massive truckloads of money. But they changed the world for the better. Hopefully in an irrevocable kind of way. The Cisco story is interesting because it symbolizes a time when we were moving on from the beginnings of the Internet. Stanford had been involved in ARPAnet since the late 60s, but Vint Cerf and Bob Kahn had been advancing TCP and IP in the 70s, establishing IPv4 in 1983. And inspired by ALOHAnet, Bob Metcalfe and the team at Xerox PARC had developed Ethernet in 74. And the computer science research community had embraced these, with the use of email and time sharing spurring more and more computers to be connected to the Internet. Raw research being done out of curiosity and to make the world a better place. The number of devices connected to the growing network was increasing. And Stanford was right in the center of it. Silicon Valley founders just keep coming out of Stanford, but these two were staff running the campus computing facilities, and they were in early. They invented the multi-protocol router and financed the startup with their own personal credit cards. 
Leonard Bosack and Sandy K. Lerner are credited with starting Cisco, but the company rose out of projects to network computers on the Stanford campus. The project got started after Xerox PARC donated some Alto workstations and Ethernet boards they didn’t need anymore in 1980, shortly after Metcalfe left Xerox to start 3Com. And by then Cerf was off to MCI to help spur development of faster backbones. And CSNET came along in 1981, bringing even more teams from universities and private companies into the fold. The Director of Computer Facilities, Ralph Gorin, needed to be able to get longer network cables to get even more devices connected. He got what would amount to a switch today. The team was informal. They used a motherboard from Andy Bechtolsheim, later a founder of Sun Microsystems. They borrowed boards from other people. Bosack himself, who had been an ARPAnet contributor, donated a board. And amongst the most important pieces was the software, which William Yeager wrote: a little routing program that connected the medical center computers to the computer science department computers and could use the PARC Universal Packet (PUP), XNS, IP, and CHAOSnet. The network linked all types of computers, from Xerox Altos to mainframes, using a number of protocols, including the most important for the future: IP, or the Internet Protocol. They called it the Blue Box. And given the number of computers that were at Stanford, various departments around campus started asking for them, as did other universities. There were 5,000 computers connected at Stanford by the time they were done. Seeing a potential business here, Bosack, then running the computers for the Computer Science department, and Lerner, then the Director of Computer Facilities for the Graduate School of Business, founded Cisco Systems in 1984, short for San Francisco, and used an image of the Golden Gate Bridge as their logo. You can see the same pattern unfold all over. 
When people from MIT built something cool, it was all good. Until someone decided to monetize it. Same with chip makers and others. By 1985, Stanford formally started a new project to link all the computers they could on the campus. Yeager gave the source to Bosack and Kirk Lougheed so they could strip out everything but the Internet Protocol and beef that up. I guess Yeager saw routers as commercially viable, and he asked the university if he could sell the Blue Box. They said no. But Bosack and Lougheed were plowing ahead, using Stanford time and resources. Bosack and Lerner hadn’t asked, and they were building these routers in their home, and it was basically the same thing as the Blue Box, including the software. Most of the people at Stanford thought they were crazy. They kept adding more code and logic, and the devices kept getting better. By 1986, Bosack’s supervisor Les Earnest caught wind and started to investigate. He went to the dean, and Bosack was given an ultimatum: go do the wacky Cisco thing or stay at Stanford. Bosack quit to try to build Cisco into a company. Lougheed ran into something similar and quit as well. Lerner had already left, but Greg Satz and Richard Troiano left too, bringing them up to 5 people. Yeager was not one of them, even though he’d worked a lot on the software, including on nights and weekends. Everyone was learning, and when it was to benefit the university, that was fine. But then when things went commercial, Stanford got the lawyers involved. Yeager looked at the code and still saw some of his in there. I’m sure the Cisco team considered that technical debt. Cisco launched the Advanced Gateway Server (AGS) router in 1986, two years after the Mac was released. The software was initially written by Yeager but improved by Bosack and Lougheed; the operating system would later be called Cisco IOS. 
Stanford thought about filing a criminal complaint of theft but realized it would be hard to prosecute, and ugly, especially given that Stanford itself is a non-profit. Cisco had $200,000 in contracts and couldn’t really be paying all this attention to lawsuits instead of building the foundations of the emerging Internet. So instead they all agreed to license the software, and the imprint of the physical boards being used (known as photomasks), to the fledgling Cisco Systems in 1987. This was crucial, as now Cisco could go to market with products without the fear of lawsuits. Stanford got discounts on future products, $19,300 up front, and $150,000 in royalties. No one knew what Cisco would become, so it was considered a fair settlement at the time. Yeager, being a mensch and all, split his 80% of the royalties between the team. He would go on to give us IMAP and Kermit before moving to Sun Microsystems. Speaking of Sun, there was bad blood between Cisco and Stanford, which I always considered ironic, given that a similar thing happened when Sun was founded, in some part using Stanford intellectual property and unused hardware, back in 1982. I think the difference is trying to hide things versus being effusive with the credit for code and inventions. As sales increased, Lougheed continued to improve the code, and the company hired Bill Graves to be CEO in 1987; he was replaced with John Morgridge in 1988. And the sales continued to skyrocket. Cisco went public in 1990, when they were valued at $224 million. Lerner was fired later that year, and Bosack decided to join her. And as is so often the case after a company goes public, the founders who had a vision of monetizing great research were no longer at the startup. Seeing a need for more switching, Cisco acquired a number of companies, including Grand Junction and Crescendo Communications, which formed like Voltron to become the Cisco Catalyst, arguably the most prolific switching line in computing. 
Seeing the success of Cisco and the needs of the market, a number of others started building routers and firewalls. The ocean was getting redder. John Mayes had the idea to build a device that would be called the PIX in 1994, and Brantley Coile in Athens, Georgia programmed it, essentially a PBX for IP addresses. We were running out of IP addresses because at the time, organizations used public IPs. But NAT was about to become a thing and RFC 1918 was being reviewed by the IETF. They brought in Johnson Wu and shipped a device that could run NAT that year, ushering in the era of private addressing on the Local Area Network. John T. Chambers replaced Morgridge in 1995 and led Cisco as its CEO until 2015. Cisco quickly acquired the company, and the Cisco PIX would become the standard firewall used in organizations looking to get their computers on the Internets. The PIX would sell and make Cisco all the monies until it was replaced by the Cisco ASA in 2008. In 1996, Cisco's revenues hit $5.4 billion, making it one of Silicon Valley's biggest success stories. By 1998 they were up to $6B. Their stock peaked in 2000. By the end of the dot-com bubble in the year 2000, Cisco had a more than $500 billion market capitalization. They were building an industry. The CCNA, or Cisco Certified Network Associate, and the CCIE, or Cisco Certified Internetwork Expert, were the hottest certifications on the market. When I got mine it was much easier than it is today. The market started to fragment after that. Juniper came out strong in 1999 and led a host of competitors that landed in niche markets and expanded into core markets. But the ASA combined Cisco’s IPS, VPN concentration, and NAT functionality into one simpler box that actually came with a decent GUI. The GUI seemed like sacrilege at the time. And instead of sitting on top of a network operating system, it ran on Linux. At the top end they could handle 10 million connections, important once devices established and maintained so many connections to various services.
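As an aside on what RFC 1918 actually reserved: it set aside three blocks of address space that are never routed on the public Internet, which is what made NAT on devices like the PIX workable. Here's a minimal sketch, using only Python's standard-library ipaddress module, of checking an address against those three blocks (the function name is just for illustration):

```python
# RFC 1918 reserved three blocks for private networks:
# 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
# A NAT gateway rewrites these private source addresses
# to a public address on the way out.
from ipaddress import ip_address, ip_network

RFC1918_BLOCKS = [
    ip_network("10.0.0.0/8"),
    ip_network("172.16.0.0/12"),
    ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls in one of the RFC 1918 private blocks."""
    ip = ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918("192.168.1.10"))  # True  - typical home LAN address
print(is_rfc1918("8.8.8.8"))       # False - publicly routable address
```

Every organization could reuse these same ranges internally, which is why we stopped running out of public IPv4 addresses quite so fast.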
And you could bolt on antivirus and other features that were becoming increasingly necessary at various layers of connectivity at the time. They went down-market for routing devices with an acquisition of Linksys in 2003. They acquired WebEx in 2007 for over $3 billion, and that became the standard in video conferencing until a solid competitor called Zoom emerged recently. They acquired Sourcefire in 2013 for $2.7B and have taken the various services offered there to develop Cisco products, such as turning the antivirus into a client-side malware scanning tool called Cisco AMP. Juniper gave away free training, unlike the Cisco training that cost thousands of dollars, and along with Alcatel-Lucent, Linksys, Palo Alto Networks, Fortinet, SonicWall, Barracuda, Check Point, and rising giant Huawei, led to a death by a thousand competitors and Cisco’s first true layoffs by 2011. Cisco acquired OpenDNS in 2015 to establish a core part of what’s now known as Cisco Umbrella. This gives organizations insight into what’s happening on increasingly geographically distributed devices, especially mobile devices, due to a close partnership with Apple. And they acquired BroadSoft in 2017 to get access to even more sellers and technology in the cloud communication space. Why? Because while they continue to pump out appliances for IP connectivity, they probably just can’t command a higher market share due to the market dynamics. Every vendor they acquire in that space will spawn two or more new serious competitors. Reaching into other spaces provides a more diverse product portfolio and gives their sellers more SKUs in the quiver to make quotas. And it pushes the world forward with newer concepts, like fog computing. Today, Cisco is still based in San Jose, makes around $50 billion a year in revenue, and boasts close to 75,000 employees. A lot has happened since those early days. Cisco is one of the most innovative and operationally masterful companies on the planet.
Mature companies can have the occasional bump in the road and will go through peaks and valleys. But their revenues are a reflection of their market leadership, sitting around 50 billion dollars. Yes, most of their true innovation comes from acquisitions today. But the insight into whom to buy, how to combine technologies, and how to get teams to work well with one another? That’s a crazy level of operational efficiency. There’s a chance that the Internet explosion could have happened without Cisco, which effectively took the mantle, in a weird kind of way, from BBN by selling and supporting routing when the storm came. There’s also a chance that without a supply chain of routing appliances to help connect the world, the whole thing might have tumbled down. So consider this a question of technological determinism: if it hadn’t been Cisco, would someone else have stepped up to get us to the period of the dot-com bubble? Maybe. And since they made so much money off the whole thing, I’ve heard it said that Cisco doesn’t deserve our thanks for the part they played. But they do. Without their training and appliances and then intrusion prevention, we might not be where we are today. So thank you Cisco for teaching me everything I know about OSI models and layers and all that. And you know… helping the Internet become ubiquitous and all. And thank you, listener, for tuning in to yet another episode of the History of Computing podcast. We are so very lucky to have you. Have a great day!
1/30/2020 • 18 minutes, 19 seconds
Polish Innovations In Computing
Computing In Poland Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to do something a little different. Based on a recent trip to Katowice and Krakow, and a great visit to the Museum of Computer and Information Technology in Katowice, we’re going to look at the history of computing in Poland. Something they are proud of and should be proud of. And I’m going to mispronounce some words. Because they are averse to vowels. But not really, instead because I’m just not too bright. Apologies in advance. First, let’s take a stroll through an overly brief history of Poland itself. Attila the Hun and other conquerors pushed Germanic tribes from Poland in the fourth century, which led to a migration of Slavs from the East into the area. After a long period of migration, Duke Mieszko established the Piast dynasty in 966, and they created the Kingdom of Poland in 1025, which lasted until 1370 when Casimir the Great died without an heir. That was replaced by the Jagiellonian dynasty, which expanded until they eventually developed into the Polish-Lithuanian Commonwealth in 1569. Turns out they overextended themselves until the Russians, Prussians, and Austrians invaded and finally took control in 1795, partitioning Poland. Just before that, Polish clockmaker Jewna Jakobson built a mechanical computing machine, a hundred years after Pascal, in 1770. And innovations in mechanical computing continued on with Abraham Izrael Stern and his son through the 1800s, and with Bruno Abdank-Abakanowicz’s integraph, which could solve complex differential equations. And so the borders changed as Prussia gave way to Germany until World War I, when the Second Polish Republic was established. And the Poles got good at cracking codes as they struggled to stay sovereign against Russian attacks. Just as they’d struggled to stay sovereign for well over a century.
Then the Germans and Soviets formed a pact in 1939 and took the country again. During the war, Polish scientists not only assisted with work on the Enigma but also with the nuclear program in the US, the Manhattan Project. Stanislaw Ulam was recruited to the project and helped with ENIAC by developing the Monte Carlo method along with John von Neumann. The country remained partitioned until Germany fell in WWII, and the Soviets were able to effectively rule the Polish People’s Republic until a social-democratic movement swept the country in 1989, resulting in the current government and Poland moving from the Eastern Bloc to NATO and eventually the EU, around the same time the wall fell in Berlin. Able to put the Cold War behind them, Polish cities are now bustling with technical innovation, and the country is home to some of the best software developers I’ve ever met. Polish contributions to a more modern computer science began in 1924, when Jan Lukasiewicz developed Polish Notation, a way of writing mathematical expressions such that they are operator-first. They continued during World War II, when the Polish Cipher Bureau were the first to break the Enigma encryption, at different levels from 1932 to 1939. They had been breaking codes since using them to thwart a Russian invasion in the 1920s and had a pretty mature operation at this point. But it was a slow, manual process, so Marian Rejewski, one of the cryptographers, developed a card catalog of permutations and used a mechanical computing device he invented a few years earlier, called a cyclometer, to decipher the codes. The combination led to the bomba kryptologiczna, which was shown to the Allies 5 weeks before the war started and in turn led to the Ultra program and eventually Colossus, once Alan Turing got a hold of it conceptually after meeting Rejewski. After the war he became an accountant to avoid being forced into cryptographic work for the Russians.
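To make Lukasiewicz's idea concrete: Polish (prefix) notation puts the operator before its operands, so "+ 3 * 4 5" means 3 + (4 * 5), and no parentheses are ever needed. Here's a minimal sketch of a prefix-expression evaluator in Python (the function names and token format are just for illustration):

```python
# Polish (prefix) notation: the operator comes first, so the
# expression can be evaluated in a single left-to-right pass
# with no parentheses or precedence rules at all.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def eval_prefix(tokens):
    """Recursively evaluate a list of prefix-notation tokens."""
    token = tokens.pop(0)
    if token in OPS:
        left = eval_prefix(tokens)   # first operand
        right = eval_prefix(tokens)  # second operand
        return OPS[token](left, right)
    return float(token)

print(eval_prefix("+ 3 * 4 5".split()))  # 23.0, i.e. 3 + (4 * 5)
```

Reverse Polish Notation, the operand-first variant, later became the basis of stack machines and HP calculators, which is part of why Lukasiewicz still gets cited in computer science.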
In 1948 the Group for Mathematical Apparatus of the Mathematical Institute in Warsaw was formed, and with it the academic field of computer research in Poland. Computing continued in Poland during the Soviet-controlled era. The EMAL-1 was started in 1953 but was never finished. The XYZ computer came along in 1958. Jacek Karpiński built the first real vacuum tube mainframe in Poland, called the AAH, in 1957 to analyze weather patterns and improve forecasts. He then worked with a team to build the AKAT-1 to simulate lots of labor-intensive calculations like heat transfer mechanics. Karpiński founded the Laboratory for Artificial Intelligence of the Polish Academy of Sciences. He would win a UNESCO award and receive a 6-month scholarship to study in the US, which the Polish government used to spy on American progress in computing. He came home armed with some innovative ideas from the West and by 1964 built what he called the Perceptron, a computer that could be taught to identify shapes and even some objects. Nothing like that had existed in Poland or anywhere else controlled by communist regimes at the time. From ’65 to ’68 he built the KAR-65, even faster, to study CERN data. By then there was a rising mainframe and minicomputer industry outside of academia in Poland. Production of the Odra mainframe-era computers began in 1959 in Wroclaw, and his work was seen by the government and Elwro as a threat, so they banned him from publishing for a time. Elwro built a new factory in 1968, copying IBM standardization. In 1970, Karpiński realized he had to play ball with the government and got backing from officials. He then designed the K-202 minicomputer in 1971. Minicomputers were on the rise globally, and he introduced the concept of paging to computer science, key in virtual memory. This time he recruited 113 programmers and hardware engineers, and by ’73 they were using Intel 4004 chips to build faster computers than the DEC PDP-11.
But the competitors shut him down. They only sold 30, and by 1978 he retired to Switzerland (that sounds better than fled) - but he returned to Poland following the end of communism in the country and the closing of the Elwro plant in 1989. By then the personal computing revolution was upon us. That had begun in Poland with the Meritum, a TRS-80 clone, back in 1983. More copying. But the Elwro 800 Junior shipped in 1986, and once the communists were out in 1990, the country could benefit from computers being mass-produced and from the removal of export restrictions that had been stifling innovation and keeping Poles from participating in the exploding economy around computers. Energized, the Poles quickly learned to write code and now graduate over 40,000 people in IT from universities, by some counts making Poland a top 5 tech country. And as an era of developers graduates, they are founding museums to honor those who built their industry. It has been my privilege to visit two of them at this point. The description of the one in Krakow reads: The Interactive Games and Computers Museum of the Past Era is a place where adults will return to their childhood and children will be drawn into a lots of fun. We invite you to play on more than 20 computers / consoles / arcade machines and to watch our collection of 200 machines and toys from the '70's-'90's. The second is the Museum of Computer and Information Technology in Katowice, the most recent that I had the good fortune to visit. Both have systems found at other types of computer history museums, such as a Commodore PET, but showcase the locally developed systems. Looking at them on a timeline, it’s quickly apparent that while Poland had begun to fall behind by the 80s, that was more a reflection of why the strikes throughout the region caused the Eastern Bloc to fall: Russian influence couldn’t sustain it, much as the Polish-Lithuanian Commonwealth couldn’t sustain Polish control of Lithuania in the late 1700s.
There were other accomplishments, such as the ZAM-2. And the first fully Polish machine, the BINEG. And rough set theory. And ultrasonic mercury memory.
1/27/2020 • 12 minutes, 13 seconds
Iran and Stuxnet
Attacking Iran with Stuxnet Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to cover Stuxnet, which is now considered the first real act of cyber warfare. Iran has arguably been in turmoil since the fall of the Persian empire. Alexander the Great conquered Iran in 336 BC and then the Macedonians ruled until the empire fragmented and one arm, the Seleucids, ruled until the Parthians took it in 129 BC. Then the Sasanians, of Persian descent, ruled until the Muslim conquest of Persia in 651. The region was then ruled by a collection of Muslim dynasties until this weirdo Genghis Khan showed up around 1220. After a few decades the Muslim forces regained control in 1256 and the area returned to turning over to different Muslim dynasties every couple hundred years on average, until 1925 when the Pahlavis took control. The final Shah of that regime was ousted during the Islamic Revolution in Iran in 1979. Ruhollah Khomeini ruled for the first ten years until Sayyid Ali Hosseini Khamenei took over after his death in 1989. Something very important happened the year before that would shape Iran up until today. In 1988, Pakistan became a nuclear power. Iran started working toward a nuclear program shortly thereafter, buying equipment from Pakistan. Those centrifuges would be something many, including the US, would attempt to keep out of Iranian hands through to today. While you can argue the politics of that, those are the facts. Middle Eastern politics, wars over oil, and wars over territory have all ensued. In 2015, Iran reached agreement on the Joint Comprehensive Plan of Action, commonly referred to as the Iran nuclear deal, with the US and the EU, and their nuclear ambitions seemed to be stalled until US president Donald Trump pulled out of it.
A little before the recording of this episode, General Soleimani was killed by a US attack. One of the reasons the JCPOA was negotiated was that the Iranians had received a huge setback in their nuclear program in 2010, when the US attacked an Iranian nuclear facility. It’s now the most well-researched computer worm. But who was behind Stuxnet? Kim Zetter took a two-year journey researching the worm, now documented in her book Countdown to Zero Day. The US Air Force was created in 1947. In the early 2000s, advanced persistent threats, or APTs, began to emerge following Operation Eligible Receiver in 1997. These are pieces of malware that are specifically crafted to attack specific systems or people. Now that the field was seen as a new frontier of war, US Cyber Command was founded in 2009. And they developed weapons to attack SCADA, or supervisory control and data acquisition, systems amongst other targets. By the mid-2000s, Siemens had built these industrial control systems. The Maroochy incident had brought these systems to light as targets, and developers had not been building these systems with security in mind, making them quite juicy targets. So the US and Israel wrote some malware that destroyed centrifuges by hitting the Siemens software sitting on Windows embedded operating systems. It was initially discovered by VirusBlokAda engineer Sergey Ulasen, and called Rootkit.Tmphider. Symantec originally called it W32.Temphid and then changed the name to W32.Stuxnet, based on a mashup of stub and mrxnet.sys from the source code. The malware was signed and targeted a bug in the operating system to install a rootkit. Sergey reported the bug to Microsoft and went public with the discovery. This led us into an era of cyber warfare, as the first widespread attack hitting industrial control systems. Stuxnet wasn’t your run-of-the-mill DDoS attack.
Each of the 3 variants from 2010 had 150,000 lines of code and targeted those control systems, destroying a third of Iranian centrifuges by causing the Step-7 software systems to handle the centrifuges improperly. Iranian nuclear engineers had obtained the Step-7 software even though it was embargoed, and the malware used a back-door password to change the rotation speed of the motors, targeting a specific uranium enrichment facility. In 2011, Gary Samore, acting White House Coordinator for Arms Control and Weapons of Mass Destruction, would all but admit the attack was state-sponsored. After that, in 2012, Iranian hackers used wiper malware, destroying 35,000 Saudi Aramco computers and costing the organization tens of millions of dollars. Saipem was hit in 2018. And the Sands casino, after Sheldon Adelson said the US should nuke Iran. While not an official response, Stuxnet would hit another plant in the Hormozgan province a few months later. And it continues in some form today. Since Iran and Israel are such good friends, it likely came as a shock when Gabi Ashkenazi, head of the Israeli Defense Forces, listed Stuxnet as one of his successes. And so the age of state-sponsored asymmetric cyber conflict was born. Iran, North Korea, and others were suddenly able to punch above their weight. It was proven that what began in cyber could have real-world consequences. And very small and skilled teams could get as much done as larger, more bureaucratic organizations - much as we see small, targeted teams of developers able to compete head-on with larger software products. Why is that? Because oftentimes, a couple of engineers with deep domain knowledge are equally as impactful as larger teams with a wider skill set.
1/24/2020 • 9 minutes, 18 seconds
Windows 95
Windows 95 Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today’s episode is the third installment of our Microsoft Windows series, covering Windows 95. Windows 1 was released in 1985 and Windows 3 came along a few years later. At the time, Windows 95 was huge. I can remember non-technical people talking about how it was 32-bit. There was a huge media event. Microsoft paid massive amounts to bring the press in from all over the world. They promised a lot. They made a huge bet. And it paid off. After all this time, no single OS has come with as much fanfare or acclaim. Codenamed Chicago, development began back in 1992 alongside Cairo, which would be NT 4. New processors and memory had continued getting faster, smaller, and cheaper trending along Moore’s law. The Intel 80486 was out now, and RAM was actually in the megabytes. Microsoft required a 386 and 4 megabytes of memory but recommended one of those 486 chips and 8 megabytes of memory. And the 32-bit OS promised to unlock all that speed for a better experience that was on par, if not better, than anything on the market at the time. And it showed in gaming. Suddenly DirectX and new video options unlocked an experience that has evolved into the modern era. Protected Mode programs also had preemptive multitasking, a coup at the time. Some of those were virtual device drivers or vxds. Windows 95 kinda’ sat on top of DOS but when Windows loaded, the virtual machine manager coordinated a lot of the low-level functions of the machine for The New Shell as they called it at the time. And that new GUI was pretty fantastic. It introduced the world to that little row of icons known as the Taskbar. It introduced the Start menu, so we could find the tools we needed more easily. That Start Menu triggered an ad campaign that heavily used the Start Me Up hit from The Rolling Stones. 
Jennifer Aniston and Matthew Perry showcased a $300 million ad campaign. There were stories on the news of people waiting in lines that wrapped around computer stores. They had the Empire State Building fly the Microsoft Windows colors. They sold 4 million copies in 4 days and within a couple of years held nearly 60% of the operating system market share. This sparked a run from computer manufacturers to ship devices that had Windows 95 OEM versions pre-installed. And they earned that market share, bringing massive advancements to desktop computing. We got the Graphics Device Interface, or GDI, and user.exe, which managed the windows, menus, and buttons. The desktop metaphor was similar to the Mac, but the underpinnings had become far more advanced at the time. And the Stones weren’t the only musicians involved in Windows 95. Brian Eno composed all 6 seconds of the startup sound, which was eventually called The Microsoft Sound. It was a threaded OS. Many of the internals were still based on 16-bit Windows 3.1 executables. In fact, while many hardware components could use built-in or even custom 32-bit drivers, it could fall back to generic 16-bit drivers, making it easier to get started and use. One way it was easier to use was the Plug and Play wizards that prompted you to install those drivers when new hardware was detected. At release time the file system still used FAT16 and so was limited to 2 gigabytes in drive sizes. But you could have 255-character file names. And we got Windows Briefcase to sync files to disks so we could sneakernet them between computers. The Program Manager was no longer necessary. You could interact with the Explorer desktop and have a seamless experience interacting with files and applications. Windows 95 was made for networking. It shipped with TCP/IP, which by then was the way most people connected to the Internet. It also came with IPX/SPX so you could access the NetWare file servers it seemed everyone had at the time.
These features, and how simple they suddenly were, were as impactful to the rise of the Internet as the AOL disks floating around all over the place. Microsoft also released MSN alongside Windows 95, offering users a dial-up service to compete with those AOL disks. And Windows 95 brought us Microsoft's Internet Explorer web browser, by installing Windows 95's Plus! installation pack, which also included themes. Unix had provided support for multiple users for a while. But Windows 95 also gave us a multi-user operating system for consumers. Sure, the security paradigm wasn’t complete, but it was a start. And importantly, users started getting accustomed to working in these types of environments. Troubleshooting was a thing. Suddenly you had GUI-level control of IRQs, and Windows 95 gave us Safe Mode, making it easy to bypass all those drivers and startup items, since most boot problems were their fault and all. I remember the first time I installed 95. We didn’t have machines that could use the CD-ROM that the OS came with, so we had to use floppy disks. It took 13. We got CD-ROMs before installing 95 on more computers. It was the first time I saw people change desktop backgrounds just to mess with us. Normally there were inappropriate images involved. Windows 95 would receive a number of updates. These included OEM Service Release 2 in 1996, which brought us FAT32, which allowed for 2 terabyte partitions. It wasn’t an easy process to move from FAT16 to FAT32, so I remember a lot of people just installing another drive and mapping the D drive to it. The Internet Explorer 4 update even brought us into the Active Desktop era, giving way to many of the Bill Gates demands from his famous “The Internet Tidal Wave” memo. And the Internet certainly came. And Microsoft sat able to dominate the market for over 20 years. They built an acceptable operating system with Windows 1. They built a good operating system in Windows 3. They built a great operating system in Windows 95.
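Those 2 gigabyte and 2 terabyte ceilings aren't arbitrary, and the arithmetic behind them fits in a few lines. Here's a back-of-envelope sketch, under the common assumptions that FAT16 tops out with 16-bit cluster numbers at the era's 32 KiB maximum cluster size, and that the oft-quoted FAT32 ceiling comes from a 32-bit sector count over standard 512-byte sectors:

```python
# FAT16 addresses clusters with a 16-bit number. At the maximum
# 32 KiB cluster size of the day, that caps a volume at 2 GiB.
fat16_max = 2**16 * 32 * 1024      # clusters * bytes per cluster
print(fat16_max // 2**30, "GiB")   # 2 GiB

# The 2 TiB ceiling usually quoted for FAT32 volumes comes from
# the partition table: a 32-bit count of 512-byte sectors.
fat32_max = 2**32 * 512
print(fat32_max // 2**40, "TiB")   # 2 TiB
```

It also explains the trade-off people grumbled about: big FAT16 volumes forced big clusters, so small files wasted a lot of slack space, which FAT32's smaller clusters fixed.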
The competition had been fierce. The Mac might have in some ways been better and in many ways been the inspiration. But Microsoft out-maneuvered Apple. OS/2 3.0, or “OS/2 Warp,” might have been a great OS. But Microsoft out-marketed the company, sending them into a tailspin that resulted in layoffs. Hardware had to work with the new Microsoft Plug and Play paradigm, or it would die a fiery death in the market. Microsoft had paid careful attention in building DOS, and all the other DOS makers were soon out of business; Gary Kildall of CP/M had descended into alcoholism and by then was dead. Everyone standing in Microsoft’s way had been defeated. Not defeated, crushed, destroyed. If you’ve played Civilization, it’s terribly difficult to win if you don’t destroy at least a couple of the other empires. And for a long time, Microsoft was able to give us a number of great innovations and push the market forward. This is all as impressive as it is sad. Following a lull in innovation, Microsoft left the door to the operating system market open for a resurgence of Apple and the new player, Google. They built sub-par mobile operating systems that just didn’t resonate. And the market was ready for a shift, anyway. And they got it. And so today, we have competition again, and so Microsoft has become innovative again. Their APIs are amongst the best, in my opinion. I’ve worked with developers who built me a Graph API endpoint and shipped it over a weekend. So they’re also inspired. Maybe market domination is good for a little while, to solidify the market. But as we’ve seen time and time again, markets need diversity. Otherwise vendors get complacent. And so think about this: what vendors are overly dominant and complacent today? Is it time, maybe, that you disrupt them? If so, count me as an ally! So thank you for joining us for this episode, and thank you for your innovations. I hope I get to do an episode on them soon! Have a great day.
1/21/2020 • 11 minutes, 32 seconds
General Magic Was Almost Magical
General Magic Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today’s episode is on a little-known company called General Magic, who certainly had a substantial impact on the modern, mobile age of computing. Imagine if you had some of the best and brightest people in the world. And imagine if they were inspired by a revolutionary idea. The Mac changed the way people thought about computers when it was released in 1984. And very quickly thereafter, that team had left Apple. What happened to them? They got depressed and many moved on. The Personal Computer Revolution was upon us. And people who have changed the world can be hard to inspire. Especially at a big company like what Apple was becoming, where they can easily lose the ability to innovate. Marc Porat had an idea. The mobile device was going to be the next big thing. The next wave. I mean, Steve Jobs had talked about mobile computing all the way back in ’83. And it had been researched at PARC before that, and philosophically the computer science research community had actually conceptualized ubiquitous computing. But Porat knew they couldn’t build something at Apple. So in 1990 John Sculley, then CEO at Apple, worked with Porat and they got the Apple board of directors to invest in the idea, which they built a company for, called General Magic. He kept his ideas in a book called Pocket Crystal. Two of the most important members of the original Mac team, Bill Atkinson and Andy Hertzfeld, were inspired by the vision and joined on as well. Now legends, everyone wanted to work with them. It was an immediate draw for the best and brightest in the world. Megan Smith, Dan Winkler, Amy Lindbergh, Joanna Hoffman, Scott Knaster, Darin Adler, Kevin Lynch, big names in software. They were ready to change the world. Again. They would build a small computer into a phone. A computer...
in your pocket. It would be described as a telephone, a fax, and a computer. They went to Fry’s. A lot. USB didn’t exist yet. So they made it. ARPANET was a known quantity, but the Internet hadn’t been born yet. Still, a pocket computer with the notes from your refrigerator, files from your computer, contacts, schedules, calculators. They had a vision. They wanted expressive icons, so they invented emoticons. And animated them. There was no data network to connect computers on phones with. So they reached out to AT&T and, go figure, they signed on. Sony, Philips, Motorola, and Mitsubishi gave them 6 million each. And they created an alliance of partners. Frank Canova built a device he showed off as “Angler” at COMDEX in 1992. Mobile devices were on the way. By 1993, the Apple Board of Directors was pressuring Sculley for the next Mac-type of visionary idea. So Apple shipped the Newton in 1993, with the General Magic team feeling betrayed by Sculley. And General Magic got shoved out of the nest of stealth mode. After a great announcement they got a lot of press. They went public without having a product. The devices were trying to do a lot. Maybe too much. The devices were slow. Some aspects of the devices worked; for other aspects, they faked demos. The web showed up and they didn’t embrace it. In fact, Pierre Omidyar with AuctionWeb was on the team. He thought the web was way cooler than the mobile device, but the name needed work, so it became eBay. The team didn’t embrace management or working together. They weren’t finishing projects. They were scope-creeping the projects. The delays started. Some of the team had missed deadlines for the Mac and that worked out. But other devices shipped. After 4 years, they shipped the Sony Magic Link in 1994. The devices were $800. People weren’t ready to be connected all the time. The network was buggy. They sold fewer than 3,000. The stock tumbled, and by ’95 the Internet miss was huge. They were right. The future was in mobile computing.
They needed the markets to be patient. They weren’t. They had inspired a revolution in computing, and it slipped through their fingers. AT&T killed the devices, Marc was ousted as CEO, and after massive losses, they laid off nearly a quarter of the team and ultimately filed Chapter 11. They weren’t the only ones. Sculley had invested so much into the Newton that he got sacked from Apple. But the vision and the press. They inspired a wave of technology. Rising like a phoenix from the post-PC, ubiquitous ashes, CDMA would slowly come down in cost over the next decade, and connectivity would evolve through 3G and the upcoming 5G revolution. And out of their innovations came the Simon Personal Communicator, sold by BellSouth and manufactured as the IBM Simon by Mitsubishi. The Palm, Symbian, and Pocket PC, or Windows CE, would come out shortly thereafter and rise in popularity over the next few years. Tony Fadell repeated the exercise when helping invent the iPod as well, and Steve Jobs even mentioned he had considered some of the tech from Magic Cap. Fadell would later found Nest. And Andy Rubin, one of the creators of Android, also came from General Magic. Next time you read about the fact that Samsung and Apple combined control 98% of the mobile market, or that Android overtook Windows for market share by double digits, you can thank General Magic for at least part of the education that shaped those. The alumni include the head of speech recognition from Google, VPs from Google, Samsung, Apple, BlackBerry, and eBay, and the CTOs of Twitter, LinkedIn, Adobe, and the United States. Alumni also include the lead engineers of the Safari browser and AI at Apple, cofounders of WebTV, leaders from Pinterest, and the creator of Dreamweaver. And now there’s a documentary about their journey called, appropriately, General Magic. Their work and vision inspired the mobility industry.
They touch nearly every aspect of mobile devices today and we owe them for bringing us forward into one of the most transparent and connected eras of humanity. Next time you see a racist slur recorded from a cell phone, next time a political gaffe goes viral, next time the black community finally shows proof of the police shootings they’ve complained about for decades, next time political dissenters show proof of mass killings, next time abuse at the hands of sports coaches is caught, and next time all the other horrible injustices of humanity are forced upon us, thank them. Just as I owe you my thanks. I am so lucky you chose to listen to this episode of the History of Computing Podcast. Thank you so much for joining me. Have a great day!
1/18/2020 • 10 minutes, 50 seconds
Windows 3.x
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at Windows 3.x. In our previous episode, we covered Windows 1.0. Released in 1985, it was cute. Windows 2 came in 1987 and then Windows 3 came in 1990. While a war of GUIs had been predicted, it was clear by 1990 that Microsoft was winning this war. Windows 3.0 sold 10 million licenses. It was 5 megabytes fully installed and came on floppies. The crazy thing about Windows 3 is that it wasn’t really supposed to happen. IBM had emerged as a juggernaut in the PC industry, largely on the back of Microsoft DOS. Windows 1 and 2 were fine, but IBM, seeing that Microsoft was getting too powerful, would not run Windows on their computers. Instead, they began work with Microsoft on a new operating system called OS/2, which was initially released in 1987. But David Weise from the Windows team at Microsoft wanted to reboot the Windows project. He brought in Murray Sargent and the two started work in 1988. They added a debugger, Microsoft Word, Microsoft Excel, and Microsoft PowerPoint, and I’m pretty sure everyone knew they were on to something big. IBM found out, and Microsoft placated them by saying it would kill Windows after they’d spent all this money on it. Lies. You could tell by the way they upgraded the UI, by how they made memory work so much better, and by the massive improvements to multitasking. They added File Manager, which would later evolve into File Explorer. They added the Control Panel, which lives on in the modern era of Windows, and they made it look more like the one in the Mac OS at the time. They added the Program Manager (or progman.exe), parts of which would go on to Windows Explorer and other parts of which would form the Start Menu in the future. Program Manager itself survived until XP Service Pack 2. 
They brought us up to 16 simultaneous colors and added support for graphics cards that could give us 256 colors. Paint was upgraded to Paintbrush and they outsourced some of the graphics for the famed Microsoft Solitaire to Susan Kare. They also added macros using a program called Recorder - something Apple had released the year before with MacroMaker. They raised the price from $100 to $149.95. And they sold 4 million copies in the first year, a huge success at the time. They added a protected mode for applications, which had supposedly been a huge reason IBM insisted on working on OS/2. One result of all of this was that IBM and Microsoft would stop developing together, and Microsoft would take their own branch forward as Windows NT, which brought a new 32-bit API. In 1992 they would release Windows 3.1 and Windows for Workgroups 3.1, which would sell another 3 million copies. This was the first time I took Windows seriously and it was a great release. They replaced Reversi with the now-iconic Minesweeper. They added menu customization. They removed Real Mode. They added support to launch programs using command.com. They brought in TrueType fonts and added Arial, Courier New, and Times New Roman. They added multimedia support. And amongst the most important additions, they added the Windows Registry, which still lives on today. That was faster than combing through a lot of .ini files for settings. The Workgroups version also added SMB file sharing and supported NetBIOS and IPX networking. The age of the Local Area Network, or LAN, was upon us. You could even install Winsock to get the weird TCP/IP protocol to work on Windows. Oh, and remember that 32-bit API? You could install the Win32s add-on to get access to that. And because the browser wars would be starting up, by 1995 you could install Internet Explorer on 3.1. I remember 3.11 machines in the labs I managed in college and having to go computer to computer installing the browser on each. 
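As an aside, those .ini files the Registry gradually replaced are a format you can still parse today. Here’s a minimal sketch in Python using the standard configparser module - the file contents, section names, and keys below are invented for illustration, not taken from an actual WIN.INI:

```python
# Sketch of the flat .ini settings files Windows 3.x relied on.
# The sections and keys here are made up for illustration.
import configparser

ini_text = """
[Desktop]
Wallpaper=honey.bmp
TileWallpaper=1

[Fonts]
Arial=ARIAL.TTF
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

# Every lookup means scanning a text file section by section; the Registry
# consolidated many of these scattered files into one indexed database.
print(config["Desktop"]["Wallpaper"])                 # honey.bmp
print(config.getboolean("Desktop", "TileWallpaper"))  # True
```

The format was simple and human-editable, which was part of its charm - and part of why settings sprawled across dozens of files until the Registry centralized them.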
And installing Mosaic on the Macs. And later installing Netscape on both. I seem to remember that we had a few machines that ran Windows on top of the CP/M successor DR-DOS. Nothing ever seemed to work right for them, especially the Internets. So… where am I going with this episode? Windows 3 set Microsoft up to finally destroy CP/M, protect their market share from IBM’s OS/2, and effectively take over the operating system, allowing them to focus on adjacencies like Internet and productivity tools. This ultimately made Bill Gates the richest man in business and set up a massive rise in personal computing. By the time Windows 95 was announced, enough demand had been generated to sell 40 million copies. Compaq, Dell, Gateway, HP, and many others had cannibalized the IBM desktop business. Intel had AMD nipping at their heels. Motherboards, power supplies, and other components had become commodities. But somehow, Microsoft had gone from being the cutesy little maker of BASIC to owning the market share for operating systems with NT, Windows 95, 98, Millennium, 2000, XP, 7, 8, and 10, and it wasn’t until Google made Android and ChromeOS that anyone seriously challenged that dominance. They did it, not because they were technologically the best solution available. Although arguably the APIs in early Windows were better than any other available solution. And developing Windows NT alongside 95 and onward, once they saw there would be a need for a future OS, was a masterstroke. There was a lot of subterfuge and guile. And there were a lot of people burned during the development. But there’s a distinct chance that the dominance of a single operating system gave humanity one OS to focus on and care about, and an explosion in the number of software titles. 
Once that became a problem, and was stifling innovation, Steve Jobs was back at Apple, Android was on the rise, and Linux was always an alternative for the hacker types - and given a good market opportunity, it’s likely that someone could have built a great windowing system on top of it. Oh wait, they did. Many times. So whether we’re Apple die-hards, Linux blow-hards, crusty old Unix greybeards, or maybe hanging on to our silly CP/M machines to write scripts on, we still owe Microsoft a big thanks. Without their innovations, the business world might have fragmented so much on the operating system side that we wouldn’t have gotten the productivity levels we needed out of apps. And so Windows 95 replaced Windows 3, and Windows 3 rode off into the sunset. But not before leaving behind a legacy as the first truly dominant OS. Thanks for everything, Microsoft, the good and the bad. And thanks to you, sweet listeners. It’s been a blast. You’re the best. Unlike Windows 1. Till next time, have a great day!
1/15/2020 • 9 minutes, 39 seconds
Microsoft Windows 1.0
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at one of the more underwhelming operating systems released: Windows 1.0. Doug Engelbart demonstrated the NLS, or oN-Line System, in 1968. It was expensive to build, practically impossible to replicate, and was only made possible by NASA and ARPA grants. But it introduced the computer science research community to what would become modern video monitors, windowing systems, hypertext, and the mouse. Modern iterations of these are still with us today, as is a much more matured desktop metaphor. Some of his research team ended up at Xerox PARC and the Xerox Alto was released in 1973, building on many of the concepts and continuing to improve upon them. They sold about 2,000 Altos for around $32,000 each. As the components came down in price, Xerox tried to go a bit more mass market with the Xerox Star in 1981. They sold about 25,000 for about half the price. The windowing graphics got better, the number of users was growing, the number of developers was growing, and new options for components were showing up all over the place. Given that Xerox was a printing company, the desktop metaphor continued to evolve. Apple released the Lisa in 1983. They sold 10,000 for about $10,000 each. Again, the windowing system and desktop metaphor continued on, and Apple quickly released the iconic Mac shortly thereafter, introducing much better windowing and a fully matured desktop metaphor, becoming the first mass market computer shipped with a graphical user interface. It was revolutionary and they sold 280,000 in the first year. The proliferation of computers in our daily lives and the impact on the economy was ready for the j-curve. And while IBM had shown up to compete in the PC market, they had just been leapfrogged by Apple. 
Jobs would be forced out of Apple the following year, though. By 1985, Microsoft had been making software for a long time. They had started out with BASIC for the Altair and had diversified, bringing BASIC to the Mac and releasing a DOS that could run on a number of platforms. And like many of those early software companies, it could have ended there. In a masterful stroke of business, Bill Gates ended up with their software on the IBM PCs that Apple had just basically made antiques - and they’d made plenty of cash off of doing so. But then Gates saw Visi On at COMDEX, and it’s no surprise that the Microsoft version of a graphical user interface would look a bit like Visi, a bit like what Microsoft had seen from Xerox PARC on a visit in 1983, and of course, with elements brought in from the excellent work the original Mac team had done. And not to take anything away from early Microsoft developers - they added many of their own innovations as well. Ultimately though, it was a 16-bit shell that allowed for multitasking and sat on top of Microsoft DOS. Something that would continue on until the NT lineage of operating systems fully supplanted the original Windows line, which ended with Millennium Edition. Windows 1.0 was definitely a first try. IBM TopView had shipped that year as well. I’ve always considered it more of a windowing system, but it allowed multitasking and was object-oriented. It really looked more like a DOS menu system. But the Graphics Environment Manager, or GEM, had direct connections to Xerox PARC through Lee Lorenzen. It’s hard to imagine, but at the time CP/M had been the dominant operating system, and GEM could sit on top of it or MS-DOS and was mostly found on Atari computers. That first public release of Windows was actually 1.01, and 1.02 would come 6 months later, adding internationalization, with 1.03 continuing that trend. 1.04 would come in 1987, adding support for VGA graphics and a PS/2 mouse. 
Windows 1 came with many of the same programs other vendors supplied, including a calculator, a clipboard viewer, a calendar, a pad for writing that still exists called Notepad, a painting tool, and a game that went by its original name of Reversi, but which we now call Othello. One important concept is that Windows was object-oriented. As with any large software project, it wouldn’t have been able to last as long as it did if it hadn’t been. One simplistic explanation for this paradigm is that it had an API, and there was a front-end that talked to the kernel through those APIs. Microsoft hadn’t been first to the party, and when they got to the party they certainly weren’t the prettiest. But because the Mac OS wasn’t just a front-end that made calls to the back-end, Apple would be slow to add multitasking support, which came in their System 5, in 1987. And they would be slow to adopt new technology thereafter, having to bring Steve Jobs back to Apple because they had no operating system of the future, after failed projects to build one. Windows 1.0 had executable files (or exe files) that could only be run in the windowing system. It had virtual memory. It had device drivers, so developers could write and compile binary programs that could communicate with the OS APIs, including with device drivers. One big difference - Bill Atkinson and Andy Hertzfeld spent a lot of time on frame buffers and moving pixels so the Mac could have overlapping windows. The way Windows handled how a window appeared was defined in .ini (pronounced like “any”) files, and that kind of thing couldn’t be done in a window manager without clipping, or leaving artifacts behind. And so it was that, by the time I was in college, I was taught by a professor that Microsoft had stolen the GUI concept from Apple. But it was an evolution. Sure, Apple took it to the masses, but before that, Xerox had borrowed parts from NLS, and NLS had borrowed pointing devices from Whirlwind. 
And between Xerox and Microsoft, there had been IBM and GEM. Each evolved and added their own innovations. In fact, many of the actual developers hopped from company to company, spreading ideas and philosophies as they went. But Windows had shipped. And when Jobs called Bill Gates down to Cupertino, shouting that Gates had ripped off Apple, Gates responded with one of my favorite quotes in the history of computing: "I think it's more like we both had this rich neighbor named Xerox and I broke into his house to steal the TV set and found out that you had already stolen it." The thing I’ve always thought was missing from that Bill Gates quote is that Xerox had a rich neighbor they stole the TV from first, called ARPA. And the US Government was cool with it - that research was one of the main drivers of decades of crazy levels of prosperity, filling their coffers with tax revenues. And so, the next version of Windows, Windows 2.0, would come in 1987. But Windows 1.0 would be supported by Microsoft for 16 years. No other operating system has been officially supported for so long. And by 1988 it was clear that Microsoft was going to win this fight. Apple filed a lawsuit claiming that Microsoft had borrowed a bit too much of their GUI. Apple had licensed some of the GUI elements to Microsoft, and Apple identified over 200 things, some big, like title bars, that made up a copyrightable work. That desktop metaphor that Susan Kare and others on the original Mac team had painstakingly developed. Well, it turns out that it lives on in every OS, because Judge Vaughn Walker threw out the lawsuit, a ruling later upheld by the Ninth Circuit. And Microsoft would end up releasing Windows 3 in 1990, shipping on practically every PC built since. And so I’ll leave this story here. But we’ll do a dedicated episode for Windows 3 because it was that important. Thank you to all of the innovators who brought these tools to market and ultimately made our lives better. 
Each left their mark with increasingly small and useful enhancements to the original. We owe them so much no matter the platform we prefer. And thank you, listeners, for tuning in for this episode of the History of Computing Podcast. We are so lucky to have you.
1/12/2020 • 11 minutes, 45 seconds
The Monk And The Riddle
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to review a fantastic little book called “The Monk and The Riddle” by virtual CEO Randy Komisar. Like a lot of authors, I’m a reader. There have always been a lot of technical books in my house. In fact I had to downsize at some point because they were getting out of control. So I made a dedicated Instagram account called deadtechbooks to post photos of books. Separation anxiety is for reals. But I’ve also read a lot of books about startups and venture capital. And it never ceases to amaze me just how big a jerk most of the authors are. There are the super jerks who just come right out with it and let the reader know they invested in Google or Amazon and have billions to throw away. And that they’re so god-like that should you pitch to them, you’ll be struck down by some kind of Jesus fire. You’re left wondering why they bothered to write a book. But then you realize it’s an elitist business card. Then there are the overly eloquent nerdy jerks who let the reader know how rich they are by hiring a ghostwriter with such great prose that the biography pretending to be an autobiography would likely be taught in literature courses along with their fellow literary greats, had they not chased a big old vapid paycheck. You can feel the disdain they have for the giant douchenozzle paying their check oozing between the book bindings. You can empathize with the ghostwriter, given the landmass of ego they distilled into perfectly digestible 7th grade prose. Nerdy founders who go down this route likely need to partake in the spirits just as badly, given many will never have a good enough idea to found another company that actually bothers to launch a product. Then there’s the opposite: the autobiography masquerading as a biography. 
You can feel the subject become the author. Sometimes they commission the book. Other times the author becomes enamored with the subject of the book and all objectivity is lost. There may be thinly veiled attempts at distance, but founders and investors can be seductive to a storyteller, whether through accomplishments or wealth. Don’t get me wrong, love stories are great; they just belong in the romance section of the bookstore. Then there are the startup guides. Be very careful, because one size doesn’t fit all. Saying you have to do things using a formula is as dangerous as it is delusional. There are no best practices when starting a company, only worst practices. Some use such arcane tactics that many a buyer for a new startup might consider them offensive. To be clear though, writing potential customers hand-written thank you cards is quaint and totally legit. Then there are the books that focus on the facts. But without opinion or feelings they read more like code. The worst of the bunch are the humble-brags. These self-effacing tomes make sure you know just how smart the author is. Their wealth or brilliant ideas tell you exactly what you need to know. Or they would, if the author didn’t tell you it was just a matter of timing, right before taking you through all of their business masterstrokes in a wild stimulation of... themselves. I’ve learned to read between the archetypes. Sometimes I get lost and later realize what happened, but I frequently start a book just a bit leery. Randy Komisar is none of these. He began his professional life as a lawyer and then worked on the deal that put Pixar in the hands of Steve Jobs. From there, he landed at Apple and worked to license the Mac operating system. When that fell through he ended up co-founding Claris with Bill Campbell. He tells the reader some of his failures along the way, but never with an air of the humble-brag. He speaks of Campbell and others with reverence. 
Not as heroes but as mentors; not with adoration but with warmth. He then goes through how he landed in what we now call venture capital. He addresses his good fortune and privilege, and mixes pronouns in a way that doesn’t feel the least bit contrived. He mentions his time and involvement in the early days of WebTV and TiVo in the book, and explains the differences between a few types of startups in a way that’s easy to understand. Komisar warns against being bigger, faster, and cheaper, while telling the reader that sometimes that’s actually the right way to launch a company. He doesn’t bother to tell the reader too much of his own story until pretty late in the book. Instead he focuses on a startup who is pitching him. He explores motivations, and the type that align to his world view. He does so with a genuine desire to help a person he doesn’t initially like. Why? Because he sees something in the big idea the guy has. But through the book the young founder pivots into a content portal over time, with no clear route to make money but instead a focus on helping people. Throughout the journey he drops insights that can help a reader navigate that world, but not overtly. The book opens with a story of meeting the Dalai Lama. It’s not a braggadocio opening. Instead it’s a “this isn’t going to be your typical startup book” opening. By the end you’re left wondering where the journey has taken you, and you’ve almost forgotten where it began. The monk had asked the author why people aren’t more compassionate to one another. By the time the book is done, the author has become the monk and helped the startup reorient around compassion. And the author has shown compassion to the young founders by looking past their initial presentations full of charts and graphs, helping them recapture the why behind their idea and moving away from big, fast, and cheap. 
So next time you’re out trying to sell your ideas, dig below the surface. Revisit your passion. Let your motivation show and you just might find a champion who can help you find a path to something that you maybe didn’t see in your own journey. This book won’t be for everyone. But I’m lucky it landed in my lap at a time when I was able to accept the message. Just as I’m lucky you chose to listen to this episode of the history of computing podcast. Thank you so much for joining me. Have a great day!
1/9/2020 • 9 minutes, 45 seconds
The App Store
Picture this. It’s 1983. The International Design Conference in Aspen has a special speaker: Steve Jobs from Apple. He’s giving a talk called “The Future Isn’t What It Used To Be.” He has a scraggly beard and really, really wants to recruit some industrial designers. In this talk, he talked about software. He talked about dealers. After watching the rise of small computer stores across the country and seeing them selling, and frequently helping people pirate, apps for the iconic Apple II, Jobs predicted that the dealers were adept at selling computers, but not software. There weren’t categories of software yet. But there were radio stations and television programs. And there were record stores. And he predicted we would transmit software electronically over the phone line. And that we’d pay for it with a credit card if we liked using it. If you haven’t listened to the talk, it’s fascinating. https://www.youtube.com/watch?v=KWwLJ_6BuJA In that talk, he parlayed Alan Kay’s research into the Dynabook at Xerox PARC to talk about what would later be called tablet computers and ebooks. Jobs thought Apple would ship one in the 80s. And they did dabble with the Newton MessagePad in 1993, so he wasn’t too far off. I guess the writers from Inspector Gadget were tuned into the same frequency, as they gave Penny a book computer in 1983. Watching her use it with her watch changed my life. Or maybe they’d used GameLine, a service that let Atari 2600 owners rent video games using a cartridge with a phone connection. Either way, it took a while, but Jobs would eventually ship both the App Store and the iPad to the masses. 
He alluded to the rise of the local area network, email, the importance of design in computers, voice recognition, maps on devices (which came true with Google and then Apple Maps), maps with photos, DVDs (which he called video disks), the rise of object-oriented programming, and the ability to communicate with a portable device over a radio link. So flash forward to 1993. 10 years after that brilliant speech. Jobs is shown the Electronic AppWrapper at NeXTWORLD, built by Paget Press. Similar to the Whole Earth Catalog, EAW had begun life as a paper catalog of all software available for the NeXT computers, but evolved into a CD-based tool and could later transmit software over the Internet. Social, legal, and logistical issues needed to be worked out. They built digital rights management. They would win the Content and Information Best of Breed award, and there are even developers from that era still designing software in the modern era. That same year, we got the Debian package manager, with rpm to follow. Most of this software was free and open source, but suddenly you could build a binary package and distribute it. By 1995 we had CPAN, the Comprehensive Perl Archive Network. An important repository for anyone that’s worked with Linux. 1998 saw the rise of apt-get. But it was 10 years after Jobs saw the Electronic AppWrapper and 20 years after he had publicly discussed what we now call an App Store that Apple launched the iTunes Store in 2003, so people could buy songs to transfer from their Mac to their iPod, which had been released in 2001. Suddenly you could buy music like you used to in a record store, but on the Internet. Now, the first online repository of songs you could download had come about back in ’93, and the first store to sell songs had come along in ’98, selling MP3 files. But the iTunes Store was primarily there to facilitate those objects going to a mobile device. And so 2007 comes along, and Jobs announces the first iPhone at Macworld. 
A year later, Apple would release the App Store, the day before the iPhone 3G dropped, bringing apps to phones wirelessly in 2008, 25 years after Jobs had predicted it in 1983. It began with 500 apps. A few months later the Google Play store would ship as well, although it was originally called the Android Market. It’s been a meteoric rise. 10 years later, in 2018, app revenue on the iOS App Store would hit 46.6 billion dollars. And revenue on the Google Play store would hit 24.8 billion, with a combined haul of between $71 billion and $101 billion depending on where you look. And in 2019 we saw a continued double-digit rise in revenues, likely topping $120 billion. And a triple-digit rise in China. The global spend is expected to double by 2023, with Africa and South America expected to see a 400% rise in sales in that same time frame. There used to be shelves of software in boxes at places like Circuit City and Best Buy. The first piece of software I ever bought was Civilization. Those boxes at big box stores are mostly gone now. Kinda like how I bought Civilization on the App Store and have never looked back. App developers used to sell a copy of a game, just like that purchase. But game makers don’t just make money off of purchases any more. Now they make money off of in-app advertising and in-app purchases, many of which are for subscriptions. You can even buy a subscription for streaming media to your devices, obviating the need for buying music and sometimes video content. Everyone seems to be chasing that sweet, sweet monthly recurring revenue now. As with selling devices, Apple sells fewer units but makes much, much more. Software development started democratically, with anyone that could learn a little BASIC being able to write a tool or game that could make them millions. That dropped off for a while as software distribution channels matured, but was again democratized with the release of the App Store. 
Operating systems, once distributed on floppies, have even moved over to the App Store - and with Apple and Google, the net result is that they’re now free. And you can even buy physical things using in-app purchases, Apple Pay through an Apple credit card, and digital currency, closing the loop and fully obfuscating the virtual and the physical. And today any company looking to become a standard, or what we like to call in software, a platform, will have an App Store. Most follow the same type of release strategy. They begin with a catalog, move to facilitating the transactions, add a fee to do so, and ultimately facilitate subscription services. If a strategy ain’t broke, don’t fix it. The innovations are countless. Amazon builds services for app developers and sells them a tie to wear at their pitches to angels and VCs. Since 1983, the economy has moved on from paying cash for a box of software. And we’re able to conceptualize disrupting just about anything thanks to the innovations that sprang forth in that time when those early PCs were transitioning into the PC revolution. Maybe it was inevitable without Steve Jobs right in the thick of it. Technological determinism is impossible to quantify. Either way, app stores and the resultant business models have made our lives better. And for that we owe Apple and all of the other organizations and individuals that helped make them happen our gratitude. Just as I owe you mine for tuning in, to yet another episode, of the History of Computing Podcast. We are so lucky to have you. Have a great day!
1/6/2020 • 11 minutes, 17 seconds
IETF: Guardians of the Internet
Today we’re going to look at what it really means to be a standard on the Internet, and at the IETF, the governing body that sets those standards. When you open a web browser and visit a page on the Internet, there are rules that govern how that page is interpreted. When traffic sent from your computer over the Internet gets broken into packets and encapsulated, other brands of devices can interpret the traffic and react, provided that the device is compliant in how it handles the protocol being used. Those rules are set in what are known as RFCs. It’s a wild concept. You write rules down and then everyone follows them. Well, in theory. It doesn’t always work out that way, but by and large the industry that sprang up around the Internet has been pretty good at following the guidelines defined in RFCs. The Request for Comments process gives the Internet industry an opportunity to collaborate in a non-competitive environment. We engineers often compete on engineering topics like what’s more efficient or stable, and so we’re just as likely to disagree with people at our own organization as we are to disagree with people at another company. But if we can all meet and hash out our differences, we’re able to get emerging or maturing technology standards defined in great detail, leaving as little room for error in implementing the tech as possible. This standardization process can be lengthy and slows down innovation, but it ends up creating more innovation and adoption once processes and technologies become standardized. The concept of standardizing advancements in technologies is nothing new. Alexander Graham Bell saw this when he helped start the American Institute of Electrical Engineers in 1884 to help standardize the new electrical inventions coming out of Bell Labs and others. That organization would merge with the Institute of Radio Engineers in 1963 to form the IEEE, which now boasts half a million members spread throughout nearly every company in the world. 
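That encapsulation idea - each layer wrapping the data from the layer above with its own header, in a format every vendor agrees on - can be sketched in a few lines of Python. To be clear, the two-field header below is invented purely for illustration; real header layouts are specified bit by bit in RFCs like RFC 791 (IP) and RFC 793 (TCP):

```python
# Sketch of protocol encapsulation: each layer prepends its own header to
# the payload handed down from the layer above. The header layout here
# (2-byte protocol id + 2-byte length) is made up for illustration.
import struct

def encapsulate(payload, protocol_id):
    # Big-endian ("network byte order"), two unsigned 16-bit fields.
    header = struct.pack("!HH", protocol_id, len(payload))
    return header + payload

def decapsulate(packet):
    protocol_id, length = struct.unpack("!HH", packet[:4])
    return protocol_id, packet[4:4 + length]

# An "application" message wrapped by a "transport" layer, then a "network" layer.
app = b"GET /"
transport = encapsulate(app, 6)          # 6 is TCP's real IP protocol number
network = encapsulate(transport, 0x0800) # 0x0800 is the EtherType for IP

# Any vendor's device can peel the layers back, because the format is agreed upon.
proto, inner = decapsulate(network)
print(proto == 0x0800, decapsulate(inner)[1])  # True b'GET /'
```

Because both sides agree on the byte layout, a router from one vendor can unwrap what a laptop from another vendor wrapped - which is exactly the interoperability the RFC process exists to guarantee.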
And the International Organization for Standardization was founded in 1947, as a merger of sorts between the International Federation of the National Standardizing Associations, which had been founded in 1928, and the newly formed United Nations Standards Coordinating Committee. Based in Geneva, they’ve now set over 20,000 standards across a number of industries. I’ll over-simplify this next piece and revisit it in a dedicated episode. The Internet began life as a number of US government funded research projects inspired by J.C.R. Licklider around 1962, out of ARPA’s Information Processing Techniques Office, or IPTO. The packet switching network would evolve into ARPANET based on a number of projects he and his successor Bob Taylor at IPTO would fund straight out of the Pentagon. It took a few years, but eventually they brought in Larry Roberts, and by late 1968 they’d awarded an RFQ to a company called Bolt Beranek and Newman (BBN) to build Interface Message Processors, or IMPs, to connect a number of sites and route traffic. The first one went online at UCLA in 1969, with additional sites coming on frequently over the next few years. Given that UCLA was the first site to come online, Steve Crocker started organizing notes about protocols in what they called RFCs, or Requests for Comments. That series of notes would then be managed by Jon Postel until his death 28 years later. They were also funding a number of projects to build tools to enable the sharing of data, like file sharing, and by 1971 we also had email. Bob Kahn was brought in in 1972, and he would team up with Vinton Cerf from Stanford, who came up with encapsulation, and together they would define TCP/IP. ARPA became DARPA in 1972, and by 1982, TCP/IP became the standard for the US DOD; in 1983, ARPANET moved over to TCP/IP. NSFNET would be launched by the National Science Foundation in 1986. 
And so it was in 1986 when The Internet Engineering Task Force, or IETF, was formed to do something similar to what the IEEE and ISO had done before them. By now, the inventors, coders, engineers, computer scientists, and thinkers had seen other standards organizations - they were able to take much of what worked and what didn’t, and they were able to start defining standards. They wanted an open architecture. The first meeting was attended by 21 researchers who were all funded by the US government. By the fourth meeting later that year they were inviting people from outside the hallowed halls of the research community. And it grew, with 4 meetings a year that continue on to today, open to anyone. Because of the rigor practiced by Postel and early Internet pioneers, you can still read those notes from the working groups and RFCs from the 60s, 70s, and on. The RFCs were funded by DARPA grants until 1998 and then moved to the Internet Society, which runs the IETF, and the RFCs are discussed and sometimes ratified at those IETF meetings. You can dig into those RFCs and find the origins and specs for NTP, SMTP, POP, IMAP, TCP/IP, DNS, BGP, CardDAV and pretty much anything you can think of that’s become an Internet Standard. A lot of companies claim to be “the” standard in something. And if they wrote the RFC, I might agree with them. At those first dozen IETF meetings, we got up to about 120 people showing up. It grew with the advancements in routing, application protocols, other networks, and file standards, peaking in Y2K with 2,810 attendees. Now, it averages around 1,200. It’s also now split into a number of working groups with steering committees. While the IETF was initially funded by the US government, it’s now funded by the Public Interest Registry, or PIR, which was sold to Ethos Capital in November of 2019. Here’s the thing about the Internet Society and the IETF. They’re mostly researchers. 
They have stayed true to the mission since they took over from the Pentagon: a decentralized Internet. The IETF is full of super-smart people who are always trying to be independent and non-partisan. That independence and non-partisanship is the true Internet, the reason that we can type www.google.com and have a page load, and work, no matter the browser. The reason mail can flow if you know an email address. The reason the Internet continues to grow and prosper and, for better or worse, take over our lives. The RFCs they maintain, the standards they set, and everything else they do is not easy work. They iterate and often don’t get credit individually for their work, other than a first initial and a last name as the authors of papers. And so thank you to the IETF and the men and women who put themselves out there through the lens of the standards they write. Without you, none of this would work nearly as well as it all does. And thank you, listeners, for tuning in for this episode of the History of Computing Podcast. We are so lucky to have you.
1/3/2020 • 9 minutes, 13 seconds
TiVo
TiVo is a computer. To understand the history, let’s hop in our trusty time machine. It’s 1997. Britain gives Hong Kong back to China after 156 years of colonial rule. The Mars Pathfinder touches down on Mars. The OJ Simpson trials are behind us, but the civil suit begins. Lonely Scottish scientists clone a sheep and name it Dolly. The first Harry Potter book is published. Titanic is released. Tony Blair is elected Prime Minister of the United Kingdom. Hanson sang MMMBop. And Pokemon is released. No, not Pokemon Go, but Pokemon. The world was changing. The Notorious BIG was gunned down not far from where I was living at the time. Blackstreet released No Diggity. Third Eye Blind led a Semi-Charmed life and poppy post-grunge killed grunge. And television. Holy buckets. Friends, Seinfeld, X Files, ER, Buffy the Vampire Slayer, Frasier, King of the Hill, Dharma and Greg, South Park, The Simpsons, Stargate, Home Improvement, Daria, Law and Order, Oz, Roseanne, The View, The Drew Carey Show, Family Matters, Power Rangers, JAG, Tenacious D, Lois and Clark, Spawn. Mosaic, the first mainstream web browser, had been released a few years earlier, and Sergey Brin and Larry Page registered a weird domain name called Google because BackRub just seemed kinda weird. The facebook.com, craigslist.org, and netflix.com domains were also purchased. Bill Gates became the richest business nerd in the world. DVDs were released. The hair was big. But commercials were about to become a thing of the past. So were cords. 802.11, also known as Wi-Fi, became a standard. Microsoft bought WebTV, but something else was about to happen that would forever change the way we watched television. We’d been watching television in roughly the same way for about 70 years. Since January 13th, 1928, when the General Electric factory in Schenectady, New York broadcast as WGY Television, using call letters W2XB. That was for experiments, but they launched W2XBS a little later, now known as WNBC. 
They just showed a Felix the Cat spinning around on a turntable for 2 hours a day to test stuff. A lot of testing around different markets was happening, and The Queen’s Messenger would become the first drama broadcast on television later that year. But it wasn’t until 1935 that the BBC started airing regular content and the late 1930s that regular programming started in the US, spreading slowly throughout the world, with Japan being one of the last countries to get a regular broadcast in 1953. So for the next several decades a love affair began between humans and their televisions. Color came to prime time in 1972, after color TVs, introduced over the couple of decades before, started to come down in price. Entire industries sprang up around the television, or at least migrated from newspapers and radio to television. Moon landings, football, baseball, the news, game shows. Since that 1972 introduction of color TV, the microcomputer revolution had come. Computers were getting smaller. Hard drive capacity was growing. I could stroll down to the local Fry’s and buy a Western Digital, IBM Deskstar, Seagate Barracuda, an HP Kittyhawk, or even a 10,000 RPM Cheetah. But the cheaper drives had come down enough for mass distribution. And so it was when Time Warner, a major US cable company at the time, decided to test a digital video system. They tapped Silicon Graphics alumni Jim Barton and Mike Ramsay to look into a set top box, or network appliance, or something. After initial testing, Time Warner didn’t think it was quite the right time to build nationwide. They’d spent $100 million testing the service in Orlando. So the pair struck out on their own. Silicon Valley was abuzz about set top boxes, now that the web was getting big, dialup was getting easy, and PCs were pretty common fare. Steve Perlman’s WebTV got bought by Microsoft for nearly half a billion dollars, which became MSN TV and laid the foundation for the Xbox hardware. 
I remember well that the prevailing logic of the time was that the set top box was the next big thing. The laggards would join the Internet revolution. Grandma and Grandpa would go online. So Ramsay and Barton got a check for $3M from VC firms to further develop their idea. They founded a company called Teleworld and started running public trials of a new device that came out of their research, called TiVo. The set top box would go beyond television and be a hub for home networking: managing refrigerators and thermostats, managing your television, ordering grocery deliveries, and even bringing the RFC for an Internet coffee pot to life! But they were a little before their time on some of this. After some time, they narrowed the focus to a television receiver that could record content. The VC firms were so excited they ponied up another $300 million to take the product to market. Investors even asked how long it would take the TV networks to shut them down. Disruption was afoot. When Ramsay and Barton approached Apple, Claris, and LucasArts veteran Randy Komisar, he suggested they look at charging for a monthly service. But he, as with the rest of Silicon Valley, bought their big idea, especially since Komisar had sat on the board of WebTV. TiVo would need to raise a lot of money to ink deals with the big content providers of the time. They couldn’t alienate the networks. No one knew it, but the revolution in cutting the cord was on the way. Inking deals with those providers would prove to be much more expensive than building the boxes. They set about raising capital. They inked deals with Sony and Philips, and announced a release of the first TiVo at the Consumer Electronics Show in January of 1999. They’d built an outstanding executive team. They’d done their work. And on March 31st, 1999, a Blue Moon, they released the Series 1 for about $500 and with a $9.95 monthly subscription fee. 
The device would use a modem to download TV show listings, which would later be replaced with an Ethernet, then Wi-Fi option. The Series 1, like Apple devices at the time, would sport a PowerPC processor, although this one was a 403GCX that clocked in at just 54 MHz - but cheap enough for an embedded system like this. It also came with 32 MB of RAM, a 13 to 60 gig IDE/ATA drive, and would convert analog signal into MPEG-2, storing from 14 to 60 hours of television programming. Back then, you could use the RCA cables or S-Video. They would go public later that year, raising $88 million and nearly doubling in value overnight. By 2000 TiVo was in 150,000 homes and burning through cash far faster than they were making it. It was a huge idea, and if big ideas take time to percolate, huge ideas take a lot of time. And a lot of lawsuits. In order to support the new hoarder mentality they were creating, the Series 2 would come along in 2002 and would come with up to a 250 gig drive, USB ports, CPUs from 166 to 266 MHz, from 32 to 64 megs of RAM, and the MPEG encoder got moved off to the Broadcom BCM704x chips. In 2006, the Series 3 would introduce HD support, add HDMI, 10/100 Ethernet, and support drives of 2 terabytes with 128 megs of RAM. Ramsay left the company in 2007 to go work at Venture Partners. Barton, the CTO, would leave in 2012. Their big idea had been realized. They weren’t needed any more. Ramsay and Barton would found streaming service Qplay, but that wouldn’t make it two years. By then, TiVo had become a verb. The Series 4 brought us to over a thousand hours of television and supported Bluetooth, custom apps, and sported a Broadcom 400 MHz dual core chip. But it was 2010. Popular DVD subscription service Netflix had been streaming and now had an app that could run on the Series 4. So did Rhapsody, Hulu, and YouTube. The race was on for streaming content. TiVo was still aiming for bigger, faster, cheaper set top boxes. 
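Those storage figures line up with some back-of-the-envelope math. Assuming a standard-definition MPEG-2 stream of around 2 Mbps (an assumption for illustration, not TiVo's documented encoder setting), a quick calculation shows why a 13 to 60 gig drive maps to roughly 14 to 60-plus hours:

```python
# Hours of video that fit on a drive, given an assumed constant bitrate.
def hours_of_video(disk_gb: float, bitrate_mbps: float) -> float:
    bytes_per_hour = bitrate_mbps * 1_000_000 / 8 * 3600  # bits/s -> bytes/hour
    return disk_gb * 1_000_000_000 / bytes_per_hour

print(round(hours_of_video(13, 2.0), 1))  # 14.4 hours on the smallest Series 1 drive
print(round(hours_of_video(60, 2.0), 1))  # 66.7 hours on the largest
```

At about 0.9 GB per hour, the arithmetic matches the marketing numbers almost exactly, with "best quality" recordings trading hours for bitrate.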
But people were consuming content differently. TiVo gave us apps, but Apple TV, Roku, Amazon, and other vendors were now in the same market for a fraction of the cost and without a subscription. By 2016 TiVo was acquired by Rovi for $1.1 billion and, as is often the case in these kinds of scenarios, seemed listless. Direction: unknown. After such a disruptive start, I can’t imagine any innovation will ever recapture that spirit from the turn of the millennium. And so in December of 2019 (the month I’m recording this episode), after months trying to split TiVo into two companies so they could be sold separately, TiVo scrapped that idea and merged with Xperi. I find that we don’t talk about TiVo much any more. That doesn’t mean they’ve gone anywhere, just that the model has shifted over the years. According to TechCrunch, “TiVo CEO David Shull noted also that Xperi’s annual licensing business includes over 100 million connected TV units, and relationships with content providers, CE manufacturers, and automotive OEMs, which now benefit from TiVo’s technology.” TiVo was a true disruptor. Along with Virtual CEO Randy Komisar, they sold Silicon Valley on Monthly Recurring Revenue as a key performance indicator. They survived the .com bubble and even thrived in it. They made television interactive. They didn’t cut our cords, but they expanded our minds so we could cut them. They introduced the idea of responsibly selling customer data as a revenue stream to help keep those fees in check. And in so doing, they let manufacturers micro-market goods and services. They revolutionized the way we consume content. Something we should all be thankful for. So next time you’re binging a show from one of your favorite providers, just think about the fact that you might have to spend time with your family or friends if it weren’t for TiVo. You owe them a huge thanks.
12/31/2019 • 14 minutes, 27 seconds
Dungeons && Dragons
What do insurance, J.R.R. Tolkien, H.G. Wells, and the Civil War have in common? They created a perfect storm for the advent of Dungeons and Dragons. Sure, D&D might not be directly impactful on the History of Computing. But its impacts are far and wide. The mechanics have inspired many a game. And the cultural impact can be seen expansively across the computer gaming universe. D&D came of age during the same timeframe that the original PC hackers were bringing their computers to market. But how did it all start? We’ll leave the history of board games to the side, given that Chess sprang up in northern India over 1500 years ago, spreading first to the Persian empire and then to Spain following the Moorish conquest of that country. And given that card games go back to the Tang Dynasty in 9th century China. And Gary Gygax, the co-creator and creative genius behind D&D, loved playing chess, going back to playing with his grandfather as a young boy. Instead, we’ll start this journey in 1780 with Johann Christian Ludwig Hellwig, who invented the first true wargame to teach military strategy. It was good enough to go commercial. Then Georg Julius Venturini made a game in 1796, then Opiz in 1806, then Kriegsspiel in 1824, which translates from German as wargame. And thus the industry was born. There were a few dozen other board games, but in 1913, Little Wars, by H.G. Wells, added hollow lead figures, ornately painted, and distance to bring us into the era of miniature wargaming. Infantry moved a foot, cavalry moved two, and artillery required other troops to be around it. You fought with spring-loaded cannons, and other combat usually resulted in a one-to-one loss, making the game about trying to knock troops out while they were setting up their cannons. It was cute, but in the years before World War II, many sensed that the release of a war game by the pacifist Wells was a sign of oncoming doom. Indeed it was. 
But each of these inventors had brought their own innovations to the concept. And each impacted real war, with wargaming being directly linked to the blitzkrieg. Not a lot happened in innovative new wargames between Wells and the 1950s. Apparently the world was busy fighting real war games. But Jack Scruby started making figures in 1955 and connecting communities, writing a book called All About Wargames in 1957. Then Gettysburg was created by Charles Roberts and released by Avalon Hill, which he founded, in 1958. It was a huge success and attracted a lot of enthusiastic if not downright obsessed players. In the game, you could play the commanders of the battle, like Robert E Lee, Stonewall Jackson, Meade, and many others. You had units of varying sizes, and a number of factors could impact the odds of battle. The game mechanics were complex, and it sparked a whole movement of war games that slowly rose through the 60s and 70s. One of those obsessed gamers was Gary Gygax, an insurance underwriter who started publishing articles and magazines. Gygax started the Lake Geneva Wargames Convention, or Gen Con, in 1968, which has since moved to Indianapolis after a pitstop in Milwaukee and now brings in upwards of 30,000 attendees. Gygax collaborated with his friend Jeff Perren on a game they released in 1970 called Chainmail. Chainmail got a supplement that introduced spells, magic items, dwarves, and hobbits - which seems based on the Tolkien novels, but according to Gygax was more a composite of a lot of pulp novels, including one of his favorites, the Conan series. 1970 turned out to be a rough year, as Gygax got laid off from the insurance company and had a wife and 5 kids to support. That’s when he started making games as a career. At first, it didn’t pay too well, but he kept making games and published Chainmail with Guidon Games, which started selling a whopping 100 copies a month. At the time, they were using six-sided dice, but other numbering systems worked better. 
They started doing 1-10 or 1-20 random number generation by drawing poker chips from a coffee can, but then Gary found weird dice in a school supply catalog and added the crazy idea of a 20-sided die - now a symbol found on t-shirts and a universal calling card of tabletop gamers. At about the same time, University of Minnesota history student Dave Arneson met Gygax at Gen Con, took Chainmail home to the Twin Cities, and started improving the rules, releasing his own derivative game called Blackmoor. He came back to Gen Con the next year after testing the system, and he and Gygax would go on to collaborate on an updated and expanded set of rules. Gygax would codify much of what Arneson didn’t want to codify, as Arneson found overly legalistic rules to be less fun from a gameplay perspective. But Gary, the former underwriter, was a solid rule-maker, and thus role-playing games were born, in a game first called The Fantasy Game. Gary wrote a 50 page instruction book, which by 1973 had evolved into a 150-page book. He shopped it to a number of game publishers, but none had a book that thick or could really grok the concept of role-playing. Especially one with concepts borrowed from across the pulps. In the meantime, Gygax had been writing articles and helping others with games, and doing a little cobbling on the side. Because everyone needs shoes. And so in 1973, Gygax teamed up with childhood friend Don Kaye and started Tactical Studies Rules, which would evolve into TSR, with each investing $1,000. They released Cavaliers and Roundheads on the way to raising the capital to publish the game they were now calling… Dungeons and Dragons. The game evolved further, and in 1974 they put out 1,000 copies in a boxed set. To raise more capital they brought in Brian Blume, who invested $2,000 more. Sales of that first run were great, but Kaye passed away in 1975 and Blume’s dad stepped in to buy his shares. 
They started Dragon magazine, opened The Dungeon Hobby Shop, and started hiring people. The game continued to grow, with Advanced Dungeons & Dragons being released with a boatload of books. They entered what we now call a buying tornado, and by 1980, sales were well over $8 million. But in 1979 James Egbert, a Michigan State student, disappeared. A private eye blamed Dungeons and Dragons. Egbert later popped up in Louisiana, but the negative publicity had already started. Another teen, Irving Pulling, committed suicide in 1982, and his mom blamed D&D and then started a group called Bothered About Dungeons and Dragons, or BADD. There’s no such thing as bad publicity though, and sales hit $30 million by ’83. In fact, part of the allure for many, including the crew I played with as a kid, was that it got a bad rap in some ways… At this point Gary was in Hollywood getting cartoons made of Dungeons and Dragons and letting the Blumes run the company. But they’d overspent, and with the company nearing bankruptcy due to stupid spending, Gygax had to return to Lake Geneva to save it, which he did by releasing the first book in a long time, one of my favorite D&D books, Unearthed Arcana. Much drama running the company ensued, which isn’t pertinent to the connection D&D has to computing, but basically Gary got forced out and the company lost touch with players because it was being run by people who didn’t really like gamers or gaming. 2nd edition D&D wasn’t a huge success. But in 1996, Wizards of the Coast bought TSR. They had made a bundle off of Magic: The Gathering, and now that TSR was in the hands of people who loved games and gamers again, they immediately started looking for ways to reinvigorate the brand - which their leadership had loved. The 3rd edition Open Gaming License was published by Wizards of the Coast and allowed third-party publishers to make material compatible with D&D products using what was known as the d20 System Trademark License. 
Fourth edition came along in 2008, but that Open Gaming License was irrevocable, so most continued using it over the new Game System License, which had been more restrictive. By 2016, when 5th edition came along, this all felt similar to what we’ve seen with Apache, BSD, and MIT licenses, with Wizards of the Coast moving back to the Open Gaming License, which had been so popular. Now let’s connect Dungeons and Dragons to the impact on computing. In 1975, Will Crowther was working at Bolt Beranek and Newman. He’d been playing some of those early copies of Dungeons and Dragons and working on natural language processing. The two went together like peanut butter and chocolate, and out popped something that tasted a little like each: a game called Colossal Cave Adventure. If you played Dungeons and Dragons, you’ll remember drawing countless maps on graph paper. Adventure was like that, and it loosely followed Kentucky’s Mammoth Cave system, given that Crowther was an avid caver. It ran on a PDP-10, and as those spread, so spread the fantasy game, getting updated by Stanford grad student Don Woods in 1976. Now, virtual worlds weren’t just on table tops; they sprouted up in Rogue, and by the time I got to college, there were countless MUDs, or Multi-User Dungeons, where you could kill other players. Mattel shipped the Dungeons & Dragons Computer Fantasy Game in 1981, then Dungeon! for the Apple II, and another dozen or so games over the years. These didn’t directly reflect the game mechanics of D&D though. But Pool of Radiance, set in the Forgotten Realms campaign setting of D&D, popped up for Nintendo and PCs in 1988, with dozens of D&D games shipping across a number of campaign settings. You didn’t have to have your friends over to play D&D any more. Out of that evolved Massively Multiplayer Online RPGs, including EverQuest, Ultima Online, Second Life, Dungeons & Dragons Online, Dark Age of Camelot, RuneScape, and more. 
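Crowther's trick of pairing natural language processing with a game boiled down, in most early text adventures, to a simple verb-noun parser. Here's a minimal sketch with a hypothetical vocabulary, not Crowther's actual word lists:

```python
# Toy two-word parser in the spirit of early text adventures:
# find a known verb and a known noun, ignore filler words.
VERBS = {"go", "take", "drop", "look"}
NOUNS = {"north", "lamp", "keys", "cave"}
FILLER = {"the", "a", "an", "at"}

def parse(command: str):
    words = [w for w in command.lower().split() if w not in FILLER]
    verb = next((w for w in words if w in VERBS), None)
    noun = next((w for w in words if w in NOUNS), None)
    return verb, noun

print(parse("take the lamp"))  # ('take', 'lamp')
print(parse("go north"))       # ('go', 'north')
```

Anything the vocabulary didn't cover came back as an unknown, which is where Adventure's famously dry "I don't understand that!" style of response came from.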
Even more closely aligned with the Dungeons and Dragons game mechanics, you also got The Matrix Online, Star Wars: The Old Republic, Age of Conan, and the list goes on. Now, in the meantime, Wizardry had shipped in 1981, Dragon Warrior shipped in 1986, and The Legend of Zelda had shipped in 1986 as well. These represented an evolution on a simpler set of rules but using the same concepts. Dragon Warrior had started as Dragon Quest after the creators played Wizardry for the first time. These are only a fraction of the games that used the broad concepts of hit points, damage, and probability of attack - including practically every first person shooter ever made - linking nearly every video game created that includes combat to Dungeons and Dragons, if not through direct inspiration, then through aspects of game mechanics. Dungeons and Dragons also impacted media, appearing in movies like Mazes and Monsters, an almost comedic look at playing the game, and ET, where I think I first encountered the game; helping inspire Peter Jackson to release nearly the full pantheon of important Tolkien works; and echoing through Krull, The Dark Crystal, The Princess Bride, Pathfinder, Excalibur, Camelot, and even The Last Witch Hunter, based on a D&D character Vin Diesel had a hard time letting go of. The genre removed the limitations placed on creativity by allowing a nearly unlimited personalization of characters. It has touched every genre of fiction and non-fiction. And the game mechanics are used not only for D&D, but derivatives are also used across a variety of other industries. The impact Dungeons and Dragons had on geek culture stretches far and wide. The fact that D&D rose to popularity as many felt the geeks were taking over, with the rise of computing in general and the reinvention of entire economies, certainly connects it to so many aspects of our lives, whether realized or not. 
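Those borrowed mechanics are easy to sketch. Here's a minimal, simplified take on d20-style combat math, with made-up numbers rather than any edition's actual tables: roll a d20 plus an attack bonus against an armor class, and on a hit, roll a damage die against hit points.

```python
import random

def roll(sides: int, rng: random.Random) -> int:
    """One die roll: a uniform number from 1 to `sides`."""
    return rng.randint(1, sides)

def attack(attack_bonus: int, armor_class: int, damage_die: int,
           rng: random.Random) -> int:
    """d20 + bonus vs. armor class; return damage dealt (0 on a miss)."""
    if roll(20, rng) + attack_bonus >= armor_class:
        return roll(damage_die, rng)
    return 0

rng = random.Random(20)  # seeded, so the fight replays identically
hit_points = 10
while hit_points > 0:
    hit_points -= attack(attack_bonus=3, armor_class=12, damage_die=8, rng=rng)
print("The monster falls!")
```

Swap the d8 for a hitscan damage table and the armor class for an accuracy cone, and this same loop is recognizably the core of most shooter combat code.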
So next time you pick up that controller and hit someone in a game to do a few points of damage, next time you sit down to a fantasy movie, next time you watch Game of Thrones, think about this. Once upon a time, there was a game called Chainmail. And someone came up with slightly better game mechanics. And that collaboration led to D&D. Now it is our duty to further innovate those mechanics in our own way. Innovation isn’t replacing manual human actions with digital actions in a business process; it’s upending the business process or industry with a whole new model. Yet the business process usually needs to be automated to free us to rethink the model. Just like the creators of D&D did. If an insurance underwriter could have such an outsized impact on the world in the 1970s, what kind of impact could you be having today? Roll a d20 and find out! If you roll a 1, repeat the episode. Either way, have a great day. We’re lucky you decided to listen in!
12/27/2019 • 15 minutes, 35 seconds
Whirlwind And Core Memory
Whirlwind Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at a computer built at the tail end of World War II called Whirlwind. What makes Whirlwind so special? It took us from an era of punch card batch processed computing and into the era of interactive computing. Sometimes the names we end up using for things evolve over time. Your memories are a bit different than computer memory. Computer memory is information that is ready to be processed. Long term memory, well, we typically refer to that as storage. That’s where you put your files. Classes you build in Swift are loaded into memory at runtime. But that memory is volatile, and we call it random-access memory now. This computer memory first evolved out of MIT with Whirlwind, and so they came up with what we now call magnetic-core memory in the early 1950s. Why did they need speeds faster than a vacuum tube? Well, it turns out vacuum tubes burn out a lot. And the flip-flop switching they do was cool for payroll. But not for tracking Intercontinental Ballistic Missiles in real time and reacting to weather patterns so you can make sure to nuke the right target. Or intercept one that’s trying to nuke you! And in the middle of the Cold War, that was a real problem. Whirlwind didn’t start off with that mission. When MIT kicked things off, computers mostly used vacuum tubes. But they needed something… faster. Perry Crawford had seen the ENIAC in 1945 and recommended a digital computer to run simulations. They were originally going to train pilots in flight simulation, and they had Jay Forrester start working on it in 1947 ‘cause they needed to train more pilots faster. But as with many a true innovation in computing, this one was funded by the military and saw Forrester team up with Robert Everett to look for a way to run programs fast. 
This meant programs needed to be stored on the device rather than run in batch modes off punch cards that got loaded into the system. They wanted something really wild at the time. They wanted to see things happening on screens. It started with flight simulation, which would later become a popular computer game genre. But as the Cold War set in, the Navy didn’t need to train pilots quite as fast. Instead, they wanted to watch missiles traveling over the ocean, and they wanted computers that could be programmed to warn that missiles were in the air and potentially even intercept them. This required processing at speeds unheard of at the time. So they got a military grant for a million bucks a year, brought in 175 people, and built a 10 ton computer. And they planned to build 2k of random-access memory. To put things in context, the computer we’re recording on today has 16 gigs of memory, roughly 8,000,000 times more storage. And almost immeasurably faster. Also, cheaper. The Williams tubes they used at first would cost them $1 per bit per month. None of the usual ways people got memory were working. Flip-flopping circuits took too long, and other forms of memory at the time were unreliable. And you know what they say about necessity being the mother of invention. By the end of 1949 the computer could solve an equation and output to an oscilloscope, which were used as monitors before we had… um… monitors. An Wang had researched using magnetic fields to switch currents, and Forrester ended up trying to do the same thing, but he had to manage the project and so brought in William Papian and Dudley Buck to test various elements until they could find something that would work as memory. After a couple of years they figured it out and built a plane of 1,024 cores in a 32 x 32 array. They filed for a patent for it in 1951. Wang also got a patent, as did Jan Rajchman from RCA, although MIT would later dispute that Buck had leaked information to Rajchman. 
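The 32 x 32 plane worked by coincident-current addressing: half of the switching current went down one x wire and half down one y wire, so only the core at the intersection saw enough current to flip. A toy sketch at the logic level (real cores flipped on magnetic hysteresis, and reading really was destructive):

```python
# Simplified model of a single core-memory plane.
class CorePlane:
    def __init__(self, size: int = 32):
        self.size = size
        self.bits = [[0] * size for _ in range(size)]  # 32 x 32 = 1,024 cores

    def write(self, x: int, y: int, value: int) -> None:
        # Half-current on x line + half-current on y line:
        # only core (x, y) receives full current and changes state.
        self.bits[y][x] = value

    def read(self, x: int, y: int) -> int:
        # Reads were destructive: sensing whether the core flipped
        # erased the bit, so the hardware rewrote it immediately.
        value = self.bits[y][x]
        self.bits[y][x] = 0      # destructive sense
        self.write(x, y, value)  # rewrite cycle restores the bit
        return value

plane = CorePlane()
plane.write(3, 5, 1)
print(plane.read(3, 5))  # 1
```

The elegance is in the wiring: 32 x-lines plus 32 y-lines address 1,024 cores, so the wire count grows with the square root of capacity rather than linearly.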
Either way they had the first real memory, which would be used for decades to come! The tubes used for processing in the Whirlwind would end up leading Ken Olsen to transistors, which led to the transistorized TX-0 (the love of many a Tech Model Railroad Clubber) and later to Olsen founding DEC. Suddenly, the Whirlwind was the fastest computer of the day. They also worked on the first pointing devices used in computing. Light sensing vacuum tubes had been introduced in the 1930s, so they introduced a pen that could interact with the tubes in the oscilloscopes people used to watch objects moving on the screen. There was an optical sensor in the pen that took input from the light shone on the screen. They used light pens to select an object. Today we use fingers. Those would evolve into the Zapper so we could play Duck Hunt by the 80s, but they began life in missile defense. Whirlwind would evolve into Whirlwind II, and Forrester would end up fathering the SAGE air defense system on the technology. SAGE, or Semi-Automatic Ground Environment, would weigh 250 tons and be the centerpiece of NORAD, or North American Aerospace Defense Command. Remember the movie War Games? That. Dudley Buck would end up giving us content-addressable memory and helium cooled processors that almost ended up with him inventing the microprocessor. Although many of the things he theorized and built on the way to getting a functional “cryotron,” as he called his superconducting switch, would be used in the later production of chips. IBM wanted in on these faster computers. So they paid $500,000 to Wang, who would use that money to grow Wang Laboratories, which by the 80s would build word processors and microcomputers. Wang would also build a tablet with email, a phone handset, and a word processing tool called Wang Office. That was the 1990 version of an iPad! After SAGE, Forrester would go on to teach at the Sloan School of Management and come up with system dynamics, the ultimate “what if” system. 
Basically, after he pushed the boundaries of what computers could do, helping us to maybe not end up in a nuclear war, he would push the boundaries of social systems. Whirlwind gave us memory, and tons of techniques to study, produce, and test transistorized computers. And without it, no SAGE, and none of the innovations that exploded out of that program. And probably no TX-0, and therefore no PDP-1, and none of the innovations that came out of the minicomputer era. It is a recognizable domino on the way from punch cards to interactive computers. So we owe a special thanks to Forrester, Buck, Olsen, Papian, and everyone else who had a hand in it. And I owe a special thanks to you, for tuning into this episode of the History Of Computing Podcast. We’re so lucky to have you. Have a great day!
12/21/2019 • 9 minutes, 10 seconds
TidBITS and The Technology of Publishing
Today we’re going to look at publishing from a different perspective than the normal History of Computing Podcast episodes. We’ll actually interview someone who has been living and breathing the publishing of technical content since, well, the inception, and so one of the most qualified people to have that conversation: Adam Engst of TidBITS.
12/17/2019 • 25 minutes, 21 seconds
The History Of Minecraft
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at one of my daughter’s favorite things in the whole wide world: Minecraft. Oh, and it’s also one of the most popular games ever. Modding games had been around for a long, long time. Before Minecraft, there was Dungeon Keeper and Dwarf Fortress. A lot of people my age don’t really get Minecraft as a game. I mean, it’s not that far off from the world-builder aspect of World of Warcraft from 20 years ago. But there, you built a world to play in. Even before World Of Warcraft, when the franchise was all about building and controlling a village of orcs or humans and conquering another. By the late 90s, a lot of people were tinkering with the Ultima world builder and building new games. The Unreal Engine ended up getting used to build another dozen games. World building was going commercial. A few years go by, and in 2009, Swedish video game programmer Notch, known as Markus Persson in the real world, writes a little game called Cave Game. Persson had been born in 1979 and started programming on the Commodore 128 at 7. He built his first text-based game a year later and would go on to write software for others and co-found Wurm Online, a massively multiplayer online role-playing game. Cave Game was more a world designer than a game, but the stage was set for something more. He added resource simulation so you could generate resource tiles and manage resources. Suddenly it was becoming a game, which he renamed Minecraft. Then you could build things with the resources you collected. Like buildings. They were intentionally blocky. The world is generated by code that seeds objects based on the clock when the world is created, giving it a nice random allocation of resources and areas to explore. You can travel in a 30 million block radius in a biome. 
These biomes might be desert or snow, depending on how the terrain is laid out. Since people could collect things and build things out of what they’d collected, the creations took on a new sense of meaning. That specific mechanic wasn’t exactly unique. It was common going back to before even Civilization 1. The difference is that older games built buildings as a whole unit. In Minecraft you laid out the blocks, and so the buildings took on the shape you gave them. If you wanted to build a house that looked like a famous castle, go for it; if you wanted to design a dungeon like we used to do in Dungeons and Dragons, but in three dimensions, go for it! Other games eventually integrated the same mechanic, allowing you to design buildings within your worlds. Like Skyrim, which made an axe named after Notch. And just as you can fight in Skyrim, Minecraft eventually added monsters. But famously blocky ones. You could craft weapons, mining tools, crafting tools, and all kinds of things. Even a bed for yourself. You could terraform a world. You could build islands, chop down trees, take eggs from chickens. While the game was still in an alpha state, he added modes, like Survival, where you could get killed by those wacky zombies, as well as Indev and Infdev. Today there are 5 modes: survival, creative, adventure, hardcore, and spectator. Bugs were fixed, gameplay tweaked, and in 2010, it was time to go beta. Notch quit his job and started to work on Minecraft full-time. Notch founded a company called Mojang to take the game to market. After another year, they took Minecraft to market in 2011. That’s when Jens “Jeb” Bergensten became the lead designer of the game. The sound design was given to us by German composer Daniel Rosenfeld, or C418. By the way, he also produced the Beyond Stranger Things theme, an inspiration for what we use in this podcast! They added servers for better co-op play and they added more and more areas. It was vast. Expansive. And growing. 
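Mojang’s real generator layers noise functions, but the core trick behind that clock-seeded world generation, one number deterministically unfolding into an effectively infinite world, can be sketched in a few lines of Python. The function and terrain names here are illustrative, not Minecraft’s.

```python
import hashlib

def block_at(seed: int, x: int, z: int) -> str:
    """Deterministically pick a terrain type for one coordinate.

    Hashing (seed, x, z) means the same seed always yields the same
    world, so nothing needs to be stored: the map IS the function."""
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    terrain = ["grass", "sand", "stone", "water", "snow"]
    return terrain[digest[0] % len(terrain)]

# Generating the same region twice from the same seed gives identical blocks.
region_a = [block_at(42, x, z) for x in range(4) for z in range(4)]
region_b = [block_at(42, x, z) for x in range(4) for z in range(4)]
assert region_a == region_b
```

That determinism is why you can share a seed with a friend and both explore the same world without ever exchanging map data.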
Notch made over a hundred million dollars off the game in 2012. Kids watch YouTube videos of other kids playing Minecraft, and many make money off of showing their games. Not as much as Notch has made, of course. And the kids watch the game for as long as you’ll let them. Like for hours. You default as Alex or Steve. By day you can build, and by night you run away from, or kill, the zombie, spider, enderman, creeper, or skeleton. The blocky characters are cute. If they weren’t so simple and cute, there’s a chance the game never would have gone anywhere. But they were, and it has. In fact, it grew so fast that, check this out, Microsoft ended up buying Mojang for two and a half billion dollars. And since 2014 they’ve made well over half a billion dollars off Minecraft, and they have over 90 million active players every month, just on mobile. In 2016 they crossed 100 million copies sold. Now they have nearly that many people playing consistently. One of which is my kid. And they’ve crossed 176 million copies sold. Microsoft took a beating from certain investment critics over the deal at the time. There are books to help you play, costumes so you can dress up like the characters, toys so you can play with them, Legos because they’re blocky as well, apparel so you can show your Minecraft love, sheets to help you sleep when you’ve played enough. Pretty sure my kid has a little of all of it. The modding nature of the game lives on. Your worlds and mods follow you from device to device. You can buy packs. You can make your own. You can make your own and sell them! You can make money off Minecraft by building packs or by publishing videos. Probably the best summer job ever! The beauty of Minecraft is that you can build worlds, and it unlocks a level of creativity in kids I’ve rarely seen with video games. It feels like Legos that way, but virtual. It can be free, or you can pay a nominal fee for certain things in the game. Nothing like the whaling you see with some games. It can be competitive or not. 
It’s even inspired tens of millions of people to learn a little basic coding. It’s funny, Minecraft is more than a game, and the return on investment Microsoft continues to receive from their acquisition shows just how smart they are. Unlike you, dear listeners, wasting time listening to me babble. Now get back to work. Or to trying to get a block of Obsidian in Minecraft. But before you do, thank you so much for tuning in. We’re so lucky to have ya!
12/13/2019 • 9 minutes, 4 seconds
Stewart Brand: Hippy Godfather of the Interwebs
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at the impact Stewart Brand had on computing. Brand was one of the greatest muses of the interactive computing and then the internet revolutions. This isn’t to take anything away from his capacity to create, but the inspiration he provided gave him far more reach than nearly anyone in computing. There’s a decent chance you might not know who he is. There’s even a chance that you’ve never heard of any of his creations. But you live and breathe some of his ideas on a daily basis. So who was this guy and what did he do? Well, Stewart Brand was born in 1938, in Rockford, Illinois. He would go on to study biology at Stanford, enter the military, and then study design and photography at other schools in the San Francisco area. This was a special time in San Francisco. Revolution was in the air. And one of the earliest scientific studies had him legitimately dosing on LSD. One of my all-time favorite books was The Electric Kool-Aid Acid Test, by Tom Wolfe. In the book, Wolfe follows Ken Kesey and his band of Merry Pranksters along a journey of LSD and Benzedrine riddled hippy goodness, riding a converted school bus across the country and delivering a new kind of culture straight out of Haight-Ashbury and to the heart of middle America. All while steering clear of the shoes FBI agents of the day wore. Here Brand would have met members of the Grateful Dead, Neal Cassady, members of the Hells Angels, Wavy Gravy, Paul Krassner, and maybe even Kerouac and Ginsberg. This was a transition from the Beat Generation to the Hippies of the 60s. Then he started the Whole Earth Catalog. Here, he showed the first satellite imagery of the planet Earth, which he’d begun campaigning NASA to release two years earlier. 
In the 5 years he made the magazine, he spread ideals like ecology, a do-it-yourself mentality, self-sufficiency, and what the next wave of progress would look like. People like Craig Newmark of Craigslist would see the magazine, and it would help to form a new world view. In fact, the Whole Earth Catalog was a direct influence on Craigslist. Steve Jobs compared the Whole Earth Catalog to a 60s-era Google. It inspired Wired Magazine. Earth Day would be created two years later. Brand would loan equipment and inspire spinoffs of dozens of magazines and books. It was even an inspiration for many early websites. The catalog put him in touch with so, so many influential people. One of the first was Doug Engelbart, and The Mother Of All Demos involved Brand in the invention of the mouse and the first video conferencing. In fact, Brand helped produce the Mother Of All Demos! As we moved into the 70s he chronicled the oncoming hacker culture, and its connection to the 60s-era counterculture. He inspired and worked with Larry Brilliant, Lee Felsenstein, and Ted Nelson. He basically invented being a “futurist,” founding CoEvolution Quarterly and spreading the word of digital utopianism. The Whole Earth Software Review would come along with the advent of personal computers. The end of the 70s would also see him become a special advisor to former California governor Jerry Brown. In the 70s and 80s, he saw the Internet form and went on to found one of the earliest Internet communities, called The WELL, or Whole Earth ’Lectronic Link. Collaborations in the WELL gave us Barlow’s Electronic Frontier Foundation, a safe haunt for Kevin Mitnick while on the run, Grateful Dead tape trading, and many other Digerati. There would be other virtual communities and innovations to the concept like social networks, eventually giving us online forums, 4chan, Yelp, Facebook, LinkedIn, and corporate virtual communities. But it started with The WELL. 
He would go on to become a visiting scientist at the MIT Media Lab, organize conferences, and found the Global Business Network with Peter Schwartz, Jay Ogilvy, and other great thinkers to promote practices like scenario planning, a corporate strategy that involves thinking from the outside in. This is now a practice inside Deloitte. The decades proceeded on, and Brand inspired whole new generations to leverage humor to push the buttons of authority. Much as the Pranksters had inspired him on the bus. But it wasn’t just anti-authority. It was a new and innovative approach in an upcoming era of maximizing short-term profits at the expense of the future. Brand founded The Long Now Foundation with an outlook that looks 10,000 years into the future. They started a clock on Jeff Bezos’ land in Texas, they started archiving languages approaching extinction, Brian Eno led seminars about long-term thinking, and the foundation inspired Anathem, a novel from one of my favorite authors, Neal Stephenson. Peter Norton, Pierre Omidyar, Bruce Sterling, Chris Anderson of the Economist, and many others are also involved. But Brand inspired other counter-cultures as well. In the era of e-zines, he inspired Jesse Dryden, whom Brand knew as Jefferson Airplane drummer Spencer Dryden’s kid. The kid turned out to be dFx, who would found HoHoCon, an inspiration for DefCon. Stewart Brand wrote 5 books in addition to the countless hours he spent editing books, magazines, web sites, and papers. Today, you’ll find him pimping blockchain and cryptocurrency, in an attempt to continue decentralization and innovation. He inherited a playful counter-culture. He watched the rise and fall and has since both watched and inspired the innovative iterations of countless technologies, extending of course into bio-hacking. 
He’s hobnobbed with the hippies, the minicomputer timesharers, the PC hackers, the founders of the internet, the tycoons of the web, and then helped set strategy for industry, NGOs, and governments. He left something with each. Urania was the muse of astronomy, one of the top sciences of ancient Greece. And he would probably giggle if anyone compared him to a muse. Both on the bus in the 60s, and in his 80s today. He’s one of the greats and we’re lucky he graced us with his presence on this rock - that he helped us see from above for the first time. Just as I’m lucky you elected to listen to this episode. So next time you’re arguing about silly little things at work, think about what really matters and listen to one of his TED Talks. Context. 10,000 years. Have a great week and thanks for listening to this episode of the History of Computing Podcast.
12/7/2019 • 8 minutes, 20 seconds
The Microphone
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today’s episode is on the microphone. Now you might say “wait, that’s not a computer thing!” But given that every computer made in the past decade has one, including your phone, I would beg to differ. Also, every time I record one of these episodes, I seem to get a little better at wielding the instruments, which has led me to spend way more time than is probably appropriate learning about them. So what exactly is a microphone? Well, it’s a simple device that converts mechanical waves of energy into electrical waves of energy. Microphones have a diaphragm, much as we humans do, and that diaphragm mirrors the sound waves it picks up. So where did these microphones come from? Well, Robert Hooke got the credit for hooking a string to a cup in 1665, and suddenly humans could push sound over distances. Then in 1827 Charles Wheatstone, who invented the telegraph, put the word microphone into our vernacular. 1861 rolls around and Johann Philipp Reis builds the Reis telephone, which electrified the microphone using a metallic strip that was attached to a vibrating membrane. When a little current was passed through it, it reproduced sound far away. Think of this as using electricity to amplify the effects of the string on the cup. But critically, sound had been turned into signal. In 1876, Emile Berliner built a modern microphone while working on the gramophone. He was working with Thomas Edison at the time and would go on to sell the patent for the microphone to The Bell Telephone Company. Now, Alexander Graham Bell had designed a telephone transmitter in 1876 but ended up in a patent dispute with David Edward Hughes. And as he did with many a great idea, Thomas Edison made the first practical microphone in 1886. This was a carbon microphone that would go on to be used for almost a hundred years. 
It could produce sound but it kinda’ sucked for music. It was used in the first radio broadcast in New York in 1910. The name comes from the grains of carbon that are packed between two metal plates. Edison would end up introducing the diaphragm, and the carbon button microphone would become the standard. That microphone, though, often still had a built-in amp, strengthening the voltage that was the signal sound had been converted to. 1915 rolls around and we get the vacuum tube amplifier. And in 1916, E.C. Wente of Bell Laboratories designed the condenser microphone. This still used two plates, but each had an electrical charge, and when the sound vibrations moved the plates, the signal was electronically amplified. Georg Neumann then had the idea to use gold-plated PVC and design the mic such that as sound reached the back of the microphone it would be cancelled, resulting in a cardioid pattern, making it the first cardioid microphone and an ancestor of the microphone I’m using right now. In the meantime, other advancements were coming. Electromagnets made it possible to add moving coils and ribbons, and Wente and A.C. Thuras would then invent the dynamic, or moving-coil, microphone in 1931. This was much more of an omnidirectional pattern, and it wasn’t until 1959 that the Unidyne III became the first mic to pull in sound from the top of the mic, which would change the shape and look of the microphone forever. Then in 1964 Bell Labs brought us the electrostatic transducer mic, and the microphone exploded, with over a billion of these built every year. Then Sennheiser gave us clip-on microphones in the 80s, calling their system the Mikroport and releasing it through Telefunken. No, Bootsy Collins was not a member of Telefunken. He’d been touring with James Brown for a while and by then was with Parliament-Funkadelic. Funk made a lot of use of all these innovations in sound, though. So I see why you might be confused. 
Other than the fact that all of this was leading us up to a point of being able to use microphones in computers, where’s the connection? Well, remember Bell Labs? In 1962 they invented the electret microphone. Here the electrically biased diaphragm forms a capacitor whose output changes with the vibrations of sound waves. Robert Noyce had given us the integrated circuit in 1959, and microphones couldn’t escape the upcoming Moore’s Law, as every electronics industry started looking for applications. Honeywell came along with silicon pressure sensors, and by 65 Harvey Nathanson gave us the resonant-gate transistor. That would be put on a monolithic chip by 66, and through the 70s micro sensors were developed to isolate every imaginable environmental parameter, including sound. At this point, computers were still big hulking things. But computers and sound had been working their way into the world for a couple of decades. The technologies would evolve into one another at some point, obviously. In 1951, Geoff Hill pushed pulses to a speaker using the Australian CSIRAC, and Max Mathews at Bell Labs had been doing sound generation on an IBM 704 using the MUSIC program, which went a step further and actually created digital audio using PCM, or Pulse-Code Modulation. The concept of sending multiplexed signals over a wire had started with the telegraph back in the 1870s, but the facsimile, or fax machine, used it as far back as 1920. But the science and the math hadn’t yet been pinned down well enough for the computer to handle the rules required. It was Bernard Oliver and Claude Shannon that really put PCM on the map. We’ve mentioned Claude Shannon on the podcast before. He met Alan Turing in 43 and went on to write crazy papers like A Mathematical Theory of Cryptography, Communication Theory of Secrecy Systems, and A Mathematical Theory of Communication. And he helped birth the field of information theory. When the math nerds showed up, microphones got way cooler. 
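The electret principle described above, a fixed charge on a capacitor whose gap the sound wave modulates, can be sketched with the parallel-plate formula. The numbers below are purely illustrative, not measurements from a real mic.

```python
EPSILON_0 = 8.854e-12   # vacuum permittivity, in farads per meter

def electret_voltage(charge_c: float, gap_m: float, area_m2: float) -> float:
    """Voltage across a parallel-plate capacitor holding a fixed charge.

    Capacitance is C = epsilon_0 * A / d, and with the charge Q frozen
    into the electret, V = Q / C = Q * d / (epsilon_0 * A). The output
    voltage tracks the gap, so a vibrating diaphragm turns sound
    pressure directly into an electrical signal."""
    return charge_c * gap_m / (EPSILON_0 * area_m2)

# Toy numbers: a 1 mm^2 diaphragm, a 20 micron resting gap, a tiny fixed charge.
q, area = 1e-12, 1e-6
rest = electret_voltage(q, 20e-6, area)
pushed = electret_voltage(q, 10e-6, area)   # a pressure peak halves the gap

# Halving the gap halves the voltage: the waveform rides on the gap.
assert abs(pushed - rest / 2) < 1e-12
```

Because the voltage is linear in the gap, the diaphragm’s motion is reproduced faithfully in the signal with no external bias supply, which is exactly what made electrets so easy to shrink onto chips.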
By the way, he liked to juggle on a unicycle. I would too if I could. They documented that you could convert audio to digital by sampling it at regular intervals and quantizing each sample, mapping a point on the wave to a number. This analog-to-digital converter could then be printed on a chip that would output encoded digital data that would live on storage. Demodulate that with a digital-to-analog converter, apply amplification, and you have the paradigm for computer sound. There’s way more, like anti-aliasing and reconstruction filters, but someone will always think you’re over-simplifying. So the evolutions came, giving us multi-track stereo cassettes and fax machines, and eventually getting to the point that this recording will get exported into a 16-bit PCM wave file. PCM would end up evolving into LPCM, or linear pulse-code modulation, and be used in CDs, DVDs, and Blu-rays. Oh, and lossily compressed into mp3, mpeg4, etc. By the 50s, MIT hackers would start producing sound and even use the computer to emit the same tone Captain Crunch discovered, so they could make free phone calls. They used a lot of paper tape then, but with magnetic tape and then hard drives, computers would become more and more active in audio. By 61 John Kelly Jr. and Carol Lochbaum made an IBM 7094 mainframe sing Daisy Bell. Arthur C. Clarke happened to see it, and that made it into 2001: A Space Odyssey. Remember hearing it sing that as it was getting taken apart? But the digital era of sound recording is marked as starting with the explosion of Sony in the 1970s. Thanks to Moore’s Law, the components got smaller, faster, and cheaper, and by the 2000s microelectromechanical microphones went mainstream, which are what are built into laptops, cell phones, and headsets. You see, by then it was all on a single chip. Or even shared a chip. These are still mostly omnidirectional. But in modern headphones, like Apple AirPods, you’re using dual beamforming microphones. 
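That sample-quantize-reconstruct loop is simple enough to sketch in Python. This is a toy model of 16-bit PCM, not Bell Labs’ implementation; the sample rate and helper names are mine.

```python
import math

SAMPLE_RATE = 8000      # samples per second
BIT_DEPTH = 16          # bits per sample, as in a CD or WAV file

def sample_tone(freq_hz: float, duration_s: float) -> list[float]:
    """Sample an 'analog' sine wave at regular intervals."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def quantize(samples: list[float]) -> list[int]:
    """Map each sample to the nearest of 2**16 integer levels (the ADC step)."""
    peak = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for 16-bit audio
    return [round(s * peak) for s in samples]

def reconstruct(codes: list[int]) -> list[float]:
    """The DAC step: undo the scaling, leaving only rounding error."""
    peak = 2 ** (BIT_DEPTH - 1) - 1
    return [c / peak for c in codes]

analog = sample_tone(440.0, 0.01)      # 10 ms of an A4 tone
digital = quantize(analog)             # what would land in a 16-bit PCM file
restored = reconstruct(digital)

# Quantization error is bounded by half a step of the 16-bit scale.
worst = max(abs(a - r) for a, r in zip(analog, restored))
assert worst < 1 / (2 ** (BIT_DEPTH - 1) - 1)
```

The bound on `worst` is the whole argument for bit depth: more bits means finer steps and quieter quantization noise, which is why 16-bit PCM was good enough to carry the CD era.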
Beamforming uses multiple sensor arrays to extract sounds based on a whole lot of math; the confluence of machine learning and the microphone. You see, humans have known how to do many of these things for centuries. We hooked a cup to a wire and sound came out the other side. We electrified it. We then started going from engineering to pure science. We then analyzed it with all the math so we better understood the rules. And that last step is when it’s time to start writing software. Or sometimes it’s controlling things with software that gives us the necessary understanding to make the next innovative leap. The invention of the microphone doesn’t really belong to one person. Hooke, Wheatstone, Reis, Alexander Graham Bell, Thomas Edison, Wente, Thuras, Shannon, Hill, Mathews, and many, many more had a hand in putting that crappy mic in your laptop, the really good mic in your cell phone, and the stupidly good mic in your headphones. Some are even starting to move over to piezoelectric. But I think I’ll save that for another episode. The microphone is a great example of that slow, methodical rise, and iterative innovation that makes technologies truly lasting. It’s not always shockingly abrupt or disruptive. But those innovations are permanently world-changing. Just think: because the microphone and the computer got together for a blind date in the 40s, you can now record your hit album in GarageBand. For free. Or you can call your parents any time you want. Now pretty much for free. So thank you for sticking with me through all of this. It’s been a blast. You should probably call your parents now. I’m sure they’d love to hear from you. But before you do, thank you for tuning in to yet another episode of the History of Computing Podcast. We’re so lucky to have you. Have a great day!
12/1/2019 • 12 minutes, 27 seconds
Alibaba
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at a company called Alibaba. 1964. This was the year that BASIC was written, the year Kleinrock wrote history’s first paper on packet flow and design, the year the iconic IBM System/360 shipped, the year Ken Olsen got a patent for magnetic core memory, and the year the GPS predecessor (then called TRANSIT) went live. But some of the most brilliant minds of the future of computing were born that very same year. Marc Benioff, the founder of Salesforce, was born then. As was tech writer and editor of Fast Company and PC World Harry McCracken. Obama CTO Megan Smith, a former VP of Google; Alan Emtage of Archie; and Eric Bina, an early contributor to and coauthor of Netscape and Mosaic. But the Internet stork brought us two notable and ironically distinct people as well: Jeff Bezos of Amazon and Jack Ma of Alibaba. You would need to have been living under a rock for a decade or two in order to not know who Amazon is. But just how much do you know about Alibaba? Alibaba makes nearly 400 billion dollars per year with assets of nearly a trillion dollars. Amazon has revenues of $230 billion with assets just north of $160 billion. For those of us who do most of our shopping on Amazon and tend to think of them as a behemoth, just think about that. 7 times the assets and way more sales. Alibaba is so big that when Yahoo! got into serious financial trouble, their most valuable asset was shares in Alibaba. If Alibaba is so big, why is it that out of 5 Americans I asked, only 1 knew who they were? Because China. Alibaba is the Amazon of China. They also own most of Lazada, which operates eCommerce sites in Indonesia, Malaysia, the Philippines, Singapore, Thailand, and Vietnam. 
Like Amazon, they have supermarkets and streaming services; they lease cloud services; they have their own online payment platform, instant messaging, and a pharmaceutical commerce company; they sponsor FIFA; and a couple of years after Bezos bought the Washington Post, Alibaba bought the South China Morning Post for a little more than a quarter billion dollars. Oh, and you can get almost anything on there, especially if you want counterfeit brands or uranium. OK, so the uranium was a one-time thing… Or was it? Oh, and I’m merging a lot of the assets here that are under the Alibaba name. But keep in mind that if you combined Google, eBay, Amazon, and a few others you still wouldn’t have an Alibaba in terms of product coverage, dominance, or pure revenue. All while Alibaba maintains fewer employees than Alphabet (the parent of Google) or Amazon. So how does a company get to the point that they’re just this stupid crazy big? I really don’t know. Ma heard about this weird thing called the internet after he got turned down for more than 30 jobs. One of those was frickin’ KFC. He flew to the US in 1995 and some friends took him on a tour of this weird web thing. There he launched chinapages.com and made just shy of a million bucks in the first few years, building sites for companies based in China. He then went to work for the Chinese government for a couple of years. He started Alibaba with a dozen and a half people in 1999, raising a crapton of money, saying no to selling assets but yes to investments. Especially from Yahoo co-founder Jerry Yang, who gave them a billion bucks. And they grew, and they got more and more money, and sales, and really they just all-out pwned the Chinese market, slowly becoming the Chinese eBay, the Chinese Amazon, the Chinese Google, the Chinese, well, you get the picture. They even have their own Linux distro called AliOS. They own part of Lyft, part of the Chinese soccer team, and are a sponsor of the Olympic Games. 
Maybe he buys companies using AliGenie, the Alibaba home automation solution that resembles personal assistants like the Amazon Echo and Apple’s Siri. Ma supposedly has ties to Chinese President Xi Jinping that go way back. Apple makes less money than Alibaba, but their CEO gets to go hang at the White House whenever he wants. Not that he wants to do so very often… Bezos might be richer, but he doesn’t get to hang at the White House often. Makes you wonder if there’s more there, like… Nevermind. Back to the story. When Ma bought the South China Morning Post, the term “firmly discouraged” was used in multiple outlets to describe other potential bidders. Financial reports have described the same with other acquisitions. Through innovation, copy-catting, and a sprinkle of intimidation, Alibaba became a powerhouse, going public in 2014 in an IPO that raised over $25 billion and made Alibaba the most valuable tech firm in the universe. Oh, and Ma acts and sings. He rocked a little kung fu in 2017’s Gong Shou Dao. It was super-weird. He was really powerful in that movie. Strong-arming goes a lot of different ways, though. Ma was reportedly pressured to step down in late 2018, handing the company to Daniel Zhang. I guess he got a little too powerful, supposedly bribing officials in a one-party state and engaging in wonktastic accounting practices. He owns some vineyards, is only in his mid-50s, and has plenty of time on his hands now to enjoy the grapefruits of his labor. This story is pretty fantastic. He was an English teacher in 1999. And he rose to become the richest man in China. That doesn’t happen by luck. Capitalism at its best. And this modern industrialist rose to become the 21st richest person in the world in one of the most unlikely of places. Or was it? He doesn’t write code. He didn’t have a computer until his 30s. He’s never actually sold anything to customers. Communism is beautiful. And so are you. 
Thank you dear listeners, for your contributions to the world in whatever way they may be. They probably haven’t put you on the Forbes list. But I hope that tuning in helps you find ways to get there. We’re so lucky to have you, have a great day!
11/27/2019 • 8 minutes, 56 seconds
BASIC
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at the history of the BASIC programming language. We say BASIC, but really BASIC is more than just a programming language. It’s a family of languages and stands for Beginner’s All-purpose Symbolic Instruction Code. As the name implies, it was written to help students that weren’t math nerds learn how to use computers. When I was selling a house one time, someone was roaming around in my back yard; apparently they’d been to an open house, and they asked if I’m a computer scientist after they saw a dozen books I’d written on my bookshelf. I really didn’t know how to answer that question. We’ll start this story with Hungarian John George Kemeny. This guy was pretty smart. He was born in Budapest and moved to the US with his family in 1940 when his family fled anti-Jewish sentiment and laws in Hungary. Some of his family would go on to die in the Holocaust, including his grandfather. But safely nestled in New York City, he would graduate high school at the top of his class and go on to Princeton. Check this out: he took a year off to head out to Los Alamos and work on the Manhattan Project under Nobel laureate Richard Feynman. That’s where he met fellow Hungarian immigrant John von Neumann - two of a group George Marx wrote about in his book on great Hungarian emigrant scientists and thinkers, called The Martians. When he got back to Princeton he would get his Doctorate and act as an assistant to Albert Einstein. Seriously, THE Einstein. Within a few years he was a full professor at Dartmouth and went on to publish great works in mathematics. But we’re not here to talk about those contributions to the world as an all-around awesome place. You see, by the 60s math was evolving to the point that you needed computers. 
And Kemeny and Thomas Kurtz would do something special. Now Kurtz was another Dartmouth professor who got his PhD from Princeton. He and Kemeny got thick as thieves and wrote the Dartmouth Time-Sharing System (keep in mind that time sharing was all the rage in the 60s, as it gave more and more budding computer scientists access to those computer-things that, prior to the advent of Unix and the PC revolution, had mostly been reserved for the high priests of places like IBM). So time sharing was cool, but the two of them would go on to do something far more important. In 1956, they would write DARSIMCO, or Dartmouth Simplified Code. As with Pascal, you can blame Algol. Wait, no one has ever heard of DARSIMCO? Oh… I guess they wrote that other language you’re here to hear the story of as well. So in 59 they got a half-million-dollar grant from the Alfred P. Sloan Foundation to build a new department building. That’s when Kurtz actually joined the department full time. Computers were just going from big batch-processed behemoths to interactive systems. They tried teaching with DARSIMCO, FORTRAN, and the Dartmouth Oversimplified Programming Experiment, a classic acronym for 1960s-era DOPE. But they didn’t love the command structure, nor the fact that the languages didn’t produce feedback immediately. What was it called? Oh yeah: in 1964, Kemeny wrote the first iteration of the BASIC programming language, and Kurtz joined him very shortly thereafter. They did it to teach students how to use computers. It’s that simple. And as most software was free at the time, they released it to the public. We might think of this as open source-ish by today’s standards. I say ish as Dartmouth actually chose to copyright BASIC. Kurtz has said that the name BASIC was chosen because “We wanted a word that was simple but not simple-minded, and BASIC was that one.” The first program I wrote was in BASIC. BASIC used line numbers and read kinda’ like the English language. 
The first line of my program said: 10 PRINT “Charles was here” And the computer responded with “Charles was here”. The second program I wrote just added a second line that said: 20 GOTO 10 Suddenly “Charles was here” took up the whole screen and I had to ask the teacher how to terminate the program. She rolled her eyes and handed me a book. And that, my friend, was the end of me for months. That was on an Apple IIc. But a lot happened with BASIC between 1964 and then. As with many technologies, it took some time to float around and evolve. The syntax was kinda’ like a simplified FORTRAN, making my FORTRAN classes in college a breeze. That initial distribution evolved into Dartmouth BASIC, and they received a $300k grant and used student labor to write the initial BASIC compiler. Mary Kenneth Keller was one of those students and went on to finish her doctorate in ’65 along with Irving Tang, the two becoming the first PhDs in computer science in the US. After that she went off to Clarke College to found their computer science department. The language is pretty easy. I mean, like Pascal, it was made for teaching. It spread through universities like wildfire during the rise of minicomputers like the PDP from Digital Equipment and the resultant Data General Nova. This led to the first text-based games in BASIC, like Star Trek. And then came the Altair and one of the most pivotal moments in the history of computing: the porting of BASIC to the platform by Microsoft co-founders Bill Gates and Paul Allen. But Tiny BASIC had appeared a year before, and suddenly everyone needed “a BASIC.” You had Commodore BASIC, BBC BASIC, BASIC for the Trash-80, the Apple II, Sinclair and more. Programmers from all over the country had learned BASIC in college on minicomputers, and when the PC revolution came, a huge part of that was the explosion of applications, most of which were written in… you got it, BASIC! 
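That two-line program is also a nice little window into how BASIC actually worked: the interpreter kept statements in a table keyed by line number, and GOTO just moved the program counter. Here’s a toy sketch of that dispatch loop in Python - the program mirrors my old one, but the interpreter itself is a made-up miniature supporting only PRINT and GOTO, capped at a few steps so it can’t fill your screen the way mine did:

```python
def run_basic(source: str, max_steps: int = 5) -> list:
    """Toy line-numbered interpreter supporting only PRINT and GOTO.

    A real BASIC of the era did far more (variables, expressions, INPUT);
    this just shows the dispatch loop, and caps execution so that
    20 GOTO 10 can't loop forever.
    """
    program = {}
    for line in source.strip().splitlines():
        number, statement = line.split(maxsplit=1)
        program[int(number)] = statement
    numbers = sorted(program)          # line numbers in execution order
    output, pc, steps = [], 0, 0
    while pc < len(numbers) and steps < max_steps:
        statement = program[numbers[pc]]
        steps += 1
        if statement.upper().startswith("PRINT"):
            output.append(statement.split(maxsplit=1)[1].strip('"'))
            pc += 1                    # fall through to the next line
        elif statement.upper().startswith("GOTO"):
            pc = numbers.index(int(statement.split()[1]))
    return output

print(run_basic('10 PRINT "Charles was here"\n20 GOTO 10'))
# ['Charles was here', 'Charles was here', 'Charles was here']
```

My teacher’s version, of course, had no step cap - which is why I needed her help.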
I typically think of the end of BASIC coming in 1991, when Microsoft bought Visual Basic off of Alan Cooper and object-oriented programming became the standard. But the things I could do with a simple IF-THEN-ELSE statement. Or a FOR-TO statement, or a WHILE or REPEAT or DO loop. Absolute values, exponential functions, cosines, tangents, even super-simple random number generation. And input and output was just INPUT and PRINT, or LIST for source. Of course, procedural programming was always simpler and more approachable. So there, you now have Kemeny as a direct connection between Einstein and the modern era of computing. Two immigrants that helped change the world. One famous, the other with a slightly more nuanced but probably no less important impact in a lot of ways. Those early BASIC programs opened our eyes. Games, spreadsheets, word processors, accounting, human resources, databases. Kemeny would go on to chair the commission investigating Three Mile Island, a partial nuclear meltdown that was a turning point for nuclear power. I wonder what Kemeny thought when he read the following on the Statue of Liberty: “Give me your tired, your poor, Your huddled masses yearning to breathe free, The wretched refuse of your teeming shore.” Perhaps, like many before and after, he thought that he would breathe free and, with that breath, do something great - helping bring the world into the nuclear era and preparing thousands of programmers to write software that would change the world. When you wake up in the morning, you have crusty bits in your eyes and things seem blurry at first. You have to struggle just a bit to get out of bed and see the sunrise. BASIC got us to that point. And for that, we owe them our sincerest thanks. And thank you, dear listeners, for your contributions to the world in whatever way they may be. You’re beautiful. And of course thank you for giving me some meaning on this planet by tuning in. We’re so lucky to have you, have a great day!
11/24/2019 • 14 minutes, 59 seconds
Snowden
Edward Snowden Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! Today’s episode is about Edward Snowden, who leaked a trove of NSA documents that supposedly proved the NSA was storing and potentially weaponizing a lot of personal communications of US and foreign citizens. Now, before I tell an abridged version of his story, I should say that I was conflicted about whether to do this episode. But I see the documents Edward Snowden released as a turning point in privacy. Before Snowden, there was talk of digital privacy at DefCon, in the ranks of the Electronic Frontier Foundation, and of course amongst those who made hats of tin foil. But sometimes those tin foil mad hatters are right. Today, you see that word “privacy” in sessions from developers at Apple, Google, Microsoft, and many other companies that host our data. It’s front and center in sales and marketing. Many of those organizations claim they didn’t know customer data was being captured. And we as a community have no reason not to trust them. But this is not a podcast about politics. For some, what Snowden did is an act of espionage. For others it’s considered politically motivated. But many blame the leaker as a means of not addressing the information leaked. Things I’ve heard people say about what he did include:
* He was just a disgruntled contractor
* He was working for the Russians all along
* This is the problem with Millennials
* Espionage should be punishable by death
* Wikileaks rapist
* He gave Democratic server data to Trump
* This is why we shouldn’t allow trans people in the military
* He is a hero
These responses confuse a few different events. Which is understandable, given the rapid rip and replace of these stories by the modern news cycle. Let’s run through a quick review of some otherwise disconnected events. 
Chelsea Manning, then Bradley Manning, enlisted in 2007 and then leaked classified documents to Wikileaks in 2010. These documents included airstrike footage, diplomatic cables, documents about Guantanamo Bay detainees, and much more. Some of which possibly put lives in danger. Manning served seven years before having her sentence commuted by then US president Barack Obama. She was not pardoned. Wikileaks.org is an active web site started in 2006 by Julian Assange. The site began as a community-driven wiki - but quickly ended up moving into more of a centralized distribution model, given some of the material that has been posted over the years. Assange has been in and out of courts throughout his adult life, first for hacking at a young age and then for pushing the boundaries of freedom of speech, freedom of press, and the rights of sovereign nations to the security of their information. I’m sure he was right in some of those actions and wrong in others. Wikileaks has been used as a tool by conservatives, liberals, various governments, the intelligence communities of the US and Russia, and, in the private sector, by people looking to get bosses or competitors for a promotion fired. But hosting the truth knows no master. When those leaked documents help your cause, it’s a great exercise of freedom of speech. When they hurt your cause, then it must be true that Assange and his acolytes are tools of a foreign power or worse, straight up spies. Any of it could be true. But again, sometimes the truth can hurt - even if there are a few altered documents in a trove of mostly unaltered documents, as has been alleged of the hacked emails of John Podesta, the chair of Hillary Clinton’s campaign during the 2016 elections. Again, these aren’t in any way political views, just facts. Because the Chelsea Manning trials were going on around the same time that Snowden went public, I do find these people and their stories can get all mixed up. Assange was all over the news as well. 
And I don’t want to scope creep the episode. This episode is about what Edward Snowden did. And what he did was to leak NSA documents to journalists in 2013. These documents went into great detail about an unprecedented level of foreign and domestic data capture under the auspices of what he considered overreach by the intelligence community. Just because it is an unprecedented level doesn’t make it right or wrong, just more than the previous precedent. Just because he considered it an overreach doesn’t mean I do. It also doesn’t mean that that much snooping into our personal lives, without probable cause, wasn’t an overreach. And this is a bipartisan issue. The overreach arguably began in earnest under Bush, based on research done while Clinton was in office, and was then expanded under Obama. And of course, complained about by Trump while he actively sought to expand the programs. They were all complicit. How did Snowden end up with these documents? His father had served in the intelligence community. As with many of us, he became enamored with computers at a young age and turned his hobby into a career early in life. Snowden was working as a web developer when the terrorist attacks of September 11th, 2001 hit. A lot of us were pretty devastated by those events, but he wanted to do his patriotic duty and enlisted. Only problem is that he broke his legs during basic training. According to his autobiography, it happened when he landed awkwardly while trying to avoid jumping on a snake. He then began life as a contractor in the intelligence community, which was exploding in the wake of 9/11. As with many, he hopped around into different roles, finally joining the NSA for a bit, serving in Geneva before returning home to the DC area to resume life as a contractor. Contractors usually make more than staff in the intelligence community. Snowden would go on to build backup systems that would be used for even more overreach. 
He then took a step down to be a SharePoint admin in Hawaii. Because Hawaii. And because he had started suffering from pretty bad epileptic seizures, an ailment he inherited from his mom. The people that do your IT have an unprecedented amount of information at their fingertips. The backup admin can take almost everything anyone would want to know about your company home with them one day. You know, because it’s Tuesday. In fact, we often defined that as an actual business process they were supposed to follow in the days before the cloud. We called them offsite backups. SharePoint is a Microsoft product that allows you to share files, resources from other Microsoft products, news, and most anything digital with others. Snowden was a SharePoint admin, and boy did he share. Snowden took some time off from the NSA in 2013 and flew to Hong Kong, where he met with Glenn Greenwald and Ewen MacAskill. He leaked a trove of documents to The Guardian and The Washington Post. The documents kept flowing, to Der Spiegel and The New York Times. He was charged with violating the Espionage Act of 1917 and, after going into hiding in Hong Kong, tried to escape to Ecuador. But during a layover in Moscow he discovered his passport had been cancelled, and he’s been living there ever since. He has been offered asylum in a few countries but hasn’t been able to accept, because there are no direct flights there from Moscow. Think about this: just over half of Americans had a cell phone on September 11th, 2001. And practically none had what we now consider smart phones. In fact, only 35% had smart phones in 2011. Today, nearly all Americans have a cell phone, with 80 percent having a smart phone. Those devices create a lot of data. There’s the GPS coordinates, the emails sent and received, the Facebook messages to our friends, the events we say we’re going to, the recipes for the food we’re going to cook, the type of content we like to consume, our financial records. Even our photos. 
Once upon a time, and it was not very long ago at all, you had to break into someone’s house to go through all that. Not any more. During the time since 9/11 we also moved a lot of data to the cloud. You know how your email lives on a server hosted by Google, Apple, Microsoft, or some other company? That’s the cloud. You know how your documents live on Google, Dropbox or Box instead of on a small business server or large storage area network these days? That’s the cloud. It’s easy. It’s cheaper. And you don’t have to have a Snowden in every company in the world to host them yourself. The other thing that changed between 2001 and 2013 was the actual law. The USA PATRIOT Act expanded the ability for the US to investigate the September 11th terror attacks and other incidents of terror. Suddenly people could be detained indefinitely, and law enforcement could search records and homes without a court order. It was supposed to be temporary. It was renewed in 2005 under Bush and then extended in 2011 under Obama. It continued in 2015, but under Section 215 the NSA was told to stop collecting everyone’s phone data. But phone companies will keep the data and provide it to the NSA upon request, so samesies. But they still called it the Freedom Act. FISC, or the Foreign Intelligence Surveillance Court, was established in 1978 with the passing of the Foreign Intelligence Surveillance Act, or FISA. These hearings are ex parte, given that they are about intelligence matters. The Patriot Act expanded those powers. The Freedom Act retained much of that language, and the Trump administration is likely to request these be made permanent. Since we all know what’s happening, I guess our values have changed. Over the centuries, technology has constantly forced us to rethink our values, consciously or not. Can you imagine how your thinking might have changed going from a society where people didn’t read to one where they did, at the advent of the printing press? 
Just think of how email and instant messaging changed what we value. The laws, based on our ethics and values, are slow to respond to technology. Laws are meant to be deliberate, and so deliberated over. When Trump, Biden, Bernie, or the next president or presidential hopefuls ask when the government started keeping a history on them, the answer is probably that it started when you got your first cell phone. Or your first email address. Do you value keeping that information private? I’ve never cared all that much. I guess the rest of the country doesn’t either, as we haven’t taken steps to change it. But I might care about my civil liberties some day in the future. Think about that come December 15th. We can undo anything. If we care to. Because our civil liberties are just one aspect of liberty. And no matter who is in office or what they’re trying to accomplish, you still have values. On a case by case basis, you don’t have to sacrifice or erode those due to partisan bickering - otherwise, with each transition of power and each cult of personality that rises, you will slowly see them disappear. So thank you for tuning in to yet another episode of the History of Computing Podcast. We’re so lucky to have you. Have a great day!
11/21/2019 • 15 minutes, 35 seconds
Topiary: You Cannot Arrest An Idea
You Cannot Arrest An Idea Welcome to the History of Computing Podcast, where we explore the history of computers. Because understanding the past helps us handle what’s coming in the future - and maybe helps us build what’s next, without repeating some of our mistakes. Or if we do make mistakes, maybe we do so without taking things too seriously. Today’s episode is a note from a hacker named Topiary, which perfectly wraps feelings many of us have had in words that… well, we’ll let you interpret it once you hear it. First, a bit of his story. It’s February, 2011. Tflow, Sabu, Kayla, Topiary, and Ryan Ackroyd attack computer security firm HBGary Federal after CEO Aaron Barr decides to speak at a conference outing members of the then 7-year-old hacking collective Anonymous, with the motto: We are Anonymous. We are Legion. We do not forgive. We do not forget. Expect us. As a part of Anonymous he would help hack Zimbabwe, Libya, Tunisia and other sites in support of Arab Spring protestors. They would go on to hack the Westboro Baptist Church live during an interview. But that was part of a large collective. They would go on to form a group called LulzSec with PwnSauce and AVunit. At LulzSec, the 7 went on a “50 days of Lulz” spree. During this time they hit Fox.com and leaked the database of X Factor contestants, took over the PBS news site and published an article that Tupac was still alive and living in New Zealand. They published an article on The Sun claiming Rupert Murdoch had died rather than testify in the voice mail hacking trials that were big at the time. They would steal data from Sony, DDoS all the things, and they would go on to take down and/or steal data from the US CIA, Department of Defense, and Senate. The lighthearted comedy mixed with a considerable amount of hacking skill had earned them the love and adoration of tens of thousands. What happened next? Hackers from all over the world sent them their Lulz. Topiary helped get their haxies posted. 
Then Sabu was caught by the FBI and helped to out the others. Or did he? Either way, as one could expect, by July 2011 all had been arrested except AVunit. Topiary’s last tweet said “You cannot arrest an idea.” The British government might disagree. Or maybe counter that you can arrest for acting on an idea. Once unmasked, Jake Davis was in jail and then banned from the Internet for 2 years. During that time Topiary, now known as Jake Davis, wrote what is an exceptional piece of writing to have come from a 20-year-old. Here it is: “Hello, friend, and welcome to the Internet, the guiding light and deadly laser in our hectic, modern world. The Internet horde has been watching you closely for some time now. It has seen you flock to your Facebook and your Twitter over the years, and it has seen you enter its home turf and attempt to overrun it with your scandals and “real world” gossip. You need to know that the ownership of cyberspace will always remain with the hivemind. The Internet does not belong to your beloved authorities, militaries, or multi-millionaire company owners. The Internet belongs to the trolls and the hackers, the enthusiasts and the extremists; it will never cease to be this way. You see, the Internet has long since lost its place in time and its shady collective continues to shun the fact that it lives in a specific year like 2012, where it has to abide by 2012’s morals and 2012’s society, with its rules and its punishments. The Internet smirks at scenes of mass rape and horrific slaughtering followed by a touch of cannibalism, all to the sound of catchy Japanese music. It simply doesn’t give tuppence about getting a “job,” getting a car, getting a house, raising a family, and teaching them to continue the loop while the human race organizes its own death. Custom-plated coffins and retirement plans made of paperwork… The Internet asks why? 
You cannot make the Internet feel bad, you cannot make the Internet feel regret or guilt or sympathy, you can only make the Internet feel the need to have more lulz at your expense. The lulz flow through all in the faceless army as they see the twin towers falling with a dancing Hitler on loop in the bottom-left corner of their screens. The lulz strike when they open a newspaper and care nothing for any of the world’s alleged problems. They laugh at downward red arrows as banks and businesses tumble, and they laugh at our glorious government overlords trying to fix a situation by throwing more currency at it. They laugh when you try to make them feel the need to “make something of life,” and they laugh harder when you call them vile trolls and heartless web terrorists. They laugh at you because you’re not capable of laughing at yourselves and all of the pointless fodder they believe you surround yourselves in. But most of all they laugh because they can. This is not to say that the Internet is your enemy. It is your greatest ally and closest friend; its shops mean you don’t have to set foot outside your home, and its casinos allow you to lose your money at any hour of the day. Its many chat rooms ensure you no longer need to interact with any other members of your species directly, and detailed social networking conveniently maps your every move and thought. Your intimate relationships and darkest secrets belong to the horde, and they will never be forgotten. Your existence will forever be encoded into the infinite repertoire of beautiful, byte-sized sequences, safely housed in the cyber cloud for all to observe. And how has the Internet changed the lives of its most hardened addicts? They simply don’t care enough to tell you. So welcome to the underbelly of society, the anarchistic stream-of-thought nebula that seeps its way into the mainstream world — your world — more and more every day. You cannot escape it and you cannot anticipate it. 
It is the nightmare on the edge of your dreams and the ominous thought that claws its way through your online life like a blinding virtual force, disregarding your philosophies and feasting on your emotions. Prepare to enter the hivemind” I hope Topiary still has a bit of funsies here and there. I guess we all grow up at some point. He now hunts for bug bounties rather than Lulz. One was addressed in iOS 10.3.3, when you could DoS an iOS device by shoving a malicious file into CoreText. That would be CVE-2017-7003. Hacking solutions together or looking for flaws in software - it can be like a video game. For better or worse. But I love that he’s pointed that big ugly Victorian ASCII humble boat in the direction of helping to keep us betterer. And the world is a more secure place today than it was before them. And a bit more lighthearted. So thank you Topiary, for making my world better for a while. I’m sorry you paid a price for it. But I hope you’re well.
11/18/2019 • 14 minutes, 59 seconds
TOR: Gateway to the Darkish Internets
TOR: The Dark Net Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! I’ve heard people claim the Internet was meant to be open. The Internet was built using United States Defense Department grants. It wasn’t meant to be a freedom movement. These concepts were co-opted by some of the hippies who worked on the Internet. People I highly respect, like Stewart Brand and Doug Engelbart. Generations of engineers and thinkers later, we got net neutrality, and we got the idea that people should be anonymous. They rightfully looked to the Internet as a new freedom. But to be clear, those were never in the design requirements for any of the original Internet specifications. And sometimes the intent tells you a lot about the architecture, and therefore explains the evolution and why certain aspects were necessary. The Internet began in the 1960s. But the modern Internet began in 1981, when the National Science Foundation took over funding and the Internet Protocol Suite was implemented, giving the IP part of the name to the acronym TCP/IP. Every device on the Internet has an IP address. You ask another host on the Internet for information, and the site responds with that information. That response routes to the IP address listed as the source IP address in the packets of data you sent when you made the request. You can set the source IP address to an address other than your own, but then the response will be sent to the wrong place. Every device in a communication between two computers is meant to know the source and destination address of all the other devices involved in that communication. The Internet was meant to be resilient. It’s really expensive to have a private network, or a network where your computer talks directly to another. Let’s say your computer and another computer would like to have a conversation. 
That conversation likely passes through 10-12 other devices, if not more. The devices between you were once called IMPs, but they’re now called routers. Those devices keep a table of addresses they’ve attempted to communicate with and the routes between other routers that they took to get there. Thus the name. Once upon a time those routes were programmed in manually. Later the routers got smarter, forming a pyramid scheme where they look to bigger routers that have more resources to host larger and larger routing tables. The explosion of devices on the Internet also led to a technology called Network Address Translation. This is where one of the roughly 3,720,249,092 usable public IPv4 addresses is shared by potentially hundreds of thousands of devices, and your device communicates with the Internet through that one address. These are routers that route traffic back to the private address you’re using to communicate with the Internet. When bad people started to join us on the Internet, these devices ended up with a second use: to keep others from communicating with your device. That’s when some routers started acting as a firewall. Putting names to the side, this is the most basic way to explain how computers communicate over the Internet. This public Internet was then a place where anyone with access to those routers could listen to what was passing over them. Thus we started to encrypt our communications. Thus http became https. Each protocol would encrypt traffic in its own way. But then we needed to hide all of our traffic. And maybe even what sites we were going to. A common technique to hide who you are online is to establish a VPN into a computer. A VPN, or Virtual Private Network, is a point-to-point network established over existing Internet protocols. The VPN server you are logging into knows what IP address you are on. It can also intercept your communications, replay them, and, even if they’re encrypted, be aware of who you are actually communicating with. 
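The NAT trick described above is, at heart, just a lookup table the router maintains. Here’s a minimal Python sketch - the addresses and the ephemeral port range are invented for illustration - showing outbound traffic rewritten to the router’s one public address, replies mapped back to the right private host, and unsolicited inbound traffic matching nothing, which is the accidental firewall:

```python
import itertools

class NatRouter:
    """Toy NAT table mapping (private_ip, private_port) to a public port
    on the router's single public address. Values here are made up."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = itertools.count(40000)  # pretend ephemeral port pool
        self.table = {}    # (private_ip, private_port) -> public_port
        self.reverse = {}  # public_port -> (private_ip, private_port)

    def outbound(self, private_ip: str, private_port: int):
        # Rewrite an outgoing packet's source: the wider Internet only
        # ever sees the router's public address.
        key = (private_ip, private_port)
        if key not in self.table:
            port = next(self.next_port)
            self.table[key] = port
            self.reverse[port] = key
        return self.public_ip, self.table[key]

    def inbound(self, public_port: int):
        # A reply routes back to the private host; unsolicited traffic
        # to an unknown port has nowhere to go.
        return self.reverse.get(public_port)

nat = NatRouter("203.0.113.7")
print(nat.outbound("192.168.1.20", 51515))  # ('203.0.113.7', 40000)
print(nat.inbound(40000))                   # ('192.168.1.20', 51515)
print(nat.inbound(12345))                   # None - dropped
```

Real NAT also tracks protocol and connection state and expires old mappings, but the table is the whole idea.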
So a few minutes of over-simplified text lays out the basis of the Internet routing scheme under IPv4, initiated in 1983, the year the movie WarGames was released. Remember, the Internet was meant to be a resilient, fault-tolerant network so that in the event of nuclear war, the US could retaliate and kill the other half of the people left in the world. If you’ve seen WarGames you have a pretty good idea of what we’re talking about here. Just to repeat: privacy was never a concern in the design of the Internet. The United States has people in every country in the world that need to communicate home in real time. They need to do so in a secure and private manner. Part of the transition of the Internet to the National Science Foundation was to implement MILNET, the military’s own network. But let’s say you’re an operative in Iran. If you try to connect to MILNET, then you’re likely to not have a very good day. So these operatives needed to communicate back to the United States over a public network. If they used a VPN, the connection isn’t fully secured; eventually someone would be discovered, all traffic to a given address would be analyzed, that source device would be tracked down, and more bad days would follow. Let’s say you’re a political dissident in a foreign country. You want to post photos of war crimes. You need a way to securely and anonymously communicate with a friendly place to host that information. Enter the United States Naval Research Laboratory, with Paul Syverson, David Goldschlag, and Michael Reed, who were asked to find some ways to help protect the US intelligence community when they were on the public Internet. Roger Dingledine and Nick Mathewson would join the project, and DARPA would pick up funding in 1997. They came up with what we now call Tor, or The Onion Router. Any property on the Internet that is intentionally exclusionary to the public can be considered a dark net, or part of the dark web. 
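The “onion” in The Onion Router can be sketched in a few lines of Python. This is a toy: XOR with a repeating key stands in for real cryptography, and the relay names and keys are invented. But it shows the core idea - the sender wraps the message in one layer of encryption per relay, and each relay can peel exactly one layer, learning only the next hop:

```python
import base64
import json
from itertools import cycle

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real symmetric encryption (illustration only).
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def build_onion(message: str, route: list) -> bytes:
    # Wrap innermost-first, so the first relay peels the outermost layer.
    onion, next_hop = message.encode(), "destination"
    for name, key in reversed(route):
        layer = json.dumps({"next": next_hop,
                            "data": base64.b64encode(onion).decode()})
        onion, next_hop = xor(layer.encode(), key), name
    return onion

def peel(onion: bytes, key: bytes):
    # A relay decrypts only its own layer: it learns the next hop and an
    # opaque blob, never the full route or (until the exit) the payload.
    layer = json.loads(xor(onion, key))
    return layer["next"], base64.b64decode(layer["data"])

route = [("guard", b"key-a"), ("middle", b"key-b"), ("exit", b"key-c")]
packet = build_onion("meet at midnight", route)
for _, key in route:
    hop, packet = peel(packet, key)
print(packet.decode())  # the exit relay recovers "meet at midnight"
```

Real onion routing negotiates per-hop keys with public-key cryptography and uses fixed-size cells, but the peel-one-layer property is the same.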
Although usually we aren’t talking about your company intranet when we refer to these networks - we’re *usually* talking about something like Tor. Tor is simple and incredibly complicated. You install software, or a browser extension. Tor routes your data through a bunch of nodes. Each of those computers or routers is only aware of the node in front of it or behind it in the communication route, and each layer of encryption reveals only the next node to send to. Since each step is encrypted, these layers of encryption can be considered like a network with layers, like an onion. The name might also come from the fact that a lot of people cry when they realize what Tor speeds are like. And since each step is separately encrypted, network surveillance is still defeated even if any one device in the route is compromised. This is all pretty ingenious. So anyone can access the Internet anonymously? Yes. And when they do they can do anything they want, totally anonymously, right? Yes. And this is what is often called the Dark Web? Ish. There are sites you can access anonymously through Tor. Those sites might deal in drugs, fraud, counterfeit anything, gambling, hacking, porn, illegal guns, prostitution, anything. And anything might be really, really bad. You can quickly find terrible things, from violence for hire to child pornography. Humans can be despicable. Wait, so are we saying the US government really supports Tor? Yes. Most of the funding for the Tor project comes from the US government. Human Rights Watch, Reddit, and Google kick in money here and there. But it’s not much comparably. China, Turkey, and Venezuela banned Tor? Duh. They would ban it in North Korea but they don’t need to. Tor was used by Edward Snowden in 2013 to send leaked information to The Washington Post and The Guardian. And the use of the network has picked up ever since. According to leaked information, the NSA finds Tor annoying. Even though the US government funds it. 
As does the Russian government, which has offered a bounty for deanonymization techniques. After the fallout from Snowden’s leaked data, the US passed a bill allowing libraries to run Tor, opening the door for more exit nodes, or public-facing IP addresses for Tor. Now Tor isn’t the be-all end-all. Your traffic is sent through an exit node. So let’s think about those library computers. If someone is listening to network traffic on one of those computers and the traffic being sent isn’t encrypted then, well, your email password is exposed. And flaws do come up every now and then. But they’re publicly exposed and then the maintainers solve for them. I’ve heard people claim that since Tor is government-funded, it’s watched by the government. Well, anything is possible. But consider this: the source code is published as well. It’s on GitHub at GitHub.com/torproject. If there are any intentional flaws, they’re right there in broad daylight. The projects have been available for years. Given the fact that you have the source code, why don’t you give cracking it a shot? I have about 500 more episodes to record in the queue. We’ll see who wins that race. I should probably go start recording the next one now. All you spooks out there listening through Tor, stay safe. And to all the listeners, thank you for tuning in to yet another episode of the History of Computing Podcast. We’re so lucky to have you. Have a great day!
11/11/2019 • 14 minutes, 19 seconds
Visual Basic
Visual Basic Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to cover an important but often under-appreciated step on the path to ubiquitous computing: Visual Basic. Visual Basic is a programming language for Windows. It’s in most every realistic top 10 of programming languages of all time. It’s certainly split into various functional areas over the last decade or so, but it was how you did a lot of different tasks in Windows automation and programming for two of the most important decades, through a foundational period of the PC movement. But where did it come from? Let’s go back to 1975. This was a great year. The Vietnam War ended, Sony gave us Betamax, JVC gave us VHS. Francisco Franco died. I don’t wish ill on many, but if I could go back in time and wish ill on him, I would. NASA launched a joint mission with the Soviet Union. The UK voted to stay in the EEC. Jimmy Hoffa disappeared. And the Altair shipped. Altair BASIC is like that Lego starter set you buy your kid when you think they’re finally old enough not to swallow the smallest pieces. From there, you buy them more and more, until you end up stepping on those smallest pieces and cursing. Much as I used to find myself frequently cursing at Visual Basic. And such is life. Or at least, such is giving life to your software ideas. No matter the language, there’s often plenty of cursing. So let’s call the Altair a proto-PC. It was underpowered, cheap, and with this Microsoft BASIC programming language you could, OMG, feed it programs that would blink lights, or create early games. That was 1978. And it was based largely on the work of John Kemeny and Thomas Kurtz, the authors of the original BASIC in 1964 at Dartmouth College. 
As the PC revolution came, BASIC was popular on the Apple II and the original PCs, with QuickBASIC coming in 1985. An IDE, or Integrated Development Environment, for QuickBASIC shipped in 2.0. At the time Softlab’s Maestro was the biggest IDE in use, and IDEs had been around since the mid-1970s. Version 3.0 let you compile these programs into DOS executables, or .exe files, and 4.0 brought debugging into the IDE. Pretty sweet. You could run the interpreter without ever leaving the IDE! No offense to anyone, but Apple was running around the world pitching vendors to build software for the Mac, yet had created an almost contentious development environment. And it showed in the number of programs available for the Mac. Microsoft was obviously investing heavily in enabling developers to develop in a number of languages and it showed; Microsoft had 4 times the software titles. Many of which were in BASIC. The last version of QuickBASIC, as it was known by then, came in 4.5, in 1988, the year the Red Army began withdrawing from Afghanistan - probably while watching Who Framed Roger Rabbit on pirated VHS tapes. But by the late 80s, use began to plummet. Much as my daughter’s joy in Legos began to plummet when she entered tweenhood. It had been a huge growth spurt for BASIC, but the era of object-oriented programming was emerging. And Microsoft was in an era of hypergrowth, with Windows 3.0 coming - and what’s crazy is they were just entering the buying tornado. In 1988, the same year as the final release of QuickBASIC, Alan Cooper created a visual programming language he’d been calling Ruby. Now, there would be another Ruby later. This language was visual, and Apple had been early to the market on visual computing with the Mac, introduced in 1984. Microsoft had responded with Windows 1.0 in 1985. But the development environment just wasn’t very… visual. Most people at the time used Windows to open a window of icky text. 
Microsoft leadership knew they needed something new; they just couldn’t get it done. So they started looking for a more modern option. Cooper showed his Ruby environment to Bill Gates and Gates fell in love. Gates immediately bought the product and it was renamed to Visual Basic. Sometimes you build, sometimes you partner, and sometimes you buy. And so in 1991, Visual Basic was released at Comdex in Atlanta, Georgia, and came around for DOS the next year. I can still remember writing a program for DOS. They faked a GUI using ASCII art. Gross. VB 2 came along in 1992, laying the foundations for class modules. VB 3 came in 93 and brought us the JET database engine. Not only could you instantiate an object but you had somewhere to keep it. VB 4 came in 95, when we got a 32-bit option. That adds a year or 6 for every vendor. The innovations that Visual Basic brought to Windows can still be seen today. VBX and DLL are two of the most substantial. A DLL is a “dynamic link library” file that holds code and procedures that Windows programs can then consume. DLLs allow multiple programs to use that code, saving on memory and disk space. Shared libraries are the cornerstone of many an object-oriented language. VBXs aren’t really used any more, as they’ve been replaced with OCXs, but they’re similar and the VBX certainly spawned the innovation. These Visual Basic Extensions, or VBX for short, were C or C++ components that were assembled into an application. When you look at applications you can still see DLLs and OCXs. VB 4 was when we switched from VBX to OCX. VB 5 came in 97. This was probably the most prolific, both for software you wanted on your computer and malware. We got those crazy ActiveX controls in VB 5. VB 6 came along in 1998, extending the ability to create web apps. And we sat there for 10 years. Why? The languages really started to split with the explosion of web tools. VBScript was put into Active Server Pages. 
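That shared-library idea - one file of compiled code consumed by many programs - is still how every major platform works. As a minimal sketch (in Python rather than VB, and assuming a Unix-like system where the C runtime is exposed as a shared library rather than a Windows DLL), you can load a library at runtime and call into it:

```python
import ctypes
import ctypes.util

# Locate and load the C runtime's shared library - the Unix cousin of a
# Windows DLL. find_library returns e.g. "libc.so.6" on Linux; if it
# returns None, CDLL(None) falls back to the current process's symbols.
libc_name = ctypes.util.find_library("c")
libc = ctypes.CDLL(libc_name or None)

# Declare the signature of abs() so ctypes marshals arguments correctly,
# then call straight into the shared library.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # → 42
```

On Windows the very same module loads actual DLLs, e.g. ctypes.WinDLL("kernel32") - exactly the consumption model described above.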
We got the .NET framework for compiled web pages. We got Visual Basic for Applications, allowing Office to run VB scripts using VBA 7. Over the years the code evolved into what are now known as Universal Windows Platform apps, written in C++ with WinRT or C++ with CX. Those shared libraries are now surfaced in common APIs and sandboxed, given that security and privacy have become a much more substantial concern since the tidal wave of the Internet crashed into our Lego sets, smashing them back to single blocks. Yeah, those blocks hurt when you step on them. So you look for ways not to step on them. And controlling access to API endpoints with entitlements is a pretty good way to walk lightly. Bill Gates awarded Cooper the first “Windows Pioneer Award” for his work on Visual Basic. Cooper continued to consult with companies, with this crazy idea of putting users first. He was an early proponent of User Experience and putting users first when building interfaces. In fact, his first book was called “About Face: The Essentials of User Interface Design.” That was published in 1995. He still consults and trains on UX. Honestly, Alan Cooper only needs one line on his resume: “The Father of Visual Basic.” Today Eclipse and Visual Studio are among the most used IDEs in the world. And there’s a rich ecosystem of specialized IDEs. The IDE gives code completion, smart code completion, code search, cross-platform compiling, debugging, multiple language support, syntax highlighting, version control, visual programming, and so much more. Much of this isn’t available on every platform or for every IDE, but those are the main features I look for - like the first time I cracked open IntelliJ. The IDE is almost optional in functional programming - but in an era of increasingly complex object-oriented programming where classes are defined in hundreds or thousands of itty bitty files, a good, smart, feature-rich IDE is a must. And Visual Studio is one of the best you can use. 
Given that that top-to-bottom, procedural style of programming has faded, there’s hardly any BASIC remaining in the languages you build modern software in. The explosion of object-orientation created flaws in operating systems, but we’ve matured beyond that and now get to find all the new flaws. Fun, right? But it’s important to think: from Alan Kay’s introduction of Smalltalk in 1972, new concepts in programming had been emerging and evolving. The latest incarnation is the API-driven programming methodology. Gone are the days when we accessed memory directly. Gone are the days when the barrier to learning to program was understanding top-to-bottom procedural syntax. Gone are the days when those Legos were simple little sets. We’ve moved on to building Death Stars out of Legos with more than 3,500 pieces. Due to increasingly complex apps we’ve had to find new techniques to keep all those pieces together. And as we did, we learned that we needed to be much more careful. We’ve learned to write code that is easily tested. And we’ve learned to write code that protects people. Visual Basic was yet another stop in the evolution toward modern design principles. We’ve covered others and we’ll cover more in coming episodes. So until next time, think of the continuing evolution and what might be next. You don’t have to be in front of it, but it does help to have a nice big think on how it can impact projects you’re working on today. So thank you for tuning in to yet another episode of the History of Computing Podcast. We’re so lucky to have you. Have a great day!
11/8/2019 • 14 minutes, 2 seconds
Boring Old Application Programming Interfaces
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! Today’s episode is gonna’ be a bit boring. It’s on APIs. An API is an Application Programming Interface - a set of tools, protocols, or routines used for building applications. See? Boring! Most applications and code today are just a collection of REST endpoints interconnected with fancy development languages. We can pull in a lot of information from other apps and get a lot of code, as we call it these days, “for free”. It’s hard to imagine a world without APIs. It’s hard to imagine what software would be like if we still had to write memory to a specific register in order to accomplish basic software tasks. Obfuscating these low-level tasks is done by providing classes of software to give developers access to common tasks they need to perform. These days, we just take this for granted. But once upon a time, you did have to write all of that code over and over, on PCs, initially in BASIC, Pascal, or assembly for really high performance tasks. Then along comes Roy Fielding. He writes the Architectural Styles and the Design of Network-based Software Architectures dissertation in 2000. But APIs came out of a need for interaction between apps and devices. Between apps and web services. Between objects and other objects. The concept of the API started long before y2k though. In the 60s, we had libraries in operating systems. But what Subrata Dasgupta referred to as the second age of computer science in the seminal book by the same name began in 1970. And the explosion of computer science as a field in the 70s gave us the rise of Message Oriented Middleware, and then Enterprise Application Integration (EAI) became the bridge into mainframe systems. This started a weird time. 
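To make the “collection of REST endpoints” idea concrete, here’s a minimal sketch in Python: a toy HTTP service exposing one JSON endpoint, and a client consuming it. The route and field names are made up for illustration, and only the standard library is used.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """A toy REST API with a single GET endpoint returning JSON."""

    def do_GET(self):
        if self.path == "/api/greeting":
            body = json.dumps({"message": "hello from the API"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to port 0 so the OS picks any free port, and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" side: one call, structured data back - no registers in sight.
with urlopen(f"http://127.0.0.1:{server.server_port}/api/greeting") as resp:
    message = json.load(resp)["message"]

print(message)  # → hello from the API
server.shutdown()
```

Swap the in-process server for someone else’s host and that one urlopen call is the whole story of consuming an API.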
IBM ruled the world, but they were listening to the needs of customers and released MQSeries to facilitate message queues. I realize message queues are boring. Sorry. I’ve always felt like the second age of computer science is split right down the middle. The 1980s brought us into the era of object-oriented programming, when Alan Kay and his coworkers from Xerox PARC gave us Smalltalk, the first popular object-oriented programming language, and began to codify methods and classes. Life was pretty good. This led to a slow adoption across the world of the principles of Alan Kay, via Doug Engelbart, via Vannevar Bush. The message passing and queuing systems were most helpful in very large software projects where there were a lot of procedures or classes you might want to share to reduce the cyclomatic complexity of those projects. Suddenly distributed computing began to be a thing. And while it started in research institutes like PARC and academia, it proliferated into the enterprise throughout the 80s. Enterprise computing is boring. Sorry again. The 90s brought grunge. And I guess this little uninteresting thing called the web. And with the web came JavaScript. It was pretty easy to build an API endpoint, or a programmatic point you talked to on a site, using a JSP, or JavaServer Page - which helped software developers create dynamically generated pages, such as those that respond to a query for information, pass that query on to a database, and provide the response. You could also use PHP, Ruby, ASP, and even NeXT’s WebObjects, the very name of which indicates an object-oriented framework. The maturity of API development environments led to Service-Oriented Architectures in the early 2000s, where we got into more function-based granularity. 
Instead of simply writing an endpoint to make data that was in our pages accessible, we would build those endpoints to build pages on, and then build contracts for those endpoints that guaranteed that we would not break the functionality other teams needed. Now other teams could treat our code as classes they’d written themselves. APIs had shot into the mainstream. Roy Fielding’s dissertation legitimized APIs, and over the next few years entire methodologies for managing teams based on the model began to emerge. Fielding wasn’t just an academic. He would help create the standards for HTTP communication. And suddenly having an API became a feature that helped propel the business. This is where APIs get a bit more interesting. You could transact online. eBay shipped an API in 2000, giving developers the ability to build their own portals. They also released low-code options called widgets that you could just drop into a page and call to produce a tile, or iFrame. The first Amazon APIs shipped in 2002, in an early SOAP iteration, along with widgets as well. In fact, embedding widgets became much bigger than APIs, and iFrames are still common practice today, although I’ve never found a *REAL* developer who liked them. I guess I should add that to my interview questions. The Twitter API, released in 2006, gave other vendors the ability to write their own Twitter app, but also helped give us the concept of OAuth, for federated identity. Amazon released their initial AWS APIs that year, making it possible to use their storage and compute clusters and automate the tasks to set them up and tear them down. Additional APIs would come later, giving budding developers the ability to write software and host data in databases, even without building their own big data compute clusters. This too helped open the doors to an explosion of apps and web apps. These days they basically offer everything, including machine learning, as a service, all accessible through an API. 
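Those keyed APIs mostly work the same way: a token rides along with every call. A sketch of the pattern in Python - the endpoint URL and key below are hypothetical, and the request is only constructed, never sent:

```python
from urllib.request import Request

# Hypothetical partner-API endpoint and key, purely for illustration.
API_URL = "https://api.example.com/v1/orders"
API_KEY = "MY-PARTNER-KEY"

# A bearer token in the Authorization header is the common convention;
# OAuth flows ultimately hand a client a token it sends the same way.
req = Request(API_URL, headers={"Authorization": f"Bearer {API_KEY}"})

print(req.get_header("Authorization"))  # → Bearer MY-PARTNER-KEY
```

Revoke the key and the integration dies - which is exactly the control vendors wanted once APIs became products.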
The iPhone 3G wasn’t boring. It came along in 2008, and suddenly the world of mobile app development was unlocked. Foursquare came along at about the same time and opened up their APIs. This really cemented the whole concept of using other vendors’ APIs to accomplish various tasks, without having to write all the code to do some of those tasks yourself. From there, more and more vendors began to open APIs, and not only could you pull in information but you could also push more information out. And the ability to see settings gives us the ability to change them as well. From the consumer Foursquare to the enterprise, now we have microservices available to do anything you might want to do. Microservices are applications that get deployed as modular services. Private APIs, or those that are undocumented. Public APIs, or interfaces anyone can access. Partner APIs, or those requiring a key to access. At this point, any data you might want to get into an app is probably available through an API. Companies connect to their own APIs to get data, especially for apps. And if a vendor refuses to release their own API, chances are some enterprising young developer will find a way if there’s an actual desire to leverage their data, which is what happened to Instagram. Until they opened up their API, at least. And Facebook, who released their API to any developer well over a decade ago, is probably the most villainized in this regard. You see, Facebook allowed a pretty crazy amount of data to be accessible in their API until, all of a sudden, Cambridge Analytica supposedly stole elections with that data. There’s nothing boring about stealing elections! Whether you think that’s true or not, the fact that Facebook is the largest and most popular social network in the history of the world shines a light on what happens when technology being used by everyone in the industry is taken advantage of. 
I’m not sticking up for them or villainizing them; but when I helped to write one of the early Facebook games, and was shown what we now refer to as personally identifiable data, and was able to crawl a user to get to their friends to invite them to add our game, and then their friends, it didn’t seem in the least bit strange. We’d done spidery things with other games. Nothing weird here. The world is a better place now that we have OAuth grant types and every other limiter on the planet. Stripe, in fact, gave any developer access to quickly and easily process financial transactions. And while there were well-entrenched competitors, they took over the market by making the best APIs available. They understood that if you make it easy and enjoyable for developers, they will push for adoption. And cottage industries of apps have sprung up over the years, where apps aggregate data into a single pane of glass from other sources. Tools like Wikipedia embrace this, banks allow Mint and Quickbooks to aggregate and even control finances, while advertising-driven businesses like portals and social networks seem to despise it, understandably. Sometimes they allow it to gain market share and then start to charge a licensing fee when they reach a point where the cost is too big not to, like what happened with Apple using Google Maps until suddenly they started their own mapping service. Apple, by the way, has never been great about exposing or even documenting their publicly accessible APIs outside of those used in their operating systems, APNs, and profile management environment. The network services Apple provides have long been closed off. Today, if you write software, you typically want that software to be what’s known as API-first. API-first software begins with the tasks users want your software to perform. The architecture and design mean the front-end, or any apps, just talk to those backend services and perform as little logic not available through an API as possible. 
This allows you to issue keys to other vendors and build integrations so those vendors can do everything you would do, and maybe more. Suddenly, anything is possible. Combined with continuous deployment, continuous testing, continuous design, and continuous research, we heavily reduce the need to build so much, slashing the time and the cost it takes to get to market substantially. When I think of what it means to be nimble, no matter how big the team, that’s what I think of. Getting new products and innovations to market shouldn’t be boring. APIs have helped to fulfill some of the best promises of the Information Age, putting an unparalleled amount of information at our fingertips. The original visionary of all of this, Vannevar Bush, would be proud. But I realize that this isn’t the most exciting of topics. So thank you for tuning in to yet another episode of the History of Computing Podcast. We’re so lucky to have you. Have a great day!
11/4/2019 • 15 minutes
In The Beginning... There Was Pong
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at Pong. In the beginning there was Pong. And it was glorious! Just think of the bell bottoms at Andy Capp’s Tavern in Sunnyvale, California on November 29th, 1972. The first Pong they built was just a $75 black and white TV from a Walgreens and some cheap parts. The cabinet wasn’t even that fancy. And after that night, the gaming industry was born. It started with people showing up to play the game. They ended up waiting for the joint to open, not drinking, and just gaming the whole time. The bartender had never seen anything like it. I mean, just a dot being knocked around a screen. But it was social. You had to have two players. There was no machine learning to play the other side yet. Pretty much the same thing as real ping pong. And so Pong was released by Atari in 1972. It reminded me of air hockey the first time I saw it. You bounced a ball off a wall and tried to get it past the opponent using paddles. It never gets old. Ever. That’s probably why, of all the Atari games at the arcade, more quarters got put into it than any other. The machines were sold for three times the cost to produce them; unheard of at the time. The game got so popular that within a year the company had sold 2,500 machines, a number they tripled in 1974. I wasn’t born yet. But I remember my dad telling me that they didn’t have a color TV yet in 72. They’d manufactured the games in an old skate rink. And they were cheap because, with the game needing so few resources, they pulled it off without a CPU. But what about the design? It was built by Al Alcorn as a training exercise that Nolan Bushnell gave him after he was hired at Atari. He was a pretty good hire. It was supposed to be so easy a kid could play it. I mean, it was so easy a kid could play it. 
Bushnell would go down as the co-creator of Pong. Although maybe Ralph Baer should have as well, given that Bushnell tested his table tennis game at a trade show the same year he had Alcorn write Pong. Baer had gotten the idea of building video games while working on military systems at a few different electronics companies in the 50s, and patented a device called the Brown Box - filed in 1971 and granted in 1973 - prior to licensing it to Magnavox to become the Odyssey. Tennis for Two had been made available in 1958. Spacewar! had popped up in 1962, thanks to MIT’s Steve “Slug” Russell being teased until he finished it. It was written on the PDP-1 and slowly made its way across the world as the PDP was shipping. Alan Kotok had whipped up some sweet controllers, but it could be played with just the keyboard as well. No revolution seemed in sight yet, as it was really just shipping to academic institutions. And to very large companies. The video game revolution was itching to get out. People were obsessed with space at the time. Space was all over science fiction, there was a space race being won by the United States, and so Spacewar! gave way to Computer Space, the first arcade video game to ship, in 1971, modeled after Spacewar!. But as an early coin-operated video game it was a bit too complicated. As was Galaxy Game, whipped up in 1971 by Bill Pitts and Hugh Tuck at Stanford. Computer Space came from Bushnell and his cofounder Ted Dabney, who’d worked together at Ampex. They initially called their company Syzygy Engineering but, as can happen, there was a conflict on that trademark and they changed the name to Atari. They had designed Computer Space, but it was built and distributed by Nutting Associates. It was complex and needed a fair amount of instructions to get used to. Pong on the other hand needed no instructions. A dot bounced from you to a friend and you tried to get it past the other player. Air hockey. Ping pong. Ice hockey. Football. It just kinda’ made sense. 
You bounced the dot off a paddle. The center of each paddle returned your dot straight back, and the further out you hit it, the sharper the rebound angle. The ball got faster the longer the game went on. I mean, you wanna’ make more quarters, right?!?! Actually that was a bug, but one you keep. They added sound effects. They spent three months on it. It was glorious, and while Al Alcorn has done plenty of great stuff in his time in the industry, I doubt much of it was filled with the raw creativity he got to display during those months. It was a runaway success. There were clones of Pong. Coleco released the Telstar and Nintendo came out with the Color TV-Game 6. In fact, General Instrument just straight up put Pong on a chip. Something else happened in 1972. The Magnavox Odyssey shipped - the first home console, with interchangeable game cards. After Pong, Atari had pumped out Gotcha, Rebound, and Space Race. They were finding success in the market. Then Sears called. They wanted to sell Pong in the home. Atari agreed. They actually outsold the Odyssey when they finally made the single-game console. Magnavox sued, claiming the concept had been stolen. They settled for $700k. Why would they settle? Well, Magnavox could actually prove that they’d built the game first and draw a line to where Atari got the idea. The good, the bad, and the ugly of intellectual property is that the laws exist for a reason. Baer beat Atari to the punch, but he’d also go on to co-develop the game Simon. All of his prototypes now live at the Smithsonian. But back to Pong. The home version of Pong was released in 1974 and started showing up in homes in 1975, especially after the Christmas buying season that year. It was a hit, well on its way to becoming iconic. Two years later, Atari released the iconic Atari 2600, which had initially been called the VCS. This thing was $200 and came with a couple of joysticks, a couple of paddles, and a game called Combat. 
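That paddle behavior - straight back off the center, sharper rebounds toward the edges - is easy to sketch. Here’s a hedged Python version; the paddle size and maximum angle are guesses for illustration, not Atari’s original circuit values:

```python
def bounce_angle(hit_offset, paddle_half_height, max_angle=45.0):
    """Return the rebound angle in degrees from horizontal.

    hit_offset is the ball's distance from the paddle's center;
    a center hit rebounds straight back, an edge hit at max_angle.
    """
    # Normalize to -1.0 .. 1.0 and clamp anything past the paddle's edge.
    t = max(-1.0, min(1.0, hit_offset / paddle_half_height))
    return t * max_angle

print(bounce_angle(0, 16))   # center hit → 0.0 (straight back)
print(bounce_angle(16, 16))  # top edge → 45.0
print(bounce_angle(-8, 16))  # halfway down → -22.5
```

Add a speed multiplier that creeps up with every volley and you’ve re-created the famous “bug” that kept the quarters coming.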
Suddenly games were showing up in homes all over the world. They needed more money to make money, and Bushnell sold the company. Apple would become one of the fastest growing companies in US history with their release of the Apple II, making Steve Jobs a quarter of a billion dollars in 1970s money. But Atari ended up selling millions of units and becoming THE fastest growing company in US history at the time. There were sequels to Pong, but by the time Breakout and other games came along, you really didn’t need them. I mean, Pin-Pong? Pong Doubles was fine, but Super Pong, Ultra Pong, and Quadrapong never should have happened. That’s cool though. Other games definitely needed to happen. Pac-Man became popular, and given it wasn’t just a dot but a dot with a slice taken out for a mouth, it ended up on the cover of Time in 1982. A lot of other companies were trying to build stuff, but Atari seemed to rule the world. These things have a pretty limited life span. The video game crash of 1983 caused Atari to lose half a billion dollars. The stock price fell. At an important time in computers and gaming, they took too long to release the next model, the 5200. It was a disaster. Then the Nintendo arrived in some parts of the world in 1983 and took the US by storm in 1985. Atari went into a long decline, an almost unstoppable downward spiral. That was sad to watch. I’m sure it was sadder to be a part of. It was even sadder when I studied corporate mergers in college. I’m sure that was even sadder to be a part of as well. Nolan Bushnell and Ted Dabney, the founders of Atari, wanted a hit coin-operated game. They got it. But they got way more than they bargained for. They were able to parlay Pong into a short-lived empire. Here’s the thing. Pong wasn’t the best game ever made. It wasn’t an original Bushnell idea. It wasn’t even IP they could keep anyone else from cloning. 
But it was the first successful video game, and it helped fund the development of the VCS, or 2600, that would bring home video game consoles into the mainstream, including my house. And the video game industry would later eclipse the movie industry. But the most important thing Pong did was to show regular humans that microchips were for more than… computing. Ironically, the game didn’t even need real microchips. The developers would all go on to do fun things. Bushnell founded Chuck E. Cheese with some of his Croesus-mode cash. Once it was clear that the Atari consoles were done, you could get iterations of Pong for the Sega Genesis, the PlayStation, and even the Nintendo DS. It’s floated around the computer world in various forms for a long, long time. The game is simple. The game is loved. Every time I see it I can’t help but think about bell bottoms. It launched industries. And we’re lucky to have had it. Just like I’m lucky to have had you as a listener today. Thank you so much for choosing to spend some time with us. We’re so lucky to have you.
11/1/2019 • 12 minutes, 55 seconds
The Apache Web Server
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to cover one of the most important and widely distributed server platforms ever: the Apache Web Server. Today, Apache servers account for around 44% of the 1.7 billion web sites on the Internet. But at one point it was zero. And this is crazy: it’s down from over 70% in 2010. Tim Berners-Lee had put the first website up in 1991 and what we now know as the web was slowly growing. Our story picks up in 1994 at the National Center for Supercomputing Applications, University of Illinois, Urbana-Champaign. Yup, NCSA is also the organization that gave us NCSA Telnet and Mosaic, the web browser that would evolve into Netscape. NCSA HTTPd, the web server Apache grew out of, was initially created by Robert McCool and written in C. You can’t make that name up. I’d always pictured him as a cheetah wearing sunglasses. Who knew that he’d build a tool that would host half of the web sites in the world. A tool that would go on to be built into plenty of computers so they can spin up sharing services. After McCool left NCSA, the HTTP daemon went a little, um, dormant in development. The code had forked, and the extensions and bug fixes needed to get merged into a common distribution. That common distribution became Apache, a free and open source web server first released in 1995. Times have changed since 1995. Originally the name was supposedly a cute reference to “a patchy server”, given that it was based on lots of existing patches of craptastic code from NCSA. That NCSA HTTPd lineage is still alive and well, all the way down to the configuration files. For example, on a Mac these are stored at /private/etc/apache2/httpd.conf. The original Apache Group consisted of Brian Behlendorf, Roy T. Fielding, Rob Hartill, David Robinson, Cliff Skolnick, Randy Terbush, Robert S. 
Thau, and Andrew Wilson, with additional contributions from Eric Hagberg, Frank Peters, and Nicolas Pioch. Within a year of first shipping, Apache had become the most popular web server on the internet. The distributions and sites continued to grow, and in 1999 the group formed the Apache Software Foundation to give financial, legal, and organizational support for Apache and to make sure the project outlived the participants. They even started bringing other open source projects under that umbrella. Projects like Tomcat. And the distributions of Apache grew. Mod_ssl, which brought SSL functionality to Apache 1.3, was released in 1998. And it grew. The first conference, ApacheCon, came in 2000. Douglas Adams was there. I was not. There were 17 million web sites at the time. The number of web sites hosted on Apache servers continued to rise. Apache 2 was released in 2002. The number of web sites hosted on Apache servers continued to rise. By 2009, Apache was hosting over 100 million websites. By 2013 Apache had added that it was named “out of a respect for the Native American Indian tribe of Apache”. The history isn’t the only thing that was rewritten. Apache itself was rewritten and is now distributed as Apache 2.0. There were over 670 million web sites by then. And we hit 1 billion sites in 2014. I can’t help but wonder what percentage are collections of fart jokes. Probably not nearly enough. But an estimated 75% are inactive sites. The job of a web server is to serve web pages on the internet. Those were initially flat HTML files but have gone on to include CGI, PHP, Python, Java, JavaScript, and others. A web browser is then used to interpret those files. It accesses the .html or .htm file (or one of the many other file types that now exist), opens a page, and then loads the text, images, included files, and processes any scripts. 
Both use the HTTP protocol; thus the URL begins with http, or https if the site is being hosted over SSL. Apache is responsible for providing the access to those pages over that protocol. The way the scripts are interpreted is through mods. These include mod_php, mod_python, mod_perl, etc. The modular nature of Apache makes it infinitely extensible. OK, maybe not infinitely. Nothing’s really infinite. But the loadable dynamic modules do make the system more extensible. For example, you can easily get TLS/SSL using mod_ssl. The great thing about Apache and its mods is that anyone can adapt the server for generic uses, and they allow you to get into some really specific needs. And the server, as well as each of those mods, has its source code available on the Interwebs. So if it doesn’t do exactly what you want, you can conform the server to your specific needs. For example, if you wanna’ hate life, there’s a mod for FTP. Out of the box, Apache logs connections, includes a generic expression parser, supports WebDAV and CGI, can support embedded Perl, PHP, and Lua scripting, can be configured for public_html per-user web pages, supports htaccess to limit access to various directories as one of a few authorization access controls, and allows for very in-depth custom logging and log rotation. Those logs include things like the name and IP address of a host, as well as geolocations. It can rewrite headers, URLs, and content. It’s also simple to enable proxies. Apache, along with Linux, MySQL, and PHP, became so popular that the term LAMP was coined, short for those products. The prevalence allowed the web development community to build hundreds or thousands of tools on top of Apache through the 90s and 2000s, including popular Content Management Systems, or CMSes for short, such as WordPress, Mambo, and Joomla. 
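Most of that configuration lives in httpd.conf and the files it includes. Here’s a hypothetical fragment in that spirit - the module paths, hostname, and certificate paths are made up for illustration, though the directives themselves are standard Apache ones:

```apache
# Load a couple of the mods discussed above.
LoadModule ssl_module modules/mod_ssl.so
LoadModule php_module modules/libphp.so

# Serve one TLS-protected site with a custom log format.
<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot "/var/www/html"
    SSLEngine on
    SSLCertificateFile "/etc/ssl/certs/example.com.pem"
    SSLCertificateKeyFile "/etc/ssl/private/example.com.key"
    CustomLog "logs/ssl_access_log" "%h %l %u %t \"%r\" %>s %b"
</VirtualHost>
```

A quick `apachectl configtest` tells you whether a fragment like this parses before you reload the server.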
* Auto-indexing and content negotiation
* Reverse proxy with caching
* Multiple load balancing mechanisms
* Fault tolerance and failover with automatic recovery
* WebSocket, FastCGI, SCGI, AJP, and uWSGI support with caching
* Dynamic configuration
* Name- and IP address-based virtual servers
* gzip compression and decompression
* Server Side Includes
* User and session tracking
* Generic expression parser
* Real-time status views
* XML support
Today we have several web servers to choose from. Engine-X, spelled Nginx, is a newer web server that was initially released in 2004. Apache uses a thread per connection and so can only process the number of threads available; by default 10,000 in Linux and macOS. NGINX doesn’t use threads, so it can scale differently, and is used by companies like AirBNB, Hulu, Netflix, and Pinterest. That 10,000 limit is easily controlled using concurrent connection limiting, request processing rate limiting, or bandwidth throttling. You can also scale with some serious load balancing and in-band health checks or with one of the many load balancing options. Having said that, Baidu.com, Apple.com, Adobe.com, and PayPal.com - all Apache. We also have other web servers provided by cloud services like Cloudflare and Google slowly increasing in popularity. Tomcat is another web server. But Tomcat is almost exclusively used to run various Java servers, servlets, EL, WebSockets, etc. Today, each of the open source projects under the Apache Foundation has a Project Management Committee. These provide direction and management of the projects. New members are added when someone who contributes a lot to the project gets nominated to be a contributor, and then a vote is held requiring unanimous support. Commits require three yes votes with no no votes. It’s all ridiculously efficient in a very open source hacker kinda’ way. The Apache server’s impact on the open-source software community has been profound.
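That thread-per-connection versus event-driven difference can be sketched in a few lines of Python (a simplified illustration, not actual Apache or NGINX internals): an event loop lets a single thread juggle many connections at once, which is how NGINX sidesteps per-thread limits.

```python
# A simplified sketch of the event-driven model NGINX popularized
# (illustrative Python, not actual Apache or NGINX internals):
# one thread's event loop juggles many concurrent connections,
# instead of dedicating an OS thread to each one.
import asyncio

async def handle(reader, writer):
    # One lightweight coroutine per connection - thousands can share a thread.
    data = await reader.readline()
    writer.write(b"echo: " + data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main(n_clients=100):
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    async def client(i):
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write(b"hi %d\n" % i)
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        await writer.wait_closed()
        return reply

    # All clients run concurrently on the same single-threaded event loop.
    replies = await asyncio.gather(*(client(i) for i in range(n_clients)))
    server.close()
    await server.wait_closed()
    return replies

replies = asyncio.run(main())
```

One thread, a hundred simultaneous connections - scale the idea up and you get the concurrency story behind NGINX’s design.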
It is partly explained by the unique license from the Apache Software Foundation. The license was in fact written to protect the creators of Apache while giving access to the source code for others to hack away at. The Apache License 1.1 was approved in 2000 and removed the requirement to attribute the use of the license in advertisements of software. Version two of the license came in 2004, which made the license easier to use for projects that weren’t from the Apache Foundation, made GPL compatibility easier, and allowed a single reference for the whole project rather than attributing the software in every file. The open source nature of Apache was critical to the growth of the web as we know it today. There were other projects to build web servers, for sure. Heck, there were other protocols, like Gopher. But many died because of stringent licensing policies. Gopher did great until the University of Minnesota decided to charge for it. Then everyone realized it didn’t have graphics nearly as good as the web. Today the web is one of the single largest growth engines of the global economy. And much of that is owed to Apache. So thanks, Apache, for helping us to alleviate a little of the suffering of the human condition for all creatures of the world. By the way, did you know you can buy hamster wheels on the web? Or cat food. Or flea meds for the dog. Speaking of which, I better get back to my chores. Thanks for taking time out of your busy schedule to listen! You should probably get to your chores as well, though. Sorry if I got you in trouble. But hey, thanks for tuning in to another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
10/29/2019 • 12 minutes, 52 seconds
Susan Kare, The Happy Mac, And The Trash Can
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! Today we’ll talk about a great innovator, Susan Kare. Can you imagine life without a Trash Can icon? What about the Mac if there had never been a happy Mac icon? What would writing documents be like if you always used Courier and didn’t have all those fonts named after cities? They didn’t just show up out of nowhere. And the originals were 8 bit. But they were painstakingly designed, reviewed, reviewed again, argued over, obsessed over. Can you imagine arguing with Steve Jobs? He’s famous for being a hard person to deal with. But one person brought us all of these things. One pioneer. One wizard. She cast her spell over the world. And that spell was to bring an arcane concept called the desktop metaphor into everyday computers. Primitive versions had shipped in Douglas Engelbart’s NLS, in Alan Kay’s Smalltalk, and in Magic Desk on the Commodore 64. But her class was not illusionist, as those who came before her were, but mage, putting hexadecimal text derived from graph paper into the machine so the bits would render on the screen the same way, for decades to come. And we still use her visionary symbols, burned into the spell books of all visual designers from then to today. She was a true innovator. She sat in a room full of computer wizards, the original Mac team, and none was more important than Susan Kare. Born in 1954 in Ithaca, New York, this wizard got her training in the form of a PhD from New York University and then moved off to San Francisco in the late 1970s, feeling the draw of a generation’s finest, to spend her mage apprenticeship as a curator at a fine arts museum. But like Gandalf, Raistlin, Dumbledore, Merlin, Glinda the Good Witch, and many others, she had a destiny to put a dent in the universe.
To wield the spells of the infant art of user interface design to reshape the universe, 8 bits at a time. She’d gone to high school with a different kind of wizard. His name was Andy Hertzfeld and he was working at a great temple called Apple Computer. And his new team would build a new kind of computer called the Macintosh. They needed some graphics and fonts help. Susan had used an Apple II but had never done computer graphics. She had never even dabbled in typography. But then, Dr. Strange took the mantle with no experience. She ended up taking the job and joining Apple as employee badge number 3978. She was one of two women on the original Macintosh team. She had done sculpture and some freelance work as a designer. But not this weird new art form. Almost no one had. Like any young magician, she bought some books and studied up on design, equating bitmap graphics to needlepoint. She would design the iconic fonts, the graphics for many of the applications, and the icons that went into the first Mac. She would conjure up the hex (that’s hexadecimal) for graphics and fonts and then manually type it in to design icons and fonts. Going through every letter of every font manually. Experimenting. Testing. At the time, fonts were reserved for high-end marketing and industrial designers. Apple considered licensing existing fonts but decided to go their own route. She painstakingly created new fonts and gave them the names of towns along train stops around Philadelphia, where she grew up. Steve Jobs went for the city approach but insisted they be cool cities. And so the Chicago, Monaco, New York, Cairo, Toronto, Venice, Geneva, and Los Angeles fonts were born - with her personally developing Geneva, Chicago, and Cairo. And she did it in 9 x 7. I can still remember the magic of sitting down at a computer with a graphical interface for the first time. I remember opening MacPaint and changing between the fonts, marveling at the typefaces.
I’d certainly seen different fonts in books. But never had I made a document and been able to set my own typeface! Not only that, they could be in italics, outline, and bold. Those were all her. And she painstakingly created them out of pixels. The love and care and detail in 8-bit had never been seen before. And she did it with a world-class wizard looking over her shoulder: someone with a renowned attention to detail and design sense like Steve Jobs, pressuring her to keep making it better. They brought the desktop metaphor into the office. Some of it pre-existed her involvement. The trash can had been a part of the Lisa graphics already. She made it better. The documents icon pre-dated her. She added a hand holding a pencil to liven it up, making it clear which files were applications and which were documents. She made the painting brush icon for MacPaint that, while modernized, is still in use in practically every drawing app today. In fact, when Bill Atkinson was writing MacSketch and saw her icon, the name was quickly changed to MacPaint. She also made, with Bill Atkinson, the little tool you use to select shapes and move or remove them, called the lasso. Before her, there were elevators to scroll around in a window. After her, they were called scroll bars. After her, the place you dropped your images was called the Scrapbook. After her, the icon of a floppy disk meant save. She gave us the dreaded bomb. The stop watch. The hand you drag to move objects. The image of a speaker making sound. The command key, still on the keyboard of every Mac made. You can see that symbol on Nordic maps, where it denotes an “area of interest” or, more poignant for the need, an “interesting feature”. To be clear, I never stole one of those signs while traipsing around Europe.
But that symbol is a great example of what a scholarly mage can pull out of ancient tomes, as it is called a Gorgon knot or Saint John’s Arms and dates back over fifteen hundred years - and you can see that in other hieroglyphs she borrowed from obscure historical references. And almost as though those images are burned into our DNA, we identified with them. She worked with the traditionally acclaimed wizards of the Macintosh: Andy Hertzfeld, Bill Atkinson, Bruce Horn, Bud Tribble, Donn Denman, Jerome Coonen, Larry Kenyon, and Steve Capps. She helped Chris Espinosa, Clement Mok, Ellen Romana, and Tom Hughes out with graphics for manuals, and often on how to talk about a feature. But there was always Steve Jobs. Some icons took hours; others took days. And Jobs would stroll in and have her recast her spell if it wasn’t just right. Never acknowledging the effort. If it wasn’t right, it wasn’t right. The further the team pushed on the constantly delayed release of the Mac, the more frantically the wizards worked. The less they slept. But somehow they knew. It wasn’t just Jobs’ reality distortion field, as Steven Levy famously phrased it. They knew that what they were building would put a dent in the Universe. And when they all look back, her designs on “Clarus the Dogcow” were just the beginning of her amazing contributions. The Mac launched. And it did not turn out to be a commercial success, leading to the ouster of Steve Jobs - Sauron’s eye was firmly upon him. Kare left with Jobs to become the tenth employee at NeXT Computer. And she introduced Jobs to Paul Rand, who had designed the IBM logo, to design their new logo. When IBM, the Voldemort of the time, was designing OS/2, she helped with their graphics. When Bill Gates, the Jafar of the computer industry, called, she designed the now-classic Solitaire for Windows. And she gave them Notepad and Control Panels. And her contributions have continued. When Facebook needed images for the virtual gifts feature.
They called Kare. You know that spinning button when you refresh Pinterest? That’s Kare. And she still does work all the time. The Museum of Modern Art showed her original sketches in a 2015 exhibit called “This is for everyone.” She brought us everyday metaphors to usher in, and ease the transition into, a world of graphical user interfaces. Not a line of the original code remains. But it’s amazing that, surrounded by all the young wizards, the one who got very little attention in all the books and articles about the Mac was the biggest wizard of them all. Without her iconic designs, the other wizards would likely be forgotten. She is still building one of the best legacies in all of the technology industry. By simply putting users into user interface. When I transitioned from the Apple II to the Mac, she made it easy for me with those spot-on visual cues. And she did it in only 8 bits. She gave the Mac style and personality. She made it fun, but not so much fun that it would be perceived as a toy. She made the Mac smile. Who knew that computers could smile?!?! The Mac Finder still smiles at me every day. Truly magical. Thanks for that, Susan Kare. And thanks to you inquisitive and amazing listeners. For my next trick, I’ll disappear. But thank you for tuning in to yet another episode of the History of Computing Podcast. We’re so lucky to have you. Have a great day!
10/26/2019 • 12 minutes, 58 seconds
Before The Web, There Was Gopher
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to talk about Gopher. Gopher was in some ways a precursor to the world wide web, or more specifically, to http. The University of Minnesota was founded in 1851. It gets cold in Minnesota. Like really cold. And sometimes, it’s dangerous to walk around outside. As the University grew, they needed ways to get students between buildings on campus. So they built tunnels. But that’s not where the name came from. The name actually comes from a political cartoon. In the cartoon, a bunch of not-cool railroad tycoons were pulling a train car to the legislature. The rest of the country just knew it was cold in Minnesota and there must be gophers there. That evolved into the Gopher State moniker, the Gopher mascot of the U, and later the Golden Gophers. The Golden Gophers were once a powerhouse in college football. They have won the 8th most national titles of any university in college football, although they haven’t nailed one since 1960. Mark McCahill turned 4 years old that year. But by the late 80s he was in his thirties. McCahill had graduated from the U in 1979 with a degree in Chemistry. By then he managed the Microcomputer Center at the University of Minnesota–Twin Cities. The University of Minnesota had been involved with computers for a long time. The Minnesota Educational Computing Consortium had made software for schools, like The Oregon Trail. And even before then they’d worked with Honeywell, IBM, and a number of research firms. At this point, the University of Minnesota had been connected to the ARPANET, which was evolving into the Internet, and everyone wanted it to be useful. But it just wasn’t yet. Maybe TCP/IP wasn’t the right way to connect to things. I mean, maybe BITNET was. But by then we knew it was all about TCP/IP. They’d used FTP.
And they saw a lot of promise in the tidal wave you could just feel coming of this Internet thing. There was just one little problem. A turf war had been raging for a time, with the batch-processed mainframe suit-and-tie crowd thinking that big computers were the only place real science could happen and the personal computer kids thinking that the computer should be democratized and that everyone should have one. So McCahill writes a tool called POPmail to make it easy for people to access this weird thing called email on the Macs that were starting to show up at the University. This led to his involvement writing tools for departments. 1991 rolls around and some of the department heads around the University meet for months to make a list of things they want out of a network of computers around the school. Enter Farhad Anklesaria. He’d been working with those department heads and reduced their demands to something he could actually ship: a server that hosted some files and a client that accessed the files. McCahill added a search option and combined the two. They brought in four other programmers to help finish the coding. They finished the first version in about three weeks. Of those original programmers, Bob Alberti, who’d already helped write an early online multiplayer game, named his Gopher server Indigo after the Indigo Girls. Paul Lindner named one of his Mudhoney. They coded between taking support calls in the computing center. They’d invented bookmarks and hyperlinks, which led McCahill to coin the term “surf the internet”. Computers at the time didn’t come with the software necessary to access the Internet, but Apple was kind enough to include a library at the time. People could get on the Internet and pretty quickly find some documents. Modems weren’t fast enough to add graphics yet. But, using Gopher, you could search the internet and retrieve information linked from all around the world. Wacky idea, right? The world wanted it.
They gave it the name of the school’s mascot to keep the department heads happy. It didn’t work. It wasn’t a centralized service hosted on a mainframe. How dare they. They were told not to work on it any more but kept going anyway. They posted an FTP repository of the software. People downloaded it and even added improvements. And it caught fire underneath the noses of the University. This was one of the first rushes on the Internet. These days you’d probably be labeled a decacorn for the type of viral adoption they got. The White House jumped on the bandwagon. MTV veejay Adam Curry wore a gopher shirt when they announced their Gopher site. There were GopherCons. Al Gore showed up. He wasn’t talking about the Internet as though it were a bunch of tubes yet. Then Tim Berners-Lee had put the first website up in 1991, introducing html, and what we now know as the web was slowly growing. McCahill worked with Berners-Lee, Marc Andreessen of Netscape, Alan Emtage, and former MIT whiz kid Peter J. Deutsch. Oh, and the czar of the Internet, Jon Postel. McCahill needed a good way of finding things on his new Internet protocol. So he helped invent something that we still use considerably: URLs, or Uniform Resource Locators. You know when you type http://www.google.com, that’s a URL. The http indicates the protocol to use. Every computer has a default handler for those protocols. Everything following the :// is the address on the Internet of the object. Gopher of course was gopher://. FTP was ftp:// and so on. There’s of course more to the spec, but that’s the first part. Suddenly there were competing standards. And as with many rapid rushes to adopt a technology, Gopher started to fall off and the web started to pick up. Gopher went through the hoops. It went to an IETF RFC in 1993 as RFC 1436, The Internet Gopher Protocol (a distributed document search and retrieval protocol).
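That URL anatomy can be pulled apart with Python’s standard library (an illustrative aside, not anything from the original Gopher code; the gopher address below is just a made-up example):

```python
# Split a URL into the pieces described above: the scheme (protocol),
# the network location (host), and the path to the object.
from urllib.parse import urlparse

http_parts = urlparse("http://www.google.com/")
gopher_parts = urlparse("gopher://example.org/1/world")

# scheme picks the protocol handler; netloc is the address on the Internet;
# path identifies the object being requested.
print(http_parts.scheme, http_parts.netloc, http_parts.path)
print(gopher_parts.scheme, gopher_parts.netloc, gopher_parts.path)
```

The same three-part split works for http://, gopher://, ftp://, and the rest - which is exactly what made URLs such a durable idea.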
I first heard of Mark McCahill when I was on staff at the University of Georgia and had to read up on how to implement this weird Gopher thing. I was tasked with deploying Gopher to all of the Macs in our labs. And I was fascinated, as were so many others, with this weird new thing called the Internet. The internet was decentralized. The Internet was anti-authoritarian. The Internet was the Sub Pop Records of the computing world. But bands come and go. And the University of Minnesota wanted to start charging a licensing fee. That started the rapid fall of Gopher and the rise of the html-driven web from Berners-Lee. It backfired. People were mad. The team hadn’t grown or gotten headcount or funding. The team got defensive publicly, and while Gopher traffic continued to grow, the traffic on the web grew 300 times faster. The web came with no licensing. Yet. Modems got faster. The web added graphics. In 1995 an accounting disaster came to the U and the team got reassigned to work on building a modern accounting system. At a critical time, they didn’t add graphics. They didn’t further innovate. The air was taken out of their sails by the licensing drama and the lack of funding. Things were easier back then. You could spin up a server on your computer and other people could communicate with it without fear of your identity being stolen. There was no credit card data on the computer. There was no commerce. But by the time I left the University of Georgia we were removing the gopher apps in favor of NCSA Mosaic and then Netscape. McCahill has since moved on to Duke University. Perhaps his next innovation will be called Document Devil or World Wide Devil. Come to think of it, that might not be the best idea. Wouldn’t wanna’ upset the Apple Cart. Again. The web as we know it today wasn’t just some construct that happened in a vacuum. Gopher was the most popular protocol to come before it, but there were certainly others.
In those three years, people saw the power of the Internet and wanted to get in on it. They were willing it into existence. Gopher was first, but the web built on top of the wave that Gopher started. Many browsers still support gopher, either directly or using an extension to render documents. But Gopher itself is no longer much of a thing. What we’re really getting at is that the web as we know it today was deterministic. Which is to say that it was almost willed into being. It wasn’t a random occurrence. The very idea of a decentralized structure was being willed into existence by people who wanted to supplement human capacity, or by a variety of other motives, including “cause it seemed cool at the time, man.” It was almost independent of the actions of any specific humans. It was just going to happen, as though the free will of any individual actors had been removed from the equation. Bucking authority, like the department heads at the U, hackers from around the world just willed this internet thing into existence. And all these years later, many of us are left in awe at their accomplishments. So thank you to Mark and the team for giving us Gopher, and for the part it played in the rise of the Internet.
10/23/2019 • 12 minutes, 47 seconds
The Meteoric Rise Of Snapchat
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to cover my daughter’s favorite thing to do these days: Snapchat. Today Snapchat has over 203 million users. That’s up from 188 million just a year ago. And with around 50 million logging in monthly, it’s becoming one of the most used social networks in the world. But how does a company manage to go from nothing to, well, so… much… more… You see, the numbers don’t tell the full story. Nearly 80 percent of people on the Internet between 18 and 24 use Snapchat. As does pretty much every kid I know who’s younger than that. A lot of people don’t really like Snapchat, but that seems to be partly because a lot of them don’t seem to understand it. Neither does the world of finance. They have yet to figure out how to turn a profit. But in a world where growth in users is everything, once that number starts to slow, the adults will be called in to make all the money. This doesn’t mean the stock of Snap Inc doesn’t jump around. Facebook did it. And in the short term, life looked good. But in the long term, many of those decisions Facebook made have led to talk of breaking up Facebook, and have caused CEO Mark Zuckerberg to spend more time in front of the leaders of various nation states than I think he’d like. But Snapchat is a Gen Z company. And there’s something more there. How did Snapchat end up in these types of conversations? In part, this was because with the popularity of Facebook growing amongst older generations, Gen Z and beyond wanted to stay connected, but more authentically and less for show, as is common when trying to just go out and gather likes. This starts with how the company was founded. Snapchat was started in 2011 by now-CEO Evan Spiegel and partners Bobby Murphy and Reggie Brown, while going to school at Stanford University.
They had a core tenet: that messages are removed once viewed. In the beginning, it was just for sharing pictures from one person to another. And it was the first social app to really embrace a mobile-first design philosophy. They worked on Picaboo, the first name, for a few months and after launch kicked out Reggie Brown, who had the idea in the first place. They rebranded to Snapchat, keeping Reggie’s Ghostface Chillah icon though. They would later settle with good old Brown for a little more than $150 million in 2014. At release time in 2011, Snapchat was an iOS app. The next year, they released an Android app, and, similar to the guys from Silicon Valley, they were having a rough time keeping up with demand of over 20 million photos per day already zipping through their network. 2013 brought a bunch of features like the ability to find friends, to reply by double-tapping, better navigation, and Snapkidz for children 13 and under. The “My Story” feature allowed users to put snaps in storylines. In 2013 Snapchat also taught the world about API rate limiting. Given that they didn’t limit the number of possible API connections, they leaked about 4 and a half million usernames and phone numbers on a website called SnapchatDB.info. As is often the case, getting hacked in no way hurt them. I mean, they said they were sorry. I think that’s painful for Stanford… Nevermind. They added video in 2014. The network continued to grow and Snapchat teamed up with Square in 2014 to send money to friends through Snapchat. After 4 years, Snapcash was discontinued in 2018. 2015 brought in-app purchases to get more replays. I never liked that. They also relaxed the requirement to hold down a button while watching snaps, but kept alerting the sender if you took a screenshot. But the big thing was adding effects. This led to an explosion in popularity. Bunny ears and noses were everywhere.
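API rate limiting - the thing Snapchat initially skipped - can be sketched with a classic token bucket. This is a hypothetical, generic illustration of the technique, not Snapchat’s actual implementation:

```python
# A classic token-bucket rate limiter (generic sketch, not Snapchat's code):
# each request spends a token; tokens refill steadily over time, so
# clients can burst briefly but not hammer an API indefinitely.
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True  # request allowed
        return False     # request throttled

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(20)]
# A rapid burst of 20 calls: roughly the first `capacity` are allowed,
# the rest are throttled until tokens refill.
```

Put a check like this in front of a username-lookup endpoint and scraping 4.5 million records stops being a weekend project.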
And world lenses led to those little frames at special events all over the world. Snapchat got big enough to release a notable figures feature. 2015 also saw Spiegel say that Snapchat was for rich people, not Indians or the Spanish. They temporarily lost 1.5% in share price but… I guess it’s like when people have been drinking too much wine, they say what they really think. 2016 brought Snapchat Memories, which allows saving story posts in a private location in the app. We also got geofilters in 2016, bitmojis, and locking snaps with a PIN. They managed to raise $1.8 billion in private equity that year. In 2017, you could view a snap for an unlimited amount of time, but it would be removed at the end of the viewing session. 2017 also saw links in chats and, given the popularity of other graphical tools, we got backgrounds. We also got custom stories, which let people make stories by combining images. They bought Zenly in 2017. 2018 gave us the Snap Camera and Snapchat lenses, which bring those silly bunny filters to video chat and live streaming services such as Skype, Twitch, YouTube, and Zoom. Seeing that they would like to foster a spirit of integration, Snapchat also released an integration platform known as Snap Kit, which allows for OAuth logins with OMG bitmoji avatars. They also released a new interface in 2018. One that Kylie Jenner dissed, supposedly causing the company to drop over a billion in market cap. Pretty sure that drop lasted less time than how long people will actually know who Kylie is. 2019 mostly just gave us Snapchat employees spying on other people using an internal tool called SnapLion. Pretty sure that’s not SOC 2 compliant. Also pretty sure there will be lawsuits. It’s crap like that that can stifle the growth of a company. Instagram added similar features, and investors later sued Snap for not disclosing how substantial a threat Instagram ended up being. All this while dropping over a billion dollars a year to keep gaining that market share.
Becoming a big name on Snapchat and Insta can skyrocket people to stardom and open the floodgates for an entire new career: influencer. People long thought Apple would buy Snapchat. They have enough cash on hand to do so and they’ve tried for a long time to build social networks, to no avail. But Apple builds frameworks others use for Augmented Reality, not Augmented Reality worlds. Others thought Amazon would acquire the company to further build out the Snapchat camera and add promotional options for a younger audience. Snapchat also has better visual searching options than Amazon. Tencent owned 17 percent of the Class A shares of Snap in 2018 - a company with a market cap of over ten billion dollars. I believe they call that a decacorn. People keep trying to say Snapchat can’t stand alone forever. They sure burn through a lot of cash. But Facebook has tried to buy Snapchat and failed, just as Friendster tried to buy Facebook once upon a time. Special, ain’t it. A lot of these tech empires are built by buying up smaller, scrappier companies. What’s a couple hundred million here and there? The fact that Snapchat has managed to stand alone should be an inspiration. Until they become the company gobbling up everyone else. My daughter, who’s my Super BFF, sends me tons of snaps. It’s become one of her favorite things to do, I think. It’s sweet. I don’t get snaps from anyone else. But then, I’m also a different generation. Which is both good and bad. She also chats me up aplenty in Instagram. Personally, I just want to see it all in one place and don’t care which place that is. So if you wanna’ hit me up, listeners, you can find me at krypted dot com. And I’d be happy to take any feedback on the podcast that you might have. Or at Snapchat. Or at Twitter. Or… Well, you get the point. And thank you for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
10/20/2019 • 9 minutes, 53 seconds
The Mother Of All Demos
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to cover a special moment in time. Picture this if you will. It’s 1968. A collection of some 1,000 of the finest minds in computing is sitting in the audience of the San Francisco Civic Center. They’re at the Fall Joint Computer Conference in San Francisco, a joint conference of the Association for Computing Machinery and the IEEE, or the Institute of Electrical and Electronics Engineers. They’re waiting to see a session called “A research center for augmenting human intellect.” Many had read Vannevar Bush’s “As We May Think” Atlantic article from 1945 that signified the turning point that inspired so many achievements over the previous 20 years. Many had witnessed the evolution from the mainframe to the transistorized computer to timesharing systems. The presenter for this session would be Douglas Carl Engelbart. ARPA had strongly recommended he come to finally make a public appearance. Director Bob Taylor, in fact, was somewhat adamant about it. The talk was six years in the making, and ARPA and NASA were ready to see what they had been investing in. ARPA had funded his Augmentation Research Center lab at SRI, the Stanford Research Institute. The grand instigator J.C.R. Licklider had started the funding in 1963, based on a paper Engelbart published in 1962. But it had really been going since Engelbart got married in 1950 and realized computers could be used to improve human capabilities, to harness the collective intellect, to facilitate truly interactive computing, and to ultimately make the world a better place. Engelbart was 25 then. He was from Oregon, where he got his Bachelor’s in ’48 after serving in World War II as a radar tech. He then came to Berkeley in ’53 for his Master’s, staying through 1955 to get his PhD. He ended up at Stanford’s SRI.
There, he hired people like Don Andrews, Bill Paxton, Bill English, and Jeff Rulifson. And today Engelbart was ready to show the world what his team had been working on. The computer was called the oNLine System, or NLS. Bill English would direct things onsite. Because check this out, not all presenters were onsite on that day in 1968. Instead, some were at ARC in Menlo Park, 30 miles away. To be able to communicate with the site they used two 1200 baud modems connecting over a leased line to their office. But they would also use two microwave links. And that was for something crazy: video. The lights went dark. The oNLine System was projected onto a 22-foot-high screen using an Eidophor video projector. Bill English would flip the screen up as the lights dimmed. The audience was expecting a tall, thin man to come out to present. Instead, they saw Doug Engelbart on the screen in front of them. The one behind the camera, filming Engelbart, was Stewart Brand, the infamous editor of the Whole Earth Catalog. It seems Engelbart was involved in more than just computers. But people destined to change the world have always travelled in the same circles, I suppose. Engelbart’s face came up on the screen, streaming in from all those miles away. And they would switch back and forth to the other screen: the oNLine System, or NLS for short. The camera would come in from above Engelbart’s back and the video would be superimposed with the text being entered on the screen. This was already crazy. But when you could see where he was typing, there was something… well, extra. He was using a pointing device in his right hand. This was the first demo of a computer mouse, which he had applied for a patent for in 1967. He called it that because it had a tail, which was the cable that connected the wooden contraption to the computer.
Light pens had been used up to this point, but it was the first demonstration of a mouse, and the team had actually considered mounting it under the desk and using a knee to move the pointer. But they decided that would just be too big a gap for normal people to imagine, and that the mouse would be simpler. Engelbart also used a device we might think of more like a macro pad today. It was modeled after piano keys. We'd later move this type of functionality onto the keyboard using various keystrokes, F keys, and, in the case of Apple, command keys. He then opened a document on his screen. Now, people didn't do a lot of document editing in 1968. Really, computers were pretty much used for math at that point. At least, until that day. In that document he opened, he used hyperlinks to access content. That was the first real demo of clickable hypertext. He also copied text in the document. And one of the most amazing aspects of the presentation was that you kinda' felt like he was only giving you a small peek into what he had. You see, before the demo, they thought he was crazy. Many were probably only there to see a colossal failure of a demo. But instead they saw pure magic. Inspiration. Innovation. They saw text highlighted. They saw windows on screens that could be resized. They saw the power of computer networking. Video conferencing. A stoic Engelbart was clearly pleased with his creation. Bill Paxton and Jeff Rulifson were on the other side, helping with some of the text work. His style worked well with the audience, and of course, it's easy to win over an audience when they have just been wowed by your tech. But more than that, his inspiration was so infectious that you can still feel it just watching the videos, all these decades later. Engelbart and the team would receive a standing ovation. And to show it wasn't smoke and mirrors, ARC let people actually touch the systems, and Engelbart took questions.
Many people involved would later look back as though it was an unfinished work. And it was. Andy van Dam would later say: "Everybody was blown away and thought it was absolutely fantastic and nothing else happened. There was almost no further impact. People thought it was too far out and they were still working on their physical teletypes, hadn't even migrated to glass teletypes yet." But that's not really fair, or telling the whole story. In 1969 we got the Mansfield Amendment, which slashed military funding of pure scientific research. After that, the budget was cut and the team began to disperse, as was happening with a lot of the government-backed research centers. Xerox was lucky enough to hire Bob Taylor, and many others migrated to Xerox PARC, the Palo Alto Research Center, which was able to take the concepts and actually ship a device: the Alto, in 1973. It wasn't as mass-marketable as later devices would be, but the Alto would be the machine that inspired the Mac, and therefore Windows - so his ideas live on today. His own team got spun out of Stanford and sold, becoming part of Tymshare and then McDonnell Douglas. He continued to have more ideas, but his concepts were rarely implemented at McDonnell Douglas, so he finally left in 1986, starting the Bootstrap Alliance, which he founded with his daughter. But he succeeded. He wanted to improve the plight of man, and he did. Hypertext and movable screens directly influenced a young Alan Kay, who was in the audience and was inspired to write Smalltalk. The demo also inspired Andy van Dam, who built the FRESS hypertext system based on many of the concepts from the talk as well. It also did multiple windows, version control on documents, intradocument hypertext linking, and more. But it was hard to use. Users needed to know complex commands just to get into the GUI screens.
He was also still really into minicomputers and timesharing, and kinda' missed that the microcomputer revolution was about to hit hard. The hardware hacker movement that was going on all over the country, but most concentrated in the Bay Area, was about to start the long process of putting a computer, and now a mobile device, in every home in the world. With smaller and smaller and faster chips, the era of the microcomputer would transition into the era of the client and server. And that was the research we were transitioning to as we moved into the 80s. Charles Irby was a presenter as well, being a designer of NLS. He would go on to lead the user interface design work on the Xerox Star before founding a company, then moving on to VP of development for General Magic, a senior leader at SGI, and then the leader of the engineering team that developed the Nintendo 64. Bob Sproull was in the audience watching all this and would go on to help design the Xerox Alto and the first laser printer, and write Principles of Interactive Computer Graphics, before becoming a professor at Carnegie Mellon and then ending up helping create Sun Microsystems Laboratories, becoming the director and helping design asynchronous processors. Butler Lampson was also there, a founder of Xerox PARC, where the Alto was built, and co-creator of Ethernet. Bill Paxton (not the actor) would join him at PARC and later go on to be an early founder of Adobe. In 2000, Engelbart would receive the National Medal of Technology for his work. He also got the Turing Award in 1997 and the Lovelace Medal in 2001. He would never lose his belief in the collective intelligence. He wrote Boosting Our Collective IQ in 1995. Engelbart passed away in 2013. He will forever be known as the inventor of the mouse. But he gave us more. He wanted to augment the capabilities of humans, allowing us to do more, rather than replace us with machines.
This was in contrast to SAIL and the MIT AI Lab, where they were doing research for the sake of research. The video of his talk is on YouTube, so click on the link in the show notes if you'd like to access it and learn more about such a great innovator. He may not have brought a mass-produced system to market, but as with Vannevar Bush's article more than 20 years before, the research done is a turning point in history; a considerable milestone on the path to the gleaming world we live in today. The NLS teaches us that while you might not achieve commercial success with years of research, if you are truly innovative, you might just change the world. Sometimes the two simply aren't mutually exclusive. And when you're working on a government grant, they really don't have to be. So until next time, dare to be bold. Dare to change the world, and thank you for tuning in to yet another episode of the History of Computing Podcast. We're so lucky to have you. Have a great day! https://www.youtube.com/watch?v=yJDv-zdhzMY
10/17/2019 • 13 minutes, 7 seconds
The Tetris Negotiations
The Tetris Negotiations Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on the origins of Tetris. I'll never forget the first time I saw St. Basil's as I was loading Tetris up. I'll never get those hours back that I played it. Countless hours. So how did it all begin? OK, so check this out. It's 1984. Los Angeles hosts the Olympics and the Russians refuse to come. But then, the US had refused to come to Moscow when they had hosted the Olympics. I am fairly certain that someone stuck their tongue out at someone else. It happens a lot in preschool. One may have even given the middle finger. That's what middle school is named after, right? It was a recession. Microchips were getting cheap. And Digital Equipment Corporation's PDP was one of the best computers ever made. It wasn't exactly easy to come by in Russia, though. The microcomputer was becoming a thing. And Alexey Pajitnov was working at the Dorodnitsyn Computing Centre of the Soviet Academy of Sciences. As with a lot of R&D places, they were experimenting with new computers. The Electronika 60 was so similar to the PDP that DEC chip designers printed jokes on chips just for Russians looking to steal their chip designs. It actually managed a quarter million ops per second using 5 VLSI chips, with 8k of RAM. They needed to do some random number generation and visualization, so good ole' Alexey was told to write some software to test the Electronika. He thought, ya' know, I should do a game. The beauty of writing games is that they can be math intensive, so perfect for benchmarking. But what kind of game? When he was a kid, he'd loved to play pentomino games. That's a game with pieces made of 5 connected squares, which can be arranged in one of 12 ways. Reduce that to 4 squares and, counting mirror images as separate pieces, you get 7. The thing is, with 5 squares the math was a little too intense. And it was a little easier with 4 blocks.
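Those piece counts are easy to verify with a quick brute-force enumeration. Here's a minimal sketch in Python (my own illustration, nothing to do with Pajitnov's original code): it grows polyominoes one square at a time, then deduplicates them under rotations only (the Tetris convention, where mirror images are distinct pieces) or under rotations plus reflections (the pentomino convention).

```python
# Count distinct polyominoes of n squares by brute-force growth.
# Rotations-only counting gives Tetris's 7 tetrominoes; counting
# mirror images as identical gives the 12 classic pentominoes.

def normalize(cells):
    """Translate a shape so its minimum x and y are zero."""
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def canonical(cells, include_reflections):
    """Pick one representative among all rotations (and optionally mirrors)."""
    variants = []
    cur = cells
    for _ in range(4):
        cur = frozenset((y, -x) for x, y in cur)  # rotate 90 degrees
        variants.append(normalize(cur))
        if include_reflections:
            variants.append(normalize(frozenset((-x, y) for x, y in cur)))
    return min(tuple(sorted(v)) for v in variants)

def count_polyominoes(n, include_reflections):
    shapes = {frozenset({(0, 0)})}          # start from a single square
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            for x, y in shape:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    cell = (x + dx, y + dy)
                    if cell not in shape:
                        grown.add(normalize(shape | {cell}))
        shapes = grown
    return len({canonical(s, include_reflections) for s in shapes})

print(count_polyominoes(4, include_reflections=False))  # -> 7 Tetris pieces
print(count_polyominoes(5, include_reflections=True))   # -> 12 pentominoes
```

Counting tetrominoes with reflections folded in gives only 5 shapes, since S/Z and L/J are mirror pairs; Tetris treats them as separate pieces.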
He drew them on the screen in ASCII and had the tetrominoes fall down the screen. The games ended pretty quick, so he added an additional feature that deleted rows once they were complete. Then, he sped the falling speed up as you cleared a level. You had to spin your puzzle pieces faster, the further you got. And once you're addicted, you turn and turn and turn and turn. No frills, just fun. It needed a name though. Since you're spinning 4 blocks - tetra is Greek for four - it seemed like it should get mashed up with tennis, for tetraminiss. No, tetrominis. Wait, cut a syllable here and there and you get to Tetris. There are 7 shapes: I, J, L, O, S, T, and Z. The IBM PC version ran with the I as maroon, the J as silver, the L as purple, the O as navy, the S as green, the T as brown, and the Z as teal. He got a little help from Dmitry Pavlovsky and 16-year-old programming whiz Vadim Gerasimov. Probably because they were already hopelessly addicted to the game. They ported it to the fancy schmancy IBM PC in about two months, and it started to spread around Moscow. By now, his coworkers are freakin' hooked. This was the era of disk sharing. And disks were certainly being shared. But it didn't stop there. It leaked out all over the place, making its way to Budapest, where it ended up on a machine at British-based game maker Andromeda. CEO Robert Stein sends a telex to Dmitry Pavlovsky. He offers 75% royalties on sales and $10,000. Pretty standard stuff so far, but this is where it gets fun. Pavlovsky responds that they should maybe negotiate a contract. But Andromeda had already sold the rights to Spectrum HoloByte, and so attempted to license the software from the Hungarian developers that did the porting. Then realized that was dumb and went back to the negotiating table, getting it done for "computers." All license deals went through the USSR at the time, and the Russian government was happy to take over the contract negotiations.
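That row-deletion mechanic, the heart of the game, is simple to express. A minimal sketch in Python (my illustration, not the original Electronika 60 code): drop every full row and slide everything above it down.

```python
# A toy version of Tetris's core mechanic: remove completed rows
# and pad the top with empty rows so the field keeps its height.

def clear_rows(board):
    """board is a list of rows, top first; a cell is 1 (filled) or 0 (empty)."""
    width = len(board[0])
    remaining = [row for row in board if sum(row) < width]  # keep partial rows
    cleared = len(board) - len(remaining)
    fresh = [[0] * width for _ in range(cleared)]           # new empty rows on top
    return fresh + remaining, cleared

field = [
    [0, 1, 0],
    [1, 1, 1],   # a full row: gets cleared
    [1, 0, 1],
]
new_field, cleared = clear_rows(field)
print(cleared)   # -> 1
```

The Game Boy version ran the same idea on a field 10 blocks wide and 18 high.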
So the USSR's Ministry of Software and Hardware Export gets involved. Through a company they set up for this called Elektronorgtechnica, or ELORG, they took over the negotiations and closed the contract. That's how, by '87, Tetris spread to the US. In fact, Tetris was the first game released from the USSR in the USA, for the Commodore 64 and IBM PC. It was so simple it was sold as a budget title. The Apple II package came with three versions on three disks, 5.25 inch, not yet copy protected. Can you say honor system? In 1988, Henk Rogers discovers Tetris at a trade show in Vegas and gets all kinds of hooked. Game consoles had been around for a long time, and anyone who paid attention knew that a number of organizations around the world were looking to release handhelds. Now, old Henk was the Dutch video game designer behind a role-playing game called The Black Onyx, and had been looking for his next thing. When he saw Tetris, he knew it was something special. He also knew the Game Boy was coming and thought maybe this was the killer game for the platform. He did his research and contacted Stein from Andromeda to get the rights to make a version for handhelds. Stein was into it but wasn't on the best of terms with the Russian government, because he was a little late in his royalty payments. Months went by. Henk didn't hear back. Spectrum HoloByte got wind as well and sent Kevin Maxwell to Moscow to get the rights. Realizing his cash cow was in danger, old Stein from Andromeda also decided to hop on a plane and go to Moscow. They each met with the Russians separately in about a three-day span. Henk Rogers is a good dude. As a developer who'd been dealing with rights to his own game, he decided the best way to handle the Russians was to actually just lay out how it all worked. He gave them a crash course in the evolving world of computer vs mobile license agreements in an international world. The Russians showed him their contracts with Andromeda.
He told them how it should all really be. They realized Andromeda wasn't giving them the best of deals. Henk also showed them a game there was no rights deal for. Whether all this was intentionally screwing the other parties or not is for history to judge, but by the time he walked out he'd make a buck per copy that went on the Game Boy. There was other wrangling with the other two parties, including an incident where the Russians sent a fax they knew Maxwell couldn't get in order to get out of a clause in a contract. This all set up a few lawsuits and resulted in certain versions in certain countries shipping, then being pulled back off the shelf. Fun times. But because of it all, in 1989 the Game Boy was released. Henk was right: Tetris turned out to be the killer app for the platform. Until Minecraft came along it was the most popular game of all time, selling over 30 million copies. And it ranked #5 in the 100 best Nintendo games. It was the first Game Boy game that came with the ability to link up to other Game Boys so you could play against your friends. Back then you needed to use a cable to co-op. The playing field was 10 blocks wide and 18 high on the Game Boy, and it was set to music from Nintendo composer Hirokazu Tanaka. The Berlin Wall was torn down in 1989. I suspect that was part of the negotiations with the Game Boy. Can you imagine Gorbachev and Reagan with their Game Boys linked up, playing Tetris for hours over the fate of Germany? 'Cause I can. You probably think there were much more complicated negotiations taking place. I do not. I tend to think Reagan's excellent Tetris skills ended the Cold War. So, Pajitnov's friend Vladimir Pokhilko had done some work on the game as well, and in 1989 he ran psychological experiments using the game. With that research, the two would found a company called AnimaTek. They would focus on 3D software, releasing El-Fish through Maxis.
While Tetris had become the most successful game of all time, Pokhilko was in a dire financial crisis and would commit suicide. There's more to that story, but it's pretty yuck, so we'll leave it at that. Pajitnov, the original developer, finally got royalties in 1996, when the original agreement with the Soviet state expired and the rights reverted to him. Because Henk Rogers had been a good dude, they formed The Tetris Company together to manage licensing of the game. Pajitnov went to work at Microsoft in 1996, working on games. The story has become much more standard since then. Although in 2012 the US Court of International Trade responded to some requests to shut competitors down by noting that US copyright doesn't apply to the rules of a game, so The Tetris Company did file other patents and trademarks to limit how close competitors could get to the original game mechanics. After studying at the MIT Media Lab, that 16-year-old programmer, Vadim Gerasimov, went on to become an engineer at Google. Henk Rogers serves as the Managing Director of The Tetris Company. Since designing Tetris, Pajitnov has made a couple dozen other games, with Marbly on iOS being the latest success. It needs a boss button. Tetris has been released on arcade games, home consoles, mobile devices, PDAs, music players, computers, oscilloscope Easter eggs, and I'm pretty sure it now has its own planet or 4. It probably owes some of its success to the fact that it makes people smarter. Dr. Richard Haier claims Tetris leads to more efficient brain activity, boosts general cognitive abilities, and improves cerebral cortex thickness. If my cortex were thicker, I'd probably research the effects of games as a means of justifying the countless hours I want to spend on them too. So that's the story of Tetris. It ended the Cold War, makes you smarter, and now Alexey gets a cut of every cent you spend on it. So he'll likely thank you for your purchase. Just as I thank you for tuning in to another episode of the History of Computing Podcast. We're so very lucky to have you.
Have a great day! Now back to my game!
10/14/2019 • 12 minutes, 58 seconds
The Blue Meanies of Apple, IBM, and the Pinks
Apple Lore: The Pinks Versus The Blue Meanies Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to cover two engineering groups at Apple: the Pinks and the Blue Meanies. Mac OS System 6 had been the sixth operating system released in five years. By 1988, Apple was keeping up an unrealistic release cadence, especially given that the operating system had come along at an interesting time, when a lot of transitions were happening in IT and there were a lot of increasingly complex problems trying to code around earlier learning opportunities. After sweeping the joint for bugs, Apple held an offsite engineering meeting in Pescadero and split the ideas for the next operating system onto colored cards: pink, red, green, and blue. The most important of these for this episode were pink, or future release stuff, and blue, or next release stuff. The architects of blue were horrible, arrogant, self-proclaimed bastards. Their notecards were blue, and they'd all seen Yellow Submarine, so they went with the evil Pepperland Blue Meanies. As architects, they were the ones who often said no to things. The Blue Meanies ended up writing much of the core of System 7. They called this OS, which took 3 years to complete, The Big Bang. It would last on the market for 6 years. Longer than any operating system from Apple did prior or since. System 7 gave us CDs, File Sharing, began the migration to a 32-bit OS, and replaced MacroMaker with AppleScript, Apple Events, and the Extensions Manager, which we're likely to see a return of given the pace Apple's going these days. System 7.0.1 came with an Easter egg. If you typed in "Help! Help! We're being held prisoner in a system software factory!"
You got a list of names: Darin Adler, Scott Boyd, Chris Derossi, Cynthia Jasper, Brian McGhie, Greg Marriott, Beatrice Sochor, and Dean Yu. Later iterations of the file ended with "Who dares wins." Pink was meant to get more than incremental gains. They wanted preemptive multitasking. The people who really pushed for this were senior engineers Bayles Holt, David Goldsmith, Gene Pope, Erich Ringewald, and Gerard Schutten, referred to as the Gang of Five. They had their pink cards and knew that what was on them was critical, or Apple might have to go out and buy some other company to get the next real operating system. They insisted that they be given the time to build this new operating system, and traded their managers to the Blue Meanies for the chance to build preemptive multitasking and a more component-based, or object-oriented, application design. They got Mike Potel as their manager. They worked in a separate location, looking to launch their new operating system in two years. The code name was Defiant, given that Pink just wasn't awesome. They shared space with the Newton geeks. Given that they had two years, and that they saw the technical debt in System 6 as considerable, they had to decide if they were going to build a new OS from the ground up, or build on top of System 6. They pulled in the Advanced Technology Group, another team at Apple, and got up to 11 people. They ended up starting over with a new microkernel they called Opus. Big words. The Pink staff ended up pulling in ideas from other cards and got up to about 25 people. From there, it went a little off the rails and turf wars set in. It kept growing. 100 engineers. They were secretive. They eventually grew to 150 people by 1990. Remember, two years. And the further out they got, the less likely it was that the code would ever be backwards compatible. The Pink GUI used isometric icons, rounded windows, drop shadows, and beveling, and was fully internationalized; these were huge influences on Mac OS 8 and Copland.
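To see why the Gang of Five cared so much, it helps to picture the difference. Under cooperative multitasking, the classic Mac OS model, a task runs until it voluntarily hands control back, so one greedy or hung task can stall the whole machine; under preemptive multitasking, Pink's goal, the kernel interrupts tasks on a timer whether they like it or not. Here's a toy round-robin cooperative scheduler, sketched in Python purely as an illustration (none of this is Apple's code):

```python
# A toy cooperative scheduler: each task must explicitly yield control,
# so one misbehaving task that never yields would stall everything.
# Preemptive multitasking interrupts tasks on a timer instead.

def scheduler(tasks):
    log = []
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))   # run the task until it yields
            tasks.append(task)       # re-queue the cooperative task
        except StopIteration:
            pass                     # task finished; drop it
    return log

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"          # voluntarily hand control back

print(scheduler([worker("a", 2), worker("b", 3)]))
# -> ['a:0', 'b:0', 'a:1', 'b:1', 'b:2']
```

If `worker` looped forever without yielding, every other task would starve; that fragility is exactly what Pink's preemptive kernel was meant to fix.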
Even IBM was impressed by the work being done on Pink, and in 1991 they entered an alliance with Apple to help take on what was quickly becoming a Microsoft monopoly. They planned to bring this new OS to the market through a new company called Taligent in the mid-90s. Just two more years. In 1992, Taligent moved out of Apple with 170 employees and Joe Guglielmi, who had once led the OS/2 team and had been a marketing exec at IBM for 30 years. By then, this was one of 5 partnerships between Apple and IBM, something that starts and stops every now and then up to today. It was an era of turf wars and empire building. But it was also the era of object orientation. Since Smalltalk, this had been a key aspect of higher-level languages such as Java, and of the AS/400. IBM had already done it with OS/2 and AIX. By 1993 there was suspicion. Again they grow, now to over 250 people, but they really just needed two more years, guys. Apple actually released an object-oriented SDK called Bedrock to migrate from System 7 to Pink, which could also work with Windows 3.1, NT, and OS/2. Before you know it, they were building a development environment on AIX and porting frameworks to HP-UX, OS/2, and Windows. By 1994 the apps could finally run on an IBM RS/6000 running AIX. The buzz continued. Ish. 1994 saw HP take on 15% of the company and add Smalltalk into the mix. HP brought new compilers into the portfolio, and needed native functionality. The development environment was renamed to cq professional and the user interface builder was changed to cqconstructor. TalAE became CommonPoint. TalOS was scheduled to ship in 1996. Just two more years. The world wanted to switch away from monolithic apps and definitely away from procedural apps. It still does. Every attempt to do so just takes two more years. Then and now. That's what we call "Enterprise Software," and as with anyone who's OK with such a pace, Joe Guglielmi left Taligent in 1995. Let's review where we are. There's no real shipping OS.
There's an IDE, but C++ programmers would need 3 months of training to get up to speed on Taligent. Most needed a week or two of classes to learn Java, if that. Steve Jobs had aligned with Sun on OpenStep. So Apple was getting closer and closer to IBM. But System 7 was too big a dog to run Taligent. Debbie Coutant became CEO towards the end of the year. HP and Apple sold their stakes in the company, which was then up to 375 employees. Over half were laid off, and the organization was wrapped into IBM, where it would be focusing on… Java. CommonPoint would be distributed across IBM products where possible. Taligent themselves would be key to the Java work done at IBM. By then IBM was a services-first organization anyways, so it kinda' all makes sense. TalOS was demoed in 1996 but never released. It was unique. It was object oriented from the ground up. It was an inspiration for a new era of interfaces. It was special. But it never shipped. Mac OS 8 was released in 1997. Better late than never. But it was clear that there was no more runway left in the code that had been getting bigger and meaner. They needed a strategy. The final Taligent employees got sucked into IBM that year, ending a fascinating drama in operating systems and frameworks. Whatever the inside baseball story, Apple decided to bring Steve Jobs back in, in 1997. And he brought NeXT, which gave the Mac all the object-oriented goodness they wanted. They got Objective-C, Mach (through Avie Tevanian of Carnegie Mellon), property lists, app wrappers (.app), Workspace Manager (which begat the Finder), the Dock, and NetInfo. And they finally retired the Apple Bonkers server.
But as importantly as anything else, they got Bertrand Serlet and Craig Federighi - who, as the next major VPs of Software, were able to keep the ship pointed in the right direction, and by 2001 they gave us 10.0: Cheetah * Darwin (kinda' like Unix) with Terminal * Mail, Address Book, iTunes * AppleScript survived, AppleTalk didn't * Aqua UI, Carbon and Cocoa APIs * AFP over TCP/IP, HTTP, SSH, and FTP server/client * Native PDF support It began a nearly 20-year journey that we are still on. So in the end, the Pinks never shipped an operating system, despite their best intentions. And the Blues never paid down their technical debt. Despite their best intentions. As engineers, we need a plan. We need to ship incrementally. We need good, sane cultures that can work together. We need to pay down technical debt - but we don't need to run amok building technology that's a little ahead of our time. Even if it's always just two more years ahead of our time. And I think we're at time. I hope it doesn't take me two years to ship this, gentle listeners. But if it does or doesn't, thanks for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!
10/11/2019 • 12 minutes, 12 seconds
The Origin Of The Blue Meanies
The Blue Meanies Origin Joke Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to look at an alternative story of how the Blue Meanies formed, from Greg Marriott, a Blue Meanie: https://web.archive.org/web/19991013005722/http://spies.com/greg/bluemeanies.html How Did The Blue Meanies Come To Be? The "Blue Meanies" was the name of a group of generalists in the system software group at Apple. I was a member of the group for three and a half years. We were experts at Mac programming and debugging, and we guided the architecture of Mac system software for several years. People often ask how the Blue Meanies got started. The truth was pretty mundane, so I made up this story a few years ago. By the way, we had a hamster mascot named Gibbly. The stooped figure in the bloodstained lab coat scurried around the lab, checking his instruments. All was in readiness. Tonight, finally, he would silence the skeptics. He would show them his theories weren't those of a crackpot, but those of a genius! He turned and surveyed the eleven tiny figures strapped on the tables in the center of the cavernous laboratory. The frightened rodents twisted and squirmed, but could not break free. Their sharp teeth had no effect on the stainless steel straps holding them in place. A twelfth hamster in a cage, marked with a nameplate that said "Gibbly," watched in horror as her brothers and sisters were subjected to this unthinkable torture. Their wide frightened eyes beheld their tormentor as he performed some last minute adjustments on the huge panel filling the far wall of the lab. Had they any intelligence at all they would have recognized the eleven identical sets of medical monitors. Gauges, meters, and dials reflected respiration, heart rate, and blood pressure.
Eleven long streamers of paper inched their way out of the EEGs, leaving twisted little piles on the floor. The mad scientist paused and remembered the laughter of his peers when he presented his ideas to them. His face hardened as he recalled their ridicule when he proposed his "Theory of Transfiguration." His carefully documented research clearly showed that one mammal could be turned into another, yet they jeered and hooted until he was forced off stage, humiliated. He decided then to continue with his plan to prove his theories by turning rodents into monkeys. The old man smiled grimly and faced his subjects. He crossed the room and sat before his Macintosh. The desk was covered with documentation, leaving barely enough room to move the mouse. He shoved TechNotes and volumes of Inside Macintosh out of the way to make more room. He briefly checked that his control program was ready, reaching for the mouse. A couple of clicks later, relays closed deep inside the complex machines and the process began. Right at that moment, lightning struck the power lines just outside the lab windows. The Mac exploded in a shower of sparks. The blast propelled the old man backwards, his wheeled chair racing across the lab floor. He crashed into the panels and slid out of the chair onto the shiny floor. The piles of loose paper and manuals vaporized, filling the air with a fine mist. At the same time enormous amounts of power surged through the machines, the tables, and the poor helpless hamsters. Automatic safety devices failed, fused by the jolt of electricity. The transformation raced out of control. The straps holding the hamsters snapped open, but the stunned animals still could not move. A pulsating aura surrounded them, permeated their tiny bodies, growing stronger and stronger. As the acrid smoke from the Mac and the remnants of the manuals swirled through the aura it began to shimmer violently. The transformation continued.
The eleven rodents began to shudder uncontrollably as the immense energy surrounding them intensified further. Had the scientist been conscious, he would have noted their change in size and form. They grew longer and wider and their fur (mostly) disappeared, replaced by Reeboks, Levis and t-shirts. Critical components in the complicated machinery finally succumbed to the outrageous current. Sparks flew from the panels and tiny lights winked out as the transformation process ground to a halt. The aura subsided and suddenly the air was very still. The old man stirred and groaned as his abused bones protested their treatment. He shook his head to clear it and was immediately forced to wonder why anyone would do such a thing after being slammed into a wall. He rose and looked at the stainless steel tables, expecting to see years of research blown to bits. He gasped in astonishment at the scene his eyes beheld. Eleven pairs of human eyes looked back at him. Unfortunately, the shock of such an overwhelming success was too much for him. His aging heart stopped beating and he fell heavily to the floor. Equally unfortunate was the intense paranoia which caused him to encrypt all of his notes. The eleven Blue Meanies [the way they got their name is another story... -ed.] looked at each other and smiled. The knowledge fused into their very structure by the aura made them giddy with excitement. They desperately wanted to use this newfound information in some way but didn't quite know what to do. They milled around the lab in confusion, looking for some clue that would tell them what to do next. One of them noticed a charred scrap of paper on the floor and picked it up. He showed it to the others, and soon they decided what to do. They grabbed Gibbly and filed out of the lab, not looking back, and set out on a long journey to Apple Computer, Inc., 20525 Mariani Ave, Cupertino, CA 95014.
10/8/2019 • 7 minutes, 47 seconds
Mavis Beacon Teaches Typing
Mavis Beacon Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to give thanks to a wonderful lady. A saint. The woman that taught me to type. Mavis Beacon. Over the years I often wondered what Mavis was like. She took me from a kid that was filled with wonder about these weird computers we had floating around school to someone that could type over a hundred words a minute. She always smiled. I never saw her frown once. I thought she must be a teacher somewhere. She must be a kind lady whose only goal in the world was to teach young people how to type. And indeed, she's taught over a million people to type in her days as a teacher. In fact she'd been teaching for years by the time I first encountered her. Mavis Beacon Teaches Typing was initially written for MS-DOS in 1987 and released by The Software Toolworks. Norm Worthington and Mike Duffy joined Walt Bilofsky to start the company out of Sherman Oaks, California in 1980, and they also made Chessmaster in 1986. They started out writing software for HDOS, the Heath operating system, and for the Osborne 1. They worked on Small C and Grogramma, releasing a conversation simulation tool from Joseph Weizenbaum in 1981. They wrote Mavis Beacon Teaches Typing in 1987 for IBM PCs. It took "three guys, three computers, three beds, in four months." It was an instant success. They went public in 1988 and were acquired by Pearson for around half a billion dollars in 1994, becoming Mindscape. By 1998 she'd taught over 6,000,000 kids to type. Today, Encore Software produces the software and Software MacKiev distributes a version for the Mac. The software integrates with iTunes, supports competitive typing games, and still tracks words per minute. But who was Mavis? What inspired her to teach generations of children to type? Why hasn't she aged?
Mavis was named after Mavis Staples, but she was a beacon to anyone looking to learn to type, thus Mavis Beacon. Mavis was initially portrayed by Haitian-born Renée L'Espérance, who was discovered working behind the perfume counter at Saks Fifth Avenue in Beverly Hills by talk-show host Les Crane in 1985. He then brought her in to be the model. Featuring an African-American woman regrettably caused some marketing problems but didn’t impact the success of the release. So until the next episode, think about this: Mavis Beacon, real or not, taught me and probably another 10 million kids to type. She opened the door for us to do more with computers. I could never have written code or books or even these episodes at the rate I do if it hadn’t been for her. So I owe her my sincerest gratitude. And Norm Worthington, for having the idea in the first place. And I owe you my gratitude, for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
10/5/2019 • 5 minutes, 7 seconds
Smalltalk and Object-Oriented Programming
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to cover the first real object-oriented programming language, Smalltalk. Many people outside of the IT industry would probably know the terms Java, Ruby, or Swift. But I don’t think I’ve encountered anyone outside of IT that has heard of Smalltalk in a long time. And yet… Smalltalk influenced most languages in use today and even a lot of the base technologies people would readily identify with. As with PASCAL from Episode 3 of the podcast, Smalltalk was designed and created in part for educational use, but more so for constructionist learning for kids. Smalltalk was first designed at the Learning Research Group (LRG) of Xerox PARC by Alan Kay, Dan Ingalls, Adele Goldberg, Ted Kaehler, Scott Wallace, and others during the 1970s. Kay had coined the term object-oriented programming in the late 60s. He took the lead on a project which developed an early mobile device called the Dynabook at Xerox PARC, as well as the Smalltalk object-oriented programming language. The first release was called Smalltalk-72 and was the first real implementation of this weird new programming philosophy Kay had called object-oriented programming. Although… Smalltalk was inspired by Simula 67, from Norwegian developers Kristen Nygaard and Ole-Johan Dahl. Even before that, Stewart Nelson and others from MIT had been using a somewhat object-oriented model when working on Lisp and other programs. Kay had heard of Simula and how it handled passing messages, and wrote the initial Smalltalk in a few mornings. He’d go on to work with Dan Ingalls to help with implementation and Adele Goldberg to write documentation. This was Smalltalk 71. Object-oriented programming is a programming language model where programs are organized around data, also called objects. 
This is a contrast to programs being structured around functions and logic. Those objects could be data fields, attributes, behaviors, etc. For example, a product you’re selling can have a sku, a price, dimensions, quantities, etc. This means you figure out what objects need to be manipulated and how those objects interact with one another. Objects are generalized as a class of objects. These classes define the kind of data and the logic used when manipulating data. Within those classes, there are methods, which define the logic and interfaces for object communication, known as messages. As programs grow and people collaborate on them together, an object-oriented approach allows projects to more easily be divided up among various team members to work on different parts. Parts of the code are more reusable. The way programs are laid out is more efficient. And in turn, the code is more scalable. Object-oriented programming is based on a few basic principles. These days those are interpreted as encapsulation, abstraction, inheritance, and polymorphism. Although to Kay, encapsulation and messaging are the most important aspects, and all the classing and subclassing isn’t nearly as necessary. Most modern languages that matter are based on these same philosophies, such as Java, JavaScript, Python, C++, .NET, Ruby, Go, Swift, etc. Although Go is arguably not really object-oriented because there’s no type hierarchy and some other differences, but when I look at the code it looks object-oriented! So there was this new programming paradigm emerging and Alan Kay really let it shine in Smalltalk. At the time, Xerox PARC was in the midst of revolutionizing technology. The MIT hacker ethic had seeped out to the west coast with John McCarthy’s AI lab, SAIL, at Stanford and got all mixed into the fabric of chip makers in the area, such as Fairchild. That Stanford connection is important. 
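To make those four principles concrete, the product example from a moment ago might look something like this. This is a minimal sketch in Java rather than Smalltalk, since Java carries these ideas forward and comes up later in the feed; the Product and Book classes and their fields are invented for illustration.

```java
class Product {
    private final String sku;     // encapsulation: state is hidden behind the class
    private final double price;

    Product(String sku, double price) {
        this.sku = sku;
        this.price = price;
    }

    // abstraction: callers ask for a total instead of reading raw fields
    double total(int quantity) {
        return price * quantity;
    }

    String describe() {
        return "Product " + sku;
    }
}

// inheritance: a Book is a Product with extra state
class Book extends Product {
    private final String title;

    Book(String sku, double price, String title) {
        super(sku, price);
        this.title = title;
    }

    // polymorphism: the same message, a different behavior
    @Override
    String describe() {
        return "Book: " + title;
    }
}

class OopSketch {
    public static void main(String[] args) {
        Product p = new Book("SKU-1", 10.0, "Smalltalk-80");
        System.out.println(p.describe()); // dynamic dispatch picks Book's version
        System.out.println(p.total(3));   // 30.0
    }
}
```

Sending `describe()` to a variable typed as `Product` still runs the `Book` behavior, which is the message-passing idea Kay cared about most.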
The Augmentation Research Center is where Engelbart introduced the NLS computer and invented the mouse. And that work resulted in advances like hypertext links. In the 60s. Many of those Stanford Research Institute people left for Xerox PARC. Ivan Sutherland’s work on Sketchpad was known to the group, as was the mouse from NLS, and because the computing community that was into research was still somewhat small, most were also aware of the graphic input language, or GRAIL, that had come out of Rand. Sketchpad had handled each drawing element as an object, making it a predecessor to object-oriented programming. GRAIL ran on the Rand Tablet and could recognize letters, boxes, and lines as objects. Smalltalk was meant to show a dynamic book. Kinda’ like the epub format that iBooks uses today. The use of objects similar to those used in Sketchpad and GRAIL just made sense. One evolution led to another and another, from Lisp and the batch methods that came before it through to modern models. But the Smalltalk stop on that model railroad was important. Kay and the team gave us some critical ideas. Things like overlapping windows. These were made possible by the inheritance model of executions, a standard class library, and a code browser and editor. This was one of the first development environments that looked like a modern version of something we might use today, like an IntelliJ or an Eclipse for Java developers. Smalltalk was the first implementation of the Model View Controller in 1979, a pattern that is now standard for designing graphical software interfaces. MVC divides program logic into the Model, the View, and the Controller in order to separate how data is represented internally from how it is presented to the user. Decoupling the model from the view and the controller allows for much better reuse of libraries of code as well as much more collaborative development. 
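The Model-View-Controller split described above can be sketched in a few lines. This is a toy counter, not Smalltalk-80's actual MVC classes; all the names here are invented for illustration, and it's written in Java for consistency with the rest of the feed.

```java
class CounterModel {                  // Model: owns the data, knows nothing about display
    private int count = 0;
    void increment() { count++; }
    int getCount() { return count; }
}

class CounterView {                   // View: presentation only, no state of its own
    String render(int count) { return "Count: " + count; }
}

class CounterController {             // Controller: routes user input to the model
    private final CounterModel model;
    private final CounterView view;

    CounterController(CounterModel model, CounterView view) {
        this.model = model;
        this.view = view;
    }

    String click() {                  // a user action arrives here...
        model.increment();            // ...the controller updates the model...
        return view.render(model.getCount()); // ...and asks the view to redraw
    }
}

class MvcSketch {
    public static void main(String[] args) {
        CounterController c = new CounterController(new CounterModel(), new CounterView());
        System.out.println(c.click()); // Count: 1
        System.out.println(c.click()); // Count: 2
    }
}
```

Because the model never touches the view, you could swap `CounterView` for a graphical one without changing the counting logic, which is exactly the reuse the pattern was designed for.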
Another important thing happened at Xerox in 1979, as they were preparing to give Smalltalk to the masses. There are a number of different interpretations to stories about Steve Jobs and Xerox PARC. But in 1979, Jobs was looking at how Apple would evolve. Andy Hertzfeld and the original Mac team were mostly there at Apple already, but Jobs wanted fresh ideas and traded a million bucks in Apple stock options to Xerox for a tour of PARC. The Lisa team came with him and got to see the Alto. The Alto prototype was part of the inspiration for a GUI-based Lisa and Mac, which of course inspired Windows and many advances since. Smalltalk was finally released to other vendors and institutions in 1980, including DEC, HP, Apple, and Berkeley. From there a lot of variants have shown up. Instantiations partnered with IBM and in 1984 had the first commercial version at Tektronix. A few companies tried to take Smalltalk to the masses, and by the late 80s vendors were starting to add SQL connectivity. The Smalltalk companies often had names with object or visual in the name. This is a great leading indicator of what Smalltalk is all about. It’s visual and it’s object oriented. Those companies slowly merged into one another and went out of business through the 90s. Instantiations was acquired by Digitalk. ParcPlace owed its name to where the language was created. The biggest survivor was ObjectShare, which was traded on NASDAQ, peaking at $24 a share until 1999. In a LA Times article: “ObjectShare Inc. said its stock has been delisted from the Nasdaq national market for failing to meet listing requirements. In a press release Thursday, the company said it is appealing the decision.” And while the language is still maintained by companies like Instantiations, in the heyday there was even a version from IBM called IBM VisualAge Smalltalk. And of course there were combo-language abominations, like a Smalltalk Java add-on. Just trying to breathe some life in. 
This was the era where FileMaker, FoxPro, and Microsoft Access were giving developers the ability to quickly build graphical tools for managing data that were the next generation past what Smalltalk provided. And on the larger side, products like JD Edwards, Oracle, and PeopleSoft really jumped to prominence. And on the education side, the industry segmented into learning management systems and various application vendors. Until iOS and Android came along and apps for those platforms became all the rage. Smalltalk does live on in other forms though. As with many dying technologies, an open source version of Smalltalk came along in 1996. Squeak was written by Alan Kay, Dan Ingalls, Ted Kaehler, Scott Wallace, John Maloney, Andreas Raab, and Mike Rueger, and continues today. I’ve tinkered with Squeak here and there, and I have to say that my favorite part is just getting to see the work of people who truly care about teaching languages to kids. And how some have been doing that for 40 years. A great quote from Alan Kay, discussing a parallel between Vannevar Bush’s “As We May Think” and the advances they made to build the Dynabook: “If somebody just sat down and implemented what Bush had wanted in 1945, and didn't try and add any extra features, we would like it today. I think the same thing is true about what we wanted for the Dynabook.” There’s a direct path from some of the developers of Smalltalk to deploying MacBooks and Chromebooks in classrooms. And the influences these more mass-marketed devices have will be felt for generations to come. Even as we devolve to new models from object-oriented programming, and new languages. The research that went into these early advances and the continued adoption and research have created a new world of teaching. At first we just wanted to teach logic and fundamental building blocks. Now kids are writing code. This might be writing Java programs in robotics classes, HTML in Google Classrooms, or beginning iOS apps in Swift Playgrounds. 
So until the next episode, think about this: Vannevar Bush pushed for computers to help us think, and we have all of the world’s data at our fingertips. With all of the people coming out of school that know how to write code today, with the accelerometers, with the robotics skills, what is the next stage of synthesizing all human knowledge and truly making computers help us think, as Bush put it in “As We May Think”? So thank you so very much for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
9/29/2019 • 12 minutes, 22 seconds
Java: The Programming Language, Not The Island
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at Java. Java is an Indonesian island with over 141 million people. Java man lived there 1.7 million years ago. Wait, wrong Java. The infiltration of coffee into the modern world can really trace its roots to ancient coffee forests on the Ethiopian plateau. Sufis in Yemen began importing coffee in the 1400s to make a beverage that would aid in concentration and as a kind of spiritual intoxication. Um, still the wrong Java… Although caffeine certainly has a link somewhere, somehow. The history of the Java programming language dates back to early 1991. It all started at Sun Microsystems with the Stealth Project. Patrick Naughton had considered going to NeXT due to limitations in C++ and the C APIs. But he stayed to join Stealth, a secret team of engineers led by a developer Sun picked up from Carnegie Mellon named James Gosling. Stealth was formed to explore new opportunities in the consumer electronics market. This came up when Gosling was writing a program to port software from a Perq to a VAX, emulating hardware as many, many, many programmers had done before him. I wonder if he realized, when he went to build the first Java compiler and the original virtual machine code, that he would go on to write a dozen books about Java and that it would consume most of his professional life. I wonder how much coffee he would have consumed if he had. They soon added Patrick Sheridan to the team. The project was later known as the “Green” project and with the advent of the web, somewhat pivoted into more of a web project. You see, Microsoft and the clones had some runaway success but Apple and other vendors were a factor in the home market. But Sun saw going down market as the future of the company. They added a few more people and rented separate offices in Menlo Park. 
Lisa Friendly was the first employee in the Java Products Group. Gosling would be lead engineer. John Gage would direct the project. Jonni Kanerva would write the Java FAQ. The team started to build what amounted to a “C++ ++ --”: C++ plus some new ideas, minus the parts that caused the most pain. Sun founder Bill Joy wanted a language that combined the best parts of Mesa and C. In 1993, NCSA gave us Mosaic. That Andreessen guy was on the news saying the era of the desktop was over. These brilliant designers knew they needed an embedded application, one that could even be used in a web browser, or an applet. The language was initially called “Oak,” but was later renamed “Java” in 1995, supposedly from a list of random words but really due to massive consumption of coffee imported from the island of Java. By the way, it only aids in concentration up to a point. Then you get jumpy. Like a Halfling. It took the Java team 18 months to develop the first working version. It is unknown how much java they drank in this time. Between the initial implementation of Oak in the fall of 1992 and the public announcement of Java in the spring of 1995, around 13 people ended up contributing to the design and evolution of the language. They were going to build a language that could sit on top of the operating systems on the market. This would allow them to be platform agnostic. In 1995, the team announced that the evolution of Mosaic, Netscape Navigator, would provide support for Java. Java gave us Write Once, Run Anywhere platform independence. You could run the code on a Mac, on Solaris, or on Windows. Java derives its syntax from C, and many of the object-oriented features were influenced by C++. Several of Java’s defining characteristics come from, or are responses to, its predecessors. Therefore, Java was meant to build on these and become a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, and dynamic language. Before I forget. 
The "Mocha Java" blend pairs coffee from Yemen and Java to get a thick, syrupy, and highly caffeinated blend that is often found with a hint of cinnamon or clove. As with all other computer languages, innovation in the design of the language was driven by the need to solve a fundamental problem that the preceding languages could not solve. To start, the creation of C is considered by many to have marked the beginning of the modern age of computer languages. It successfully synthesized the conflicting attributes that had so troubled earlier languages. The result was a powerful, efficient, structured language that was relatively easy to learn. It also included one other, nearly intangible aspect: it was a programmer’s language. Prior to the invention of C, computer languages were generally designed either as academic exercises or by bureaucratic committees. C was designed, implemented, and developed by real, working programmers, reflecting how they wanted to write code. Its features were honed, tested, thought about, and rethought by the people who actually used the language. C quickly attracted many followers who had a near-religious zeal for it. As such, it found wide and rapid acceptance in the programmer community. In short, C is a language designed by and for programmers, as is Java. Throughout the history of programming, the increasing complexity of programs has driven the need for better ways to manage that complexity. C++ is a response to that need in C. To better understand why managing program complexity is fundamental to the creation of C++, consider that in the early days of programming, computer programming was done by manually toggling in the binary machine instructions by use of the front panel or punching cards. As long as programs were just a few hundred instructions long, this worked. 
But as programs grew, assembly language was invented so that a programmer could deal with larger, increasingly complex programs by using symbolic representations of the machine instructions. As programs continued to grow, high-level languages were introduced that gave the programmer more tools with which to handle complexity. This gave birth to the first popular programming language: FORTRAN. Though impressive, it had its shortcomings, as it didn’t encourage clear and easy-to-understand programs. In the 1960s structured programming was born. This is the method of programming championed by languages such as C. The use of structured languages enabled programmers to write, for the first time, moderately complex programs fairly easily. However, even with structured programming methods, once a project reaches a certain size, its complexity exceeds what a programmer can manage. Due to continued growth, projects were exceeding the limits of the structured approach. To overcome this problem, a new way to program had to be invented; it is called object-oriented programming (OOP). Object-oriented programming is a programming methodology that helps organize complex programs through the use of inheritance, encapsulation, and polymorphism. In spite of the fact that C is one of the world’s great programming languages, there is still a limit to its ability to handle complexity. Once the size of a program exceeds a certain point, it becomes so complex that it is difficult to grasp as a totality. While the precise size at which this occurs differs, depending upon both the nature of the program and the programmer, there is always a threshold at which a program becomes unmanageable. C++ added features that enabled this threshold to be broken, allowing programmers to comprehend and manage larger programs. 
The primary motivation for creating Java was the need for a platform-independent, architecture-neutral language that could be used to create software to be embedded in various consumer electronic devices, such as microwave ovens and remote controls. The developers sought a different way to build software, one which did not require compiling to native code for every target platform, as C and C++ did. A solution which was easier and more cost efficient. But embedded systems took a backseat when the Web took shape at about the same time that Java was being designed. Java was suddenly propelled to the forefront of computer language design. This could be in the form of applets for the web or runtime-only packages known as Java Runtime Environments, or JREs. At the time, developers had fractured into three competing camps: Intel, Macintosh, and UNIX. Most software engineers stayed in their fortified boundary. But with the advent of the Internet and the Web, the problem of the portability of software between platforms suddenly got important in ways it hadn’t been since the forming of ARPANET. Even though many platforms are attached to the Internet, users would like them all to be able to run the same program. What was once an irritating but low-priority problem had become a high-profile necessity. The team realized this pressing need and later made the switch to refocus Java from embedded, consumer electronics to Internet programming. So while the desire for an architecture-neutral programming language provided the initial spark, the Internet ultimately led to Java’s large-scale success. So if Java derives much of its character from C and C++, this is by intent. The original designers knew that using familiar syntax would make their new language appealing to legions of experienced C/C++ programmers. Java also shares some of the other attributes that helped make C and C++ successful. Java was designed, tested, and refined by real, working programmers. Not scientists. 
Java is a programmer’s language. Java is also cohesive and logically consistent. If you program well, your programs reflect it. If you program poorly, your programs reflect that, too. Put differently, Java is not a language with training wheels. It is a language for professional programmers. Java 1 would be released in 1996 for Solaris, Windows, Mac, and Linux. It was released as the Java Development Kit, or JDK, and to this day we still refer to the version we’re using as JDK 11. Version 2, or 1.2, came in 1998, and with the rising popularity we got a few things that the burgeoning community needed. These included event listeners, Just In Time compilers, and changes to thread synchronization. 1.3, code-named Kestrel, came in 2000, bringing RMI for CORBA compatibility, synthetic proxy classes, the Java Platform Debugger Architecture, the Java Naming and Directory Interface in core libraries, the HotSpot JVM, and Java Sound. Merlin, or 1.4, came in 2002, bringing the frustrating regular expressions, native XML processing, logging, Non-Blocking I/O, and SSL. Tiger, or 1.5, came in 2004. This was important. We could autobox, get compile-time type safety in generics, statically import the static members of a class, and use annotations for declarative programming, and run-time libraries were mapped into memory, a huge improvement to how JVMs work. Java 5 also gave us the version number change. So JDK 1.5 was officially recognized as Java 5. JDK 1.6, or Mustang, came in 2006. This was a big update, bringing monitoring and management tools; compiler access gave us programmatic access to javac, and pluggable annotations allowed us to analyze code semantically as a step before javac compiles the code. Web Start got a makeover, and SE 6 unified plugins with Web Start. Enhanced XML services would be important (at least until the advent of JSON) and you could mix JavaScript up with Java. 
We also got JDBC 4, Character Large Objects, SwingWorker, JTable, better SQL datatypes, native PKI, Kerberos, LDAP, and honestly the most important thing was that it was stable. Although I’ve never written code stable enough to encounter their stability issues… Not enough coffee I suppose. Sun purchased Oracle in 2009. Wait, no, that’s one of my Marvel What If comic book fantasies where the world was a better place. Oracle bought Sun in 2009. After ponying up $5.6 billion dollars, Oracle had a lot of tech based on Sun products and, seeing Sun as an increasingly attractive acquisition target for other companies, Oracle couldn’t risk someone else swooping in and buying Sun. With all the turmoil created, it took 5 years during a pretty formative time on the web, but we finally got Dolphin, or 1.7, which came in 2011 and gave us compressed, 64-bit pointers, strings in switch statements, the ability to write binary integer literals and use underscores in literals, better graphics APIs, more cryptography algorithms, and a new I/O library that gave even better platform compatibility. Spider, or 1.8, came along in 2014. We got the ability to launch JavaFX application JARs, statically-linked JNI libraries, a new date and time API, annotations for Java types, unsigned integer arithmetic, and a JavaScript runtime that allowed us to embed JavaScript code in apps - whether this is a good idea or not is still tbd. Lambda expressions had been dropped from the Java 7 plan, so here we finally got them. And this kickstarted a pretty interesting time in the development of Java. We got 9 in 2017, 10 and 11 in 2018, and 12 and 13 in 2019. Of these, only 8 and 11 are LTS, or commercial Long Term Support releases, basically meaning we got the next major release after 8 in 2018, and according to my trend line we should expect the next LTS in 2021 or 2022. 
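A few of the features just named, generics and autoboxing from Java 5 and lambda expressions from Java 8, can be seen together in a few lines. This is a minimal sketch; the method name is invented for illustration.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class JavaFeatureSketch {
    // Java 5 generics: List<Integer> lets the compiler check element types.
    static List<Integer> doubleAll(List<Integer> nums) {
        // Java 8 lambda expression passed through the streams API
        return nums.stream()
                .map(n -> n * 2)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // autoboxing (Java 5): the int literals are wrapped into Integers for us
        List<Integer> nums = Arrays.asList(3, 1, 4);
        System.out.println(doubleAll(nums)); // [6, 2, 8]
    }
}
```

Before Java 5 this would have needed raw collections and explicit casts; before Java 8, the `map` step would have meant writing an anonymous inner class.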
JDK 13, when released later in 2019, will give us text blocks, switch expressions, improved memory management by returning unused heap memory to the OS, improved application class-data sharing, and will bring back the legacy socket API. But it won’t likely be an LTS release. Today there are over 45 billion active Java Virtual Machines and Java remains arguably the top language for microservices, CI/CD environments, and a number of other use cases. Other languages have come. Other languages have gone. Many are better in their own right. Some are not. Java is not perfect. It was meant to reduce complexity. But as languages evolve they become more complex. A project with a million lines of code is monolithic and probably incorporates plugins or frameworks, like Spring Security as an example, that make code even more complex. But Java is meant to reduce cyclomatic complexity, to allow for a language that is simple enough for a professional to pick up quickly and only be as complex as the quality of the code being compiled. I don’t personally love Java. I respect it. And I adore high-quality programmers and their code in any language. But I’ve had to redo so much work because other languages have come and gone over the years that if I were to be starting a new big monolithic web app today, I’d probably use Java every time. Which isn’t to say that Java isn’t useful in microservice architectures. Depending on what’s required from the contract testing on a service, I might use Java, Go, node, Python, or even the formerly hipster Ruby. Although I don’t love drinking PBR… If I’m writing an Android app, I need to know Java. No matter what the lawyers say. If I’m planning an enterprise webapp, Java needs to be in the conversation. But usually, I can do the work in a fraction of the time using something like Python. But most big companies speak Java. And for good reason. 
Because of the write once, run anywhere approach and the level of permissions a JRE needs, there have been security challenges with running Java on desktop computers. Apple deprecated Java on the Mac in 2010. Users could still install it on their own, and Java remains the gold standard for server-side applications. I’m certainly not advocating going back to the 90s and running Java apps on our desktops any more. No matter what you think of Java, one thing you have to admit: the introduction of the language and its evolution have had a substantial impact on the IT industry, and they will continue to do so. A great takeaway here might be that there’s always a potential alternative that might be better suited for a given task. But when it comes to choosing a platform that will be there in a decade or 3, getting support, getting a team that can scale, sometimes you might end up using a solution that doesn’t immediately seem as well suited to a need. But it can get the job done. As it’s been doing since James Gosling and the rest of the team started the project back in the early 90s. So thank you listeners, for sticking with us through this episode of the History of Computing Podcast. We’re lucky to have you.
9/25/2019 • 21 minutes, 17 seconds
The MIT Tech Model Railroad Club
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at the Tech Model Railroad Club, an obsessive group of young computer hackers that helped to shape a new vision for the young computer industry through the late 50s and early 60s. We’ve all seen parodies of it in the movies. Cue up a montage. Iron Man just can’t help but tinker with new models of his armor. Then voila, these castaway hack jobs are there when a new foe comes along. As is inspiration to finish them. The Lambda Lambda Lambda guys get back at the jock frat boys in Revenge of the Nerds. The driven inventor in Honey I Shrunk the Kids just can’t help himself but build the most insane inventions. Peter Venkman in Ghostbusters. There’s a drive. And those who need to understand, to comprehend, to make sense of what was non-sensical before. I guess it even goes back to Dr Frankenstein. Some science just isn’t meant to be conquered. But trains. Those are meant to be conquered. They’re the golden spike into the engineering chasm that young freshmen who looked like the cast of Stand By Me, but at MIT, wanted to conquer. You went to MIT in the 50s and 60s because you wanted a deeper understanding of how the world worked. But can you imagine a world where the unofficial motto of the MIT math department was that “there’s no such thing as computer science. It’s witchcraft!” The Tech Model Railroad Club, or TMRC, had started in 1946. World War II had ended the year before, and the first UN General Assembly and Security Council met, with Iran filing the first complaint against the Soviet Union and UNICEF being created. Syria got their independence from France. Jordan got their independence from Britain. The Philippines gained their independence from the US. 
Truman created the CIA, Stalin announced a 5 year plan for Russia, ushering in the era of Soviet reconstruction and signaling the beginning of the cold war, which would begin the next year. Anti-British protests exploded in India, and Attlee agreed to their independence. Ho Chi Minh became president of the Democratic Republic of Vietnam and France recognized their statehood days later, with war between his forces and the French breaking out later that year resulting in French martial law. Churchill gave his famous Iron Curtain speech. Italy and Bulgaria abolished their monarchies. The US Supreme Court ordered desegregation of busses, and Truman ordered desegregation of the armed forces and created the Committee on Civil Rights using an executive order. And there was no true computer industry. But the ENIAC went into production in 1946. And a group of kids at the Massachusetts Institute of Technology weren’t thinking much about the new world order being formed, nor about the ENIAC, which was being installed just a 5 or 6 hour drive away. They were thinking about model trains. And over the next few years they would build, paint, and make these trains run on model tracks. The club was started by Walter Marvin and John Fitzallen Moore, the latter of whom would end up with over a dozen patents after earning his PhD from Columbia, with a long career at Lockheed and then EMI Medical, the company that invented the CT scan. By the mid-50s the club had grown, and there were a few groups of people who were really in it for different things. Some wanted to drink Coca-Cola while they painted trains. But the thing that drew many a student was the ARRC, or Automatic Railroad Running Computer. This was built by the Signals and Power Subcommittee, who used relays from telephone switches to make the trains do all kinds of crazy things, even cleaning the tracks. Today we’re hacking genes, going to lifehacker.com, and sometimes regrettably getting hacked, or losing data in a breach. 
But the term came from one who chops or cuts, going back to the 1200s. But on a cool day in 1955, on the third floor of Building 20, known as the Plywood Palace, that would change. Minutes of a meeting at the Tech Model Railroad Club note “Mr. Eccles requests that anyone working or hacking on the electrical system turn the power off to avoid fuse blowing.” Maybe they were chopping parts of train tracks up. Maybe the term was derived from something altogether separate. But this was the beginning of a whole new culture. One that survives and thrives today. Hacking began to mean to do technical things for enjoyment in the club. And those who hacked became hackers. The OG hacker was Jack Dennis, an alumnus of the TMRC. Jack Dennis had gotten his bachelor’s from MIT in 1953 and moved on to get his Masters then Doctorate by 1958, staying until he retired in 1987, teaching and influencing many subsequent generations of young hackers. You see, he studied artificial intelligence, or taking these computers built by companies like IBM to do math, and making them… intelligent. These switches and relays under the table of the model railroad were a lot of logical circuits strung together, and in the days before what we think of as computers now, these were just a poor college student’s way of building a computer. This “computer” was what drew Alan Kotok, who had skipped two grades in high school, to the TMRC in 1958. And incoming freshman Peter Samson. And Bob Saunders, a bit older than the rest. Then grad student Jack Dennis introduced the TMRC to the IBM 704. A marvel of human engineering. It was like your dad’s shiny new red 1958 Corvette. Way too expensive to touch. But you just couldn’t help it. The young hackers didn’t know it yet, but Marvin Minsky had shown up to MIT in 1958. John McCarthy was a research fellow there. Jack Dennis got his PhD that year. Outside of MIT, Robert Noyce and Jack Kilby were giving us the integrated circuit, we got FORTRAN II, and that McCarthy guy. 
He gave us LISP. No, he didn’t speak with a lisp. He spoke IN LISP. And then President Eisenhower established ARPA in response to Sputnik, to speed up technological progress. Fernando Corbató got his PhD in physics in 1956 and stayed on with the nerds until he retired as well. Kotok ended up writing the first chess program with McCarthy on the IBM 7090 while still a teenager. Everything changed when Lincoln Lab got the TX-0, lovingly referred to as the Tixo. Suddenly, they weren’t loading cards into batch processing computers. The old IBM way was the enemy. The new machines allowed them to actually program. They wrote calculators and did work for courses. But Dennis kinda’ let them do most anything they wanted. So of course we ended up with very early computer games as well, with tic-tac-toe and Mouse in the Maze. These kids would write anything. Compilers? Sure. Assemblers? Got it. They would hover around the signup sheet for access to the Tixo and consume every minute that wasn’t being used for official research. At this point, the kids were like the budding laser inventors in Weird Science. They were driven, crazed. And young Peter Deutsch joined them, writing the Lisp 1.5 implementation for the PDP at 12. Can you imagine being a 12 year old and holding your own around a group of some of the most influential people in the computer industry? Bill Gosper got to MIT in 1961, and so did the second PDP-1 ever built. Steve Russell joined the team and ended up working on Spacewar! when he wasn’t working on Lisp. Speaking of video games: they made Spacewar during this time with a little help from Kotok, Steve Piner, Samson, Saunders, and Dan Edwards. In fact, Kotok and Saunders created the first gamepad, a concept later made popular by Nintendo, so they could play Spacewar without using the keyboard. 
This was work that would eventually be celebrated in the pages of Rolling Stone, and in fact Spacewar would later become the software used to smoke test the PDP once it entered the buying tornado. Ricky Greenblatt got to MIT in 1962. And this unruly, unkempt, and extremely talented group of kids hacked their way through the PDP, with Greenblatt becoming famous for his hacks, hacking away the first FORTRAN compiler for the PDP and spending so much time at the terminal that he didn’t make it through his junior year at MIT. These formative years in their lives were consumed with Coca-Cola, Chinese food, and establishing many paradigms we now consider fundamental in computer science. The real shift from a batch processing mode of operations, fed by paper tape and punchcards, to an interactive computer was upon us. And they were the pioneers who, through countless hours of hacking away, found “the right thing.” Project MAC was established at MIT in 1963 using a DARPA grant and was initially run by the legendary J. C. R. Licklider. MAC would influence operating systems with Multics, which served as the inspiration for Unix, and the forming of what we now know as computer science through the 1960s and 70s. This represented a higher level of funding and a shift towards the era of development that led to the Internet and many of the standards we still use today. More generations of hackers would follow and continue to push the envelope. But for that one special glimpse in time, let’s just say that if you listen at just the right frequency you can hear screaming at terminals when a game of Spacewar didn’t go someone’s way, or when something crashed, or with glee when someone got “the right thing.” And if you listen hard enough at your next hackathon, you can sometimes hear a Kotok or a Deutsch or a Saunders whisper in your ear exactly what “the right thing” is - but only after sufficient amounts of trial, error, and Spacewar. This free exercise gives way to innovation. 
That’s why Google famously gives employees free time to pursue their passions. That’s why companies run hackathons. That’s why everyone from DARPA to Netflix has run bounty programs. These young mathematicians, scientists, physicists, and engineers would go on to change the world in their own ways. Uncle John McCarthy would later move to Stanford, where he started the Stanford Artificial Intelligence Laboratory. From there he influenced Sun Microsystems (the S in Sun is for Stanford), Cisco, and dozens of other Silicon Valley powerhouses. Dennis would go on to help create Multics, an inspiration for Ken Thompson and the first versions of Unix. And after retiring he would go to NASA and then Acorn Networks. Slug Russell would go on to a long career as a developer and then executive, including a stop mentoring two nerdy high school kids at Lakeside School in Seattle. They were Paul Allen and Bill Gates, who would go on to found Microsoft. Alan Kotok would go on to join DEC, where he would work for 30 years, influencing much of computing through the 70s and into the 80s. He would work on the Titan chip at DEC and in the various consortiums around the emergent Internet, and he would be a founding member of the World Wide Web Consortium. Ricky Greenblatt ended up spending too much of his time hacking. He would go on to found Lisp Machines, coauthor the time sharing software for the PDP-6 and PDP-10, write Maclisp, and write the first computer chess program to beat a notable human challenger - famously defeating the AI critic Hubert Dreyfus. Peter Samson wrote the Tech Model Railroad Club’s official dictionary, which would evolve into the now-famous Jargon File. He wrote the Harmony compiler, a FORTRAN compiler for the PDP-6, made music for the first time with computers, became an architect at DEC, would oversee hardware engineering at NASA, and continues to act as a docent at the Computer History Museum. 
Bob Saunders would go on to be a professor at the University of California, president of the IEEE, and chairman of its board during some of the most influential years of that great body of engineers and scientists. Peter Deutsch would go on to get his PhD from Berkeley, found Aladdin Enterprises, write Ghostscript (creating free PostScript and PDF alternatives), work on Smalltalk, work at Sun, be an influential mind at Xerox PARC, and is now a composer. We owe a great deal to them. So thank you to these pioneers. And thank you, listeners, for sticking through to the end of this episode of the History of Computing Podcast. We’re lucky to have you.
9/22/2019 • 14 minutes, 43 seconds
Agile Software Development
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is on Agile software development. Agile software development is a methodology - or anti-methodology, or approach to software development - that evolves the requirements a team needs to fulfill and the solutions they need to build in a collaborative, self-organized, and cross-functional way. Boy, that’s a lot to spit out there. I was in an elevator the other day and I heard someone say: “That’s not very agile.” And at that moment, I knew that I just couldn’t help but do an episode on agile. I’ve worked in a lot of teams that use a lot of variants of agile: Scrum, Kanban, Scrumban, Extreme Programming, Lean Software Development. Some of these are almost polar opposites, and you still hear people argue about what is agile; if they want to make fun of people doing things an old way, they’ll say something like “waterfall.” Nothing ever was really waterfall, given that you learn on the fly, find reusable bits, or hit a place where you just say that’s not possible. But that’s another story. The point here is that agile is, well, weaponized to back up what a person wants someone to do. Or how they want a team to be run. And it isn’t always done from an informed point of view. Why is agile an anti-methodology? Think of it more like a classification, maybe. There were a number of methodologies like Extreme Programming, Scrum, Kanban, Feature Driven Development, Adaptive Software Development, RAD, and Lean Software Development. These had come out to bring shape around a very similar idea. But over the course of 10-20 years, each had been developed in isolation. In college, I had a computer science professor who talked about “adaptive software development” from his days at a large power company in Georgia back in the 70s. 
Basically, you are always adapting what you’re doing based on speculation about how long something will take, collaboration on that observation, and what you learn while actually building. This shaped how I view software development for years to come. He was already making fun of waterfall methodologies, or a cycle where you write a large set of requirements and stick to them. Waterfall worked well if you were building a computer to land people on the moon. It was a way of saying “we’re not engineers, we’re software developers.” Later in college, with the rapid proliferation of the Internet and computers into dorm rooms, I watched the emergence of rapid application development, where you let the interface requirements determine how you build. But once someone weaponized that by putting a label on it, or worse, forking the label into spiral and unified models, it became much less useful and the next hot thing had to come along. Kent Beck built a methodology called Extreme Programming - or XP for short - in 1996, and that was the next hotness. Here, we release software in shorter development cycles, and software developers, like police officers on patrol, work in pairs, reviewing and testing code and not writing each feature until it’s required. The idea of unit testing and rapid releasing really came out of the fact that the explosion of the Internet in the 90s meant people had to ship fast, and this was also during the rise of really mainstream object-oriented programming languages. The nice thing about XP was that you could show a nice graph where you planned, managed, designed, coded, and tested your software. The rules of Extreme Programming included things like “Code the unit test first” and “A stand up meeting starts each day.” Extreme Programming is one of these methodologies. Scrum is probably the one most commonly used today. But the rest, as well as the Crystal family of methodologies, are now classified as agile software development methodologies. 
So agile is like a parent classification. Is it really just a classification, then? No. So where did agile come from? By 2001, Kent Beck, who developed Extreme Programming, met with Ward Cunningham (who built WikiWikiWeb, the first wiki), Dave Thomas, a programmer who has since written 11 books, and Jeff Sutherland and Ken Schwaber, who designed Scrum. Jim Highsmith, who developed that Adaptive Software Development methodology, and many others were at the time involved in trying to align on an organizational methodology that allowed software developers to stop acting like people that built bridges or large buildings. Most had day jobs, but they were like-minded and decided to meet at a quaint resort in Snowbird, Utah. They might have all wanted to push the methodologies that each of them had developed. But if they had all been jerks then they might not have had a shift in how software would be written for the next 20+ years. They decided to start with something simple: a statement of values. Instead of bickering and being dug into specific details, they were all able to agree that software development should not be managed in the same fashion as engineering projects are run. So they gave us the Manifesto for Agile Software Development. The Manifesto reads: We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: * Individuals and interactions over processes and tools * Working software over comprehensive documentation * Customer collaboration over contract negotiation * Responding to change over following a plan That is, while there is value in the items on the right, we value the items on the left more. But additionally, the principles dig into and expand upon some of that adjacently. The principles behind the Agile Manifesto: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software. Welcome changing requirements, even late in development. 
Agile processes harness change for the customer's competitive advantage. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale. Business people and developers must work together daily throughout the project. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. Working software is the primary measure of progress. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely. Continuous attention to technical excellence and good design enhances agility. Simplicity - the art of maximizing the amount of work not done - is essential. The best architectures, requirements, and designs emerge from self-organizing teams. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. Many of the words here are easily weaponized. For example, “satisfy the customer.” Who’s the customer? The product manager? The end user? The person in an enterprise who actually buys the software? The person in that IT department that made the decision to buy the software? In the Scrum methodology, the customer is not known; the product owner is their representative. But the principles don’t need to identify who the customer is - they just use the word, and each methodology makes sure to cover it. Now take “continuous delivery.” People frequently just lump CI in there with CD. I’ve heard continuous design, continuous improvement, continuous deployment, continuous podcasting. Wait, I made the last one up. We could spend hours going through each of these and identifying where they aren’t specific enough. 
Or, again, we could revel in their lack of specificity, letting them point us in the direction of a methodology where these words get much more specific meanings. Ironically, I know accounting teams at very large companies that have scrum masters, engineering teams for big projects with a project manager and a scrum master, and even a team of judges that use agile methodologies. There are now scrum masters embedded in most software teams of note. But once you see agile on the cover of the Harvard Business Review - and you hate to do this, given all the classes in agile/XP/Scrum - you have to start wondering what’s next. For 20 years, we’ve been saying “stop treating us like engineers” or “that’s waterfall.” Every methodology seems to grow. Right after I finished my PMP, I was on a project with someone else who had just finished theirs. I think they tried to implement the entire Project Management Body of Knowledge. If you try to have every ceremony from Scrum, you’re not likely to even have half a day left over to write any code. But you also don’t want to be like the person on the elevator, weaponizing only small parts of a larger body of work, just to get your way. And more importantly, be ready to admit that none of us have all the right answers and, as they say in Extreme Programming, to fix XP when it breaks - which is similar to Boyd’s Destruction and Creation, or the sustenance and destruction in Lean Six Sigma. Many of us forget that last part: be willing to walk away from the dogma and start over. Thomas Jefferson called for a revolution every 20 years. We have two years to come up with a replacement! And until you replace me, thank you so very much for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
9/19/2019 • 11 minutes, 40 seconds
Craigslist
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at the history of Craigslist. It’s 1995. The web is 4 years old. By the end of the year, there would be over 23,000 websites. Netscape released JavaScript, Microsoft released Internet Explorer, Sony released the PlayStation, Coolio released Gangsta’s Paradise, and - probably while singing along to “This Is How We Do It” - veteran software programmer Craig Newmark made a list. Craig Alexander Newmark hails from Morristown, New Jersey, and after being a nerdy kid with thick black glasses and a pocket protector in high school, went off to Case Western, getting a bachelor’s in 1975 and a master’s in ’77. This is where he was first given access to the ARPANET, which would later evolve into the Internet as we know it today. He then spent 17 years at IBM during some of the most formative years of the young computer industry. This was when the hacker ethos formed, and anyone that went to college in the 70s would be well acquainted with Stewart Brand’s Whole Earth Catalog - yes, even employees of IBM would potentially have been steeped in the ethos of the counterculture that helped contribute to that early hacker ethos. And as with many of us, Gibson’s Neuromancer got him thinking about the potential of the web. Anyone working at that time would have also seen the rise of the Internet and the advent of email, and a lot of people were experimenting with side projects here and there. And people from all around the country that still believed in the ideals of that 60s counterculture still gravitated towards San Francisco, where Newmark moved to take a gig at Charles Schwab in 1993, where he was an early proponent of the web, exploring uses with a series of brown bag lunches. If you’re going to San Francisco, make sure to wear flowers in your hair. 
Newmark got to see some of the best of the WELL and Usenet, and as with a lot of people when they first move to a new place, old Craig was in his early 40s with way too much free time on his hands. I’ve known lots of people these days that move to new cities and jump headfirst into Eventbrite, Meetup, or more recently, Facebook events, as a way of meeting new people. But nothing like that really existed in 1993. The rest of the country had been glued to their televisions, waiting for the OJ Simpson verdict while flipping back and forth between Seinfeld, Frasier, and Roseanne. Unforgiven with Clint Eastwood won Best Picture. I’ve never seen Seinfeld. I’ve seen a couple episodes of Frasier. I lived Roseanne, so I was never interested. So a lot of us missed all that early 90s pop culture. Instead of getting embroiled in Friends from ’93 to ’95, Craig took a stab at connecting people. He started simple, with an email list and ten or so friends. Things like getting dinner at Joe’s digital diner. And arts events. Things he was interested in personally. People started to ask Craig to be added to the list. The list, which he just called craigslist, was originally for finding things to do but quickly grew into a kind of wanted ad - with people asking him to post their events or occasionally asking him to mention an apartment or car, and of course, early email aficionados were a bit hackery, so there were plenty of computer parts needed or available. It’s even hard for me to remember what things were like back then. If you wanted to list a job, sell a car, sell furniture, or even put up an ad to host a group meetup, you’d spend $5 to $50 for a two or three line blurb. You had to pick up the phone. And chances are you had a home phone. Cordless phones were all the rage then. And you had to dial a phone number. And you had to talk to a real live human being. All of this sounds terrible, right?!?! So it was time to build a website. 
When he first launched craigslist, you could rent apartments, post small business ads, sell cars, buy computers, and organize events. Similar to the email list, but on the web. This is a natural progression. Anyone who’s managed a listserv will eventually find the groups becoming unwieldy, and if you don’t build ways for people to narrow down what they want out of it, the groups and lists will split themselves into factions organically. Not that Craig had a vision for increasing page view times or bringing in advertisers, or getting more people to come to the site. But at first, there weren’t that many categories. And the URL was www.craigslist.org. It was simple, and the text, like most hyperlinks at the time, was mostly blue. By the end of 1997 he was up to a million page views a month, and a few people were volunteering to help out with the site. Through 1998 the site started to lag behind, with postings not going up in a timely fashion and old stuff not being pruned quickly enough. It was clear that it needed more. In 1999 he made craigslist into a business. Being based in San Francisco, of course, venture capitalist friends were telling him to do much, much more, like banner ads and selling ads. It was time to hire people. He didn’t feel like he did great at interviewing people, and he couldn’t fire people. But in ’99 he got a resume from Jim Buckmaster. He hired him as the lead tech. Craigslist first expanded into different geographies by allowing users to basically filter to different parts of the Bay Area: San Francisco, South Bay, East Bay, North Bay, and Peninsula. Craig turned over operations of the company to Jim in 2000, and craigslist expanded to Boston that year; once tests worked well, they added Chicago, DC, Los Angeles, New York City, Portland, Sacramento, San Diego, and Seattle. I had friends in San Francisco and had used craigslist - I lived in LA at the time, and this was my first time being able to use it regularly at home. 
Craig stayed with customer service, enjoying a connection with the organization. 2001 saw Atlanta, Austin, Vancouver, and Denver added. Every time I logged in there were new cities and new categories, even one to allow for “erotic services.” Then in 2004 came Amsterdam, Tokyo, Paris, Bangalore, and São Paulo. As organizations grow they need capital. Craigslist wasn’t necessarily aggressive about growth, but once they became a multi-million dollar company, there was risk of running out of cash. In 2004, eBay purchased 28.4 percent of the company. They expanded into Sydney and Melbourne. Craigslist also added new categories to make it easier to find specific things, like toys or things for babies, different types of living arrangements, ridesharing, etc. Was it the ridesharing category that inspired Travis Kalanick? Was it posts to rent a room for a weekend that inspired Airbnb? Was it the events page that inspired Eventbrite? In 2005, eBay launched Kijiji, an online classifieds service organized by cities. It’s a similar business model to craigslist’s. By May they’d purchased Gumtree, a similar site serving the UK, South Africa, and a number of other countries, and then purchased LoQuo and OpusForum.org. They were firmly getting into the same market as Craigslist. Craigslist continued to grow. And by 2008, eBay sued Craigslist, claiming they were diluting the eBay stock. Craigslist countered that Kijiji stole trade secrets. By 2008 over 40 million Americans used Craigslist every month, and they had helped people in more than 500 cities spread across more than 50 countries. Much larger than the other service. They didn’t settle that suit for 7 years, with eBay finally selling its shares back to Craigslist in 2015. Over the years, there have been a number of other legal hurdles for Craigslist. In 2008, Craigslist added phone verification to the erotic services category and saw a drastic reduction in the number of ads. 
They also teamed up with the National Center for Missing and Exploited Children as well as 43 US Attorneys General, saw over a 90% reduction in ads for erotic services over the next year, and donated all revenue from erotic services ads to charities. Craigslist later removed the category outright. The net effect was that many of those services got posted to the personals section. At the time, Craigslist was the most used personals site in the US. Unable to police those, in 2010 Craigslist took the personals down as well. Craigslist was obviously making people ask a lot of questions. Newspaper revenue from classified advertisements went down 14 to 20 percent in 2007, while online classified traffic shot up 23%. Again, disruption makes people ask questions. I am not a political person and don’t like talking about politics. I had friends in prosecutors’ offices at the time, and they would ask me how an ad could get posted for an illegal activity, really looking at it from the perspective that Craigslist was facilitating sex work. But it’s worth noting that a social change that resulted from that erotic services section was that a number of sex workers moved inside apartments rather than working on the street. They could screen potential customers, and those clients knew they would be leaving behind a trail of bits and bytes that might get them caught. As a result, homicide rates against females went down by 17 percent, and since the erotic services section of the site has been shut down, those rates have risen back to the same levels. Other sites did spring up to facilitate the same services, such as Backpage. And each has been taken down or prosecuted as they spring up. To make it easier to do so, the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act became law in 2018. We know that the advent of the online world is changing a lot in society. 
If I need some help around the house, I can just go to Craigslist and post an ad, and within an hour I’ll usually have 50 messages. I don’t love washing windows on the 2nd floor of the house - and now I don’t have to. I did that work myself 20 years ago. Cars sold person to person sell for more than they do to dealerships. And out of great changes come people looking to exploit them. I don’t post things to sell as much as I used to. The last few times I posted, I got at least 2 or 3 messages asking if I am willing to ship items and offering to pay me after the items arrive. Obvious scams. Not that I haven’t seen similar from eBay or Amazon, but at least there you would have recourse. Angie got a list in 1995 too - you can use Angie’s List to check up on people offering to do services. But in my experience few who respond to a Craigslist ad are listed there; most are gainfully employed elsewhere and just gigging on the side. Today Craigslist runs with around 50 people, and with revenue over $700 million. Classified advertising at large newspaper chains has dropped drastically. Alexa ranks craigslist as the 120th-ranked global site and 28th in the US - with people spending 9 minutes on the site on average. The top searches are cheap furniture, estate sales, and lawn mowers. And what’s beautiful is that the site looks almost exactly like it looked when launched in the 90s. Still no banners. Still blue hyperlinks. Still some black text. Nothing fancy. Out of Craigslist we’ve gotten CL blob service, CL image service, and a memcache cluster proxy. They contribute code to Haraka, Redis, and Sphinx. The craigslist Charitable Fund helps support the Apache Foundation, the Free Software Foundation, the GNOME Foundation, the Mozilla Foundation, the Open Source Initiative, OpenStreetMap.us, the Perl Foundation, PostgreSQL, the Python Software Foundation, and Software in the Public Interest. I meet a lot of entrepreneurs who want to “disrupt” an industry. 
When I hear the self-proclaimed serial entrepreneurs - the ones who think they’re all about the ideas but don’t know how to actually make any of the ideas work - talk about disruptive technologies, I have never heard one mention Craigslist. There’s a misconception that a lot of engineers don’t have the ideas, that every Bill Gates needs a Paul Allen or that every Steve Jobs needs a Woz. Or I hear that starting companies is for young entrepreneurs, like those four were when starting Microsoft and Apple. Craig Newmark, a 20 year software veteran in his 40s, inspired Yelp, Uber, Nextdoor, and thousands of other sites. And unlike many of those other organizations, he didn’t have to go blow things up and build a huge company. They did something that their brethren from the early days on the WELL would be proud of: they diverted much of their revenues to the craigslist Charitable Fund. Here, they sponsor four main categories of grant partners: * Environment and Transportation * Education, Rights, Justice, Reason * Non-Violence, Veterans, Peace * Journalism, Open Source, Internet You can find more on this at https://www.craigslist.org/about/charitable According to Forbes, Craig is a billionaire. But he’s said that his “minimal profit” business model allows him to “give away tremendous amounts of money to the nonprofits I believe in,” including Wikipedia, a similarly minded site. The stories of the history of computing are often full of people becoming “the richest person in the world” and organizations judged based on market share. But not only with the impact that the site has had, but also with those inspired by how he runs it, Craig Newmark shatters all of those misconceptions of how the world should work. These days you’re probably most likely gonna’ find him on craigconnects.org - “helping people do good work that matters.” So think about this, my lovely listeners. 
No matter how old you are, how bad your design skills, or how disruptive it will or won’t be, anyone can parlay an idea that helps a few people into something that changes not only their own life, but the lives of others - something that disrupts multiple industries without creating all the stress of trying to keep up with the tech Joneses. You can do great things if you want. Or you can listen to me babble. Thanks for doing that. We’re lucky to have you join us.
9/16/2019 • 17 minutes, 27 seconds
The Evolution Of The Microchip
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is on the history of the microchip, or microprocessor. This was a hard episode, because it was the culmination of so many technologies. You don’t know where to stop telling the story - and you find yourself writing a chronological story in reverse chronological order. But few advancements have impacted humanity the way the introduction of the microprocessor has. Given that most technological advances are a convergence of otherwise disparate technologies, we’ll start the story of the microchip with the obvious choice: the light bulb. Thomas Edison first demonstrated the carbon filament light bulb in 1879. William Joseph Hammer, an inventor working with Edison, then noted that if he added another electrode to a heated filament bulb, it would glow around the positive pole in the vacuum of the bulb and blacken the wire and the bulb around the negative pole. 25 years later, John Ambrose Fleming demonstrated that if that extra electrode is made more positive than the filament, current flows through the vacuum - and that the current could only flow from the filament to the electrode and not the other direction. This converted AC signals to DC and could represent a Boolean gate. In 1904, Fleming was granted Great Britain’s patent number 24850 for the vacuum tube, ushering in the era of electronics. Over the next few decades, researchers continued to work with these tubes. Eccles and Jordan invented the flip-flop circuit at London’s City and Guilds Technical College in 1918, receiving a patent for what they called the Eccles-Jordan trigger circuit in 1920. Now, English mathematician George Boole, back in the earlier part of the 1800s, had developed Boolean algebra. Here he created a system where logical statements could be made in mathematical terms. 
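The flip-flop mentioned above is worth pausing on, because it’s the circuit that lets electronics remember a single bit. As a purely illustrative sketch (the function and variable names here are my own, not anything from the Eccles-Jordan patent), the flip-flop’s logical descendant, the set-reset latch, can be modeled as two cross-coupled NOR gates:

```python
# Illustrative sketch of a set-reset (SR) latch, the logical descendant of the
# Eccles-Jordan flip-flop. Two cross-coupled NOR gates hold one bit.
# All names here are hypothetical, chosen for this example.

def nor(a: int, b: int) -> int:
    """NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int) -> int:
    """Settle the cross-coupled NOR pair and return the stored bit q."""
    for _ in range(4):  # iterate until the feedback loop stabilizes
        q_bar = nor(s, q)
        q = nor(r, q_bar)
    return q

q = 0
q = sr_latch(s=1, r=0, q=q)  # set the bit
assert q == 1
q = sr_latch(s=0, r=0, q=q)  # hold: the latch remembers
assert q == 1
q = sr_latch(s=0, r=1, q=q)  # reset the bit
assert q == 0
```

The point of the feedback loop is exactly what made the flip-flop revolutionary: with both inputs off, the circuit keeps whatever state it was last put in.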
Those statements could then be operated on mathematically, using only a 0 or a 1. It took awhile, but John Vincent Atanasoff and grad student Clifford Berry harnessed these circuits in the Atanasoff-Berry Computer in 1938 at Iowa State University. Using Boolean algebra, it successfully solved linear equations, but they never finished the device due to World War II - when a number of other technological advancements happened, including the development of the ENIAC by John Mauchly and J. Presper Eckert from the University of Pennsylvania, funded by the US Army Ordnance Corps, starting in 1943. By the time it was taken out of operation, the ENIAC had 20,000 of these tubes. Each digit in an algorithm required 36 tubes. Ten digit numbers could be multiplied at 357 per second, showing the first true use of a computer. John von Neumann was among the first to actually use the ENIAC, running one million punch cards through computations that helped propel the development of the hydrogen bomb at Los Alamos National Laboratory. The creators would leave the University and found the Eckert-Mauchly Computer Corporation. Out of that later would come the Univac and the ancestor of today’s Unisys Corporation. These early computers used vacuum tubes to replace the gears of previous counting machines and represented the First Generation. But the tubes for the flip-flop circuits were expensive and had to be replaced far too often. The second generation of computers used transistors instead of vacuum tubes for logic circuits. The transistor is basically a junction set into silicon or germanium that can be switched on or off based on the properties of the material. These replaced vacuum tubes in computers to provide the foundation of Boolean logic - you know, the zeros and ones that computers are famous for. As with most modern technologies, the integrated circuit owes its origin to a number of different technologies that came before it was able to be useful in computers. 
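Since so much of this story rests on Boolean algebra and the flip-flop, here’s a tiny illustrative sketch - plain Python booleans standing in for voltages - of how two cross-coupled NOR gates store a single bit, in the spirit of the Eccles-Jordan trigger circuit:

```python
def nor(a: bool, b: bool) -> bool:
    """NOR gate: true only when both inputs are false."""
    return not (a or b)

def sr_latch(s: bool, r: bool, q: bool) -> bool:
    """One settle of a cross-coupled NOR latch (set-reset flip-flop).

    s: set input, r: reset input, q: the currently stored bit.
    Returns the new stored bit after the feedback loop settles."""
    for _ in range(4):  # a few passes let the feedback stabilize
        q_bar = nor(s, q)
        q = nor(r, q_bar)
    return q

q = False
q = sr_latch(s=True, r=False, q=q)   # set: stores a 1
assert q is True
q = sr_latch(s=False, r=False, q=q)  # hold: remembers the 1 with no input
assert q is True
q = sr_latch(s=False, r=True, q=q)   # reset: stores a 0
assert q is False
```

The "hold" case is the whole point: with both inputs off, the feedback between the two gates remembers the last bit written - which is why flip-flops could serve as memory.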
This includes the three primary components of the circuit: the transistor, resistor, and capacitor. The silicon that chips are so famous for was actually discovered by Swedish chemist Jöns Jacob Berzelius in 1824. He heated potassium chips in a silica container, washed away the residue, and voilà - an element! The transistor is a semiconducting device with three connections that can amplify a signal. One is the source, which is connected to the negative terminal of a battery. The second is the drain, a positive terminal; when a voltage is applied to the gate (the third connection), the transistor allows electricity through. Transistors thus act as an on/off switch. The fact that they can be on or off is the foundation for Boolean logic in modern computing. The resistor controls the flow of electricity and is used to control levels and terminate lines. An integrated circuit is also built on silicon, but you print the pattern into the circuit using lithography rather than painstakingly putting little wires where they need to go, like radio operators did with the cat’s whisker all those years ago. The idea of the transistor goes back to the mid-30s, when William Shockley took the idea of the cat’s whisker - a fine wire touching a galena crystal. The radio operator moved the wire to different parts of the crystal to pick up different radio signals. Solid state physics was born when Shockley, who first studied at Caltech and then got his PhD in physics, started working on a way to make these usable in everyday electronics. After a decade in the trenches, Bell gave him John Bardeen and Walter Brattain, who successfully finished the invention in 1947. Shockley went on to design a new and better transistor, known as the bipolar transistor, and helped move us from vacuum tubes, which were bulky and needed a lot of power, first to germanium, which they used initially, and then to silicon. 
Shockley got a Nobel Prize in physics for his work and was able to recruit a team of extremely talented young PhDs to help work on new semiconductor devices. He became increasingly frustrated with Bell and took a leave of absence. Shockley moved back to his hometown of Palo Alto, California and started a new company called the Shockley Semiconductor Laboratory. He had some ideas that were way before his time and wasn’t exactly easy to work with. He pushed the chip industry forward, but in the process spawned a mass exodus of employees - he called them the “Traitorous Eight” - who left in 1957 to create what would become Fairchild Semiconductor. The alumni of Shockley Labs ended up spawning 65 companies over the next 20 years that laid the foundation of the microchip industry to this day, including Intel. Ironically, had Shockley been easier to work with, we might never have seen the innovation his abrasiveness set in motion! All of these silicon chip makers clustering in one small area of California is what earned it the Silicon Valley moniker. At this point, people were starting to experiment with computers using transistors instead of vacuum tubes. The University of Manchester created the Transistor Computer in 1953. The first fully transistorized computer came in 1955 with the Harwell CADET, MIT started work on the TX-0 in 1956, and the THOR guidance computer for ICBMs came in 1957. But the IBM 608 was the first commercial all-transistor solid-state computer. The RCA 501, Philco Transac S-1000, and IBM 7070 took us through the age of transistors, which continued to get smaller and more compact. At this point, we were really just replacing tubes with transistors. But the integrated circuit would bring us into the third generation of computers. The integrated circuit is an electronic device that has all of the functional blocks put on the same piece of silicon. 
So the transistor, or multiple transistors, is printed into one block. Jack Kilby of Texas Instruments patented the first miniaturized electronic circuit in 1959, which used germanium and external wires and was really more of a hybrid integrated circuit. Later in 1959, Robert Noyce of Fairchild Semiconductor invented the first truly monolithic integrated circuit, which he received a patent for. Because they worked independently, both are considered creators of the integrated circuit. The third generation of computers ran from 1964 to 1971 and saw the introduction of metal-oxide-semiconductor chips and circuits printed with photolithography. In 1965, Gordon Moore, also of Fairchild at the time, observed that the number of transistors, resistors, diodes, capacitors, and other components that could be shoved into a chip was doubling about every year, and published an article with this observation in Electronics Magazine, forecasting what’s now known as Moore’s Law. The integrated circuit gave us the DEC PDP and later the IBM S/360 series of computers, making computers smaller, and brought us into a world where we could write code in COBOL and FORTRAN. A microprocessor is one type of integrated circuit; they’re also used in audio amplifiers, analog circuits, clocks, interfaces, etc. But in the early 60s, the Minuteman missile program and US Navy contracts were practically the only customers for these chips, at this point numbering in the hundreds of components, bringing us into the world of the MSI, or medium-scale integration, chip. Moore and Noyce left Fairchild and founded NM Electronics in 1968, later renaming the company to Intel, short for Integrated Electronics. Federico Faggin came over in 1970 to lead the MCS-4 family of chips. These, along with other chips that were economical to produce, started to result in chips finding their way into various consumer products. 
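Moore’s observation is easy to play with numerically. This back-of-the-envelope sketch (the function name and the two-year doubling period are my own illustrative assumptions, not from the episode) projects a transistor count forward:

```python
def transistors(start_count: int, start_year: int, year: int,
                doubling_years: float = 2.0) -> int:
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (year - start_year) / doubling_years
    return round(start_count * 2 ** doublings)

# Starting from the 4004's 2,300 transistors in 1971 with a two-year
# doubling period, the projection lands in the right order of magnitude
# for late-80s parts like the million-transistor 486:
print(transistors(2_300, 1971, 1989))  # 1177600, i.e. ~1.2 million
```

The exponential is the whole story: nine doublings turns a few thousand components into a million, which is why the law held such predictive power for decades.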
In fact, the MCS-4 chips, which split RAM, ROM, CPU, and I/O, were designed for the Nippon Calculating Machine Corporation, and Intel bought the rights back, announcing the chip in Electronic News with an article called “Announcing A New Era In Integrated Electronics.” Together, they built the Intel 4004, the first microprocessor that fit on a single chip. They buried the contacts in multiple layers and introduced 2-phase clocks. Silicon oxide was used to layer integrated circuits onto a single chip. Here, the microprocessor, or CPU, splits up the arithmetic and logic unit (ALU), the bus, the clock, the control unit, and the registers, so each can do what it’s good at while living on the same chip. The first generation of the microprocessor began in 1971, when these 4-bit chips were mostly used in guidance systems. This boosted speed by five times. The founding of Intel and the introduction of the 4004 chip can be seen as one of the primary events that propelled us into the evolution of the microprocessor and the fourth generation of computers, which lasted from 1972 to 2010. The Intel 4004 had 2,300 transistors. The Intel 4040 came in 1974, giving us 3,000 transistors. It was still a 4-bit data bus but jumped to 12-bit ROM. The architecture was again from Faggin, but the design was carried out by Tom Innes. We were firmly in the era of LSI, or Large Scale Integration, chips. These chips were also used in the Busicom calculator, and even in the first pinball game controlled by a microprocessor. But getting a true computer, or a modern CPU, to fit on a chip remained an elusive goal. Texas Instruments ran an ad in Electronics with a caption that its chip was a “CPU on a Chip” and attempted to patent the chip, but couldn’t make it work. Faggin went to Intel and they did actually make it work, giving us the 8008, the first 8-bit microprocessor. The chip was fabricated and put on the market in 1972, and it was then redesigned in 1974 as the 8080. 
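To make that division of labor concrete, here’s a toy fetch-decode-execute loop in Python. The mnemonics and register names are invented for illustration - this is not 4004 assembly - but it shows the control unit stepping a program counter while the ALU and registers do their jobs on the same "chip":

```python
def run(program, registers=None):
    """Execute a list of (op, *args) tuples; return the register file."""
    regs = registers or {"A": 0, "B": 0}
    pc = 0  # program counter: the control unit's place-keeper
    while pc < len(program):
        op, *args = program[pc]        # fetch + decode
        if op == "LOAD":               # LOAD reg, value
            regs[args[0]] = args[1]
        elif op == "ADD":              # ALU: A <- A + named register
            regs["A"] = regs["A"] + regs[args[0]]
        elif op == "HALT":
            break
        pc += 1                        # clock tick: advance to next instruction
    return regs

regs = run([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "B"), ("HALT",)])
assert regs["A"] == 5
```

Every real CPU since the 4004 is, at heart, an elaborate version of this loop: fetch an instruction, decode it, route it to the right functional block, advance.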
Intel made the R&D money back in 5 months and sparked the idea for Ed Roberts to build the Altair 8800. Motorola and Zilog brought competition with the 6800 and Z-80, the latter used in the Tandy TRS-80, one of the first mass-produced computers. NMOS transistors on chips allowed for new and faster paths, and MOS Technology soon joined the fray with the 6501 and 6502 chips in 1975. The 6502 ended up being the chip used in the Apple I, Apple II, NES, Atari 2600, BBC Micro, Commodore PET, and Commodore VIC-20. The MOS 6510 variant was then used in the Commodore 64. The 8086 was released in 1978 with some 29,000 transistors and marked the transition to Intel’s x86 line of chips, setting what would become the standard in future chips. But Intel wasn’t the only place you could find chips. The Motorola 68000 was used in the Sun-1 from Sun Microsystems, the HP 9000, the DEC VAXstation, the Commodore Amiga, the Apple Lisa, the Sinclair QL, the Sega Genesis, and the Mac. The chips were also used in the first HP LaserJet and the Apple LaserWriter, and in a number of embedded systems for years to come. As we rounded the corner into the 80s, it was clear that the computer revolution was upon us. A number of computer companies were looking to do more than they could with the existing Intel, MOS, and Motorola chips. And ARPA was pushing the boundaries yet again. Carver Mead of Caltech and Lynn Conway of Xerox PARC saw the density of transistors in chips starting to plateau. So with DARPA funding they went out looking for ways to push the world into the VLSI era, or Very Large Scale Integration. The VLSI project resulted in the concept of fabless design houses such as Broadcom, in 32-bit graphics, BSD Unix, and RISC processors - Reduced Instruction Set Computer processors. Out of the RISC work done at UC Berkeley came a number of new options for chips as well. 
One of these designers, Acorn Computers, evaluated a number of chips and decided to develop their own, using VLSI Technology (a company founded by more Fairchild Semiconductor alumni) to manufacture the chip in their foundry. Sophie Wilson (then Roger Wilson) worked on an instruction set for the RISC design. Out of this came the Acorn RISC Machine, or ARM chip. Over 100 billion ARM processors have been produced - well over 10 for every human on the planet. You know that fancy new A13 that Apple announced? It uses a licensed ARM core. Another chip that came out of the RISC family was the Sun SPARC. Sun being short for Stanford University Network, and co-founder Andy Bechtolsheim having been close to the action, they released the SPARC in 1986. I still have a SPARC 20 I use for this and that at home. Not that SPARC has gone anywhere; they’re just made by Oracle now. The Intel 80386 was a 32-bit microprocessor released in 1985. The first chip had 275,000 transistors, taking plenty of pages from the lessons learned in the VLSI projects. Compaq built a machine on it, but really the IBM PC/AT made it an accepted standard, although this was the beginning of the end of IBM’s hold on the burgeoning computer industry. And AMD, yet another company founded by Fairchild defectors, created the Am386 in 1991, ending Intel’s nearly 5-year monopoly on the PC clone industry - and ending an era where AMD was a second source of Intel parts, competing with Intel directly instead. We can thank AMD’s aggressive competition with Intel for helping to keep the CPU industry tracking along Moore’s Law! At this point transistors were only 1.5 microns in size - much, much smaller than a cat’s whisker. The Intel 80486 came in 1989 and, again tracking against Moore’s Law, gave us the first 1-million-transistor chip. Remember how Compaq helped end IBM’s hold on the PC market? When the Intel 486 came along, they went with AMD. 
This chip was also important because we got L1 caches, meaning that chips didn’t need to send instructions to other parts of the motherboard but could do caching internally. From then on, the L1 and later L2 caches would be listed on all chips. We’d finally broken 100MHz! Motorola released the 68040 in 1990, hitting 1.2 million transistors, and giving Apple the chip that would define the Quadra - along with that L1 cache. The DEC Alpha came along in 1992, also a RISC chip, and really kicked off the 64-bit era. While the most technically advanced chip of the day, it never took off, and after DEC was acquired by Compaq and Compaq by HP, the IP for the Alpha was sold to Intel in 2001, with the PC industry having decided Intel could have all their money. But back to the 90s, ’cause life was better back when grunge was new. At this point, hobbyists knew what the CPU was, but most normal people didn’t. The concept that there was a whole Univac on one of these chips never occurred to most people. But then came the Pentium. Turns out that giving a chip a name and some marketing dollars not only made Intel a household name but solidified their hold on the chip market for decades to come. The Intel Inside campaign started in 1991, and after the Pentium was released in 1993, the case of most computers would carry a sticker that said Intel Inside. Intel really one-upped everyone. The first Pentium - the P5, or 586, or 80501 - had 3.1 million transistors on a 0.8-micron process. Computers kept getting smaller and cheaper and faster. Apple answered by moving to the PowerPC chip from IBM, which owed much of its design to RISC. Exactly 10 years after the famous 1984 Super Bowl commercial, Apple was using a CPU from IBM. Another advance came in 2001, when IBM developed the POWER4 chip and gave the world multi-core processors, or a CPU with multiple CPU cores inside it. 
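The payoff of an L1 cache is easy to sketch. This toy simulation - a Python dict with an LRU eviction policy, where real L1 caches are set-associative hardware and the sizes here are invented - shows recently used data being served without a trip out to "main memory":

```python
from collections import OrderedDict

class TinyCache:
    """A toy LRU cache standing in for an on-chip L1 cache."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()   # address -> cached value, in LRU order
        self.hits = self.misses = 0

    def read(self, address, memory):
        if address in self.lines:            # hit: served on-chip, fast
            self.hits += 1
            self.lines.move_to_end(address)  # mark as most recently used
        else:                                # miss: slow fetch from memory
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict least recently used
            self.lines[address] = memory[address]
        return self.lines[address]

memory = {addr: addr * 10 for addr in range(100)}
cache = TinyCache(capacity=2)
for addr in [1, 2, 1, 1, 3, 1]:
    cache.read(addr, memory)
assert (cache.hits, cache.misses) == (3, 3)
```

Half the reads in that access pattern never leave the "chip" - and real programs reuse data far more than this, which is why on-die caching was such a leap.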
Once parallel processing caught up to being able to have processes that consumed the resources on all those cores, we saw Intel’s Pentium D and AMD’s Athlon 64 X2, released in May 2005, bringing multi-core architecture to the consumer. This led to even more parallel processing, and an explosion in the number of cores helped us continue on with Moore’s Law. There are now custom chips that reach into the thousands of cores, although most laptops have maybe 4 cores in them. Setting multi-core architectures aside for a moment, back to Y2K, when Justin Timberlake was still a part of NSYNC. Then came the Pentium Pro, Pentium II, Celeron, Pentium III, Xeon, Pentium M, Xeon LV, and Pentium 4. On the IBM/Apple side, we got the G3 with 6.3 million transistors, the G4 with 10.5 million transistors, and the G5 with 58 million transistors and 1,131 feet of copper interconnects, running at 3GHz in 2002 - so much copper that NSYNC broke up that year. The Pentium 4 that year ran at 2.4 GHz and sported 50 million transistors. This is about 1 transistor per dollar made off Star Trek: Nemesis in 2002. I guess Attack of the Clones was better, because it grossed over 300 million that year. Remember how we broke the million-transistor mark in 1989? In 2005, Intel started testing Montecito with certain customers: an Itanium 2 64-bit CPU with 1.72 billion transistors, shattering the billion mark and hitting it two years earlier than projected. Apple CEO Steve Jobs announced Apple would be moving to Intel processors that year. NeXTSTEP had been happy as a clam on Intel, SPARC, or HP’s PA-RISC, so given the rapid advancements from Intel, this seemed like a safe bet - and it allowed Apple to tell directors in IT departments, “see, we play nice now.” And the innovations kept flowing for the next decade and a half. 
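The consumer multi-core shift only pays off when the work is actually split across cores. A minimal sketch using Python’s standard multiprocessing pool - the squaring workload is just a stand-in for a genuinely CPU-hungry task:

```python
from multiprocessing import Pool, cpu_count

def slow_square(n: int) -> int:
    """Stand-in for a CPU-bound unit of work."""
    return n * n

if __name__ == "__main__":
    # One worker process per core; map() fans the inputs out in parallel,
    # just as the Pentium D / Athlon 64 X2 era let desktop software do.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(slow_square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The catch the episode hints at: software had to "catch up" because a serial program sees no benefit from extra cores - only workloads that can be decomposed like this do.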
We packed more transistors in, more cache, cleaner clean rooms, faster bus speeds - with Intel owning the computer CPU market and ARM slowly growing from the ashes of Acorn Computers into the powerhouse that ARM cores are today, embedded in countless other chip designs. I’d say not much interesting has happened, but it’s ALL interesting - the numbers just sound stupid they’re so big. And we had more advances along the way of course, but it started to feel like we were just miniaturizing more and more, allowing us to do much more advanced computing in general. The fifth generation of computing is all about technologies that we today consider advanced: artificial intelligence, parallel computing, very high-level computer languages, and the migration away from desktops to laptops and even smaller devices like smartphones. ULSI, or Ultra Large Scale Integration, chips not only tell us that chip designers really have no creativity outside of chip architecture, but also mean millions up to tens of billions of transistors on silicon. At the time of this recording, the AMD Epyc Rome is the single chip package with the most transistors, at 32 billion. Silicon is the seventh most abundant element in the universe and the second most abundant in the crust of planet Earth. Given that there are far more chips than people, we’re lucky we don’t have to worry about running out any time soon! We skipped RAM in this episode. But it kinda’ deserves its own, since RAM is still following Moore’s Law while the CPU is kinda’ lagging again. Maybe it’s time for our friends at DARPA to get the kids from Berkeley working on Very Ultra Large Scale chips, or VULSIs! Or they could sign on to sponsor this podcast! And now I’m going to go take a Very Ultra Large Scale nap. Gentle listeners, I hope you can do that as well. Unless you’re driving while listening to this. Don’t nap while driving. But do have a lovely day. 
Thank you for listening to yet another episode of the History of Computing Podcast. We’re so lucky to have you!
9/13/2019 • 31 minutes, 14 seconds
Agile Software Development
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is on Agile software development. Agile software development is a methodology, or anti-methodology, or approach to software development that evolves the requirements a team needs to fulfill and the solutions they need to build in a collaborative, self-organized, and cross-functional way. Boy, that’s a lot to spit out there. I was in an elevator the other day and I heard someone say: “That’s not very agile.” And at that moment, I knew that I just couldn’t help but do an episode on agile. I’ve worked in a lot of teams that use a lot of variants of agile: scrum, Kanban, scrumban, Extreme Programming, Lean Software Development. Some of these are almost polar opposites, and you still hear people talk about what is agile; if they want to make fun of people doing things an old way, they’ll say something like “waterfall.” Nothing ever really was waterfall, given that you learn on the fly, find re-usable bits, or hit a place where you just say that’s not possible. But that’s another story. The point here is that agile is, well, weaponized to back up what a person wants someone to do, or how they want a team to be run. And it isn’t always done from an informed point of view. Why is Agile an anti-methodology? Think of it more like a classification, maybe. There were a number of methodologies like Extreme Programming, Scrum, Kanban, Feature Driven Development, Adaptive Software Development, RAD, and Lean Software Development. These had come out to bring shape around a very similar idea. But over the course of 10-20 years, each had been developed in isolation. In college, I had a computer science professor who talked about “adaptive software development” from his days at a large power company in Georgia back in the 70s. 
Basically, you are always adapting what you’re doing based on speculation about how long something will take, collaboration on that observation, and what you learn while actually building. This shaped how I view software development for years to come. He was already making fun of waterfall methodologies, or a cycle where you write a large set of requirements and stick to them. Waterfall worked well if you were building a computer to land people on the moon. Adaptive development was a way of saying “we’re not engineers, we’re software developers.” Later in college, with the rapid proliferation of the Internet and computers into dorm rooms, I watched the emergence of rapid application development, where you let the interface requirements determine how you build. But once someone weaponized that by putting a label on it, or worse, forking the label into spiral and unified models, then they became much less useful and the next hot thing had to come along. Kent Beck built a methodology called Extreme Programming - or XP for short - in 1996, and that was the next hotness. Here, we release software in shorter development cycles, and software developers, like police officers on patrol, work in pairs, reviewing and testing code and not writing each feature until it’s required. The idea of unit testing and rapid releasing really came out of the fact that the explosion of the Internet in the 90s meant people had to ship fast, and this was also during the rise of really mainstream object-oriented programming languages. The nice thing about XP was that you could show a nice graph where you planned, managed, designed, coded, and tested your software. The rules of Extreme Programming included things like “Code the unit test first” and “A stand up meeting starts each day.” Extreme Programming is one of these methodologies. Scrum is probably the one most commonly used today. But the rest, as well as the Crystal family of methodologies, are now classified as Agile software development methodologies. 
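“Code the unit test first” is concrete enough to demo. A minimal sketch with Python’s built-in unittest - the word_count function and its tests are invented for illustration, with the tests written (and failing) before the function existed:

```python
import unittest

def word_count(text: str) -> int:
    # The simplest thing that could possibly work - written only after
    # the tests below existed and failed.
    return len(text.split())

class TestWordCount(unittest.TestCase):
    """In XP, these test cases come first and define 'done'."""

    def test_counts_words(self):
        self.assertEqual(word_count("ship it early"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main(argv=["xp-demo"], exit=False)
```

The discipline is the point, not the framework: the test captures the requirement, and the implementation stops the moment the test passes - which is how XP kept developers from gold-plating features nobody had asked for yet.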
So it’s like a parent. Is agile really just a classification then? No. So where did agile come from? By 2001, Kent Beck, who developed Extreme Programming, met with Ward Cunningham (who built WikiWikiWeb, the first wiki), Dave Thomas (a programmer who has since written 11 books), and Jeff Sutherland and Ken Schwaber, who designed Scrum. Jim Highsmith, who developed that Adaptive Software Development methodology, and many others were at the time involved in trying to align on an organizational methodology that allowed software developers to stop acting like people that built bridges or large buildings. Most had day jobs, but they were like-minded and decided to meet at a quaint resort in Snowbird, Utah. They might have all wanted to push the methodologies that each of them had developed. But if they had all been jerks, then they might not have had a shift in how software would be written for the next 20+ years. They decided to start with something simple: a statement of values. Instead of bickering and being dug into specific details, they were all able to agree that software development should not be managed in the same fashion as engineering projects are run. So they gave us the Manifesto for Agile Software Development… The Manifesto reads: We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: * Individuals and interactions over processes and tools * Working software over comprehensive documentation * Customer collaboration over contract negotiation * Responding to change over following a plan That is, while there is value in the items on the right, we value the items on the left more. But additionally, the principles dig into and expand upon some of that adjacently. The principles behind the Agile Manifesto: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software. Welcome changing requirements, even late in development. 
Agile processes harness change for the customer's competitive advantage. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale. Business people and developers must work together daily throughout the project. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. Working software is the primary measure of progress. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely. Continuous attention to technical excellence and good design enhances agility. Simplicity--the art of maximizing the amount of work not done--is essential. The best architectures, requirements, and designs emerge from self-organizing teams. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. Many of the words here are easily weaponized. For example, “satisfy the customer.” Who’s the customer? The product manager? The end user? The person in an enterprise who actually buys the software? The person in that IT department that made the decision to buy the software? In the scrum methodology, the customer is not known; the product owner is their representative. The principles don’t need to identify which - they just use the word, and each methodology makes sure to cover it. Now take “continuous delivery.” People frequently just lump CI in there with CD. I’ve heard continuous design, continuous improvement, continuous deployment, continuous podcasting. Wait, I made the last one up. We could spend hours going through each of these and identifying where they aren’t specific enough. 
Or, again, we could revel in their lack of specificity by pointing us in the direction of a methodology where these words get much more specific meanings. Ironically, I know accounting teams at very large companies that have scrum masters, engineering teams for big projects with a project manager and a scrum master, and even a team of judges that use agile methodologies. There are now scrum masters embedded in most software teams of note. But once you see Agile on the cover of The Harvard Business Review - and you hate to say this, given all the classes in agile/XP/scrum - you have to start wondering what’s next. For 20 years, we’ve been saying “stop treating us like engineers” or “that’s waterfall.” Every methodology seems to grow. Right after I finished my PMP, I was on a project with someone else who had just finished theirs. I think they tried to implement the entire Project Management Body of Knowledge. If you try to have every ceremony from Scrum, you’re not likely to even have half a day left over to write any code. But you also don’t want to be like the person on the elevator, weaponizing only small parts of a larger body of work just to get your way. And more importantly, we need to admit that none of us have all the right answers and be ready to, as they say in Extreme Programming, “fix XP when it breaks” - which is similar to Boyd’s Destruction and Creation, or the sustenance and destruction in Lean Six Sigma. Many of us forget that last part: be willing to walk away from the dogma and start over. Thomas Jefferson called for a revolution every 20 years. We have two years to come up with a replacement! And until you replace me, thank you so very much for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
9/10/2019 • 11 minutes, 26 seconds
The Advent Of The Cloud
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at the emergence of the cloud. As with everything evil, the origin of the cloud began with McCarthyism. From 1950 to 1954 Joe McCarthy waged a war against communism. Wait, wrong McCarthyism. Crap. After Joe McCarthy was condemned and run out of Washington, **John** McCarthy made the world a better place in 1955 with a somewhat communistic approach to computing. The 1950s were the peak of the military industrial complex. The SAGE air defense system needed to process data coming in from radars and perform actions based on that data. This is when McCarthy stepped in. John, not Joe. He proposed things like allocating memory automatically between programs - quote, “Programming techniques can be encouraged which make destruction of other programs unlikely” - and modifying FORTRAN to trap programs into specified areas of the storage. While a person was loading cards or debugging code, the computer could be doing other things. To use his words: “The only way quick response can be provided at a bearable cost is by time-sharing. That is, the computer must attend to other customers while one customer is reacting to some output.” He posited that this could cut a three-hour to day-and-a-half turnaround down to seconds. Remember, back then these things were huge and expensive, so people worked shifts and ran them continuously. McCarthy had been at MIT, and Professor Fernando Corbato from there actually built it between 1961 and 1963. But at about the same time, Professor Jack Dennis from MIT started doing about the same thing with a PDP-1 from DEC - he’s actually probably one of the most influential people that many of the folks I talk to have never heard of. He called this APEX and hooked up multiple terminals on the TX-2. Remember John McCarthy? 
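McCarthy’s time-sharing argument is easy to sketch: give each customer a slice of machine time, move on, and short interactive jobs stop waiting behind long batch runs. A toy round-robin loop - the job names and slice sizes are invented for illustration:

```python
from collections import deque

def time_share(jobs: dict, slice_size: int = 2):
    """jobs maps a job name to units of work left; returns completion order."""
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= slice_size              # run this customer for one slice
        if remaining > 0:
            queue.append((name, remaining))  # not done: back of the line
        else:
            finished.append(name)
    return finished

# A short debugging session finishes in "seconds" instead of waiting a
# day and a half behind the long batch run:
order = time_share({"batch": 6, "debug": 2, "cards": 4})
assert order == ["debug", "cards", "batch"]
```

Note the trade-off McCarthy was selling: the long batch job finishes a bit later, but every interactive customer gets a quick response, and the expensive machine never sits idle.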
He and some students actually did the same thing in 1962 after he moved on to become a professor at Stanford. 1965 saw Alan Kotok sell a similar solution for the PDP-6, and then as the 60s rolled on and people in the Bay Area got really creative and free-lovey, Corbato, Jack Dennis of MIT, a team from GE, and another from Bell Labs started to work on Multics - Multiplexed Information and Computing Service for short - for the GE-645 mainframe. Bell Labs pulled out, and Multics was finished by MIT and GE, who then sold their computer business to Honeywell so they wouldn’t be out there competing with some of their customers. Honeywell sold Multics until 1985, and it included symmetric multiprocessing, paging, a supervisor program, command programs, and a lot of the things we now take for granted in Linux, Unix, and macOS command lines. But we’re not done with the 60s yet. ARPAnet gave us a standardized communications platform, and distributed computing started in the 60s and then became a branch of computer science in the late 1970s. This is really a software system that has components stored on different networked computers. Oh, and Telnet came at the tail end of 1969 in RFC 15, allowing us to remotely connect to those teletypes. People wanted time sharing systems. Which led to Project Genie at Berkeley, TOPS-10 for the PDP-10, and IBM’s failed TSS/360 for the System/360. To close out the 60s, Ken Thompson, Dennis Ritchie, Doug McIlroy, Mike Lesk, Joe Ossanna, and of course Brian Kernighan at Bell Labs hid a project to throw out the fluff from Multics and build a simpler system. This became Unix. Unix was originally developed in assembly, but Ritchie would write C in 72 and the team would eventually refactor Unix in C. Pretty sure management wasn’t at all pissed when they found out. Pretty sure the Uniplexed Information and Computing Service - or eunuchs for short - wasn’t punny enough for the Multics team to notice. BSD would come shortly thereafter. 
Over the coming years you could create multiple users and design permissions in a way that users couldn’t step on each other’s toes (or, more specifically, delete each other’s files). IBM did something interesting in 1972 as well: they invented the virtual machine, which allowed them to run an operating system inside an operating system. At this point, time-sharing options were becoming commonplace on mainframes. Enter Moore’s Law. Computers got cheaper and smaller. The Altair and hobbyists became a thing. Bill Joy put together the first BSD release in 1977 and would later bring BSD to Sun workstations. Computers kept getting smaller. CP/M showed up on early microcomputers around the same time and hung on until about 1983. Apple arrived on the scene. Microsoft DOS appeared in 1981. And in 1983, with all this software you have to pay for really starting to harsh his calm, Richard Stallman famously set out to make software free. Maybe this was in response to Gates’ 1976 Open Letter to Hobbyists asking PC hobbyists to actually pay for software. Maybe they forgot they wrote most of Microsoft BASIC on DARPA gear. Given that computers were so cheap for a bit, we forgot about multi-user operating systems for awhile. By 1991, Linus Torvalds, who also believed in free software - a movement later rebranded as open source - developed a Unix-like operating system he called Linux. Computers continued to get cheaper and smaller, and now you could have them on multiple desks in an office. Companies like Novell brought us utility computers we now refer to as servers: you had one computer to just host all the files so users could edit them. CERN gave us the first web server in 1990. The University of Minnesota gave us Gopher in 1991. NTP version 3 came in 1992. The 90s also saw the rise of virtual private networks and client-server networks. 
You might load a Delphi-based app on every computer in your office and connect that fat client with a shared database on a server - for example, a shared system to enter accounting information into, or to access customer information for sales activities and reporting. Napster mainstreamed distributed file sharing. Those same techniques were being used in clusters of servers that were all controlled by a central IT administration team. Remember those virtual machines IBM gave us? You could now cluster and virtualize workloads and have applications served from a large number of distributed computing systems. But as workloads grew, the fault tolerance and performance necessary to support them became more and more expensive. By the mid-2000s it was becoming more acceptable to move to a web-client architecture, which meant large companies wouldn’t have to bundle up software and automate the delivery of that software; they could instead use an intranet to direct users to a series of web pages that allowed them to perform business tasks. Salesforce was started in 1999. They are the poster child for software as a service, and founder/CEO Marc Benioff coined the term platform as a service, allowing customers to build their own applications using the Salesforce development environment. But it wasn’t until we started breaking web applications up and developed methods to authenticate and authorize parts of them to one another - using technologies like SAML (introduced in 2002) and OAuth (2006) - that we were able to move into a more microservice-oriented paradigm for programming. Amazon and Google had been experiencing massive growth, and in 2006 Amazon created Amazon Web Services and offered virtual machines on demand to customers through a service called Elastic Compute Cloud. Google launched G Suite in 2006, providing cloud-based mail, calendar, contacts, documents, and spreadsheets. 
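To get a feel for why standards like OAuth mattered for microservices, here’s a minimal Python sketch of the bearer-token idea: one side mints a signed token with a scope and expiry, the other side validates it before doing any work. This is an illustration only - the service name, scopes, and shared secret are invented, and real OAuth 2.0 involves an authorization server, grant flows, and standardized token formats like JWTs rather than hand-rolled HMAC tokens:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-demo-secret"  # hypothetical key shared by the two services

def issue_token(service, scopes, ttl=300):
    """'Auth server' side: mint a signed bearer token carrying claims."""
    claims = {"sub": service, "scope": scopes, "exp": time.time() + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def validate_token(token, required_scope):
    """'Resource service' side: check signature, expiry, and scope."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or signed with a different key
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and required_scope in claims["scope"]

token = issue_token("billing-service", ["accounts:read"])
print(validate_token(token, "accounts:read"))   # True
print(validate_token(token, "accounts:write"))  # False
```

The key property is that the validating service never talks to the issuing service per request - it just checks the math - which is what lets dozens of small services trust each other without sharing a user database.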
Google then launched a cloud offering to help developers with apps in 2008: Google App Engine. In both cases, the companies had invested heavily in developing infrastructure to support their own workloads, and renting some of that out to customers just… made sense. Microsoft, seeing the emergence of Google as not just a search engine but a formidable opponent on multiple fronts, joined the party in 2008 as well, getting into Infrastructure as a Service by offering virtual machines for pennies per minute of compute time. Google, Microsoft, and Amazon still account for a large percentage of cloud services offered to software developers. Over the past 10 years the technologies have evolved, mostly just by incrementing a number, like OAuth 2.0 or HTML 5. Web applications have been broken up into smaller and smaller parts, thanks to the mythical programmer month, meaning you need smaller teams who have contracts with other teams that their service, or microservice, can perform specific tasks. Amazon, Google, and Microsoft see these services and build more workload-specific services, like database as a service, putting a REST front-end on a database, or data lakes as a service. Standards like OAuth even allow vendors to provide identity as a service, linking up all the things. The cloud, as we’ve come to call hosting services, has been maturing for 55 years, from shared compute time on mainframes to shared file storage space on a server to very small shared services like payment processing using Stripe. Consumers love paying a small monthly fee for access to an online portal or app rather than having to deploy large amounts of capital to bring in an old-school JDS Uniphase-style tool to automate tasks in a company. Software developers love importing an SDK or calling a service to get a process for free, allowing them to go to market much faster and look like magicians in the process. 
And we don’t have teams at startups running around with fire extinguishers to keep gear humming along. This reduces the barrier to building new software and apps and democratizes software development. App stores and search engines then make it easier than ever to put those web apps and apps in front of people to make money. In 1959, John McCarthy had said, “The cooperation of IBM is very important but it should be to their advantage to develop this new way of using a computer.” Like many new philosophies, it takes time to set in and evolve. And it takes a combination of advances to make something so truly disruptive possible. The time-sharing philosophy gave us Unix and Linux, which today are the operating systems running on a lot of these cloud servers. But we don’t know or care about those, because the web provides a layer on top of them that obfuscates the workload, much as the operating system obfuscated the workloads of the components of the system. Today those clouds obfuscate various layers of the stack so you can enter at any part of the stack you want, whether it’s a virtual computer, a service, or just consuming a web app. And this has led to an explosion of diverse and innovative ideas. Apple famously said “there’s an app for that,” but without the cloud there certainly wouldn’t be. And without you, my dear listeners, there wouldn’t be a podcast. So thank you so very much for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
9/5/2019 • 14 minutes, 55 seconds
Wikipedia
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is on the history of Wikipedia. The very idea of a single location that could store all the known information in the world began with Ptolemy I, founder of the Greek dynasty that ruled Egypt following the death of Alexander the Great. He and his son amassed hundreds of thousands of scrolls in the Library of Alexandria from 331 BC onward. The Library was part of a great campus, the Musaeum, where they also supported great minds, starting with Ptolemy I’s patronage of Euclid, the father of geometry, and later including Archimedes, the father of engineering, Hipparchus, the founder of trigonometry, Hero, the father of mathematics, and Herophilus, who gave us the scientific method, along with countless other great Hellenistic thinkers. The Library entered a slow decline that began with the expulsion of intellectuals from Alexandria in 145 BC. Ptolemy VIII was responsible for that. Always be wary of people who attack those they can’t win over, especially when they start blaming the intellectual elite for the problems of the world. The decline continued until the Library burned, first in a small fire accidentally set by Caesar in 48 BC and then for good in the 270s AD. In the centuries since, there have been attempts here and there to gather great amounts of information. The first known encyclopedia was the Naturalis Historia by Pliny the Elder, never completed because he was killed in the eruption of Vesuvius. One of the better known is the Encyclopedia Britannica, starting off in 1768. Mass production of these was aided by the printing press, but given that there’s a cost to producing those materials, and a margin to be made in their sale, publishers were encouraged toward a somewhat succinct exploration of certain topics. 
The advent of the computer era of course led to encyclopedias on CD, and then to online encyclopedias. Encyclopedias at the time employed experts in certain fields and paid them for compiling and editing articles for volumes that would then be sold. As we say these days, this was a business model just waiting to be disrupted. Jimmy Wales was moderating an online discussion board on Objectivism and happened across Larry Sanger in the early 90s. They debated and became friends. Wales started Nupedia, which was supposed to be a free encyclopedia funded by advertising revenue. As it was to be free, they would recruit thousands of volunteer editors - people of the caliber that had previously been hired to research and write articles for encyclopedias. Sanger, who was pursuing a PhD in philosophy from Ohio State University, was hired on as editor-in-chief. This was a twist on the old model of compiling an encyclopedia, and a twist that didn’t work out as intended. Volunteers were slow to sign up, but Nupedia went online in 2000. By later that year, only two articles had made it through the review process. When Sanger told Ben Kovitz about this, Kovitz recommended looking at the emerging wiki culture. This had started with WikiWikiWeb, developed by Ward Cunningham in 1994 and named after a shuttle bus that ran between airport terminals at the Honolulu airport. WikiWikiWeb had been inspired by HyperCard but needed to be multi-user so people could collaborate on web pages, quickly producing content on new patterns in programming. Cunningham wanted to make non-writers feel OK about writing. Sanger proposed using a wiki to accept submissions for articles and edits from anyone, while still keeping a complicated review process to accept changes. The reviewers weren’t into that, so in 2001 they started a side project they called Wikipedia, with a user-generated model for content, or article, generation. 
The plan was to generate articles on Wikipedia and then move or copy them into Nupedia once they were ready. But Wikipedia got mentioned on Slashdot. In 2001 there were nearly 30 million websites and half a billion people using the web. Back then a mention on the influential Slashdot could make a site. And it certainly helped. They grew, and more and more people started to contribute. They hit 1,000 articles in March of 2001, a number that increased tenfold by September and another fourfold the next year. It started working independently of Nupedia. The dot-com bubble had burst in 2000, and by 2002 Nupedia had to lay Sanger off; he left both projects. Nupedia slowly died and was finally shut down in 2003. Eventually the Wikimedia Foundation was built to help unlock the world’s knowledge, and it now owns and operates Wikipedia. Wikimedia also includes Commons for media; Wikibooks, with free textbooks and manuals; Wikiquote for quotations; Wikiversity for free learning materials; MediaWiki, the source code for the site; Wikidata, for pulling large amounts of data from Wikimedia properties using APIs; Wikisource, a library of free content; Wikivoyage, a free travel guide; Wikinews, free news; and Wikispecies, a directory containing over 687,000 species. Many of the properties have very specific ways of organizing data, making it easier to work with en masse. The properties have grown because people like to be helpful and Wales allowed self-governance of articles. To this day he rarely gets involved in the day-to-day affairs of the Wikipedia site, other than the occasional puppy-dog looks in banners asking for donations. You should donate. He does have 8 principles the site is run by: 1. Wikipedia’s success to date is entirely a function of our open community. 2. Newcomers are always to be welcomed. 3. “You can edit this page right now” is a core guiding check on everything that we do. 4. Any changes to the software must be gradual and reversible. 5. 
The open and viral nature of the GNU Free Documentation License and the Creative Commons Attribution/Share-Alike License is fundamental to the long-term success of the site. 6. Wikipedia is an encyclopedia. 7. Anyone with a complaint should be treated with the utmost respect and dignity. 8. Diplomacy consists of combining honesty and politeness. This culminates in 5 pillars Wikipedia is built on: 1. Wikipedia is an encyclopedia. 2. Wikipedia is written from a neutral point of view. 3. Wikipedia is free content that anyone can use, edit, and distribute. 4. Wikipedia’s editors should treat each other with respect and civility. 5. Wikipedia has no firm rules. Sanger went on to found Citizendium, which uses real names instead of handles, thinking maybe people will contribute better content if their name is attached to it. The web is global. Throughout history there have been encyclopedias produced around the world, with the Four Great Books of Song coming out of 11th-century China and the Encyclopedia of the Brethren of Purity coming out of 10th-century Persia. When Wikipedia launched, it was in English. Wikipedia launched a German version using the deutsche.wikipedia.com subdomain. It now lives at de.wikipedia.org, and Wikipedia has gone from being 90% English to being almost 90% non-English, meaning that Wikipedia is able to pull in even more of the world’s knowledge. Wikipedia picked up nearly 20,000 English articles in 2001 and over 75,000 new articles in 2002, and that number steadily climbed, reaching over 3,000,000 by 2010; we’re closing in on 6 million today. The English version is 10 terabytes of data uncompressed. If you wanted to buy a printed copy of Wikipedia today, it would run over 2,500 books. By 2009 Microsoft Encarta had shut down. By 2010 Encyclopedia Britannica stopped printing their massive set of books and went online. You can still buy encyclopedias from specialty makers, such as the World Book. 
Ironically, Encyclopedia Britannica does now put the real names of people on articles they produce on their website, in an ad-driven model. There are a lot of ads. And the content isn’t linked to from as many places, nor is it as thorough. Creating a single location that could store all the known information in the world seems like a pretty daunting task. Compiling the non-copyrighted works of the world is now the mission of Wikipedia. The site receives the fifth most views of any on the web and is read by nearly half a billion people a month, racking up over 15 billion page views. Anyone who has gone down the rabbit hole of learning about Ptolemy I’s involvement in developing the Library of Alexandria, and then read up on his children and how his dynasty lasted until Cleopatra and how… well, you get the point… can understand how they get so much traffic. Today there are over 48,000,000 articles and over 37,000,000 registered users who have contributed, meaning if we set 160 Great Libraries of Alexandria side-by-side we would have about the same amount of information Wikipedia has amassed. And it’s done so because of the contributions of so many dedicated people. People who spend hours researching and building pages, taking care to provide references to cite the data in the articles (by the way, Wikipedia is not supposed to represent original research); more people who patrol for content contributed by someone on a soapbox or with an agenda, rather than just reporting the facts; and another team looking for articles that need more information. And they do these things for free. While you can occasionally see frustrations from contributors, it is truly one of the best things humanity has done. 
This allows us to rediscover our own history, effectively compiling all the facts that make up the world we live in, often linked to the opinions that shape them in the reference materials, which include the over 200 million works housed at the US Library of Congress and the over 25 million books scanned into Google Books (out of about 130 million). As with the Great Library of Alexandria, we do have to keep away those who would throw out the intellectuals of the world, and keep the great works being compiled from falling to waste due to inactivity. Wikipedia keeps a history of pages, to avoid revisionist history. The servers need to be maintained, but the database can be downloaded, and is routinely downloaded, by plenty of people. I think the idea of providing an encyclopedia for free, sponsored by ads, was sound. Pivoting the business model to make it open was revolutionary. With the availability of the data for machine learning and the ability to enrich it with other sources like genealogical research, actual books, maps, scientific data, and anything else you can manage, I suspect we’ll see contributions we haven’t even begun to think about! And thanks to all of this, we now have a real compendium of the world’s knowledge, getting more accurate and holistic by the day. Thank you to everyone involved, from Jimbo and Larry, to the moderators, to the staff, and of course to the millions of people who contribute pages about all the history that makes up the world as we know it today. And thanks to you for listening to yet another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day! Note: This work was produced in large part due to the compilation of historical facts available at https://en.wikipedia.org/wiki/History_of_Wikipedia
9/2/2019 • 14 minutes, 49 seconds
FORTRAN
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re better prepared for the innovations of the future! Today’s episode is on one of the oldest of the programming languages, FORTRAN - which has influenced most modern languages. We’ll start this story with John Backus. This guy was smart. He went to med school and was diagnosed with a brain tumor. He didn’t like the plate that was left behind in his head, so he designed a new one. He then moved to New York and started to work on radios while attending Columbia for first a bachelor’s degree and then a master’s degree in math. That’s when he ended up arriving at IBM. He walked in one day definitely not wearing the standard IBM suit - and when he said he was a grad student in math, they took him upstairs, played a little stump-the-chump, and hired him on the spot. He had no idea what a programmer was. By 1954 he was a trusted enough resource that he was allowed to start working on a new team, to define a language that could provide a better alternative to writing code in icky assembly language. This was meant to boost sales of the IBM 704 mainframe by making it easier to hire and train new software programmers. That language became FORTRAN, an acronym for Formula Translation. The team was comprised of 10 geniuses. Lois Haibt, probably one of the youngest on the team, said of this phase: “No one was worried about seeming stupid or possessive of his or her code. We were all just learning together.” She built the arithmetic expression analyzer and helped with the first FORTRAN manual, which was released in 1956. Roy Nutt was also on that team. He wrote an assembler for the IBM 704 and was responsible for the FORMAT statement, which managed data as it came in and out of FORTRAN programs. He went on to co-found Computer Sciences Corporation, or CSC, with Fletcher Jones in 1959, landing a huge contract with Honeywell. 
CSC grew quickly and went public in the 60s. They continued to prosper until 2017, when they merged with HP Enterprise Services, which had just merged with Silicon Graphics. Today they have a pending merger with Cray. David Sayre was also on that team. He discovered the Sayre crystallography equation, and later moved on to pioneer electron beam lithography and push the envelope of X-ray microscopy. Harlan Herrick on the team invented the DO and GO TO commands and ran the first working FORTRAN program. Cuthbert Hurd was recruited from the Atomic Energy Commission and championed the concept of a general-purpose computer. Frances Allen was a math teacher who joined up with the group to help pay off college debts. She would go on to teach Fortran and in 1989 became the first female IBM Fellow. Robert Nelson was a cryptographer who handled a lot of the technical typing and designed some of the more sophisticated sections of the compiler. Irving Ziller designed the methods for loops and arrays. Peter Sheridan, aside from having a fantastic mustache, invented much of the compiler code used for decades after. Sheldon Best optimized the use of index registers, along with Richard Goldberg. As Backus would note in his seminal paper, The History Of FORTRAN I, II, and III, the release of FORTRAN in 1957 changed the economics of programming. While still scientific in nature, the appearance of the first true high-level language with the first real compiler meant you didn’t write in machine or assembly, which was hard to teach, hard to program, and hard to debug. Instead, you’d write machine-independent code that could perform complex mathematical expressions, and once compiled it would run maybe 20% slower - but development was 5 times faster. IBM loved this because customers needed to buy faster computers. But customers had a limit for how much they could spend, and the mainframes of the time had a limit for how much they could process. 
To quote Backus: “To this day I believe that our emphasis on object program efficiency rather than on language design was basically correct.” Basically, they spent more time making the compiler efficient than they spent developing the programming language itself. As with the Constitution of the United States, simplicity was key. Much of the programming language was designed by Herrick, Ziller, and Backus. The first release of FORTRAN had 32 statements that did things that might sound familiar today, like PRINT, READ, FORMAT, CONTINUE, GO TO, ASSIGN, and of course IF. This was before terminals and disk files, so programs were punched into 80-column cards. The first 72 columns were converted into 12 36-bit words. Columns 1-5 held the statement label, or a C in column 1 to comment out the card. Column 6 was a continuation flag: blank or zero meant a new statement was starting, and anything else continued the statement from the previous card. Columns 7 through 72 held the statement itself, which ignored whitespace, and the remaining columns were ignored. FORTRAN II came onto the scene very shortly thereafter, in 1958, and the SUBROUTINE, FUNCTION, END, CALL, RETURN, and COMMON statements were added. COMMON was important because it gave us global variables. FORTRAN III also came in 1958 but was only available for specific computers and never shipped. 1401 FORTRAN then came for the 1401 mainframe. The compiler ran from tape and kept the whole program in memory, allowing for faster runtime. FORTRAN IV came in the early 60s and brought us into the era of the System/360. Here we got booleans, a logical IF instead of the arithmetic IF, and the LOGICAL data type. Then came one of the most important versions, FORTRAN 66 - which merged all those dialects from IV into not quite a new version. Here, ANSI, the American National Standards Institute, stepped in and started to standardize. 
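Those column rules are simple enough to sketch in code. Here’s a little Python function - hypothetical, written just to illustrate the fixed-form card layout described above - that splits an 80-column card image into its fields:

```python
def parse_card(card):
    """Split one 80-column fixed-form FORTRAN card into its fields."""
    card = card.ljust(80)[:80]                # cards were fixed at 80 columns
    if card[0] in "Cc":                       # C in column 1: whole card is a comment
        return {"comment": card[1:].strip()}
    return {
        "label": card[0:5].strip(),           # columns 1-5: optional statement label
        "continuation": card[5] not in " 0",  # column 6: blank/0 = new statement
        "statement": card[6:72],              # columns 7-72: statement (whitespace ignored)
        # columns 73-80 are dropped: often used for card sequence numbers
    }

card = parse_card("   10 ASSIGN 30 TO LABEL")
# card["label"] == "10", card["continuation"] is False
```

Feed it a card with anything but a blank or zero in column 6 and the continuation flag flips, which is how multi-card statements were stitched back together.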
We still use DO for loops, and every language has its own end-of-file statement, commenting structure, and logical IFs. Once things get standardized, they move slower - especially where compiler theory is concerned. Dialects had emerged, but FORTRAN 66 stayed put for 11 years. In 1968, the authors of BASIC were already calling FORTRAN old-fashioned. Work on a new version started but wasn’t completed until 1977, and it was formally approved in 1978. Here we got END IF statements, the ever-so-important ELSE, new types of I/O along with OPEN and CLOSE, and persistent variable control with SAVE. The Department of Defense also insisted on lexical string comparisons. And we actually removed things, which these days we’d call deprecation. 77 also gave us new error-handling methods and programmatic ways to manage really big programs (because over the previous 15 years some had grown pretty substantial in size). The next update took even longer. While FORTRAN 90 was released in 1991, we still learned FORTRAN 77 in classes at the University of Georgia. Fortran 90 changed the capitalization so you weren’t yelling at people and added recursion, pointers, developer-controlled data types, object code for parallelization, better argument passing, 31-character identifiers, CASE, WHERE, and SELECT statements, operator overloading, inline commenting, modules, POINTERs (however Ken Thompson felt about those didn’t matter ’cause he had long hair and a beard), dynamic memory allocation (malloc errors, woohoo), END DO statements for loop terminations, and much more. They also deprecated arithmetic IF statements, PAUSE statements, branching END IF, the ASSIGN statement, statement functions, and a few others. Fortran 95 was a small revision, adding FORALL and ELEMENTAL procedures, as well as NULL pointers. But FORTRAN was not on the minds of many outside of the scientific communities. 1995 is an important year in computing. Mainframes hadn’t been a thing for awhile. 
The Mac languished in the clone era just as Windows 95 brought Microsoft to a place of parity with the Mac OS. The web was just starting to pop. The browser wars between Netscape and Microsoft were starting to heat up. C++ turned 10 years old. We got Voice over IP, HTML 2.0, PHP, Perl 5, the ATX motherboard, Windows NT, the Opera browser, the vCard format, CD readers that cost less than a grand, the Pentium Pro, Java, JavaScript, SSL, the breakup of AT&T, IBM’s Deep Blue, WebTV, the Palm Pilot, CPAN, Classmates.com, the first wiki, Cygwin, the Jaz drive, FireWire, Ruby, and Numeric, the ancestor of NumPy, which kickstarted the modern machine learning era. Oh, and Craigslist, Yahoo!, eBay, and Amazon.com. Audible was also established that year, though they weren’t owned by Amazon just yet. Even at IBM, they were busy buying Lotus and trying to figure out how they were going to beat Kasparov with Deep Blue. Hackers came out that year, and people were probably also trying to change their passwords from “god”. With all of this rapid innovation popping in a single year, it’s no wonder there was a backlash, as can be seen in The Net, with Sandra Bullock, also from 1995. And as though the mainframe crowd needed even more of a kick that their stuff was donezo, Konrad Zuse passed away in 1995. I was still in IT at the university watching all of this. Sometimes I wonder if it’s good or bad that I wasn’t 2 or 3 years older… The point of all of this is that many didn’t notice as Fortran became more of a niche language. At this point, programming wasn’t just for math. Fortran 2003 brought object-oriented enhancements, polymorphism, and interoperability with C. Fortran 2008 came, and then Fortran 2018. Yes, you can still find good jobs in Fortran. Or COBOL, for that matter. Fortran leaves behind a legacy (and a lot of legacy code) that established many of the control statements and structures we use today. 
Much as Grace Hopper pioneered the idea of a compiler, FORTRAN really took that concept and put it to the masses - or at least the masses of programmers of the day. John Backus and that team of 10 programmers increased the productivity of people who wrote programs by 20-fold in just a few years. These types of productivity gains are rare. You have the assembly line, the Gutenberg press, the cotton gin, the spinning jenny, the Watt steam engine - and really, because of the derivative works that resulted from all that compiled code on all those mainframes and since, you can credit that young, diverse, and brilliant team at IBM for kickstarting the golden age of the mainframe. Imagine if you will: Backus walks into IBM and they say, “Sorry, we don’t have any headcount on our team.” You always make room for brilliant humans. Grace Hopper’s dream would have resulted in COBOL regardless, but without the might of IBM behind FORTRAN, we might still be writing apps in machine language. Backus didn’t fit in with the corporate culture at IBM. He rarely wore suits in an era where suit makers in Armonk were probably doing as well as senior management. They took a chance on a brilliant person. And they assembled a diverse team of brilliant people who weren’t territorial or possessive - a team who authentically just wanted to learn. And sometimes that kind of a team lucks out and changes the world. Who do you want to take a chance on? Mull that over until the next episode. Thank you so very much for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day! The History of FORTRAN I, II, and III :: http://www.softwarepreservation.org/projects/FORTRAN/paper/p165-backus.pdf
8/29/2019 • 13 minutes, 57 seconds
DEF CON: A Brief History Of The World’s Largest Gathering Of Hackers
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is on the history of DEF CON. I have probably learned more about technology in my years attending Black Hat and DEF CON than from any other source besides reading and writing books. But DEF CON specifically expanded my outlook on the technology industry and made me think about how others might consider various innovations - and sometimes how they might break them. DEF CON also gave me an insight into the hacker culture that I might not have gotten otherwise. Not the hacker culture many think of, but the desire to just straight-up tinker with everything. And I do mean everything, sometimes much to the chagrin of the Vegas casino or hotel hosting the event. The thing that I have always loved about DEF CON is that, while there is a little shaming of vendors here and there, there’s also a general desire to see security research push the envelope of what’s possible, making vendors better and making the world a more secure place - not actually trying to hack things in a criminal way. In fact, there’s an ethos that surrounds the culture. Yes, you want to find sweet, sweet 0-days. But when you do, you disclose the vulnerability before you tell the world that you can bring down any Cisco firewall. DEF CON has played a critical role in the development and remediation of rootkits, trojans, viruses, forensics, threat hunting research, social engineering, botnet detection and defeat, keystroke logging, DoS attacks, application security, network security, and privacy. In 2018, nearly 28,000 people attended DEF CON. And the conference shows no signs of slowing down. In fact, the number of people with tattoos of Jack, the skull-and-crossbones-esque logo, only seems to be growing. 
As does the number of people who have black badges, which give them free access to DEF CON for life. But where did it get its start? The name is derived from WarGames, the 1983 movie that saw Matthew Broderick almost start World War III by playing a simulation of a nuclear strike with a computer. This was obviously before his freewheeling days as Ferris Bueller. Over the next decade, bulletin board networks became a prime target for hackers in it for the lolz. Back then, bulletin boards were kinda’ like what Reddit is today, but you dialed into a network and then routed through a hierarchical system, with each site having a coordinator. A lot of FidoNet hacking was trying to become an admin of each board. If this sounds a lot like the Internet of today, the response would be “ish.” So Jeff Moss, also known as Dark Tangent, was a member of a group of hackers called “Platinum Net” that liked to try to take over these bulletin boards. He started planning a party for a network that was being shut down. Moss had graduated from Gonzaga University with a degree in Criminal Justice a few years earlier, so why not have 100 criminals join him in Vegas at the Sands Hotel and Casino! He invited #hack to join him, got a little help from Dead Addict, and the event was a huge success. The next year, Artimage, Pappy Ozendorph, Stealth, Zac Franken, and Noid threw in to help coordinate things, and attendance at the conference doubled to around 200. They knew they had something special cooking. DEF CON 2, which was held at the Sahara, got mentions in Business Week and the New York Times, as well as PC Magazine, which was big at the time. DEF CON 3 happened right after the Hackers movie, at the Tropicana, and DEF CON 4 actually had the FBI show up to tell the hackers all the things at the Monte Carlo. DEF CON 4 also saw the introduction of Black Hat, a conference that runs before DEF CON. 
DEF CON 5, though, saw coverage from ABC News, ZDNet, and Computer World, and saw people show up to the Aladdin from all over the world, which is how I heard of the conference. The conference continued to grow. People actually started waiting to release tools until DEF CON. DEF CON 6 was held at the Plaza and then it went to the Alexis Park Resort from DEF CON 7 to DEF CON 13. DEF CON 7 will always be remembered for the release of Back Orifice 2000, a plugin-based remote admin tool (or RAT) that I regrettably had to remove from many a device throughout my career. Of course it had an option for IRC-based command and control, as did all the best stuff on the Silk Road. Over the next few years the conference grew and law enforcement agents started to show up. I mean, easy pickings, right? This led to a "spot the fed" contest. People would of course try to hack each other, which led to maybe the most well-known contest, the scavenger hunt. I am obviously a history nerd so I always loved the Hacker Jeopardy contest. You can also go out to the desert to shoot automatic weapons, participate in scavenger hunts, pick all the locks, buy some shirts, and of course, enjoy all the types of beverages with all the types of humans. All of these mini-events associated with DEF CON have certainly helped make the event what it is today. I've met people from the Homebrew Computer Club, Anonymous, the Legion of Doom, ShadowCrew, the Cult of the Dead Cow, and other groups there. I also met legends like Captain Crunch, Kevin Poulsen, Kevin Mitnick, members of L0pht (of L0phtCrack fame), and many others. By DEF CON 8 in 2000, the conference was getting too big to manage. So the Goons started to take over various portions of the con. People like Cjunky, Agent X, CHS, Code24, flea, Acronym, cyber, Gattaca, Froggy, Lockheed, Londo, Major Malfunction, Mattrix, G Mark, and JustaBill helped keep me from getting my eyebrows shaved off, and were joined by other goons over the years. 
Keep in mind there are a lot of younger script kiddies who show up and this crew helps keep them safe. My favorite goon might be Noid. This was around the time the Wall of Sheep appeared, showing passwords picked up on the network. DEF CON 11 saw a bit of hacktivism when the conference started raising money for the Electronic Frontier Foundation. By 2005 the conference had grown enough that Cisco even tried to shut down a talk from Michael Lynn about a flaw that could basically shut down the Internet as we know it. The pages mentioning the talk had to be torn out of the conference books. In one of the funner moments I've seen, Michelle Madigan was run out of the con for trying to secretly record one of the most privacy-oriented groups I've ever been a part of. Dan Kaminsky rose to prominence in 2008 when he found some serious flaws in DNS. He was one of the inaugural speakers at DEF CON China 1 in 2018. 2008 also saw a judge order a subway card hacking talk be cancelled, preventing three MIT students from talking about how they hacked the Boston subway. 2012 saw Keith Alexander, then director of the NSA, give the keynote. Will Smith dropped by in 2013, although it was just to prepare for a movie. Probably not Suicide Squad. He didn't stay long. Probably because Dark Tangent asked the feds to stay away for awhile. DARPA came to play in 2016, giving out a 2 million dollar prize to the team that could build an autonomous AI bot that could handle offense and defense in a Capture the Flag style competition. 2017 made the news because the conference hosted a voting machine hacking village. Cambridge Global Advisors was a sponsor. They have no connection with Cambridge Analytica. No matter how you feel about politics, the hallmark of any democracy is certifying a fair and, um, democratic election. Jimmy Carter knows. He was 92 then. 2019 saw 30,000 people show up in Vegas for DEF CON 27. At this point, DEF CON has been on the X-Files and Mr. Robot, and given a nod in the movie Jason Bourne. 
It is a special event. Being surrounded by so many people with unbridled curiosity is truly inspiring. I doubt I would ever have written my first book on security if not for the opportunity they gave me to speak at DEF CON and Blackhat. Oh, recording this episode just reminded me - I need to go book my room for next year! If you want to learn more about DEF CON, we’ll include a link to the documentary from 2013 about it in the show notes. https://www.youtube.com/watch?v=3ctQOmjQyYg
8/27/2019 • 9 minutes, 51 seconds
Netscape
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today we're going to look at the emergence of the web through the lens of Netscape, the browser that pushed everything forward into the mainstream. The Netscape story starts back at the University of Illinois, Urbana-Champaign, where the National Center for Supercomputing Applications (or NCSA) inspired Marc Andreessen and Eric Bina to write Mosaic, which was originally called xmosaic and built for X11, or the X Window System. In 1992 there were only 26 websites in the world. But that was up from the 1 that Internet pioneer Tim Berners-Lee built at info.cern.ch in 1991. The web had really only been born a few years earlier, in 1989. Funded by the Gore Bill, Andreessen and a team of developers released the Alpha version of the NCSA Mosaic browser in 1993 and ported it to Windows, Mac, and of course the Amiga. At this point there were about 130 websites. Version two of Mosaic came later that year and then the National Science Foundation picked up the tab to maintain Mosaic from 94 to 97. James Clark, a co-founder of Silicon Graphics and a legend in Silicon Valley, took notice. He recruited some of the Mosaic team, led by Marc Andreessen, to start Mosaic Communications Corporation, which released Netscape Navigator in 1994, the same year Andreessen graduated from college. By then there were over 2,700 websites, and a lot of other people were taking notice after two straight years of four-digit growth. Yahoo! and EXCITE were released in 1994 and enjoyed an explosion in popularity, entering a field with 25 million people accessing such a small number of sites. Justin Hall was posting personal stuff on links.net, one of the earliest forms of what we now call blogging. Someone else couldn't help but notice: Bill Gates of Microsoft. 
He considered cross-platform web pages and the commoditization of the operating system to be a huge problem for his maturing company, Microsoft, and famously sent The Internet Tidal Wave memo to his direct reports, laying out a vision for how Microsoft would respond to this threat. We got Netscape for free at the University, but I remember when I went to the professional world we had to pay for it. The look and feel of Navigator then can still be seen in modern browsers today. There was an address bar, a customizable home page, a status bar, and you could write little JavaScript snippets to do cutesy things like have a message scroll here and there or make things blink. 1995 also brought us HTML frames, fonts on pages, the ability to change the background color, the ability to embed various forms of media, and image maps. Building sites back then was a breeze. And with an 80% market share for browsers, testing was simple: just open Netscape and view your page! Netscape was a press darling. They had insane fans that loved them. And while they hadn't made money yet, they did something that a lot of companies do now, but few did then: they went IPO early and raked in $600 million in their first day, turning poster child Marc Andreessen into an overnight sensation. They even started to say that the PC would live on the web - and it would do so using Netscape. Andreessen then committed the cardinal sin that put many in tech out of a job: he went after Microsoft, claiming they'd reduce Microsoft to a set of "poorly debugged device drivers." Microsoft finally responded. They had a meeting with Netscape and offered to acquire the company or they would put them out of business. Netscape lawyered up, claiming Microsoft offered to split the market up where Microsoft owned Windows and left the rest to Netscape. 
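Those little scroll-and-blink scripts really were only a few lines. Here's a minimal sketch of the classic scrolling-message trick; the helper function and the timer usage are illustrative of the era, not actual Netscape-era code:

```javascript
// Rotate a message one character per tick - the heart of the classic
// 1990s scrolling status-bar effect.
function rotate(message) {
  return message.slice(1) + message.charAt(0);
}

// In a period browser this ran on a timer against window.status,
// which pages could still write to back then:
//   var msg = "Welcome to my home page!   ";
//   setInterval(function () {
//     msg = rotate(msg);
//     window.status = msg;
//   }, 150);
```

Modern browsers ignore writes to `window.status` for security reasons, which is part of why these effects vanished.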
Internet Explorer 1 was released by Microsoft in 1995 - a fork of Mosaic, which had been indirectly licensed from the code Andreessen had written while still working with the NCSA in college. And so began the "Browser Wars," with Netscape 2 and Internet Explorer 2 being released the same year. 1995 saw the web shoot up to over 23,000 sites. Netscape 2 added Netscape Mail, an email program with about as simple a name as Microsoft Mail, which had been in Windows since 1991. In 1995, Brendan Eich, a developer at Netscape, wrote SpiderMonkey, the original JavaScript engine, for a language many web apps still use today (just look for the .js extension). I was managing labs at the University of Georgia at the time and remember the fast pace that we were upgrading these browsers. NCSA Telnet hadn't been updated in years, but it had never been as cool as this Netscape thing. Geocities popped up and I can still remember my first time building a website there and accessing incredible amounts of content being built - and maybe even learning a thing or two while dinking around in those neighborhoods. 1995 had been a huge and eventful year, with nearly 45 million people now "on the web," and Amazon, early search engine AltaVista, Lycos, and eBay launching as well. The search engine space sure was heating up… Then came 1996. Things got fun. Point releases of browsers came monthly. New features dropped with each release. Plugins for Internet Explorer leveraged API hooks into the Windows operating system that made pages only work on IE. Those of us working on pages had to update for both, and test for both. By the end of 1996 there were over a quarter million web pages and over 77 million people were using the web. Apple, The New York Times, and Dell.com appeared on the web, but 41 percent of people checked AOL regularly and other popular sites would be from ISPs for years to come. Finally, after a lot of talk and a lot of point releases, Netscape 3 was released in 1996. 
JavaScript got a rev, a lot of styling elements some still use today like tables and frames came out, and forms could be filled out automatically. There was also a Gold version of Netscape 3 that allowed editing pages. But Dreamweaver gave us a nice WYSIWYG to build web pages that was far more feature-rich. Netscape got buggier; they bit off more and more, thus spreading developers thin. They just couldn't keep up. And Internet Explorer was made free in Windows as of IE 3, and had become equal to Netscape. It had a lot of plugins for Windows that made it work better on that platform, for better or worse. The Browser Wars ended when Netscape decided to open source their code in 1998, creating the Mozilla project by open sourcing the Netscape Browser Suite source code. This led to Waterfox, Pale Moon, SeaMonkey, IceWeasel, IceCat, Wyzo, and of course, Tor Browser, Swiftfox, Swift Weasel, Timberwolf, TenFourFox, Comodo IceDragon, CometBird, Basilisk, Cliqz, AT&T Pogo, and Flock. But most importantly, Mozilla released Firefox themselves, which still maintains between 8 and 10 percent market share for browser usage, depending on who you ask. Of course, ultimately everyone lost the browser wars now that Chrome owns a 67% market share! Netscape was sold to AOL in 1999 for $4.2 billion, the first year they dropped out of the website popularity contest called the top 10. At this point, Microsoft controlled the market with an 80% market share. That was the first year Amazon showed up on the top list of websites. The Netscape problems continued. AOL released Netscape 6 in 2000, which was buggy, and I remember a concerted effort at the time to start removing Netscape from computers. In 2003, after being acquired by Time Warner, AOL finally killed off Netscape. This was the same year Apple released Safari. Netscape 7.2 was released in 2004 after some of the development was outsourced. Netscape 9, a port of Firefox, was released in 2007. The next year Google Chrome was released. 
Today, Mozilla is a half-billion-dollar-a-year not-for-profit. They ship the Firefox browser, the Firefox OS mobile OS, the online file sharing service Firefox Send, the Bugzilla bug tracking tool, the Rust programming language, the Thunderbird email client, and other tools like SpiderMonkey, which is still the JavaScript engine embedded into Firefox and Thunderbird. If the later stage of Netscape's code in the form of the open source Mozilla projects appeals to you, consider becoming a Mozilla Rep. You can help contribute, promote, document, and build the community with other passionate and knowledgeable humans that are on the forefront of pushing the web into new and beautiful places. For more on that, go to reps.mozilla.org. Andreessen went on to build Opsware with Ben Horowitz (who's not a bad author) and others. He sold that business and later joined with Horowitz to found Andreessen Horowitz, an early investor in Facebook, Foursquare, GitHub, Groupon, LinkedIn, Pinterest, Twitter, Jawbone, Zynga, Skype, and many, many others. He didn't win the browser wars, but he has been at the center of helping to shape the Internet as we know it today, and due to the open sourcing of the source code many other browsers popped up. The advent of the cloud has also validated many of his early arguments about the web making computer operating systems more of a commodity. Anyone who's used Office 365 online or Google apps can back that up. Ultimately, the story of Netscape could be looked at as yet another "Bill Gates screwed us" story. But I'm not sure that does it justice. Netscape did as much to shape the Internet in those early days as anything else. Many of those early contributions, like the open nature of the Internet, various languages and techniques, and of course the code in the form of Mozilla, live on today. There were other browsers, and the Internet might have grown to what it is today regardless. 
But we might not have had as much of the velocity without Andreessen and Netscape and specifically the heated competition that led to so much innovation in such a short period of time - so we certainly owe them our gratitude that we’ve come as far as we have. And I owe you my gratitude. Thank you so very much for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
8/25/2019 • 12 minutes, 38 seconds
The History Of Android
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today we're going to look at the emergence of Google's Android operating system. Before we look at Android, let's look at what led to it. Frank Canova built a device he showed off as "Angler" at COMDEX in 1992. This would be released as the Simon Personal Communicator by BellSouth and manufactured as the IBM Simon by Mitsubishi. The Palm, Newton, Symbian, and Pocket PC (or Windows CE) would come out shortly thereafter and rise in popularity over the next few years. CDMA would slowly come down in cost over the next decade. Now let's jump to 2003. At the time, you had Microsoft Windows CE, the Palm Treo was maturing and supported dual-band GSM, Handspring merged into the Palm hardware division, and Symbian could be licensed, but I never met a phone of theirs I liked - the Nokia phones' menus looked about the same as many printer menu screens. One other device that is more relevant because of the humans behind it was the T-Mobile Sidekick, which actually had a cool flippy motion to open the keyboard! Keep that Sidekick in mind for a moment. Oh, and let's not forget a fantastic name. The mobile operating systems were limited. Each was proprietary. Most were menu driven and reminded us more of an iPod, released in 2001. I was a consultant at the time and remember thinking it was insane that people would pay hundreds of dollars for a phone. At the time, flip phones were all the rage. A cottage industry of applications sprung up, like Notify, that made use of app frameworks on these devices to connect my customers to their Exchange accounts so their calendars could sync wirelessly. The browsing experience wasn't great. The messaging experience wasn't great. The phones were big and clunky. 
And while you could write apps for Symbian in Qt Creator or Flash Lite or Python for S60, few bothered. That's when Andy Rubin left Danger, the company he cofounded that made the Sidekick, and joined up with Rich Miner, Nick Sears, and Chris White in 2003 to found a little company called Android Inc. They wanted to make better mobile devices than were currently on the market, and set out to write an operating system based on Linux that could rival anything out there. Rubin was no noob when cofounding Danger. He had been a robotics engineer in the 80s and a manufacturing engineer at Apple for a few years, then got his first mobility engineering gig when he bounced to General Magic to work on Magic Cap, a spinoff from Apple, from 92 to 95. He then helped build WebTV from 95 to 99. Many in business academia have noted that Android existed before Google, and that's why it's as successful as it is today. But Google bought Android in 2005, years before the actual release of Android. Apple had long been rumor-milling a phone, which would mean a mobile operating system as well. Android was sprinting towards a release that was somewhat Blackberry-like, focused on competing with the Blackberries that were all the rage at the time. Obama and Hillary Clinton were all about theirs. As a consultant, I was stoked to become a Blackberry Enterprise Server reseller and used that to deploy all the things. The first iPhone was released in 2007. I think we sometimes think that along came the iPhone and Blackberries started to disappear. It took years. But the fall was fast. While the iPhone was also impactful, the Android-based devices were probably more so. That release of the iPhone kicked Andy Rubin in the keister and he pivoted over from the Blackberry-styled keyboard to a touch screen, which changed… everything. Suddenly this weird innovation wasn't yet another frivolous expensive Apple extravagance. 
The logo helped grow the popularity as well, I think. Internally at Google, Dan Morrill started creating what were known as Dandroids. But the bugdroid, as it's known, was designed by Irina Blok on the Android launch team. It was eventually licensed under Creative Commons, which resulted in lots of different variations of the logo; a sharp contrast to the control Apple puts around the usage of their own logo. The first version of the shipping Android code came along in 2008, and the first phone that really shipped with it was the HTC Dream, at the end of that year. This device had a keyboard you could press but also had a touch screen, although we hadn't gotten a virtual keyboard yet. It shipped with an ARM11, 192MB of RAM, and 256MB of storage. But you could expand it up to 16 gigs with a microSD card. Oh, and it had a trackball. It had 802.11b and g, Bluetooth, and shipped with Android 1.0. But it could be upgraded up to 1.6, Donut. The hacker in me just… couldn't help but mod the thing, much as I couldn't help but jailbreak the iPhone back before I got too lazy. Of course, the Dev Phone 1 shipped soon after, and didn't require you to hack it - something Apple waited until 2019 to copy. The screen was smaller than that of an iPhone. The keyboard felt kinda' junky. The app catalog was lacking. It didn't really work well in an office setting. But it was open source. It was a solid operating system and it showed promise as to the future of not-Apple in a post-Blackberry world. Note: Any time a politician uses a technology it's about 5 minutes past being dead tech. Of Blackberry, iOS, and Android, Android was last in devices sold using those platforms in 2009, although the G1 (as the Dream was also known) quickly took 9% market share. But then came Eclair. Unlike sophomore efforts from bands, there's something about a 2.0 release of software. By the end of 2010 there were more Androids than iOS devices. 
2011 was the peak year of Blackberry sales, with over 50 million being sold, but those were the laggards spinning out of the buying tornado, funding the pivot and the R&D for the fruitless next few Blackberry releases. Blackberry market share would zero out in just 6 short years. iPhone continued a nice climb over the past 8 years. But Android sales are now in the billions per year. Ultimately the Blackberry's, to quote Time, "failure to keep up with Apple and Google was a consequence of errors in its strategy and vision." If you had to net-net that, touch vs menus was a substantial part of it. By 2017 the Android and iOS market share was a combined 99.6%. In 2013, Sundar Pichai, now Google's CEO, took over Android from Andy Rubin, who was later embroiled in sexual harassment charges and now acts as CEO of Playground Global, an incubator for hardware startups. The open source nature of Android and it being ready to fit into a device from manufacturers like HTC led to advancements that inspired and were inspired by the iPhone, leading us to the state we're in today. Let's look at the releases per year and per innovation: * 1.0, API 1, 2008: Included early Google apps like Gmail, Maps, Calendar, of course a web browser, a media player, and YouTube * 1.1 came in February the next year and was code named Petit Four * 1.5 Cupcake, 2009: Gave us an on-screen keyboard and third-party widgets, then apps on the Android Market, now known as the Google Play Store. Thus came the HTC Dream. Open source everything. * 1.6 Donut, 2009: Customizable screen sizes and resolutions, CDMA support. And the short-lived Dell Streak! Because of this resolution support we got the joy of learning all about the tablet. Oh, and Universal Search and more emphasis on battery usage! * 2.0 Eclair, 2009: The advent of the Motorola Droid, turn-by-turn navigation, real-time traffic, live wallpapers, speech to text. But the pinch to zoom from iOS sparked a war with Apple. We also got the ability to limit accounts. 
Oh, new camera modes that would have impressed even George Eastman, and Bluetooth 2.1 support. * 2.2 Froyo: Four months later in 2010 came Froyo, with under-the-hood tuning, voice actions, and Flash support, something Apple has never had. And here came the HTC Incredible S as well as one of the best-selling mobile devices ever built: the Samsung Galaxy S2. This was also the first hotspot option, and we got 3G and better LCDs. That whole tethering thing took a year for iPhone to copy. * 2.3 Gingerbread: With 2010 came Gingerbread. The green from the robot came into Gingerbread, with the black and green motif moving front and center. More sensors, NFC, a new download manager, and copy and paste got better. * 3.0 Honeycomb, 2011: The most important thing was when Matias Duarte showed up and reinvented the Android UI. The holographic design traded out the green and blue and gave you more screen space. This kicked off a permanent overhaul and brought a card UI for recent apps. Enter the Galaxy S9 and the Huawei Mate 2. * 4.0 Ice Cream Sandwich, later in 2011: Duarte's designs started really taking hold. For starters, let's get rid of buttons. That's important and has been a critical change for other devices as well. It reunited tablets and phones with a single vision. On-screen buttons brought the card-like appearance into app switching. Smarter swiping added swiping to dismiss, which changed everything for how we handle email and texts with gestures. You can thank this design for Tinder. * 4.1 to 4.3 Jelly Bean, 2012: Added some sweet, sweet fine-tuning to the foundational elements from Ice Cream Sandwich. Google Now, which was supposed to give us predictive intelligence, interactive notifications, expanded voice search, advanced search, still with the card-based everything now for results. We also got multiuser support for tablets. And the Android Quick Settings pane. We also got widgets on the lock screen - but those are a privacy nightmare and didn't last for long. 
Automatic widget resizing, wireless display projection support, and restricted profiles on multiple user accounts, making it a great parent device. Enter the Nexus 10. AND TWO-FINGER DOWN SWIPES. * 4.4 KitKat, 2013: Ended the era of the dark screen; lighter screens and neutral highlights moved in. I mean, Matrix was way before that after all. OK, Google showed up, furthering the competition with Apple and Siri. Hands-free activation. A panel on the home screen, and a stand-alone launcher. AND EMOJIS ON THE KEYBOARD. Increased NFC security. * 5.0 Lollipop, 2014: Brought 64-bit support, Bluetooth Low Energy, and a flatter interface. But more importantly, we got annual releases like iOS. * 6.0 Marshmallow, 2015: Gave us Doze mode, sticking it to iPhone with even more battery-saving features. App security improved, with prompts to grant apps access to resources like the camera and phone. The Nexus 5X and 6P brought fingerprint scanners and USB-C. * 7.0 Nougat, 2016: Gave us quick app switching, a different lock screen and home screen wallpaper, split-screen multitasking, and gender- and race-inclusive emojis. * 8.0 Oreo, 2017: Gave us floating video windows, which got kinda' cool once app makers started adding support in their apps for it. We also got a new file browser, which came to iOS in 2019. And more battery enhancements with prettied-up battery menus. Oh, and notification dots on app icons, borrowed from Apple. * 9.0 Pie, 2018: Brought notch support and navigation similar to that of the iPhone X, adapting to a soon-to-be bezel-free world. And of course, the battery continues to improve. This brings us into the world of the Pixel 3. * 10: Likely sometime in 2019. While the initial release of Android shipped with the Linux 2.6 kernel, that has been updated as appropriate over the years, with version 3 in Ice Cream Sandwich and version 4 in Nougat. Every release of Android tends to have an increment in the Linux kernel. Now, Android is open source. So how does Google make money? 
Let's start with what Google does best. Advertising. Google makes a few cents every time you click on an ad in messages or web pages or any other little spot they've managed to drop an ad into. Then there's the Google Play Store. Apple makes 70% more revenue from apps than Android, despite the fact that Android apps have twice the number of installs. The old adage is that if you don't pay for a product, you are the product. I don't tend to think Google goes overboard with all that, though. And Google is probably keeping Caterpillar in business just to buy big enough equipment to move their gold bars from one building to the next on campus. Any time someone's making money, lots of other people want a taste. Like Oracle, who owns a lot of open source components used in Android. And the competition between iOS and Android makes both products better for consumers! Now look out for Android Auto, Android Things, Android TV, Chrome OS, the Google Assistant and others - given that other types of vendors can make use of Google's open source offerings to cut R&D costs and get to market faster! But more importantly, Android has contributed substantially to the rise of ubiquitous computing, no matter how much money you have. I like to think the long-term impact of such a democratization of mobility and the Internet will make the world a little less Idiocracy and a little more Wikipedia. Thank you so very much for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!
8/22/2019 • 18 minutes, 2 seconds
Once Upon A Friendster
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on former social networking pioneer Friendster. Today when you go to friendster.com you get a page saying the social network is taking a break. The post was put up in 2018. How long did Rip Van Winkle sleep? But what led to the rise of the first big social network and, well, what happened? The story begins in 1973. Talkomatic was a chat room that was a hit in the community around PLATO, or Programmed Logic for Automatic Teaching Operations, an educational computing system at the University of Illinois that had been running since 1960. Dave Woolley and Douglas Brown at the University of Illinois brought chat, and the staff built TERM-Talk the same year, adding screen sharing. PLATO Notes would be added, where you could add notes to your profile. This was the inspiration for the name of Lotus Notes. Then in the 80s came Bulletin Board Systems, 84 brought FidoNet, 88 brought IRC, 96 brought ICQ, and in 96 we also got Bolt.com, the first social networking and video website, with SixDegrees coming in 1997 as the first real social media website. AOL Instant Messenger showed up the same year and AOL bought ICQ in 99. It was pretty sweet that I didn't have to remember all those ICQ numbers any more! In 1999, Yahoo! and Microsoft got in the game, launching tools called Messenger at about the same time, and LiveJournal came along, as well as Habbo, a social networking site for games. By 2001 SixDegrees shut down and Messenger shipped with XP. But 2002. That was the year the Euro hit the street. Before England dissed it. That was the year Israeli and Palestinian conflicts escalated. Actually, that's a lot of years, regrettably. 
I remember scandals at Enron and Worldcom well that year, ultimately resulting in Sarbanes-Oxley to counter the more than 5 trillion dollars in corporate scandals that sent the economy into a tailspin. My Georgia Bulldogs football team beat Arkansas to win the SEC title and then beat Florida State in the Sugar Bowl. Nelly released Hot In Herre and Eminem released Lose Yourself and Without Me. In film, Harry Potter was searching for the Chamber of Secrets and Frodo was on a great trek to the Two Towers. Eminem was in the theaters as well with 8 Mile. And Friendster was launched by Jonathan Abrams in Mountain View, California. They wanted to get people making new friends and meeting in person. It was an immediate hit and people flocked to the site. They grew to three million users in just a few months, catching the attention of investors. As a young consultant, I loved keeping track of my friends who I never got to see in person using Friendster. Napster was popular at the time, and the name Friendster came from a mashup of friends and Napster. With this early success, Friendster took $12 million in funding from VC firms Kleiner Perkins Caufield & Byers and Benchmark Capital the next year. That was the year a Harvard student named Mark Zuckerberg launched FaceMash with his roommate Eduardo Saverin for Harvard students in a kinda' "Hot or Not" game. They would later buy Instagram as a form of euphoric recall, looking back on those days. Google has long wanted a social media footprint and tried to buy Friendster in 2003, but when rejected launched Orkut in 2004, which found most of its audience in Brazil; tried Google Friend Connect in 2008, which lasted until 2012; Google Buzz, which launched in 2010 and only lasted a year; Google Wave, which launched in 2009 and also only lasted a year; and of course Google+, which ran from 2011 to 2019. Google is back at it again with a new social network called Shoelace out of their Area 120 incubator. 
The $30 million in Google stock from that offer would be worth a billion dollars today. MySpace was also launched in 2003, by Chris DeWolfe and Tom Anderson, growing to have more traffic than Google over time. But Facebook launched in 2004, and after Friendster had problems keeping the servers up and running, Friendster's board replaced Abrams as CEO and moved him to chairman of the board. He was replaced by Scott Sassa. Then in 2005 Sassa was replaced by Taek Kwon, who was replaced by Kent Lindstrom, who was replaced by Richard Kimber. Such rapid churn in the top spot means problems. A rudderless ship. In 2006 they added widgets to keep up with MySpace. They didn't. They also opened up a developer program and opened up APIs. They still had 52 million unique visitors worldwide in June 2008. But by then, MySpace had grown to 7 times their size. MOL Global, an online payments processor from Malaysia, bought the company in 2009 and relaunched the site. All user data was erased, and Friendster provided an export tool to move data to other popular sites at the time, such as Flickr. In 2009 Friendster had 3 million unique visitors per day, but after the relaunch that dropped to less than a quarter million by the end of 2010. People abandoned the network. What happened? Facebook eclipsed Friendster's traffic in 2009. Friendster became something more used in Asia than the US. Really, though, I remember early technical problems. I remember not being able to log in, so moving over to MySpace. I remember slow loading times. And I remember more and more people spending time on MySpace, customizing their MySpace page. Facebook did something different. Sure, you couldn't customize the page, but the simple layout loaded fast and was always online. This reminds me of the scene in the show Silicon Valley, when they have to grab the fire extinguisher because they set the house on fire from having too much traffic! 
In 2010, Facebook acquired Friendster's portfolio of social networking patents for $40 million. In 2011, Newscorp sold MySpace for $35 million after it had been valued far higher at its peak in 2008. After continuing its decline, Friendster was sold to a social gaming site in 2015, trying to capitalize on the success Facebook had with online gaming. But after an immediate burst of users, it too was not successful. In 2018 the site finally closed its doors. Today Friendster is the 651,465th ranked site in the world. There are a few things to think about when you look at the Friendster story: 1. The Internet would not be what it is today without sites like Friendster to help people want to be on it. 2. The first company on a new thing isn’t always the one that really breaks through. 3. You have to, and I mean have to, keep your servers up. This is a critical aspect of maintaining your momentum. I was involved with one of the first 5 Facebook apps. And we had no idea 2 million people would use that app in the weekend it was launched. We moved mountains to get more servers and clusters brought online and refactored SQL queries on the fly, working over 70 hours in a weekend. And within a week we hit 10 million users. That app paid for dozens of other projects and was online for years. 4. When investors move in, the founder usually gets fired at the first sign of trouble. Many organizations simply can’t find their equilibrium after that and flounder. 5. Last but not least: don’t refactor every year, but if you can’t keep your servers up, you might just have too much technical debt. I’m sure everyone involved with Friendster wishes they could go back and do many things differently. But hindsight is always 20/20. They played their part in the advent of the Internet. Without early pioneers like Friendster we wouldn’t be where we are today. 
As Heinlein said, “yet another crew of Rip Van Winkles.” But Buck Rogers eventually did wake back up, and maybe Friendster will as well. Thank you for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
8/17/2019 • 9 minutes, 49 seconds
The Internet Tidal Wave
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is going to be just a little bit unique. Or not unique, as the case may be. Bill Gates sent a very important memo on May 26th, 1995. It’s so important because of how well it foreshadows what was about to happen with this weird thing called the Internet. So we’re going to simply provide the unaltered transcript and if you dig it, read a book or two of his. He is a surprisingly good writer. To: Executive Staff and direct reports From: Bill Gates Date: May 26, 1995 The Internet Tidal Wave Our vision for the last 20 years can be summarized in a succinct way. We saw that exponential improvements in computer capabilities would make great software quite valuable. Our response was to build an organization to deliver the best software products. In the next 20 years the improvement in computer power will be outpaced by the exponential improvements in communications networks. The combination of these elements will have a fundamental impact on work, learning and play. Great software products will be crucial to delivering the benefits of these advances. Both the variety and volume of the software will increase. Most users of communications have not yet seen the price of communications come down significantly. Cable and phone networks are still depreciating networks built with old technology. Universal service monopolies and other government involvement around the world have kept communications costs high. Private networks and the Internet which are built using state of the art equipment have been the primary beneficiaries of the improved communications technology. The PC is just now starting to create additional demand that will drive a new wave of investment. 
A combination of expanded access to the Internet, ISDN, new broadband networks justified by video based applications and interconnections between each of these will bring low cost communication to most businesses and homes within the next decade. The Internet is at the forefront of all of this and developments on the Internet over the next several years will set the course of our industry for a long time to come. Perhaps you have already seen memos from me or others here about the importance of the Internet. I have gone through several stages of increasing my views of its importance. Now I assign the Internet the highest level of importance. In this memo I want to make clear that our focus on the Internet is crucial to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of the graphical user interface (GUI). The PC analogy is apt for many reasons. The PC wasn't perfect. Aspects of the PC were arbitrary or even poor. However a phenomena grew up around the IBM PC that made it a key element of everything that would happen for the next 15 years. Companies that tried to fight the PC standard often had good reasons for doing so but they failed because the phenomena overcame any weaknesses that resisters identified. The Internet Today The Internet's unique position arises from a number of elements. TCP/IP protocols that define its transport level support distributed computing and scale incredibly well. The Internet Engineering Task Force (IETF) has defined an evolutionary path that will avoid running into future problems even as eventually everyone on the planet connects up. The HTTP protocols that define HTML Web browsing are extremely simple and have allowed servers to handle incredible traffic reasonably well. All of the predictions about hypertext - made decades ago by pioneers like Ted Nelson - are coming true on the Web. 
Although other protocols on the Internet will continue to be used (FTP, Gopher, IRC, Telnet, SMTP, NNTP), HTML with extensions will be the standard that defines how information will be presented. Various extensions to HTML, including content enhancements like tables, and functionality enhancements like secure transactions, will be widely adopted in the near future. There will also be enhanced 3D presentations providing for virtual reality type shopping and socialization. Another unique aspect of the Internet is that because it buys communications lines on a commodity bid basis and because it is growing so fast, it is the only "public" network whose economics reflect the latest advances in communications technology. The price paid by corporations to connect to the Internet is determined by the size of your "on-ramp" to the Internet and not by how much you actually use your connection. Usage isn't even metered. It doesn't matter if you connect nearby or half way around the globe. This makes the marginal cost of extra usage essentially zero, encouraging heavy usage. Most important is that the Internet has bootstrapped itself as a place to publish content. It has enough users that it is benefiting from the positive feedback loop of the more users it gets, the more content it gets, and the more content it gets, the more users it gets. I encourage everyone on the executive staff and their direct reports to use the Internet. I've attached an appendix, which Brian Flemming helped me pull together that shows some hot sites to try out. You can do this by either using the .HTM enclosure with any Internet browser or, if you have Word set up properly, you can navigate right from within this document. Of particular interest are the sites such as "YAHOO" which provide subject catalogs and searching. Also of interest are the ways our competitors are using their Websites to present their products. I think SUN, Netscape and Lotus do some things very well. 
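The positive feedback loop Gates describes here - more users attract more content, more content attracts more users - is the classic network-effect dynamic. A toy model makes the compounding visible; the growth coefficients below are purely illustrative assumptions, not figures from the memo:

```python
# Toy model of the users<->content feedback loop from the memo: each
# step, new content arrives in proportion to the audience, and new
# users arrive in proportion to the content. Coefficients are made up.

def feedback_loop(users=1000.0, content=100.0, steps=5,
                  content_per_user=0.01, users_per_item=0.5):
    for _ in range(steps):
        new_content = content_per_user * users   # users produce content
        new_users = users_per_item * content     # content attracts users
        content += new_content
        users += new_users
    return users, content

# Each quantity feeds the other's growth, so both compound over time.
u5, c5 = feedback_loop(steps=5)
u10, c10 = feedback_loop(steps=10)
print(u5, c5, u10, c10)
```

Because each variable drives the other's growth rate, the result is super-linear - exactly the bootstrap dynamic Gates says the Internet had already achieved.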
Amazingly it is easier to find information on the Web than it is to find information on the Microsoft Corporate Network. This inversion where a public network solves a problem better than a private network is quite stunning. This inversion points out an opportunity for us in the corporate market. An important goal for the Office and Systems products is to focus on how our customers can create and publish information on their LANs. All work we do here can be leveraged into the HTTP/Web world. The strength of the Office and Windows businesses today gives us a chance to superset the Web. One critical issue is runtime/browser size and performance. Only when our Office - Windows solution has comparable performance to the Web will our extensions be worthwhile. I view this as the most important element of Office 96 and the next major release of Windows. One technical challenge facing the Internet is how to handle "real-time" content - specifically audio and video. The underlying technology of the Internet is a packet network which does not guarantee that data will move from one point to another at a guaranteed rate. The congestion on the network determines how quickly packets are sent. Audio can be delivered on the Internet today using several approaches. The classic approach is to simply transmit the audio file in its entirety before it is played. A second approach is to send enough of it to be fairly sure that you can keep playing without having to pause. This is the approach Progressive Networks Real Audio (Rob Glaser's new company) uses. Three companies (Internet Voice Chat, Vocaltec, and Netphone) allow phone conversations across the Internet but the quality is worse than a normal phone call. For video, a protocol called CU-SeeMe from Cornell allows for video conferencing. It simply delivers as many frames per second as it sees the current network congestion can handle, so even at low resolution it is quite jerky. 
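The buffered-playback approach Gates attributes to RealAudio - send enough ahead of the play position that playback never has to pause - reduces to a simple calculation. This is a minimal sketch of that idea, not RealAudio's actual algorithm; all rates and durations are illustrative:

```python
# How much audio must be buffered before playback can safely start?
# If the network delivers audio faster than it plays, almost nothing;
# if slower, the shortfall for the whole clip must be pre-buffered.

def start_threshold(total_secs, delivery_rate, playback_rate=1.0):
    """Seconds of audio to buffer before starting playback.

    delivery_rate: seconds of audio received per second of wall time.
    playback_rate: seconds of audio consumed per second (normally 1.0).
    """
    if delivery_rate >= playback_rate:
        return 0.0  # network keeps up; play immediately
    # Deficit accumulates at (playback - delivery) for the clip's duration.
    return total_secs * (playback_rate - delivery_rate) / playback_rate

# A 60-second clip arriving at half real-time needs 30s buffered up front.
print(start_threshold(60, 0.5))   # -> 30.0
# Arriving faster than real time, playback can begin at once.
print(start_threshold(60, 1.5))   # -> 0.0
```

This is why, on the slow modem links of 1995, listeners waited through a long "buffering" phase before a clip started: the slower the delivery rate relative to real time, the larger the up-front buffer had to be.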
All of these "hacks" to provide video and audio will improve because the Internet will get faster and also because the software will improve. At some point in the next three years, protocol enhancements taking advantage of the ATM backbone being used for most of the Internet will provide "quality of service guarantees". This is a guarantee by every switch between you and your destination that enough bandwidth had been reserved to make sure you get your data as fast as you need it. Extensions to IP have already been proposed. This might be an opportunity for us to take the lead working with UUNET and others. Only with this improvement and an incredible amount of additional bandwidth and local connections will the Internet infrastructure deliver all of the promises of the full blown Information Highway. However, it is in the process of happening and all we can do is get involved and take advantage. I think that virtually every PC will be used to connect to the Internet and that the Internet will help keep PC purchasing very healthy for many years to come. PCs will connect to the Internet a variety of ways. A normal phone call using a 14.4k or 28.8k baud modem will be the most popular in the near future. An ISDN connection at 128kb will be very attractive as the connection costs from the RBOCs and the modem costs come down. I expect an explosion in ISDN usage for both Internet connection and point-to-point connections. Point-to-point allows for low latency which is very helpful for interactive games. ISDN point-to-point allows for simultaneous voice data which is a very attractive feature for sharing information. Example scenarios include planning a trip, discussing a contract, discussing a financial transaction like a bill or a purchase or taxes or getting support questions about your PC answered. 
Eventually you will be able to find the name of someone or a service you want to connect to on the Internet and rerouting your call to temporarily be a point-to-point connection will happen automatically. For example when you are browsing travel possibilities if you want to talk to someone with expertise on the area you are considering, you simply click on a button and the request will be sent to a server that keeps a list of available agents who can be working anywhere they like as long as they have a PC with ISDN. You will be reconnected and the agent will get all of the context of what you are looking at and your previous history of travel if the agency has a database. The reconnection approach will not be necessary once the network has quality of service guarantees. Another way to connect a PC will be to use a cable-modem that uses the coaxial cable normally used for analog TV transmission. Early cable systems will essentially turn the coax into an Ethernet so that everyone in the same neighborhood will share a LAN. The most difficult problem for cable systems is sending data from the PC back up the cable system (the "back channel"). Some cable companies will promote an approach where the cable is used to send data to the PC (the "forward channel") and a phone connection is used for the back channel. The data rate of the forward channel on a cable system should be better than ISDN. Eventually the cable operators will have to do a full upgrade to an ATM-based system using either all fiber or a combination of fiber and Coax - however, when the cable or phone companies will make this huge investment is completely unclear at this point. If these buildouts happen soon, then there will be a loose relationship between the Internet and these broadband systems. If they don't happen for some time, then these broadband systems could be an extension of the Internet with very few new standards to be set. I think the second scenario is very likely. 
Three of the biggest developments in the last five years have been the growth in CD titles, the growth in On-line usage, and the growth in the Internet. Each of these had to establish critical mass on their own. Now we see that these three are strongly related to each other and as they come together they will accelerate in popularity. The On-line services business and the Internet have merged. What I mean by this is that every On-line service has to simply be a place on the Internet with extra value added. MSN is not competing with the Internet although we will have to explain to content publishers and users why they should use MSN instead of just setting up their own Web server. We don't have a clear enough answer to this question today. For users who connect to the Internet some way other than paying us for the connection we will have to make MSN very, very inexpensive - perhaps free. The amount of free information available today on the Internet is quite amazing. Although there is room to use brand names and quality to differentiate from free content, this will not be easy and it puts a lot of pressure to figure out how to get advertiser funding. Even the CD-ROM business will be dramatically affected by the Internet. Encyclopedia Britannica is offering their content on a subscription basis. Cinemania type information for all the latest movies is available for free on the Web including theater information and Quicktime movie trailers. Competition Our traditional competitors are just getting involved with the Internet. Novell is surprisingly absent given the importance of networking to their position, however Frankenberg recognizes its importance and is driving them in that direction. Novell has recognized that a key missing element of the Internet is a good directory service. They are working with AT&T and other phone companies to use the Netware Directory Service to fill this role. This represents a major threat to us. 
Lotus is already shipping the Internotes Web Publisher which replicates Notes databases into HTML. Notes V4 includes secure Internet browsing in its server and client. IBM includes Internet connection through its network in OS/2 and promotes that as a key feature. Some competitors have a much deeper involvement in the Internet than Microsoft. All UNIX vendors are benefiting from the Internet since the default server is still a UNIX box and not Windows NT, particularly for high end demands. SUN has exploited this quite effectively. Many Web sites, including Paul Allen's ESPNET, put a SUN logo and link at the bottom of their home page in return for low cost hardware. Several universities have "Sunsites" named because they use donated SUN hardware. SUN's Java project involves turning an Internet client into a programmable framework. SUN is very involved in evolving the Internet to stay away from Microsoft. On the SUN Homepage you can find an interview of Scott McNealy by John Gage where Scott explains that if customers decide to give one product a high market share (Windows) that is not capitalism. SUN is promoting Sun Screen and HotJava with aggressive business ads promising that they will help companies make money. SGI has also been advertising their leadership on the Internet including servers and authoring tools. Their ads are very business focused. They are backing the 3D image standard, VRML, which will allow the Internet to support virtual reality type shopping, gaming, and socializing. Browsing the Web, you find almost no Microsoft file formats. After 10 hours of browsing, I had not seen a single Word .DOC, AVI file, Windows .EXE (other than content viewers), or other Microsoft file format. I did see a great number of Quicktime files. All of the movie studios use them to offer film trailers. Apple benefited by having TCP support before we did and is working hard to build a browser built from OpenDoc components. 
Apple will push for OpenDoc protocols to be used on the Internet, and is already offering good server configurations. Apple's strength in education gives them a much stronger presence on the Internet than their general market share would suggest. Another popular file format on the Internet is PDF, the short name for Adobe Acrobat files. Even the IRS offers tax forms in PDF format. The limitations of HTML make it impossible to create forms or other documents with rich layout and PDF has become the standard alternative. For now, Acrobat files are really only useful if you print them out, but Adobe is investing heavily in this technology and we may see this change soon. Acrobat and Quicktime are popular on the network because they are cross platform and the readers are free. Once a format gets established it is extremely difficult for another format to come along and even become equally popular. A new competitor "born" on the Internet is Netscape. Their browser is dominant, with 70% usage share, allowing them to determine which network extensions will catch on. They are pursuing a multi-platform strategy where they move the key API into the client to commoditize the underlying operating system. They have attracted a number of public network operators to use their platform to offer information and directory services. We have to match and beat their offerings including working with MCI, newspapers, and others who are considering their products. One scary possibility being discussed by Internet fans is whether they should get together and create something far less expensive than a PC which is powerful enough for Web browsing. This new platform would optimize for the datatypes on the Web. Gordon Bell and others approached Intel on this and decided Intel didn't care about a low cost device so they started suggesting that General Magic or another operating system with a non-Intel chip is the best solution. 
Next Steps In highlighting the importance of the Internet to our future I don't want to suggest that I am alone in seeing this. There is excellent work going on in many product groups. Over the last year, a number of people have championed embracing TCP/IP, hyperlinking, HTML, and building client, tools and servers that compete on the Internet. However, we still have a lot to do. I want every product plan to try and go overboard on Internet features. One element that will be crucial is coordinating our various activities. The challenge/opportunity of the Internet is a key reason behind the recent organization. Paul Maritz will lead the Platform group to define an integrated strategy that makes it clear that Windows machines are the best choice for the Internet. This will protect and grow our Windows asset. Nathan and Pete will lead the Applications and Content group to figure out how to make money providing applications and content for the Internet. This will protect our Office asset and grow our Office, Consumer, and MSN businesses. The work that was done in the Advanced Technology group will be extremely important as it is integrated in with our products. We must also invest in the Microsoft home page, so it will be clear how to find out about our various products. Today it's quite random what is on the home page and the quality of information is very low. If you look up speeches by me all you find are a few speeches over a year old. I believe the Internet will become our most important promotional vehicle and paying people to include links to our home pages will be a worthwhile way to spend advertising dollars. First we need to make sure that great information is available. One example is the demonstration files (Screencam format) that Lotus includes on all of their products organized by feature. I think a measurable part of our ad budget should focus on the Internet. 
Any information we create - white papers, data sheets, etc., should all be done on our Internet server. ITG needs to take a hard look at whether we should drop our leasing arrangements for data lines to some countries and simply rely on the Internet. The actions required for the Windows platform are quite broad. Paul Maritz is having an Internet retreat in June which will focus on coordinating these activities. Some critical steps are the following: 1. Server. BSD is working on offering the best Internet server as an integrated package. We need to understand how to make NT boxes the highest performance HTTP servers. Perhaps we should have a project with Compaq or someone else to focus on this. Our initial server will have good performance because it uses kernel level code to blast out a file. We need a clear story on whether a high volume Web site can use NT or not because SUN is viewed as the primary choice. Our plans for security need to be strengthened. Other Backoffice pieces like SMS and SQL server also need to stay out in front in working with the Internet. We need to figure out how OFS can help perhaps by allowing pages to be stored as objects and having properties added. Perhaps OFS can help with the challenge of maintaining Web structures. We need to establish distributed OLE as the protocol for Internet programming. Our server offerings need to beat what Netscape is doing including billing and security support. There will be substantial demand for high performance transaction servers. We need to make the media server work across the Internet as soon as we can as new protocols are established. A major opportunity/challenge is directory. If the features required for Internet directory are not in Cairo or easily addable without a major release we will miss the window to become the world standard in directory with serious consequences. Lotus, Novell, and AT&T will be working together to try and establish the Internet directory. 
Actually getting the content for our directory and popularizing it could be done in the MSN group. 2. Client. First we need to offer a decent client (O'Hare) that exploits Windows 95 shortcuts. However this alone won't get people to switch away from Netscape. We need to figure out how to integrate Blackbird, and help browsing into our Internet client. We have made the decision to provide Blackbird capabilities openly rather than tie them to MSN. However, the process of getting the size, speed, and integration good enough for the market needs work and coordination. We need to figure out additional features that will allow us to get ahead with Windows customers. We need to move all of our Internet value added from the Plus pack into Windows 95 itself as soon as we possibly can with a major goal to get OEMs shipping our browser preinstalled. This follows directly from the plan to integrate the MSN and Internet clients. Another place for integration is to eliminate today's Help and replace it with the format our browser accepts including exploiting our unique extensions so there is another reason to use our browser. We need to determine how many browsers we promote. Today we have O'Hare, Blackbird, SPAM MediaView, Word, PowerPoint, Symettry, Help and many others. Without unification we will lose to Netscape/HotJava. Over time the shell and the browser will converge and support hierarchical/list/query viewing as well as document with links viewing. The former is the structured approach and the latter allows for richer presentation. We need to establish OLE protocols as the way rich documents are shared on the Internet. I am sure the OpenDoc consortium will try and block this. 3. File sharing/Window sharing/Multi-user. We need to give away client code that encourages Windows specific protocols to be used across the Internet. It should be very easy to set up a server for file sharing across the Internet. 
Our PictureTel screen sharing client allowing Window sharing should work easily across the Internet. We should also consider whether to do something with the Citrix code that allows you to become a Windows NT user across the Network. It is different from the PictureTel approach because it isn't peer to peer. Instead it allows you to be a remote user on a shared NT system. By giving away the client code to support all of these scenarios, we can start to show that a Windows machine on the Internet is more valuable than an arbitrary machine on the net. We have immense leverage because our Client and Server API story is very strong. Using VB or VC to write Internet applications which have their UI remoted is a very powerful advantage for NT servers. 4. Forms/Languages. We need to make it very easy to design a form that presents itself as an HTML page. Today the Common Gateway Interface (CGI) is used on Web servers to give forms 'behavior' but it's quite difficult to work with. BSD is defining a somewhat better approach they call BGI. However we need to integrate all of this with our Forms3 strategy and our languages. If we make it easy to associate controls with fields then we get leverage out of all of the work we are doing on data binding controls. Efforts like Frontier software's work and SUN's Java are a major challenge to us. We need to figure out when it makes sense to download control code to the client including a security approach to avoid this being a virus hole. 5. Search engines. This is related to the client/server strategies. Verity has done good work with Notes, Netscape, AT&T and many others to get them to adopt their scalable technology that can deal with large text databases with very large numbers of queries against them. We need to come up with a strategy to bring together Office, Mediaview, Help, Cairo, and MSN. Access and Fox do not support text indexing as part of their queries today which is a major hole. 
Only when we have an integrated strategy will we be able to determine if our in-house efforts are adequate or to what degree we need to work with outside companies like Verity. 6. Formats. We need to make sure we output information from all of our products in both vanilla HTML form and in the extended forms that we promote. For example, any database reports should be navigable as hypertext documents. We need to decide how we are going to compete with Acrobat and Quicktime since right now we aren't challenging them. It may be worth investing in optimizing our file formats for these scenarios. What is our competitor to Acrobat? It was supposed to be a coordination of extended metafiles and Word but these plans are inadequate. The format issue spans the Platform and Applications groups. 7. Tools. Our disparate tools efforts need to be brought together. Everything needs to focus on a single integrated development environment that is extensible in an object oriented fashion. Tools should be architected as extensions to this framework. This means one common approach to repository/projects/source control. It means one approach to forms design. The environment has to support sophisticated viewing options like timelines and the advanced features SoftImage requires. Our work has been separated by independent focus on on-line versus CD-ROM and structured display versus animated displays. There are difficult technical issues to resolve. If we start by looking at the runtime piece (browser) I think this will guide us towards the right solution with the tools. The actions required for the Applications and Content group are also quite broad. Some critical steps are the following: 1. Office. Allowing for collaboration across the Internet and allowing people to publish in our file formats for both Mac and Windows with free readers is very important. This won't happen without specific evangelization. DAD has written some good documents about Internet features. 
Word could lose out to focused Internet tools if it doesn't become faster and more WYSIWYG for HTML. There is a critical strategy issue of whether Word as a container is a strict superset of our DataDoc containers allowing our Forms strategy to embrace Word fully. 2. MSN. The merger of the On-line business and Internet business creates a major challenge for MSN. It can't just be the place to find Microsoft information on the Internet. It has to have scale and reputation that it is the best way to take advantage of the Internet because of the value added. A lot of the content we have been attracting to MSN will be available in equal or better form on the Internet so we need to consider focusing on areas where we can provide something that will go beyond what the Internet will offer over the next few years. Our plan to promote Blackbird broadly takes away one element that would have been unique to MSN. We need to strengthen the relationship between MSN and Exchange/Cairo for mail, security and directory. We need to determine a set of services that MSN leads in - money transfer, directory, and search engines. Our high-end server offerings may require a specific relationship with MSN. 3. Consumer. Consumer has done a lot of thinking about the use of on-line for its various titles. On-line is great for annuity revenue and eliminating the problems of limited shelf-space. However, it also lowers the barriers to entry and allows for an immense amount of free information. Unfortunately today an MSN user has to download a huge browser for every CD title making it more of a demo capability than something a lot of people will adopt. The Internet will assure a large audience for a broad range of titles. However the challenge of becoming a leader in any subject area in terms of quality, depth, and price will be far more brutal than today's CD market. For each category we are in we will have to decide if we can be #1 or #2 in that category or get out. 
A number of competitors will have natural advantages because of their non-electronic activities. 4. Broadband media applications. With the significant time before widescale iTV deployment we need to look hard at which applications can be delivered in an ISDN/Internet environment or in a Satellite PC environment. We need a strategy for big areas like directory, news, and shopping. We need to decide how to pursue local information. The Cityscape project has a lot of promise but only with the right partners. 5. Electronic commerce. Key elements of electronic commerce including security and billing need to be integrated into our platform strategy. On-line allows us to take a new approach that should allow us to compete with Intuit and others. We need to think creatively about how to use the Internet/on-line world to enhance Money. Perhaps our Automatic teller machine project should be revived. Perhaps it makes sense to do a tax business that only operates on-line. Perhaps we can establish the lowest cost way for people to do electronic bill paying. Perhaps we can team up with Quickbook competitors to provide integrated on-line offerings. Intuit has made a lot of progress in overseas markets during the last six months. All the financial institutions will find it very easy to buy the best Internet technology tools from us and others and get into this world without much technical expertise. The Future We enter this new era with some considerable strengths. Among them are our people and the broad acceptance of Windows and Office. I believe the work that has been done in Consumer, Cairo, Advanced Technology, MSN, and Research position us very well to lead. Our opportunity to take advantage of these investments is coming faster than I would have predicted. The electronic world requires all of the directory, security, linguistic and other technologies we have worked on. It requires us to do even more in these areas than we are planning to. 
There will be a lot of uncertainty as we first embrace the Internet and then extend it. Since the Internet is changing so rapidly, we will have to revise our strategies from time to time and have better inter-group communication than ever before. Our products will not be the only things changing. The way we distribute information and software, as well as the way we communicate with and support customers, will be changing. We have an opportunity to do a lot more with our resources. Information will be disseminated efficiently between us and our customers with less chance that the press miscommunicates our plans. Customers will come to our "home page" in unbelievable numbers and find out everything we want them to know. The next few years are going to be very exciting as we tackle these challenges and opportunities. The Internet is a tidal wave. It changes the rules. It is an incredible opportunity as well as an incredible challenge. I am looking forward to your input on how we can improve our strategy to continue our track record of incredible success. HyperLink Appendix Related reading, double click to open them On-line! (Microsoft LAN only, Internet Assistant is not required for this part): * "Gordon Bell on the Internet" email by Gordon Bell * "Affordable Computing: advertising subsidized hardware" by Nicholas Negroponte * "Brief Lecture Notes on VRML & Hot Java" email by William Barr * "Notes from a Lecture by Marc Andreessen (Netscape)" email by William Barr * "Application Strategies for the World Wide Web" by Peter Pathe (Contains many more links!) Below is a hotlist of Internet Web sites you might find interesting. I've included it as an embedded .HTM file which should be readable by most Web Browsers. Double click it if you're using a Web Browser like O'Hare or Netscape. HotList.htm A second copy of these links is below as Word HTML links. To use these links, you must be running the Word Internet Assistant, and be connected to the Web. Cool, Cool, Cool.. 
The Lycos Home Page Yahoo RealAudio Homepage HotWired - New Thinking for a New Medium Competitors Microsoft Corporation World-Wide-Web Server Welcome To Oracle Lotus on the Web Novell Inc. World Wide Web Home Page Symantec Corporation Home Page Borland Online Disney/Buena Vista Paramount Pictures Adobe Systems Incorporated Home Page MCI Sony Online Sports ESPNET SportsZone The Gate Cybersports Page The Sports Server Las Vegas Sports Page News CRAYON Mercury Center Home Page Travel/Entertainment ADDICTED TO NOISE CDnow The Internet Music Store Travel & Entertainment Network home page Virtual Tourist World Map C|Net Auto Dealernet Popular Mechanics
8/15/2019 • 40 minutes, 26 seconds
Broadcom and Avago
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is on the history of chip-maker Broadcom. This is actually two stories. The first starts with a movement called fabless semiconductors. LSI had been part of Control Data Corporation and spun off to make chips. Kickstarted by LSI in the late sixties and early seventies, fabless companies started popping up. These would have what are known as foundries make their chips. The foundries didn’t compete with the organizations they were making chips for. This allowed the chip designers to patent, design, and sell chips without having to wield large manufacturing operations. Such was the state of the semiconductor industry when Henry Nicholas met Dr. Henry Samueli while working at TRW in the 1980s. Samueli had picked up an interest in electronics early on, while building an AM/FM radio in school. By the 80s he was a professor at UCLA and teamed up with Nicholas, who was a student as well, to form Broadcom in 1991. They began designing integrated circuits (also referred to as microchips). These are electronic circuits on a small flat piece (or "chip") of semiconductor material, usually silicon. Jack Kilby and Robert Noyce had been pioneers in the field in the late 50s and early 60s, and by the 80s there were lots and lots of little transistors in there, and people like our two Henrys were fascinated with how to shove as many transistors into as small a chip as possible. So the two decided to leave academia and go for it. They founded Broadcom Corporation, Henry Nicholas’ wife made them a logo, and they started selling their chips. They made chips for power management, memory controllers, control units, and early mobile devices. But most importantly, they made chips for wi-fi. Today, their chips are in nearly every Apple device sold. 
They also make chips for use in network switches, are responsible for the chips in the Raspberry Pi, and more. Samueli holds over 70 patents on his own, although in all Broadcom has over 20,000, many in mobile, internet of things, and data center! By 1998 sales were good and Broadcom went public. In 2000, UCLA renamed the school of engineering to the Henry Samueli School of Engineering. Nicholas retired from Broadcom in 2003; Samueli bought the Anaheim Ducks in 2005. They continued to grow, make chips, and by 2009 they hit the Fortune 500 list. They were purchased by Avago Technologies in 2016. Samueli became the Chief Technology Officer of the new combined company. Wait, who’s Avago?!?! Avago started in 1961 as the semiconductor division of Hewlett-Packard. In the 60s they were pioneers in using LEDs in displays. They moved into fiber in the 70s and semiconductors by the 90s, giving the world the optical mouse and cable modems along the way. They spun out of HP in 99 as part of Agilent and then were acquired from there to become Avago in 2005, naming Hock Tan as CEO. The numbers were staggering. Not only did they ship over a billion optical mouse chips, but they also pushed the boundaries of radio frequency chips, enabling industries like ATMs and cash registers, and gave us IR on computers as a common pre-Bluetooth way of wirelessly connecting peripherals. They were also key in innovations giving us wi-fi + Bluetooth + FM combo chips for phones, pushing past 100Gbps transfer speeds for optical, and doing innovative work with touch screens. Their 20,000 patents combined with the Broadcom patents give them over 40,000 patents in just those companies. They went public in 2009 and got pretty good at increasing revenue and margins concurrently. By 2016 they went out and purchased Broadcom for $37 billion. They helped Broadcom diversify the business and kept the name. They bought Brocade for $5.9B in 2017 and CA for $18.9 billion in 2018. 
Buying Symantec in 2019 bumps the revenue of Broadcom up by $2.5 billion to $24.6 billion and EBITDA margins from 33 percent to 56 percent. The aggressive acquisitions caught the eyes of Donald Trump, who shut down a $117 billion attempted takeover of Qualcomm, a rival of both the old Broadcom and the new Broadcom. Broadcom makes the Trident+ chips, the network interface controllers used in Dell PowerEdge blade servers, the systems on a chip used in the Raspberry Pi, the wi-fi chipsets used in the Nexus, the wi-fi + Bluetooth chips used in every iPhone since the iPhone 3GS, the Jericho chip, and the Tomahawk chip. They employ some of the best chip designers of the day, including Sophie Wilson, who designed the instruction set for the ARM, an early RISC processor, in the 80s when she was at Acorn. Ultimately, cash is cheap these days. Broadcom CEO Hock Tan has proven he can raise and deploy capital quickly, mostly building on past successes in go-to-market infrastructure. But, if you remember from our previous episode on the history of Symantec, that’s exactly what Symantec had been doing when they became a portfolio company! But here’s the thing. If you acquire companies and your EBITDA drops, you’re stuck. You have to increase revenues and grow EBITDA. If you can do that in mergers and acquisitions, investors are likely to allow you to build as big a company as you want! With or without a unified strategy. But the recent woes of GE should be a warning. As you grow, you have to retool your approach. Otherwise, the layers upon layers of management begin to eat away at those profits. But dig too far into that and quality suffers, as Symantec learned with their merger and then demerger with Veritas. Think about this. CA is strong in Identity and Access Management, with 1,500 patents. Symantec is strong in endpoint, web, and DLP security, with 3,600 patents. Brocade has over 900 in switching and fiber in the data center. 
The full device trust and reporting could, if done properly, go from the user to the agent on a device to the data center and then down to the chip in a full zero trust model. Or Broadcom could just be a holding company, sitting on around 50,000 patents and eking out profit where they can. Only time will tell. But the lesson to learn from the history of both of these companies is that if you’re innovating, increasing revenues, and growing EBITDA, you too can have tens of billions of dollars, because you’ve proven to be a great investment.
8/13/2019 • 8 minutes, 47 seconds
The History of Symantec
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is on the history of Symantec. This is really more part one of a two-part series. Broadcom announced they were acquiring Symantec in August of 2019, the day before we recorded this episode. Who is this Symantec and what do they do - and why does Broadcom want to buy them for 10.7 billion dollars? For starters, by themselves Symantec is a Fortune 500 company with over $4 billion in annual revenues, so $10.7 billion is a steal for an enterprise software company. Except they’re just selling the enterprise software division and keeping Norton in the family. With just shy of 12,000 employees, Symantec has twisted and turned and bought and sold companies for a long time. But how did they become a Fortune 500 company? It all started with Eisenhower. ARPA, or the Advanced Research Projects Agency, would later add the word Defense to their name, become DARPA, and build a series of tubes called the interweb. While originally commissioned so Ike could counter Sputnik, ARPA continued working to fund projects in computers, and in the 1970s this kid out of the University of Texas named Gary Hendrix saw that they were funding natural language understanding projects. This went back to Turing, and DARPA wanted to give AI-complete problems a leap forward, trying to make computers as intelligent as people. This was obviously before Terminator told us that was a bad idea (pro-tip, it’s a good idea). Our intrepid hero Gary saw that sweet, sweet grant money and got his PhD from the UT Austin Computational Linguistics Lab. He wrote some papers on robotics. Enter the Stanford Research Institute, or SRI for short. Yes, that’s the same SRI that invented the hosts.txt file and is responsible for keeping DNS for the first decade or so of the internet. 
So our pal Hendrix joins SRI and chases that grant money, leaving SRI in 1980 with about 15 other Stanford researchers to start a company they called Machine Intelligence Corporation. That went bust, and so he started Symantec Corporation in 1982 and got a grant from the National Science Foundation to build natural language processing software; it turns out syntax and semantics make for a pretty good mashup. So the new company Symantec built out a database and some advanced natural language code, but by 1984 the PC revolution was on and that code had been built for a DEC PDP, so it could not be run on the emerging PCs in the industry. Symantec was then acquired by C&E Software, short for the names of its founders, Dennis Coleman and Gordon Eubanks. The Symantec name stayed and Eubanks became the chairman of the board for the new company. C&E had been working on PC software called Q&A, which the new team finished, adding natural language processing to make the tools easier to use. They called that “The Intelligent Assistant” and they now had a tool that would take them through the 80s. People swapped roles, and due to a sharp focus on sales they did well. During the early days of the PC, dealers - or small computer stores that were popping up all over the country - were critical to selling hardware and software. Every Symantec employee would go on the road for six days a week, visiting 6 dealers a day. It was grueling but kept them growing and building. They became what we now call a “portfolio” company in 1985 when they introduced NoteIt, a natural language processing tool used to annotate docs in Lotus 1-2-3. Lotus was in the midst of eating the lunch of previous tools. They added another division and made SQZ, a Lotus 1-2-3 spreadsheet tool. This is important: they were a 3-product company with divisions when, in 1987, they got even more aggressive and purchased Breakthrough Software, who made an early project management tool called TimeLine. 
And this is when they did something unique for a PC software company: they split each product into groups that leveraged a shared pool of resources. Each product had a GM that was responsible for the P&L. The GM ran the development, Quality Assurance, Tech Support, and Product Marketing - those teams reported directly to the GM, who reported to then-CEO Eubanks. But there was a shared sales, finance, and operations team. This laid the framework for massive growth, increased sales, and took Symantec to their IPO in 1989. Symantec purchased what was at the time the most popular CRM app, ACT!, in 1993. Meanwhile, Peter Norton had a great suite of tools for working with DOS. Things that, well, maybe should have been built into operating systems (and mostly now are). Norton could compress files, do file recovery, etc. The cash Symantec raised allowed them to acquire The Peter Norton Company in 1990, which would completely change the face of the company. This gave them development tools for PC and Mac, as Norton had been building those. This led to the introduction of Symantec Antivirus for the Macintosh, and they called the antivirus for PC Norton Antivirus because people already trusted that name. Within two years, with the added sales and marketing air cover that the Symantec sales machine provided, the Norton group was responsible for 82% of Symantec’s total revenues. So much so that Symantec dropped building Q&A because Microsoft was winning in their market. I remember this moment pretty poignantly. Sure, there were other apps for the Mac like Virex, and other apps for Windows, like McAfee. But the Norton tools were the gold standard. At least until they later got bloated. The next decade was fast, from the outside looking in, except when Symantec acquired Veritas in 2004. This made sense, as Symantec had become a solid player in the security space and, before the cloud, backup seemed somewhat related. 
I’d used Backup Exec for a long time and watched Veritas products go from awesome to, well, not as awesome. John Thompson was the CEO through that decade and Symantec grew rapidly - purchasing systems management solution Altiris in 2007 and getting a Data Loss Prevention solution that year in Vontu. Application Performance Management, or APM, wasn’t very security focused, so that business unit was picked up by Vector Capital in 2008. They also picked up MessageLabs and AppStream in 2008. Enrique Salem replaced Thompson, and Symantec bought VeriSign’s CA business in 2010. If you remember from our encryption episode, that was already spun off of RSA. Certificates are security-focused. Email encryption tool PGP and GuardianEdge were also picked up in 2010, providing key management tools for all those, um, keys the CA was issuing. These tools were never integrated properly though. They also picked up Rulespace in 2010 to get what’s now their content filtering solution. Symantec acquired LiveOffice in 2012 to get enterprise vault and instant messaging security - continuing to solidify the line of security products. They also acquired Odyssey Software for SCCM plugins to get better at managing embedded, mobile, and rugged devices. Then came Nukona to get a MAM product, also in 2012. During this time, Steve Bennett was hired as CEO and fired in 2014. Then came Michael Brown, although in the interim Veritas was demerged in 2014 and, as their products started getting better, they were sold to The Carlyle Group in 2016 for $8B. Then Greg Clark became CEO in 2016, when Symantec purchased Blue Coat. Greg Clark then orchestrated the LifeLock acquisition for $2.3B of that $8B. Thoma Bravo’s DigiCert then bought the CA business in 2017. Then in 2019 Rick Hill became CEO. Does this seem like a lot of buying and selling? It is. But it also isn’t. If you look at what Symantec has done, they have a lot of things they can sell customers for various needs in the information security space. 
At times, they’ve felt like a holding company. But ever since the Norton acquisition, they’ve had very specific moves that continue to solidify them as one of the top security vendors in the space. Their sales teams don’t spend six days a week on the road and go to six customers a day, but they have a sales machine. And they’ve managed to leverage that to get inside what we call the buying tornado of many emergent technologies and then sell the company before the tornado ends. They still have Norton, of course. Even though practically every other product in the portfolio has come and gone over the years. What does all of this mean? The Broadcom acquisition of the enterprise security division maybe tells us that Symantec is about to leverage that $10+ billion to buy more software companies. And sell more companies after a little integration and incubation, then getting out before the ocean gets too red, the tech too stale, or before Microsoft sherlocks them. Because that’s what they do. And they do it profitably every single time. We often think of how an acquiring company gets a new product - but next time you see a company buying another one, think about this: that company probably had multiple offers. What did the team at the company being acquired get out of this deal? And we’ll work on that in the next episode, when we explore the history of Broadcom. Thank you for sticking with us through this episode of the History of Computing Podcast, and have a great day!
8/11/2019 • 12 minutes, 9 seconds
The IBM System/360
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is about the IBM System/360. The System/360 was a family of mainframes. IBM has done a great job over the decades following innovations rather than leading them, but there might not be another single innovation as influential on computing as the System/360. It’s certainly hard to think of one. IBM had been building mainframes with the 700 and 7000 series of systems since 1952, so they weren’t new to the concept in 1964 when the S360 was announced (also the year Disney released Mary Poppins). But they wanted to do something different. They were swimming in a red ocean of vendors who had been leading the technology, and while they had a 70 percent market share, they were looking to cement a long-term leadership position in the emerging IT industry. So IBM decided to take a huge leap forward and brought the entire industry with them. This was a risky endeavor. Thomas Watson Jr., son of the great IBM business executive Thomas Watson Sr., bet the proverbial farm on this. And won that bet. In all, IBM spent 5 billion dollars in mid-1960s money, which would be $41B today with a cumulative 726.3% rate of inflation. To put things in context around the impact of the mainframe business, IBM revenues were at $3.23B in 1964 and more than doubled to $7.19B by 1970, when the next edition, the 370, was released. To further that context, the Manhattan Project, which resulted in the first atomic bomb, cost $2B. IBM did not have a project this large before the introduction of the S360 and has not had one in the more than 50 years since then. For further context, the total value of all computers deployed at the start of the project was only $10B. These were huge machines. They often occupied a dedicated room. The front panel had 12 switches, just to control the electricity that flowed through them. 
They had over 250 lights. It was called “System” 360 because it was a system, meaning you could hook disk drives, printers, and other peripherals up to them. It had sixteen 32-bit registers and four 64-bit floating point registers for the crazy math stuffs. The results were fast, with over 1,000 orders in the first month and another 1,000 by year’s end. IBM sales skyrocketed and computers suddenly started showing up in businesses large and small. The total inventory of computers in the world jumped to a $24B value in just 5 years. A great example of the impact they had can be found in the computer the show Mad Men featured, where the firm got an S360 and it served as a metaphor for how the times were about to change - the computer was analytical, where Don worked through inspiration. Just think, an interactive graphics display that let business nerds do what only computer nerds could do before. This was the real start to “data driven” decision making. By 1970 IBM had deployed 35k mainframes throughout the US. They spawned enough huge competitors that the big mainframe players were referred to as Snow White and the Seven Dwarfs, and later just “The BUNCH,” which consisted of Burroughs, NCR, Control Data, Honeywell, and the Univac division of Sperry Rand. If you remember the earlier episode on Grace Hopper, she spent some time there. Thomas Watson Jr. retired the following year, in 1971, after suffering a heart attack, leaving behind one of the greatest legacies of anyone in business. He would serve as an ambassador to Russia from 79 to 81, and remain an avid pilot in retirement. He passed away in 1993. A lot of things sell well. But sales and revenue aren’t the definition that shapes a legacy. The S360 created so many standards and pushed technology forward so much that the business legacy is almost a derivative of the technical legacy. What standards did the S360 set? Well, the bus was huge. Standardizing I/O would allow vendors to build expansions, and the bus would ultimately become the standard. 
The 8-bit byte is still used today and bucked the trend of variable-sized, arbitrary bit addressing. To speed up larger and larger transactions, the S360 also gave us byte-addressable memory with 24-bit addressing and 32-bit words. The memory was small and fast, with control code stored there permanently, known as microcode memory. This meant you didn’t have to hand wire each memory module into the processor. The control store also led to emulators, as you could emulate a previous IBM model, the 1401, in the control store. IBM spent $13M on the patent for the tech that came out of MIT to get access to the best memory on the market. The S360 made permanent storage a mainstay. IBM had been using tape storage since 1952. 14-inch disk drives were smaller than the 24-inch disk drives used in previous models, had 100x the storage capacity, and accessed data 10 times faster. The S360 also brought with it new programming paradigms. We got hexadecimal floating point architecture. This would be important to New Drug Applications to the FDA, weather prediction, geophysics, and graphics databases. We also got Extended Binary Coded Decimal Interchange Code, or EBCDIC for short, an 8-bit character encoding. This came from moving punch card data to persistent storage on the computers; the encoding was derived from the zone and number punches on those cards. EBCDIC was not embraced by the rest of the computer hacker culture. One example was: "So the American government went to IBM to come up with an encryption standard, and they came up with… EBCDIC!" ASCII has mostly been accepted as the standard for encoding characters (before and after EBCDIC). Solid Logic Technology (or SLT) also came with the S360. These flip chip-mounted packages contained transistors, diodes, and resistors in a ceramic substrate that had sockets on one edge and could be plugged into the backplane of a computer. 
Think of these as a precursor to the microchip and the death of vacuum tubes. The central processor could run machine language programs. It ran OS/360, officially known as IBM System/360 Operating System. You could load programs written in COBOL and FORTRAN, with many organizations still running code written way back then. The way we saw computers and the way they were made also changed. Architecture vs. implementation was another substantial innovation. Before the S360, computers were built for specific use cases. They were good at business or they were good at science. But one system wasn’t typically good at both tasks. In fact, IBM had 7 mainframe lines at this point, sometimes competing with each other. The S360 allowed them to unify that into the size and capacity of a machine rather than the specific use case. We went from “here’s your scientific mainframe” or “here’s your payroll mainframe” to “here’s your computer.” The Model 30 was introduced in 1965 along with 5 other initial models: the 40, 50, 60, 62, and 70. The tasks were not specific to each model, and a customer could grow into additional models or, if the needs weren’t growing, could downgrade to a lower model in the planned 5-year obsolescence cycle that computers seem to have. Given all of this, the project was huge. So big that it led to Thomas Watson forcing his own brother Dick Watson out of IBM and moving the project to be managed by Fred Brooks, who worked with Chief Architect Gene Amdahl. John Opel managed the launch in 1964. In large part due to his work on the S360 project, Brooks would go on to write a book called The Mythical Man-Month, which brought us what’s now referred to as Brooks’ Law, which states that adding additional developers does not speed up a software project but instead makes it take longer. Amdahl would go on to found his own computer company. 
In all, there were twenty models of the S360, although only 14 shipped - and IBM had sold 35,000 by 1970. While the 60 in S360 would go on to refer to the decade and the follow-on S370 would define computing in the 70s, the S360 was sold until 1978. With a two-thirds market share came anti-trust cases, which saw software suddenly being sold separately and leasing companies extending that 5-year obsolescence cycle - with IBM lessors thus becoming the number one competition. Given just how much happened in the 13-year life of the System/360, even the code endures in some cases. The System z servers are still compatible with many applications written for the 360. The S360 is iconic. The S360 was bold. It set IBM on a course that would shape their future and the future of the world. But most importantly, before the S360, computers were one thing used for really big jobs - after the S360, they were everywhere, and people started to think about business in terms of a new lexicon like “data” and “automation.” It led to no one ever getting fired for buying IBM and set the IT industry on a course to become what it is today. The revolution was coming no matter what. But not being afraid to refactor everything in such a big, bold demonstration of market dominance made IBM the powerhouse it is even today. So next time you have to refactor something, think of the move you’re making - and ask yourself: What Would Watson Do? Or just ask Watson.
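As a side note on the EBCDIC encoding the S360 popularized: Python still ships EBCDIC codecs, so you can compare it with ASCII directly. The cp037 code page here is the common US/Canada EBCDIC variant; this is just an illustrative sketch, not anything specific to OS/360 itself.

```python
text = "IBM 360"

ascii_bytes = text.encode("ascii")
ebcdic_bytes = text.encode("cp037")  # cp037: EBCDIC US/Canada code page

# In ASCII, 'I' is 0x49 and digits start at 0x30;
# in EBCDIC, 'I' is 0xC9 and the digits sit at 0xF0-0xF9.
print(ascii_bytes.hex())   # 49424d20333630
print(ebcdic_bytes.hex())  # c9c2d440f3f6f0
```

Note how the same text maps to completely different bytes, which is exactly why moving data between EBCDIC mainframes and ASCII systems required translation.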
8/9/2019 • 12 minutes, 53 seconds
Scraping The Surface Of Modern Cryptography
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is scraping the surface of cryptography. Cryptography is derived from the Greek words kryptos, which stands for hidden, and grafein, which stands for to write. Through history, cryptography has meant the process of concealing the contents of a message from all except those who know the key. Dating back to 1900 BC in Egypt and Julius Caesar using substitution ciphers, encryption used similar techniques for thousands of years, until a little before World War II. Vigenère designed the first known cipher that used an encryption key in the 16th century. Since then, with most encryption, you convert the contents, known as plaintext, into encrypted information that’s otherwise unintelligible, known as ciphertext. The cipher is a pair of algorithms - one to encrypt, the other to decrypt. Those processes are done by use of a key. Encryption has been used throughout the ages to hide messages. Thomas Jefferson built a wheel cipher. The order of the disks you put in the wheel was the key, and you would provide a message, line the wheels up, and it would convert the message into ciphertext. You would tell the key to the person on the other end, they would put in the ciphertext, and out would pop the message. That was 1795-era encryption and is synonymous with what we call symmetric key cryptography; the wheel cipher was independently invented by Etienne Bazeries and used well into the 1900s by the US Army. The Hebern rotor machine in the early 20th century gave us an electro-mechanical version of the wheel cipher, and then everything changed in encryption with the introduction of the Enigma machine, which used different rotors placed into a machine that turned at different speeds based on the settings of those rotors. 
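The substitution cipher attributed to Caesar fits in a few lines of Python. This is an illustrative toy using the traditional three-position shift (a negative key undoes it); obviously not something to use for real secrecy:

```python
import string

def caesar(plaintext: str, key: int) -> str:
    """Shift each letter by `key` positions - the substitution
    cipher attributed to Julius Caesar."""
    shifted = string.ascii_uppercase[key:] + string.ascii_uppercase[:key]
    table = str.maketrans(string.ascii_uppercase, shifted)
    # Non-letters (spaces, punctuation) pass through unchanged
    return plaintext.upper().translate(table)

ciphertext = caesar("ATTACK AT DAWN", 3)
print(ciphertext)              # DWWDFN DW GDZQ
print(caesar(ciphertext, -3))  # ATTACK AT DAWN
```

The weakness is also visible in a few lines: with only 25 possible keys, a brute-force loop over every shift recovers the message instantly, which is why key-based ciphers with far larger key spaces took over.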
The innovations that came out of breaking that code and hiding the messages being sent by the Allies kickstarted the modern age of encryption. Most cryptographic techniques rely heavily on the exchange of cryptographic keys. Symmetric-key cryptography refers to encryption methods where both senders and receivers of data share the same key, and data is encrypted and decrypted with algorithms based on those keys. The modern study of symmetric-key ciphers revolves around block ciphers and stream ciphers and how these ciphers are applied. Block ciphers take a block of plaintext and a key, then output a block of ciphertext of the same size. DES and AES are block ciphers. AES, also called Rijndael, is a designated cryptographic standard by the US government. AES usually uses a key size of 128, 192 or 256 bits. DES is no longer an approved method of encryption, but Triple-DES, its variant, remains popular. Triple-DES uses three 56-bit DES keys and is used across a wide range of applications from ATM encryption to e-mail privacy and secure remote access. Many other block ciphers have been designed and released, with considerable variation in quality. Stream ciphers create an arbitrarily long stream of key material, which is combined with the plaintext bit by bit or character by character, somewhat like the one-time pad encryption technique. In a stream cipher, the output stream is based on an internal state, which changes as the cipher operates. That state’s change is controlled by the key, and, in some stream ciphers, by the plaintext stream as well. RC4 is an example of a well-known stream cipher. Cryptographic hash functions do not use keys but take data and output a short, fixed-length hash in a one-way function. For good hashing algorithms, collisions (two plaintexts which produce the same hash) are extremely difficult to find, although they do happen. Symmetric-key cryptosystems typically use the same key for encryption and decryption. 
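The stream cipher idea is concrete enough to sketch. Below is a minimal pure-Python RC4: a key-scheduling pass permutes the internal state, then the generator evolves that state and emits keystream bytes that get XORed with the data. RC4 is long broken and shown purely for illustration:

```python
def rc4_keystream(key: bytes):
    """Generate the RC4 keystream for a given key."""
    # Key-scheduling algorithm (KSA): permute 0..255 based on the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA): evolve the internal state
    # and emit one keystream byte at a time
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, rc4_keystream(key)))

ciphertext = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ciphertext) == b"Plaintext"  # round-trips
```

Because encryption is just XOR with the keystream, decryption is the same operation, which is exactly the symmetric-key property described above.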
A disadvantage of symmetric ciphers is that a complicated key management system is necessary to use them securely. Each distinct pair of communicating parties must share a different key. The number of keys required increases with the number of network members. This requires very complex key management schemes in large networks. It is also difficult to establish a secret key exchange between two communicating parties when a secure channel doesn’t already exist between them. You can think of modern cryptography in computers as beginning with DES, or the Data Encryption Standard, a 56-bit symmetric-key algorithm developed by IBM and published in 1975, with some tweaks here and there from the US National Security Agency. In 1977, Whitfield Diffie and Martin Hellman claimed they could build a machine for $20 million that could find a DES key in one day. As computers get faster, the price goes down, as does the time to crack the key. Diffie and Hellman are considered the inventors of public-key cryptography, or asymmetric key cryptography, which they proposed in 1976. With public key encryption, two different but mathematically related keys are used: a public key and a private key. A public key system is constructed so that calculation of the private key is computationally infeasible from knowledge of the public key, even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair. In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. The public key is typically used for encryption, while the private or secret key is used for decryption. Diffie and Hellman showed that public-key cryptography was possible by presenting the Diffie-Hellman key exchange protocol. The next year, Ron Rivest, Adi Shamir and Leonard Adleman developed the RSA encryption algorithm at MIT and founded RSA Data Security a few years later, in 1982. 
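The Diffie-Hellman exchange itself fits in a few lines. Below is the textbook toy example with p = 23 and g = 5 (real deployments use primes of 2048 bits or more, and the exponents would be random secrets); the point is that each side only ever transmits its public value, yet both arrive at the same shared secret:

```python
# Textbook Diffie-Hellman with deliberately tiny public parameters
p, g = 23, 5

a = 6   # Alice's secret exponent (never transmitted)
b = 15  # Bob's secret exponent (never transmitted)

A = pow(g, a, p)  # Alice sends A = g^a mod p
B = pow(g, b, p)  # Bob sends   B = g^b mod p

# Each side combines the other's public value with their own secret
shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p

assert shared_alice == shared_bob
print(A, B, shared_alice)  # 8 19 2
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the scheme work at real key sizes.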
Later, it became publicly known that asymmetric cryptography had been invented by James H. Ellis at GCHQ, a British intelligence organization, and that equivalents of both the Diffie-Hellman and RSA algorithms had been developed there in the early 1970s under the name “non-secret encryption.” Apparently Ellis got the idea reading a Bell Labs paper about encrypting voice communication from World War II. Just to connect some dots here, Alan Turing, who broke the Enigma encryption, visited the presumed author of that paper, Claude Shannon, in 1943. This shouldn’t take anything away from Shannon, who was a brilliant mathematical genius in his own right, and got to see Gödel, Einstein, and others at Princeton. Random note: he invented wearables to help people cheat at roulette. Computer nerds have been trying to leverage their mad skills to cheat at gambling for a long time. By the way, he also tried to cheat at, er, I mean, program chess very early on, noting that 10 to the 120th power was the game-tree complexity of chess, and wrote a paper on it. Of course someone who does those things as a hobby would be widely recognized as the father of information theory. RSA grew throughout the 80s and 90s, and in 1995 they spun off a company called VeriSign, who handled patent agreements for the RSA technology until the patents wore out, er, I mean expired. RSA Security was acquired by EMC Corporation in 2006 for $2.1 billion and was a division of EMC until EMC was acquired by Dell in 2016. VeriSign also served as a CA - that business unit was sold in 2010 to Symantec for $1.28B. RSA has made a number of acquisitions and spun other businesses off over the years, helping them get into more biometric encryption options and other businesses. Over time the 56-bit key size of DES proved too small, and it was followed up by Triple-DES in 1998 and then by the Advanced Encryption Standard, or AES, standardized in 2001.
Diffie-Hellman and RSA, in addition to being the first public examples of high-quality public-key cryptosystems, have been amongst the most widely used. In addition to encryption, public-key cryptography can be used to implement digital signature schemes. A digital signature is somewhat like an ordinary signature; it has the characteristic of being easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot be moved from one document to another, as any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and to many network security schemes (SSL/TLS, many VPNs, etc.). Digital signatures provide users with the ability to verify the integrity of the message, thus allowing for non-repudiation of the communication. Public-key algorithms are most often based on the computational complexity of hard problems, often from number theory. The hardness of RSA is related to the integer factorization problem, while Diffie-Hellman and DSA are related to the discrete logarithm problem. More recently, elliptic curve cryptography has been developed, in which security is based on number-theoretic problems involving elliptic curves. Because of the complexity of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes.
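The sign-and-verify flow described here can be sketched with textbook RSA and the classic tiny-prime example numbers; this is an illustration only, without the padding schemes and key sizes a real signature system requires:

```python
import hashlib

# Textbook RSA with tiny primes (illustrative only).
p, q = 61, 53
n = p * q                # 3233, the public modulus
e = 17                   # public exponent
d = 2753                 # private exponent: the inverse of e mod (p-1)*(q-1)

# Sign: hash the message, then raise the hash to the private exponent.
message = b"pay alice 100 dollars"
h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
signature = pow(h, d, n)

# Verify: anyone holding only the public key (n, e) can check the signature.
assert pow(signature, e, n) == h
```

Note how this matches the paragraph above: only the hash is signed (the hybrid hash-then-sign pattern), and forging a signature without d amounts to factoring n.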
As a result, public-key cryptosystems are commonly hybrid systems, in which a fast symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Hybrid signature schemes are often used as well, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed. OpenSSL is a software library that most applications use to access the various encryption mechanisms supported by the operating systems. OpenSSL supports Diffie-Hellman and various versions of RSA, MD5, AES, Base64, SHA, DES, CAST, and RC4. OpenSSL allows you to create ciphers, decrypt information, and set the various parameters required to encrypt and decrypt data. There are so many of these algorithms because people break them, and then a new person has to come along and invent one, then version it, then add more bits to it, etc. At this point, I personally assume that all encryption systems can be broken. This might mean that the system is broken while encrypting, or that the algorithm itself is broken once data is encrypted. A great example would be an accidental programming mistake allowing a password to be put into the password hint rather than in the password. Most flaws aren’t as simple as that, although Kerckhoffs’s principle teaches us that the secrecy of your message should depend on the secrecy of the key, and not on the secrecy of the system used to encrypt the message. Some flaws are with the algorithms themselves, though. At this point most of those algorithms are public, and without the password or private key, encrypted data just takes too long to crack to be worth anything once decrypted. This doesn’t mean we don’t encrypt things; it just means that in addition to encryption we now add another factor to that security. But we’ll leave the history of two-factor security to another episode.
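That hybrid pattern can be sketched in Python, with a toy XOR stream standing in for a real symmetric cipher like AES, and the textbook-RSA numbers standing in for a real public key; every value here is invented for illustration:

```python
import os

# Toy symmetric cipher: XOR stream stands in for a real cipher like AES.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"meet me at the usual place"
session_key = os.urandom(16)          # fast symmetric key for the bulk data

ciphertext = xor_cipher(message, session_key)

# Wrap each key byte with a toy RSA public key (n=3233, e=17) - illustration only.
n, e, d = 3233, 17, 2753
wrapped_key = [pow(b, e, n) for b in session_key]

# Receiver unwraps the small session key with the slow private-key operation,
# then decrypts the bulk data with the fast symmetric cipher.
recovered_key = bytes(pow(c, d, n) for c in wrapped_key)
assert xor_cipher(ciphertext, recovered_key) == message
```

The point of the design is in the last two comments: the expensive public-key math touches only the short session key, never the (possibly huge) message.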
Finally, RSA made a lot of money because they used ciphers that were publicly reviewed and established as a standard. Public review of technological innovations allows for commentary and improvement. Today, you can trust most encryption systems because, thanks to that process, it costs more to decrypt what you’re sending over the wire than what is being sent is worth. In other words, collaboration trumps secrecy.
8/7/2019 • 14 minutes, 43 seconds
Xerox Alto
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is about the Xerox Alto. Close your eyes and… Wait, don’t close your eyes if you’re driving. Or on a bike. Or boating. Or… Nevermind, don’t close your eyes. But do use your imagination, and think of what it would be like if you opened your phone… Also don’t open your phone while driving. But imagine opening your phone and ordering a pizza using a black screen with green text and no pictures. If that were the case, you probably wouldn’t use an app to order a pizza. Without a graphical user interface, or GUI, games wouldn’t have such wide appeal. Without a GUI you probably wouldn’t use a computer nearly as much. You might be happier, but we’ll leave that topic to another podcast. Let’s jump in our time machine and head back to 1973. The Allman Brothers stopped drinking mushroom tea long enough to release Ramblin’ Man, Elton John put out Crocodile Rock, both Carpenters were still alive, and Free Bird was released by Lynyrd Skynyrd. Nixon was the president of the United States, and suspended offensive actions in North Vietnam five days before being sworn in for his second term as president. He wouldn’t make it all four years, of course, because not long after, Watergate broke, and by the end of the year Nixon claimed “I’m not a crook”. The first handheld cell call was made by Martin Cooper, the World Trade Center opened, Secretariat won the Belmont Stakes, Skylab 3 was launched, OJ was a running back instead of running from the police, being gay was removed from the DSM, and the Endangered Species Act was passed in the US. But many a researcher at the Palo Alto Research Center, known as Xerox PARC, probably didn’t notice much of this, as they were hard at work doing something many people in Palo Alto talk about these days but rarely do: changing the world.
In 1973, Xerox released the Alto, which had the first computer operating system designed from the ground up to support a GUI. It was inspired by the oN-Line System (or NLS for short), which had been designed by Douglas Engelbart of the Stanford Research Institute in the 60s on a DARPA grant. They’d spent a year developing it, and that was the day to shine for Doug Stewart, John Ellenby, Bob Nishimura, and Abbey Silverstone. The Alto ran the Alto Executive operating system, had a 2.5 megabyte hard drive, ran with four 74181 MSI chips at a 5.88 MHz clock speed, and came with between 96 and 512 kilobytes of memory. It came with a mouse, which had been designed by Engelbart for NLS. The Alto I ran as a pilot of 30 units, and then an additional 90 were produced and sold before the Alto II was released. Over the course of 10 years, Xerox would sell 2,000 more. Some of the programming concepts were borrowed from the Data General Nova, designed by Edson de Castro, a former DEC product manager responsible for the PDP-8. The Alto could run 16 cooperative, prioritized tasks. It was about the size of a mini refrigerator and had a CRT on a swivel. It also came with an Ethernet connection, a keyboard, a three-button mouse, and a disk drive - the mouse first used a wheel, later followed up with a ball. That monitor was in portrait rather than the common landscape orientation of later computers. You wrote software in BCPL and Mesa. It used raster graphics, came with a document editor, the Laurel email app, and gave us an actual multi-player video game. Oh, and an early graphics editor. And the first versions of Smalltalk - a language we’ll do an upcoming episode on - ran on the Alto. 50 of these were donated to universities around the world in 1978, including Stanford, MIT, and Carnegie Mellon, inspiring a whole generation of computer scientists. One ended up in the White House.
But perhaps the most important of the people that were inspired was Steve Jobs; seeing one at Xerox PARC provided the inspiration for the first Mac. The sales numbers weren’t off the charts though. Byte magazine said: “It is unlikely that a person outside of the computer-science research community will ever be able to buy an Alto. They are not intended for commercial sale, but rather as development tools for Xerox, and so will not be mass-produced. What makes them worthy of mention is the fact that a large number of the personal computers of tomorrow will be designed with knowledge gained from the development of the Alto.” The Alto was sold for $32,000 in 1979 money, or well over $100,000 today. So they were correct. $220,000,000 over 10 years is nothing. The Alto then begat the Xerox Star, which in 1981 killed the Alto and sold at half the price. But Xerox was once-bitten, twice shy. They’d introduced a machine to rival the DEC PDP-10 and didn’t want to jump into this weird new PC business too far. If they had wanted to, they might have released something somewhere between the Star and the Commodore VIC-20, which ran for about $300. Even the success of the Apple II still paled in comparison to the business Xerox is most famous for: copiers. Imagine what they thought of the IBM PC and Apple II, when they were a decade ahead of that? I’ve heard many say that with all of this technology being invented at Xerox, they could have owned the IT industry. Sure, Apple went from $774,000 in revenue in 1977 to $118 million in 1980, but then-CEO Peter McColough was more concerned about the loss of market share for copiers, which dipped from 65 to 46 percent at the time. Xerox revenues had gone from $1.6 billion to $8 billion in the 70s. And there were 100,000 people working in that group! And in the 90s Xerox stock would skyrocket up to $250/share!
They invented laser printing, WYSIWYG editing, the GUI, Ethernet, object-oriented programming, ubiquitous computing with the PARCtab, networking over optical cables, data storage, and so, so, so much more. The interconnected world of today likely wouldn’t be what it is without other people iterating on their contributions, but more specifically likely wouldn’t be what it is if they had hoarded them. They made a modicum of money off most of these - and that money helped to fund further research, like hosting the first live-streamed concert. Xerox still rakes in over $10 billion a year in revenue, and unlike many companies that went all-in on PCs or other innovations during the incredible 112-year run of Xerox, they’re still doing pretty well. Commodore went bankrupt in 1994, 10 years after Dell was founded. Computing was changing so fast, who can blame Xerox? IBM was reinvented in the 80s because of the PC boom - but it also almost put them out of business. We’ll certainly cover that in a future episode. I’m glad Xerox is still in business, still making solid products, and still researching all the things! So thank you to everyone at every level of Xerox, for all your organization has contributed over the years, including the Alto, which shaped how computers are used today. And thank YOU, patient listeners, for tuning in to this episode of the History Of Computing Podcast. We hope you have a great day!
8/5/2019 • 8 minutes, 51 seconds
Alan Turing
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is about Alan Turing. Turing was an English mathematician, cryptanalyst, and logician, and the reason he’s so famous today is probably his work in computer science, being the father of what’s often called artificial intelligence. He designed the first true general-purpose computer, although the first Turing-complete computer would be the Z3 from Konrad Zuse in 1941. Turing was born in 1912. From a young age, he was kinda’ weird, but really good at numbers and science. This started before he went to school and made for an interesting upbringing. Back then, science wasn’t considered as important as it might be today, and he didn’t do well in many subjects in school. But in 1931 he went to King’s College in Cambridge, where by 1935 he was elected a fellow. While there, he reimagined Kurt Gödel’s limits of proof and computation to develop a model of computation now commonly known as the Turing machine, which uses an abstract machine to put symbols on a strip of tape based on some rules. This was the first example of a CPU, or Central Processing Unit. The model was simple, and he would improve upon it throughout his career. Turing went off to Princeton from 1936 to 1938, where he was awarded a PhD in math, after having studied lambda calculus with Alonzo Church and cryptanalysis, built three of the four stages of an electro-mechanical binary multiplier - a circuit built using binary adders that could multiply two binary numbers - and tinkered with most everything he could get his hands on. To quote Turing: “We can only see a short distance ahead, but we can see plenty there that needs to be done.” He returned to Cambridge in 1939 and then went to Bletchley Park to do his part in the World War II effort.
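That tape-and-rules model is small enough to sketch in a few lines of Python; the rule table below is a hypothetical machine, invented for illustration, that simply inverts a binary string:

```python
# A minimal Turing machine: (state, symbol) -> (symbol to write, head move, next state).
# This example machine flips every bit on its tape, then halts at the first blank.
rules = {
    ("scan", "0"): ("1", 1, "scan"),
    ("scan", "1"): ("0", 1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),  # "_" is the blank symbol
}

def run(tape: str) -> str:
    cells = list(tape) + ["_"]       # the strip of tape, with a blank at the end
    state, head = "scan", 0
    while state != "halt":
        write, move, state = rules[(state, cells[head])]
        cells[head] = write          # write a symbol based on the rules...
        head += move                 # ...and move along the tape
    return "".join(cells).rstrip("_")

print(run("1011"))  # -> 0100
```

Everything a modern CPU does - read, decide based on state, write, move on - is already present in this loop, which is why Turing’s abstraction was so influential.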
Here, he made five major cryptanalytical advances throughout the war, providing Ultra intelligence. While at what was called Hut 8, he pwned the Enigma with the bombe, an electro-mechanical device used by the British cryptologists to help decipher German Enigma-machine-encrypted secret messages. The bombe discovered the daily settings of the Enigma machines used by the Germans, including which set of rotors was used, their starting positions, and the message key. This work is estimated to have saved over 10 million lives. Many of his cryptographic breakthroughs are used in modern algorithms today. Turing also went to the US during this time to help the Navy with encryption, and while in the States, he went to Bell Labs to help develop secure speech devices. After the war, he designed the Automatic Computing Engine, a stored-program realization of the universal Turing machine. He couldn’t tell anyone that he’d already done a lot of this work because of the Official Secrets Act and the classified nature of his previous work at Bletchley. The computer he designed had 25 kilobytes of memory and a 1 megahertz processor and cost around 11,000 pounds at the time. In 1952, Turing was rewarded for all of his efforts by being prosecuted for homosexual acts. He chose chemical castration over prison and died two years later, in 1954, of suicide. Alan Turing is one of the great minds of computing. Over 50 years later the British government apologized, and he was pardoned by Queen Elizabeth. But one of the great minds of the computer era was lost. He gave us the Turing pattern, the Turing reduction, the Turing test, the Turing machine, and most importantly, 10 million souls were not lost. People who had children and grandchildren. Maybe people like my grandfather, or yours. The Turing Award has been given annually by the Association for Computing Machinery since 1966 for technical or theoretical contributions in computing.
He has prizes, colleges, buildings, and even institutes named after him as well. And there’s a movie, called The Imitation Game. And dozens of books detailing his life have been released since the records of his accomplishments during the war were unsealed. Every now and then a great mind comes along. This one was awkward and disheveled most of the time. But he had as big an impact on the advent of the computer age as any other single human. Next time you’re in the elevator at work or talking to your neighbor and they seem a little bit… weird - just think… do they have a similar story? To quote Turing: “Sometimes it is the people no one can imagine anything of who do the things no one can imagine.” Thank you for tuning in to this episode of the History of Computing Podcast. We hope you can find the cryptographic message in the pod. And if not, maybe it’s time to build your own bombe.
8/3/2019 • 5 minutes, 33 seconds
The History Of Novell
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! Today’s episode is on the history of Novell. To understand Novell, we’ll go to BYU in 1980. As an honors grad in math and computer science, Drew Major might have been listening to the new wave tunes of Blondie or Devo, who released Call Me and Whip It respectively that year. But it’s more likely he was playing with the Rubik’s Cube or Pac-Man, released that year, or tuned in to find out Who Shot J.R.? on Dallas. He probably joined the rest of the world in mourning the loss of John Lennon, who was murdered in 1980. He went to work at Eyring Research Institute (ERI), where he, Dale Neibaur, and Kyle Powell decided to take some of their work from BYU and started working on the IPX and SPX network protocols and the NetWare operating system under the company name SuperSet Software. Meanwhile, George Canova, Darin Field, and Jack Davis had started a company called Novell a couple of years before, building microcomputers, or the equivalent of the PCs we use today. They weren’t doing so well, and Novell Data Systems decided they might be able to sell more computers by hooking them up together - so they hired the SuperSet team to help. The SuperSet team had worked on ARPANET projects while at the Eyring Research Institute. The bankers stepped in and Jack Davis left, then Canova - and Raymond Noorda stepped in as CEO in 1982. In 1983 they released Novell NetWare. NetWare had the first real network operating system, called ShareNet, which was based on a license to a Unix kernel they bought. While initially based on the Xerox Network System developed at Xerox PARC, they created Internetwork Packet Exchange, or IPX, and Sequenced Packet Exchange, or SPX, creating standards that would become common in most businesses in the subsequent decades.
They joined Novell in 1983, and Major later became Chief Scientist. The 1980s were good to Novell. They released NetWare 2 in 1986, becoming independent of the hardware and more modular. Servers could be connected through ARCNET, Ethernet, and Token Ring. They added fault-tolerance options to remap bad blocks, added RAID support, and used a key card inserted in the ISA bus to license the software. And they immediately started working on NetWare 3, which wouldn’t be complete until 1990, with 3.11 setting the standard in network file sharing - that’s when I first worked with NetWare. NetWare 3 was easier to install. It was 32-bit, allowed volumes up to a terabyte, and - I remember this being cool at the time - you could add volume segments on the fly while the volume was mounted. Although growing the volume was always… in need of checking backups first. They didn’t worry a lot about the GUI. Dealers didn’t mind that. HP, DEC, and Data General all licensed OEM versions of the software. This was also my first experience with clustering, as NetWare SFT-III allowed a mirror on a different machine. All of this led to patents and the founding of new concepts that would, whether intentionally or accidentally, be copied by other vendors over the coming years. They grew, they sold hardware, like otherwise-expensive Ethernet cards, at cost to grab market share, and they had a lot of dealers who were loyal, in part due to the great margins they had been earning but also because NetWare wasn’t simple to run and so required support contracts with those dealers. By 1990, most businesses used Novell if they needed to network computers. And NetWare 3.x seemed to cement that. They worked with larger and larger customers, becoming the enterprise standard. Once upon a time, no one ever got fired for buying NetWare. But Microsoft had been growing into the powerhouse standard of the day.
They opened discussions to merge with Novell, but Ray Noorda, then CEO, soon discovered that Bill Gates was working behind his back, a common theme of the era. This is when Novell got aggressive, likely realizing Microsoft was about to eat their lunch. Novell bought Digital Research in 1991, with a version of DOS called DR DOS, and worked with Apple on a project to bring Novell to Mac OS. They bought Univel to get their own Unix for UnixWare, and wrote Novell Directory Services, which would later become eDirectory, to establish a directory services play. They bought WordPerfect and Quattro Pro, early Office-type tools. By the end of this brisk acquisition time, the company didn’t look like they did just a few years earlier. Microsoft had released Windows NT 3.1 Advanced Server in 1993 as the hate-spat between Noorda and Gates intensified. Noorda supported the first FTC antitrust investigations against Microsoft. It didn’t work. Noorda was replaced by Robert Frankenberg in 1994. And then Windows 95 was released. Novell ended up selling Novell DOS to Caldera, handing over part of the Unix assets to Santa Cruz Operation, selling Integrated Systems, scrapping the embedded systems technology they’d been working on, and even selling WordPerfect and Quattro Pro to Corel. Windows of course supported NetWare servers in addition to their own offering, having moved to NT 4 in 1996. NT 4 Server would become the de facto standard in businesses. Frankenberg didn’t last long, and Eric Schmidt was hired as CEO in 1997. NetWare 5 was released in 1998, and I can still remember building zap packages to remove IPX/SPX in favor of TCP/IP. But the company was alienating the channel by squeezing margin out of them while simultaneously losing the war to Microsoft, first in small businesses and then in larger ones, as Microsoft kept making Windows Server better. By 1999 I was trading my CNA (or Certified Novell Administrator) in for my first MCSE.
After seeing the turnaround at IBM, Novell bought a consulting firm called Cambridge Technology Partners in 2001, replacing Schmidt with their CEO, Jack Messman - and moving their corporate headquarters to Massachusetts. Drew Major finally left that year. The advancements he’d overseen at Novell are legendary and resulted in technology research and patents that rival any other team in the industry. But the suits had a new idea. They pivoted to Linux, buying Ximian and SuSE in 2003, releasing SUSE Linux Enterprise Server and then Novell Linux Desktop in 2004 and finally Open Enterprise Server in 2005. Does all of this seem like a rudderless ship? Yes, they wanted to pivot to Linux and compete with Microsoft, but they’d been through this before. Stop slapping yourself… Microsoft finally settled the competition by buying them off. They gave Novell $348 million in 2006 for “patent cooperation” and then spent $6M more on Novell products than Novell spent on theirs over the next 5 years (keep in mind that technology spats are multi-front wars). Novell was acquired by Attachmate for $2.2 billion. Because Novell engineers had been creating so much amazing technology all those years, 882 patents from Novell went to CPTN Holdings, a consortium of companies that included Apple, EMC, Microsoft, and Oracle - this consortium being the likely architect of the whole deal. SUSE was spun off, Attachmate laid off a lot of the workforce, Attachmate was bought, much word salad was said. You can’t go back in time and do things over. But if he could, I bet Noorda would go back in time and do the deal with Bill Gates instead of going to war. Think about that next time someone goes low. Don’t let your emotions get the best of you. You’re above that. This has been The History of Novell. Thank you for listening; we hope you have a great day!
8/2/2019 • 9 minutes, 9 seconds
The History Of DNS
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! Today’s episode is on the history of the Domain Name System, or DNS for short. You know when you go to www.google.com? Imagine if you had to go to 172.217.4.196, the IP address, instead. DNS is the service that resolves that name to that IP address. Let’s start this story back in 1966. The Beatles released Yellow Submarine. The Rolling Stones were all over the radio with Paint It Black. Indira Gandhi was elected the Prime Minister of India. US planes were bombing Hanoi smack dab in the middle of the Vietnam War. The US and USSR agreed not to fill space with nukes. The Beach Boys had just released Good Vibrations. I certainly feel the good vibrations when I think that quietly, when no one was watching, the US created ARPANET, or the Advanced Research Projects Agency Network. ARPANET would evolve into the Internet as we know it today. As with many great innovations in technology, it took awhile to catch on. Early in the 1980s there were just over 300 computers on the Internet, most doing research. Sure, there were 256 to the 4th power addresses just waiting to be used, but the idea of keeping the address of all 300 computers you wanted to talk to seemed cumbersome, and it was slow to take hold. To get an address in the 70s you needed to contact Jon Postel at USC to get put on what was called the Assigned Numbers List. You could call or mail them. Stanford Research Institute (now called SRI) had a file they hosted called hosts.txt. This file mapped the name of one of these hosts on the network to an IP address, making a table of computer names and the IP addresses those matched with - a table of hosts. Many computers still maintain this file. Elizabeth Feinler maintained this directory of systems.
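The hosts.txt idea is easy to picture in code; this short sketch parses a hypothetical table in the same name-to-address spirit (the hostnames and addresses below are invented for illustration):

```python
# Parse a hosts-style table mapping host names to IP addresses.
HOSTS_TXT = """
10.0.0.5    sri-nic
10.1.0.32   mit-ai
10.2.0.6    usc-isi
"""

def parse_hosts(text: str) -> dict:
    table = {}
    for line in text.strip().splitlines():
        address, name = line.split()   # one address and one name per line
        table[name] = address
    return table

hosts = parse_hosts(HOSTS_TXT)
print(hosts["usc-isi"])  # -> 10.2.0.6
```

A flat file like this works fine for 300 hosts; the whole point of DNS was that it stops working once everyone has to re-download and re-parse it every time a host is added.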
She would go on to lead and operate the Network Information Center, or NIC for short, for ARPANET and see the evolution to the Defense Data Network, or DDN for short, and later the Internet. She wrote what was then called the Resource Handbook. By 1982, Ken Harrenstien and Vic White in Feinler’s group at Stanford created a service called Whois, defined in RFC 812, which was an online directory. You can still use the whois command on Windows, Mac, and Linux computers today. But by 1982 it was clear that the host table was getting slower and harder to maintain as more systems were coming online. This meant more people to do that maintenance. So Postel from USC started reviewing proposals for maintaining this thing, a task he handed off to Paul Mockapetris. That’s when Mockapetris did something that he wasn’t asked to do and created DNS. Mockapetris had been working on some ideas for filesystems at the time and jumped at the chance to apply those ideas to something different. So Jon Postel and Zaw-Sing Su helped him complete his thoughts, which were published by the Internet Engineering Task Force, or IETF, in RFC 882 for the concepts and facilities and RFC 883 for the implementation and specification, in November 1983. You can google those and read them today. And most of it is still used. Here, he introduced the concept that a NAME of a TYPE points to an address, or RDATA, and lives for a specified amount of time, or TTL, short for Time To Live. He also mapped IP addresses to names in the specifications, creating PTR records. All names had a TLD, or Top Level Domain name, of .ARPA. Designing a protocol isn’t the same thing as implementing a protocol. In 1984, four students from the University of California, Berkeley - Douglas Terry, Mark Painter, David Riggle, and Songnian Zhou - wrote the first version of BIND, short for Berkeley Internet Name Domain, for BSD 4.3, using funds from a DARPA grant.
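The NAME, TYPE, RDATA, and TTL concepts from those RFCs can be sketched as a tiny record structure in Python; the record values below are illustrative, not taken from any real zone:

```python
import time
from dataclasses import dataclass

# A resource record in the RFC 882 spirit: a NAME of a TYPE points to RDATA
# and is only valid for TTL seconds, after which a resolver must ask again.
@dataclass
class ResourceRecord:
    name: str        # NAME, e.g. a hostname
    rtype: str       # TYPE, e.g. "A" for an address, "PTR" for reverse lookups
    rdata: str       # RDATA, e.g. the IP address the name points to
    ttl: int         # Time To Live, in seconds
    created: float   # when this copy of the record was cached

    def expired(self, now: float) -> bool:
        return now - self.created > self.ttl

record = ResourceRecord("example.com", "A", "93.184.216.34",
                        ttl=300, created=time.time())
print(record.expired(time.time()))        # False - still fresh
print(record.expired(time.time() + 301))  # True - past its TTL
```

The TTL is what lets DNS scale where hosts.txt could not: answers can be cached all over the network, because every copy carries its own expiration.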
In 1988, Paul Vixie from Digital Equipment Corporation gave it a little update and maintained it until he founded the Internet Systems Consortium to take it over. BIND is still the primary distribution of DNS, although there are other distributions now. For example, Microsoft added DNS in 1995 with the release of NT 3.51. But back to the 80s real quick. In 1985 came the introduction of the .mil, .gov, .edu, .org, and .com TLDs. Remember Jon Postel from USC? He and Joyce K. Reynolds started an organization called IANA to assign numbers for use on the Internet. DNS servers are hierarchical, and so there’s a set of root DNS servers, with a root zone controlled by the US Department of Commerce. 10 of the 13 original servers were operated in the US and 3 outside, each assigned a letter of A through M. You can still ping a.root-servers.net. These host the root zone database from IANA and handle the hierarchy of the TLDs they’re authoritative for, with additional servers hosted for .gov, .com, etc. There are now over 1,000 TLDs! And remember how USC was handling the addressing (which became IANA) and Stanford was handling the names? Well, Feinler’s group turned over naming to Network Solutions in 1991, and they handled it until 1998, when Postel died and ICANN was formed. ICANN, or the Internet Corporation for Assigned Names and Numbers, merged the responsibilities under one umbrella. Each region of the world is allowed to manage its own IP addresses, and so ARIN was formed in 1997 to manage the distribution of IP addresses in America. The collaboration between Feinler and Postel fostered the innovations that would follow. They also didn’t try to take everything on. Postel instigated TCP/IP and DNS. Postel co-wrote many of the RFCs that define the Internet and DNS to this day. And Feinler showed great leadership in administering how much of that was implemented.
One can only aspire to find such a collaboration in life, and to do so with results like the Internet - worth tens of trillions of dollars, but more importantly something that has reshaped the world, disrupted practically every industry, and touched the lives of nearly every human on Earth. Thank you for joining us for this episode of the History Of Computing Podcast. We hope you had an easy time finding thehistoryofcomputing.libsyn.com thanks to the hard work of all those who came before us.
7/31/2019 • 8 minutes, 22 seconds
Digital Equipment Corporation
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! Today’s episode is on Digital Equipment Corporation, or DEC. DEC was based in Maynard, Massachusetts, and was a major player in the computer industry from the 1950s through the 1990s. They made computers, software, and things that hooked into computers. My first real computer was a DEC Alpha. And it would be over a decade before I used 64-bit technology again. DEC was started in 1957 by Ken Olsen, Stan Olsen, and Harlan Anderson of the MIT Lincoln Laboratory using a $70,000 loan, because they could sell smaller machines than the big mainframes to users for whom output and realtime operation were more important than raw performance. Technology was changing so fast, and there were so few standards for computers, that investors avoided them. So they decided to first ship modules, or transistors that could be put on circuit boards, and then ship systems. They were given funds and spent the next few years building a module business to fund a computer business. IBM was always focused on big customers. In the 1960s, this gave little DEC the chance to hit the smaller customers with their PDP-8, the first successful minicomputer, at the time setting customers back around $18,500. The “Straight-8,” as it was known, was designed by Edson de Castro and was about the size of a refrigerator, weighing in at 250 pounds. This was the first time a company could get a computer for less than $20k, and DEC sold over 300,000 of them! The next year came the 8/S. No, that’s not an iPhone model. It only set customers back $10k. Just imagine the sales team shows up at your company talking about the discrete transistors, the transistor-transistor logic, or TTL. And it wouldn’t bankrupt you like that IBM would. The sales pitch writes itself. Sign me up! What really sold these, though, was the value engineering.
They were simpler. Sure, programming was a little harder and took more code. Sure, sometimes that caused the code to overflow the memory. But with the cost savings, you could hire another programmer! The rise of the compiler kinda’ made that a negligible issue anyway. The CPU had only four 12-bit registers. But it could run programs using the FORTRAN compiler and runtime, or DEC’s FOCAL interpreter. Or later you could use PAL-III assembly, BASIC, or DIBOL. DEC also did a good job of energizing their user base. DECUS, the Digital Equipment Computer Users’ Society, was created in 1961 by Edward Fredkin and was subsidized by DEC. Here users could trade source code and documentation, with two DECUS US symposia per year - and there people would actually trade code and later tapes. It would later merge with HP’s and other user groups during the merger era and is alive today as the Connect User Group Community, with over 70,000 members! It is still independent today. The User Society was an important aspect of the rise of DEC and of the development of technology and software for minicomputers. The feeling of togetherness through mutual support helped keep the costs of vendor support down while also making people feel like they weren’t alone in the world. It’s also important as part of the history of free software, something we’ll talk about in more depth in a later episode. The PDP continued to gain in popularity until 1977, when the VAX came along. The VAX brought with it the Virtual Address eXtension from which it derives its name. This was really the advent of on-demand paged virtual memory, although that had been initially adopted by Prime Computer without the same level of commercial success. This was a true 32-bit CISC, or Complex Instruction Set Computer. It ran Digital’s VAX/VMS, which would later be called OpenVMS - although some would run BSD on it, which maintained VAX support until 2016. This thing set standards in 1970s computing.
You know millions of instructions per second, or MIPS? The VAX was the benchmark. The performance was on par with the IBM System/360. The team at DEC was iterating through chips at a fast rate. Over the next 20 years, they got so good that Soviet engineers bought them just to try and reverse engineer the chips. In fact, it got to the point that “when you care enough to steal the very best” was etched into a microprocessor die. DEC sold another 400,000 of the VAX. They must have felt on top of the world when they took the #2 computer company spot! DEC was the first computer company with a website, launching dec.com in 1985. The DEC Western Research Laboratory started to build a RISC chip called Titan in 1982, meant to run Unix. Alan Kotok and Dave Orbits started designing a 64-bit chip to run VMS (maybe to run Spacewar faster). Two other chips, HR-32 and CASCADE, were being designed in 1984. And PRISM began in 1985. With all of these independent development efforts, turf wars stifled the ability to execute. By 1988, DEC canceled the projects. By then Sun had SPARC and was nipping at DEC’s heels. Something else was happening. DEC made minicomputers. Those were smaller than mainframes. But microcomputers showed up in the 1980s, with the first IBM PC shipping in 1981. And by the early 90s they too were 32-bit. DEC was under the gun to bring the world into 64-bit. The DEC Alpha started at about the same time (if not in the same meeting) as the termination of the PRISM project. It would not be released until 1992, and while it was a great advancement in computing, it came into a red ocean where vendors were competing to set the standard for the computers used at every level of the industry. The old chips could have been used to build microcomputers, and at a time when IBM was coming into the business market for desktop computers and starting to own it, DEC stayed true to the minicomputer business.
Meanwhile Sun was growing, open architectures were becoming standard (if not standardized), and IBM was still a formidable beast in the larger markets. The hubris. Yes, DEC had some of the best tech in the market. But they’d gotten away from value engineering the solutions customers wanted. Sales slumped through the 1990s. Linus Torvalds ported Linux to the DEC Alpha in the mid-to-late 90s. Alpha chips would work with Windows and other operating systems but were very expensive. x86 chips from Intel were quickly starting to own the market (creating the term Wintel). Suddenly DEC wasn’t an industry leader. When you’ve been through those demoralizing times at a company, it’s hard to get out of a rut. Talent leaves. Great minds in computing like Radia Perlman, who invented the Spanning Tree Protocol. Did I mention that DEC played a key role in making Ethernet viable? They also invented clustering. More brain drain - Jim Gray (he probably invented half the database terms you use), Leslie Lamport (who wrote LaTeX), Alan Eustace (who would go on to become the Senior VP of Engineering and then Senior VP of Knowledge at Google), Ike Nassi (chief scientist at SAP), Jim Keller (who designed Apple’s A4 and A5), and many, many others. Fingers point in every direction. Leadership comes and goes. By 1998 it was clear that a change was needed, and DEC was acquired by Compaq that year in the largest merger in the computer industry at the time, in part to get the overseas markets that DEC was well entrenched in. Compaq started to cave from too many mergers that couldn’t be wrangled into an actual vision, and later merged with HP in 2002, continuing to make PDP, VAX, and Alpha servers. The compiler division was sold to Intel, and DEC goes down as a footnote in history. Innovative ideas are critical to a company surviving after the buying tornadoes. Strong leaders must rein in territorialism, turf wars, and infighting in favor of actually shipping products. And those should be products customers want.
Maybe even products you value engineered to meet them where they’re at, as DEC did in their early days.
7/29/2019 • 9 minutes, 56 seconds
The History of Computer Viruses
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! Today’s episode is not about Fear, Uncertainty, and Death. Instead it’s about viruses. As with many innovations in technology, early technology had security vulnerabilities. In fact, we still have them! Today there are a lot of types of malware. And most of it gets to devices over the Internet. But we had viruses long before the Internet; in fact, we’ve had them about as long as we’ve had computers. The concept of the virus came from a paper by the Hungarian-born scientist John von Neumann, first presented in 1949, called “Theory of Self-Reproducing Automata.” The first virus, though, didn’t come until 1971 with Creeper. It copied between DEC PDP-10s running TENEX over the ARPANET, the predecessor to the Internet. It didn’t hurt anything; it just output a simple little message to the teletype that read “I’m the creeper: catch me if you can.” The original was written by Bob Thomas, but it was made self-replicating by Ray Tomlinson, basically making him the father of the worm. He also happened to make the first email program. You know that @ symbol in an email address? He put it there. Luckily he didn’t make that self-replicating as well. The first antivirus software, Reaper, was written to, um, to catch Creeper. Also written by Ray Tomlinson, in 1972, when his little haxie had gotten a bit out of control. This makes him the father of the worm, creator of the anti-virus industry, and the creator of phishing, I mean, um, email. My kinda’ guy. The first virus to rear its head in the wild came in 1982, when a 15-year-old Mt. Lebanon high school kid named Rich Skrenta wrote Elk Cloner. Rich went on to work at Sun and AOL, create Newhoo (now called the Open Directory Project), and found Blekko, which became part of IBM Watson in 2015 (probably because of the syntax used in searching and indexes). But back to 1982.
Because Blade Runner, E.T., and Tron were born that year. As was Elk Cloner, which that snotty little kid Rich wrote to mess with gamers. The virus would attach itself to a game running on version 3.3 of the Apple DOS operating system (the very idea of DOS on an Apple today is kinda’ funny) and then activate on the 50th play of the game, displaying a poem about the virus on the screen. Let’s look at the Whitman-esque prose:

Elk Cloner: The program with a personality
It will get on all your disks
It will infiltrate your chips
Yes, it's Cloner!
It will stick to you like glue
It will modify RAM too
Send in the Cloner!

This wasn’t just a virus. It was a boot sector virus! I guess Apple’s MASTER CREATE would then be the first anti-virus software. Maybe Rich sent one to Kurt Angle, Orrin Hatch, Daya, or Mark Cuban. All from Mt. Lebanon. Early viruses were mostly targeted at games and bulletin board services. Fred Cohen coined the term “computer virus” the next year, in 1983. The first PC virus came also to DOS, but this time to MS-DOS, in 1986. Ashar, later called Brain, was the brainchild of Basit and Amjad Farooq Alvi, who supposedly were only trying to protect their own medical software from piracy. Back then people didn’t pay for a lot of the software they used. As organizations have gotten bigger and software has gotten cheaper, the pirate mentality seems to have subsided a bit. For nearly a decade there was a slow roll of viruses here and there, mainly spread by being promiscuous with how floppy disks were shared. A lot of the viruses were boot sector viruses, and a lot of them weren’t terribly harmful. After all, if they erased the computer they couldn’t spread very far. Brain’s message started with “Welcome to the Dungeon.” The following year, the poor Alvi brothers realized that if they’d said “Welcome to the Jungle” they’d be rich, but Axl Rose beat them to it. The brothers still run a company called Brain Telecommunication Limited in Pakistan. We’ll talk about zombies later.
There’s an obvious connection here. Brain was able to spread because people started sharing software over bulletin board systems. This was when trojan horses - malware masked as a juicy piece of software, or embedded into other software - started to become prolific. Rootkits, or toolkits that an attacker could use to orchestrate various events on the targeted computer, began to get a bit more sophisticated, doing things like phoning home for further instructions. By the late 80s and early 90s, more and more valuable data was being stored on computers, and lax security created an easy way to get access to that data. Viruses started to go from just being pranks by kids to being something more. A few people saw the writing on the wall. Bernd Fix wrote a tool to remove a virus in 1987. Andreas Lüning and Kai Figge released The Ultimate Virus Killer, an antivirus for the Atari ST. NOD antivirus was released, as were Flushot Plus and Anti4us. But the one that is still a major force in the IT industry is McAfee VirusScan, founded by a former NASA programmer named John McAfee. McAfee resigned in 1994. His personal life is… how do I put this… special. He currently claims to be on the run from the CIA. I’m not sure the CIA is aware of this. Other people saw the writing on the wall as well, but went… a different direction. This was when the first file-based viruses started to show up. They infected .ini, .exe, and .com files. Places like command.com were ripe targets because operating systems didn’t sign things yet. Jerusalem and Vienna were released in 1987. Maybe because he listened to too much Bad Medicine from Bon Jovi, Robert Morris wrote his famous worm in 1988, which reproduced until it filled up the memory of infected computers and shut down 6,000 devices. 1988 also saw Friday the 13th delete files and cause real damage. And Cascade came that year, the first known virus to be encrypted. The code and wittiness of the viruses were evolving.
In 1989 we got the AIDS Trojan. This altered autoexec.bat and counted how many times a computer had booted. At 90 boots, the virus would hide the DOS directories and encrypt the names of files on C:\, making the computer unusable unless the infected computer’s owner sent $189 to a PO Box in Panama. This was the first known instance of ransomware. 1990 gave us the first polymorphic virus. Symantec released Norton Antivirus in 1991, the same year the first polymorphic virus was found in the wild, called Tequila. Polymorphic viruses change as they spread, making them difficult to find with signature-based antivirus detection products. In 1992 we got Michelangelo, which John McAfee said would hit 5 million computers. At this point, there were 1,000 viruses. 1993 brought us Leandro and Freddy Krueger, ’94 gave us OneHalf, and 1995 gave us Concept, the first known macro virus. 1994 gave us the first hoax with “Good Times” - I think of that email sometimes when I get messages of petitions online for things that will never happen. But then came the Internet as we know it today. By the mid-90s, Microsoft had become a force to be reckoned with. This provided two opportunities. The first was the ability for someone writing a virus to have a large attack surface. All of the computers on the Internet were easy targets, especially before network address translation started to somewhat hide devices behind gateways and firewalls. The second was that a lot of those computers were running the same software. This meant that if you wrote a tool for Windows, you could get your tool onto a lot of computers. One other thing was happening: macros. Macros are automations that run inside Microsoft Office, and in the early days they could be used to gain access to lower-level functions. Macro viruses often infected the .dot template used when creating new Word documents, so all new Word documents would then be infected.
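The trouble polymorphic code causes for signature scanning can be shown with a benign toy - the “payload” here is just a string, and the XOR re-encoding is only a stand-in for the mutation engines real polymorphic viruses used:

```python
import hashlib

# A benign toy showing why signature-based detection struggles with
# polymorphic code: the same payload XOR-encoded with a different key
# each generation has different bytes on disk, so a hash or byte
# signature taken from one sample won't match the next one.

PAYLOAD = b"print a poem on the 50th boot"  # stand-in, nothing malicious

def mutate(payload: bytes, key: int) -> bytes:
    """One 'generation': identical behavior, different on-disk bytes."""
    return bytes(b ^ key for b in payload)

def signature(sample: bytes) -> str:
    return hashlib.sha256(sample).hexdigest()

gen1 = mutate(PAYLOAD, 0x41)
gen2 = mutate(PAYLOAD, 0x42)

# XOR with the same key decodes each generation back to the payload...
assert mutate(gen1, 0x41) == mutate(gen2, 0x42) == PAYLOAD
# ...but every byte differs, so a signature of gen1 misses gen2.
print(signature(gen1) == signature(gen2))  # -> False
```

This is why antivirus vendors had to move from pure byte signatures toward heuristics and emulation.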
As those documents were distributed over email, websites, or good old fashioned disks, they spread. An ecosystem with a homogenous population that isn’t inoculated against an antigen is a ripe hunting ground for a large-scale infection. And so the table was set. It’s March, 1999. David Smith of Aberdeen Township was probably listening to Livin’ La Vida Loca by Ricky Martin. Or Smash Mouth. Or Sugar Ray. Or watching the Genie in a Bottle video from Christina Aguilera. Because MTV still had some music videos. Actually, David probably went to see American Pie, The Blair Witch Project, Fight Club, or The Matrix, then came home and thought he needed more excitement in his life. So he started writing a little prank. This prank was called Melissa. As we’ve discussed, there had been viruses before, but nothing like Melissa. The 100,000 computers that were infected and the 1 billion dollars of damage created don’t seem like anything by today’s standards, but consider this: about 100,000,000 PCs were being sold per year at that point, so that’s roughly a tenth of a percent of the units shipped. Melissa would email itself to the first 50 people in an Outlook address book, a really witty approach for the time. Suddenly, it was everywhere; and it lasted for years. Because Office was being used on both Windows and Mac, the Mac could be a carrier for the macro virus even though the payload would do nothing there. Most computer users by this time knew they “could” get a virus, but this was the first big outbreak and a wake-up call. Think about this: if there are supposed to be 24 billion computing devices by 2020, then next year a similar infection rate would hit roughly 24 million devices. That would mean it hits every person in the Nordic countries. David was fined $5,000 and spent 20 months in jail. He now helps hunt down creators of malware.
Macro viruses continued to increase over the coming years, and while there aren’t too many still running rampant, you do still see them today. Happy also showed up in 1999, but it just made fireworks. Who doesn’t like fireworks? At this point, the wittiness of the viruses was mostly in the name and not the vulnerability. ILOVEYOU from 2000 was a VBScript virus, and Pikachu from that year tried to get kids to let it infect computers. 2001 gave us Code Red, which attacked IIS and caused an estimated $2 billion in damages. Other worms were Anna Kournikova, Sircam, Nimda, and Klez. The pace of new viruses was growing, as was the number of devices infected. Melissa started to look like a drop in the bucket. And Norton and other antivirus vendors had to release special tools just to remove a specific virus. Attack of the Clones was released in 2002 - not about the clones of Melissa that started wreaking havoc on businesses. Mylife was one of these. We also got Beast, a trojan that deployed a remote administration tool. I’m not sure if that’s what evolved into SCCM yet. In 2003 we got Simile, the first metamorphic virus, Blaster, Sobig, Swen, Graybird, Bolgimo, Agobot, and then Slammer, which was the fastest to spread at that time. That one hit a buffer overflow bug in Microsoft SQL Server and hit 75,000 devices in 10 minutes. 2004 gave us Bagle, which had its own email server, Sasser, and MyDoom, which dropped speeds for the whole Internet by about 10 percent. MyDoom convinced users to open a nasty email attachment that said “Andy, I’m just doing my job, nothing personal.” You have to wonder what that meant… The Witty worm wasn’t super witty, but Netsky, Vundo, Bifrost, Santy, and Caribe were. 2005 gave us Commwarrior (sent through texts), Zotob, and Zlob, but the best was that a rootkit ended up on CDs from Sony. 2006 brought us Starbucks, Nyxem, Leap, Brontok, and Stration. 2007 gave us Zeus and Storm. But then another biggie in 2008.
Sure, Torpig, Mocmex, Koobface, Bohmini, and Rustock were a thing. But Conficker was a dictionary attack to get at admin passwords, creating a botnet that was millions of computers strong and spread over hundreds of countries. At this point a lot of these were used to perform distributed denial-of-service attacks or to just send massive, and I mean massive, amounts of spam. Since then we’ve had Stuxnet and Duqu, Flame, Daspy, and ZeroAccess. But in 2013 we got CryptoLocker, which made us much more concerned about ransomware. At this point, entire cities can be taken down with targeted, very specific attacks. The money made from WannaCry in 2017 might or might not have helped develop North Korean missiles. And this is how these things have evolved. First it was kids, then criminal organizations saw an opening. I remember seeing those types trying to recruit young hax0rs at DefCon 12. Then governments got into it, and we get into our modern era of “cyberwarfare.” Today, people like Park Jin Hyok are responsible for targeted attacks causing billions of dollars worth of damage. Mobile attacks were up 54% year over year, another reason vendors like Apple and Google keep evolving the security features of their operating systems. Criminals will steal an estimated 33 billion records in 2023. 60 million Americans have been impacted by identity theft. India, Japan, and Taiwan are big targets as well. The average cost of a breach at a company is now estimated at nearly 8 million dollars in the United States, making this about financial warfare. But it’s not all doom and gloom. Wars in cyberspace between nation-states - most of us don’t really care about that. What we care about is keeping malware off our computers so the computers don’t run like crap and so unsavory characters don’t steal our crap. Luckily, that part has gotten easier than ever.
7/26/2019 • 17 minutes
The History Of Apple's Mobile Device Management
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re better prepared for the innovations of the future! Today we’re going to talk about Apple’s Mobile Device Management; what we now call Mobility. To kick things off, we’ll take you back to the year 2001. 2001 was the year Nickelback released How You Remind Me. Destiny’s Child was still together. Dave Matthews released The Space Between, and the first real Mobile Device Management was born. The first real mobile management solution to gain traction was SOTI, which launched in 2001 with an eye towards leveraging automation using mobile devices and got into device management when those options started to emerge. More and more IT departments wanted “over the air,” or OTA, management. AirWatch, founded by John Marshall in 2003 as Wandering Wi-Fi, was the first truly multi-platform device management solution. This time, rather than try to work within the confines of corporate dogma surrounding how the business of IT was done, Apple would start to go its own way. This was made possible by the increasing dominance of the iPhone accessing Exchange servers and the fact that suddenly employees were showing up with these things and using them at work. Suddenly, companies needed to manage the OS that ships on the iPhone: iOS. The original iPhone was released in 2007, and iOS management initially occurred manually through iTunes. You could drag an app onto a device and the app would be sent to the phone over the USB cable, and some settings were exposed to iTunes. Back then you had to register an iOS device with Apple by plugging it into iTunes in order to use it. You could also back up and restore a device using iTunes, which came with some specific challenges - for example, the account used to buy an app would follow the “image” to the new device.
Additionally, whether or not the backup was encrypted determined what was stored in it, and some information might have to be re-entered. This led to profiles. Profiles were created using a tool called the iPhone Configuration Utility, released in 2008. A profile is a small XML file that applies a given configuration onto an iOS device. This was necessary because developers wanted to control what could be done on iOS devices. One of those configurations was the ability to install an app over the air that was hosted on an organization’s own web server, provided the .ipa MIME type was defined on the web server. This basically mirrored what the App Store was doing and paved the way for internal app stores and for profiles hosted on servers. During that same time frame, Jamf, Afaria (by SAP), and MobileIron (founded the previous year by Ajay Mishra and Suresh Batchu) were also building similar OTA profile delivery techniques leveraging the original MDM spec. At this point, most OTA management tasks (such as issuing a remote wipe or disabling basic features of devices) were done using Exchange ActiveSync (EAS). You could control basic password policies as well as some rudimentary device settings, such as disabling the camera. With this in mind, Apple began to write the initial MDM specifications, paving the way for an entire IT industry segment to be born. This was the landscape when the first edition of the Enterprise iPhone and iPad Administrator’s Guide was released by Apress in 2010. Additional MDM solutions were soon to follow. TARMAC released MDM for iOS devices using a server running on a Mac in late 2011. AppBlade and Excitor were also released in 2011. Over the course of the next 8 years, MDM became one part of a number of other lovely acronyms: • Mobile Content Management, or MCM, is really just a Content Management System that sends content and services to mobile devices.
• Mobile Identity Management, or MIM, refers to using the SIM card of one’s mobile phone as an identity • Enterprise Mobility Management, or EMM, gets more into managing apps and content that get put on devices • Unified Endpoint Management, or UEM, brings traditional laptops and then desktops into the management fold, merging EMM with traditional device management. X-Men: First Class came in 2011, although the mail server by the same name was all but gone by then. This was a pivotal year for Apple device management and iOS in the enterprise, as BlackBerry announced that you would be able to manage Apple devices with their BlackBerry Enterprise Server (BES), which had been created in 1999 to manage BlackBerry devices. This legitimized using Apple’s mobile devices in enterprise environments, and it was also an opportunistic play for licensing, given that the devices were becoming such a mainstay in the enterprise. It marked a shift toward UEM that would continue until 2018, when BlackBerry Enterprise Server was renamed BlackBerry Unified Endpoint Manager. An explosion of MDM providers has occurred since BlackBerry added Apple to their platform, to keep up with the demand of the market. FileWave and LANrev added MDM to their products in 2011, with new iOS vendors NotifyMDM and SOTI entering the Apple device management family. Then Amtel MDM, AppTrack, Codeproof, Kony, ManageEngine (a part of Zoho Corporation), OurPact, Parallels, PUSHMANAGER, ProMDM, SimpleMDM, Sophos Mobile Control, and Tangoe MDM were released in 2012. MaaS360 was acquired by IBM in 2013, the same year auralis, CREA MDM, FancyFon Mobility Center (FAMOC), Hexnode, Lightspeed, and Relution were released, and when Endpoint Protector added MDM to their security products. Citrix also acquired Zenprise in 2013 to introduce XenMobile.
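The profiles described a bit earlier are just property lists. A minimal, hypothetical sketch (the identifiers and UUIDs here are made up, and real profiles carry more payload keys) of a profile that disables the camera might look roughly like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Outer wrapper: one profile, containing one or more payloads -->
  <key>PayloadType</key><string>Configuration</string>
  <key>PayloadDisplayName</key><string>Example Restrictions</string>
  <key>PayloadIdentifier</key><string>com.example.restrictions</string>
  <key>PayloadUUID</key><string>00000000-1111-2222-3333-444444444444</string>
  <key>PayloadVersion</key><integer>1</integer>
  <key>PayloadContent</key>
  <array>
    <dict>
      <!-- A restrictions payload that turns off the camera -->
      <key>PayloadType</key><string>com.apple.applicationaccess</string>
      <key>PayloadIdentifier</key><string>com.example.restrictions.payload</string>
      <key>PayloadUUID</key><string>55555555-6666-7777-8888-999999999999</string>
      <key>PayloadVersion</key><integer>1</integer>
      <key>allowCamera</key><false/>
    </dict>
  </array>
</dict>
</plist>
```

Whether delivered over USB by the iPhone Configuration Utility or pushed over the air by an MDM server, the document the device receives has this same basic shape.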
Jamf Now (originally called Bushel), Miradore, Mosyle, and ZuluDesk (acquired by Jamf in 2018 and rebranded as Jamf School) were released in 2014, which also saw VMware acquire AirWatch for $1.54 billion and Good Technology acquire BoxTone, beefing up their Apple device management capabilities. 2014 also saw Microsoft extend Intune to manage iOS devices. Things quieted down a bit, but in 2016, after Apple started publishing the MDM specification freely, an open source MDM called MicroMDM was first committed to GitHub, making it easier for organizations to build their own fork or implementation should they choose. Others crept onto the scene as well during those years, such as Absolute Manage MDM, AppTech 360, Avalanche Mobility Center, Baramundi, Circle by Disney, Cisco Meraki (by way of the Cisco acquisition of Meraki), Kaseya EMM, SureMDM, Trend Micro Mobile Security, and many others. Each one of these tools has a place in the space. Some focus on specific horizontal or vertical markets, while others focus on integrating with other products in a company’s portfolio. With such a wide field of MDM solutions, Apple has been able to focus efforts on building a great API and not spend a ton of time on building out many of the specific features needed for every possible market. A number of family or residential MDM providers have also sprung up, including Circle by Disney. The one market Apple has not made MDM available to has been the home. Apple has a number of tools they believe help families manage devices. It’s been touted as a violation of user privacy to deploy MDM for home environments, and in fact it is a violation of the APNs terms of service. Whether we believe this to be valid or not, OurPact, initially launched in 2012, was shut down in 2019 along with a number of other screen-time apps for leveraging MDM to control various functions of iOS devices. The MDM spec has evolved over the years.
iOS 4 in 2010 saw the first MDM and the Volume Purchase Program. iOS 5 in 2011 added over-the-air OS updates, Siri management, and provided administrators with the ability to disable backups of iOS devices to Apple’s iCloud service. iOS 6 saw the addition of APIs for third-party developers, Managed Open In for siloing content, device supervision (which gave us the ability to take additional management actions on devices whose ownership we could prove), and MDM for the Mac. That MDM-for-the-Mac piece will become increasingly important over the next 7 years. Daft Punk weren’t the only ones that got lucky in 2013. That year brought us iOS 7 and macOS 10.9. The spec was updated to manage Touch ID settings and give an Activation Lock bypass key for supervised devices, and the future of per-app settings management came with Managed App Config. 2014 gave us iOS 8 and macOS 10.10. Here, we got the Device Enrollment Program, which allows devices to enroll into an MDM server automatically at setup time, and Apple Configurator enrollments, allowing us to get closer to zero-touch installations again. 2015 brought with it The Force Awakens and awakened device-based VPP in iOS 9 and macOS 10.11, which finally allowed administrators to push apps to devices without needing an Apple ID; the B2B App Store, which allowed for pushing out apps that weren’t available on the standard App Store; supervision reminders, which are important as they were the first inkling of prompting users in an effort to provide transparency around what was happening on their devices; the ability to enable and disable apps; the ability to manage the home screen; and kiosk mode, or the ability to lock an app into the foreground on a device. The pace continued to seem frenzied in 2016, when Justin Timberlake couldn’t stop the feeling that he got when, in iOS 10 and macOS 10.12, he could suddenly restart and shut down a device through MDM commands. And enable Lost Mode.
This was also the year Apple shipped their first new file system in a long, long time, when APFS was deployed to iOS. Millions of devices got a new file system during that upgrade, which went oh so smoothly due to the hard work of everyone involved. iOS 11 with macOS 10.13 saw less management being done on the Mac but a frenzy of updates bringing us Classroom 2 management, Face ID management, AirPrint management, the ability to add devices to DEP through Apple Configurator, QR-code-based enrollment, User Approved Kernel Extension Loading for the Mac, and User Approved MDM enrollment for the Mac. These last two meant that users needed to explicitly accept enrollment and drivers loading, again trading ease of use for transparency. Many would consider this a fair trade. Many administrators are frustrated by it. I kinda’ think it is what it is. 2018 saw the Volume Purchase Program, the portal to build an Apple Push Notification certificate, and the DEP portal collapse into Apple Management Programs, with the arrival of Apple Business Manager. We also got our first salvo of identity providers, with OAuth for managed Exchange accounts; we got the ability to manage tvOS apps on devices; and we could start restricting password autofill. And this year, we get new content caching configuration options, Bluetooth management, autonomous single app mode, OS update deferrals, and the automatic renewal of Active Directory certificates. This year we also get a new enrollment type which uses a Managed Apple ID and separate encrypted volumes for data storage. What’s so special about Apple’s MDM push? Well, for starters, they took all that legacy IT industry dogma from the past 30 years and decided to do something different. Or did they? The initial MDM options looked a lot like At Ease, a tool from the early 1990s. And I mean, some of the buttons say the same thing they said on the screens for Newton management.
The big difference here is that Push Notifications needed to be added because you couldn’t just connect to a socket on a device sitting on your local network; most of the iPhones were never on that network. But the philosophy of managing only what you have to, to make the lives of your coworkers better, means pushing settings, not locking users from changing their background. Or initially it meant that, at least. The other thing that is so striking is that this was the largest and fastest adoption of enterprise technology I’ve seen. Sometimes the people who have survived this era tend to get a bit grumpy because the cheese is moved… EVERY YEAR! But keep in mind that Apple has sold 1.4 billion iPhones and 423 million iPads, and don’t forget a couple hundred million Macs. That’s over 2 billion devices we’ve had to learn to cope with. Granted, not all of them are in the enterprise. But imagine this: that’s more than the entire population of China, the US, and Indonesia. How many people in those three out of the top 5 most populated countries in the world go to work every day? And how many go to school? It’s been a monumental and rapid upheaval of the IT world order. And it’s been fun to be a part of!
7/23/2019 • 14 minutes, 23 seconds
Grace Hopper
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! Today’s episode is on one of the finest minds in the history of computing: Grace Brewster Murray Hopper. Rear Admiral Hopper was born on December 9th, 1906 in New York City. She would go on to graduate from Vassar College in 1928, earn a master’s degree at Yale in 1930, and then a PhD from Yale in 1933, teaching at Vassar from 1931 until 1941. And her story might have ended there. But then World War Two happened. Her great-grandfather was an admiral in the US Navy during the Civil War, and so Grace Hopper would try to enlist. But she was too old and a little too skinny. And she was, well, a she. So instead she went on to join the women’s branch of the United States Naval Reserve, called WAVES, or Women Accepted for Volunteer Emergency Service, at the time. She graduated first in her class and was assigned to the Bureau of Ships project at Harvard as a lieutenant, where she was one of the original programmers of the IBM Automatic Sequence Controlled Calculator, better known as the Mark I. The Mark I did what the analytical engine tried to do, but using electromechanical components. Approved by the original IBM CEO, Thomas Watson Sr., the project had begun in 1937 and was shipped to Harvard in 1944. If you can imagine, Hopper and the other programmers did conditional branching manually. Computers played a key role in the war effort and Hopper played a key role in the development of those computers. She co-authored three papers on the Mark I during those early days. She also found a moth in the Mark II in 1947, popularizing a term everyone in software uses today: debugging. When peace came, she was offered a professorship at Vassar. But she had a much bigger destiny to fulfill. Hopper stayed on at Harvard working on Navy contracts because the Navy didn’t want her yet. Yet.
She would leave Harvard to go into the private sector for a bit. At this point she could have ended up with Remington Rand designing electric razors (yes, that Remington), or working on the battery division, which would be sold to Rayovac decades later. But she ended up there as a package deal with the UNIVAC. And her destiny began to unfold. You see, writing machine code sucks. She wanted to write software, not machine language. She wanted to write code in English that would then run as machine code. This was highly controversial at the time because programmers didn’t see the value in allowing what was mainly mathematical notation for data processing to be available in a higher-level language, which she proposed would be English statements. She published her first paper on what she called compilers in 1952. There’s a lot to unpack about what compilers brought to computing. For starters, they opened up programming to people that would otherwise have seen a bunch of mathematical notations and run away. In her words: “I could say "Subtract income tax from pay" instead of trying to write that in octal code or using all kinds of symbols.” This opened the field up to the next generation of programmers. It also had a second consequence: the computer was no longer just there to do math. Because the Mark I had been based on the Analytical Engine, it was considered a huge and amazing calculator. But putting actual English words out there and then compiling (you can’t really call it converting because that’s an oversimplification) those into machine code meant the blinders started to come off, and that next generation of programmers started to think of computers as… more. The detractors had a couple of valid points. These were the early days of processing. The compiler created code that wasn’t as efficient as machine code developed by hand. Especially as there were more and more instructions you could compile. There’s really no way around that.
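To get a feel for why this was such a leap, here’s a toy sketch in Python, not FLOW-MATIC, of the idea Hopper was selling: take an English-like statement and turn it into something the machine can execute. The statement grammar here is invented purely for illustration:

```python
# A toy illustration of Hopper's insight: translate an English-like
# statement into an executable operation. This is a sketch, not
# FLOW-MATIC or COBOL; the grammar is made up for this example.

def compile_statement(statement):
    """Compile 'SUBTRACT <a> FROM <b>' into a function over a record."""
    words = statement.upper().split()
    if len(words) == 4 and words[0] == "SUBTRACT" and words[2] == "FROM":
        a, b = words[1], words[3]
        # The "machine code" here is a Python closure
        return lambda record: record[b] - record[a]
    raise SyntaxError(f"unrecognized statement: {statement}")

net_pay = compile_statement("SUBTRACT INCOME-TAX FROM PAY")
print(net_pay({"PAY": 1000, "INCOME-TAX": 150}))  # 850
```

The point isn’t the ten lines of Python; it’s that the person writing “Subtract income tax from pay” no longer needs to know octal.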
But the detractors might not have realized how much faster processors would get. After all, they were computing with gears just a few decades earlier. The compiler also opened up the industry to non-mathematicians. I’m pretty sure an objection was that some day someone would write a fart app. And they did. But Grace Hopper was right: the compiler transformed computing into the industry it is today. We still compile code, and without the compiler we wouldn’t be close to having the industry we have today. In 1954 she basically became the first director of software development when she was promoted to Director of Automatic Programming. Feeling like an underachiever yet? She was still in the Navy Reserve and in 1957 was promoted to Commander. But she was hard at work at her day job, as she and her team at Remington Rand developed a language called FLOW-MATIC, the first natural-language programming language. In 1959, a bunch of computer nerds assembled for the Conference on Data Systems Languages, or CODASYL for short. Here, they extended FLOW-MATIC into COBOL, making Hopper the mother of compilers and thus the grandmother of COBOL. Picking up a bunch of extra names to add to the end of your title doesn’t necessarily mean a dragon flies away with you though. She retired from the Navy in 1966. But again, her story doesn’t end there. Hopper went back to the Navy in 1967 after a very successful career with Remington Rand, overseeing the Navy Programming Languages Group. After all, putting language into programming was something she, um, pioneered. She was promoted to Captain in the Navy in 1973. Here, she directed and developed validation software for COBOL and its compiler through much of the 70s. Armed with those standards, she was then able to go to the Defense Department and push for more computers that were smaller. The rest of the world had no idea the mini-computer (or PC) revolution was coming, but she did.
Her standards would evolve into the standards managed by the National Institute of Standards and Technology, or NIST, today. You know those NIST configuration guides for configuring a Mac or Windows computer? They do that. The Navy promoted her to commodore in 1983, a rank renamed to rear admiral just before her retirement in 1986. She earned her Defense Distinguished Service Medal after coming home to the Navy time and time again during her 42-year career there. I guess the meaning of her life was computers and the US Navy. After her retirement, she wasn’t ready to slow down. She went to work for Digital Equipment Corporation (DEC), speaking at conferences and industry forums and traveling to the DEC offices. At the time, DEC was the number two computer company in the world. She stayed there until she passed away in 1992. Since her death, she has had a college at Yale renamed in her honor, had a destroyer named after her, and was posthumously awarded the Presidential Medal of Freedom by then US President Barack Obama. If you don’t yet have a spirit animal, you could do worse than to pick her.
7/20/2019 • 8 minutes, 50 seconds
The History Of Streaming Music
Severe Tire Damage. Like many things we cover in this podcast, the first streaming rock band on the Interwebs came out of Xerox PARC. 1993 was a great year. Mariah Carey released Dreamlover, Janet Jackson released That’s The Way Love Goes. Boyz II Men released In The Still of the Nite. OK, so it wasn’t that great a year. But Soul Asylum’s Runaway Train. That was some pretty good stuff out of Minnesota. But Severe Tire Damage, named after They Might Be Giants, was a much more appropriate salvo into the world of streaming media. The members were from DEC Systems Research Center, Apple, and Xerox PARC. All members, at the time and later, are pretty notable in their own right and will likely show up here and there on later episodes of this podcast. So they kinda’ deserved to use half the bandwidth of the entire internet at the time. The first big band to stream was the Rolling Stones, the following year. Severe Tire Damage did an opening stream of their own. Because they’re awesome. The Stones called the stunt a “good reminder of the democratic nature of the Internet.” They likely had no clue that the drummer is the father of ubiquitous computing, the third wave of computing. But if they have an Apple Watch, a Nest, use an app to remotely throw treats to their dog, use a phone to buy a plane ticket, or check their Twitter followers 20 times a day, they can probably thank Mark Weiser for his contributions to computing. They can also thank Steve Rubin for his contributions to the 3D engine in the Mac. Or his wife Amy for her bestselling book Impossible Cure. But back to streaming media. Really, streaming media goes back to George O. Squier getting patents for transmitting music over electrical lines in the 1910s and 1920s. This became Muzak. And for decades, people made fun of elevator music. While he originally meant for the technology to compete with radio, he ended up pivoting in the 30s to providing music to commercial clients.
The name Muzak was a mashup of music and Kodak, mostly just for a unique trademark. By the end of the 30s Warner Brothers had acquired Muzak, and then it went private again when William Benton, the chairman and publisher of the Encyclopædia Britannica, pivoted the company into brainwashing customers: alternating between music and silence in 15-minute intervals and playing soft tones to make people feel more comfortable while waiting for a doctor or standing in an elevator. Makes you wonder what he might have shoved into the Encyclopedia! Especially since he went on to become a senator. At least he led the charge to get rid of McCarthy, who referred to him as “Little Willie Benton.” I guess some things never change. Benton passed away in 1973, but you can stream an interview with him from archive.org ( https://archive.org/details/gov.archives.arc.95761 ). The popularity of Muzak waned over the following decades until they went bankrupt in 2009. After reorganization it was acquired in 2011 and is now Mood Media, which has also gone bankrupt. I guess people want a more democratic form of media these days. I blame the 60s. Not much else happened in streaming until the 1990s. A couple of technologies were maturing at this point to allow for streaming media. The first is the Internet. TCP/IP was standardized in 1982 but public commercial use didn’t really kick in until the late 1980s. We’ll reserve that story for another episode. The next is MPEG. MPEG is short for the Moving Picture Experts Group, a working group formed specifically to set standards for audio and video compression and the transmission of that audio and video over networks. The first meeting of the group was in 1988. The group defined a standard format for playing media on the Internet, soon to actually be a thing (but not yet). And thus the MPEG format was born. MPEG is now the international standard for encoding and compressing video images. Following the first release they moved quickly.
In 1992, the MPEG-1 standard was approved at a meeting in London. This gave us MPEG-1 Audio Layer III, or MP3, as well as video CDs. At the Porto meeting in 1994, we got the MPEG-2 standard, and thus DVDs and DVD players, as well as AAC, a longtime standard for iTunes that’s used for both television and audio encoding. MPEG-4 came in 1999, and the changes began to slow as adoption increased. Today, MPEG-7 and MPEG-21 are under development. Then came the second wave of media. In 1997, Justin Frankel and Dmitry Boldyrev built WinAmp. A lot of people had a lot of CDs. Some of those people also had WinAmp or other MP3 players and rippers. By 1999 enough steam had been built up that Sean Parker, Shawn Fanning, and John Fanning built a tool called Napster that allowed people to trade those MP3s online. At their height, 80 million people were trading music online. People started buying MP3 players, stereos had MP3 capabilities, and you could find and download any song you could think of easier and cheaper than you could get them at a music store. Brick and mortar music stores began to close their doors and record labels saw a huge drop in profits. I knew people with terabytes of music, where each song was about 3 megs. The music industry had suffered a massive blow. After a long court battle, the RIAA obtained an injunction that forced Napster to shut down in 2001. The music industry thought maybe they were saved. But by then Limewire and many other services had popped up, and to shut Pandora’s box we needed innovation. The innovation was making it simple to buy music. Sure, people could continue to steal music if they wanted, but it turned out that if a song was a buck, as it was when Apple’s iTunes Music Store launched in 2003, people were likely to just go out and buy it. Other vendors followed suit and before long the tide of stealing music was turned back. Another innovation had occurred in 2001 but hadn’t really caught steam yet. Rhapsody (originally TuneTo.com) was launched in December of 2001.
Rhapsody slowly built up a catalog of 11 million songs and 750,000 subscribers. Rhapsody worked kinda’ like radio. Pandora Radio, launched in 2005, allowed users to create their own stations. With 66 million active users, Pandora was bought by Sirius XM for $3.5 billion. But if these were the only vendors in this space, it might not be what it is today. I remember in about 2010, I asked my niece about buying a song. She looked at me like I was stupid. Why would you buy a song? I asked her about downloading them for free. Blank stare. That’s when I realized the third wave of streaming music was on us. Spotify, originally created in 2006, allowed users to build their own stations of songs and now has 217 million users, with nearly half paying for the subscription so they don’t get ads, and revenue of nearly $6 billion. Apple Music was late to the party, arriving in 2015, because Steve Jobs wasn’t into music subscription services. But since the launch they are up to 60 million users in 2019. Apple’s total revenue, though, is over a quarter trillion dollars a year. Google has 15 million streaming subscribers, and with the emergence of the Echo, Amazon is poised to garner a lot of streaming music subscribers. Music isn’t the only business that has been disrupted. You see, the innovation that iTunes and the popularization of the iPod created also made us rethink other business models. Television and movie consumption has shifted to streaming platforms. And apps. The iOS App Store was released in 2008. The app stores have shifted many an enterprise software package into smaller workflows strung together with apps. There are now 1.8 million apps on that App Store and 2.1 million available for Android users. These apps have led to ride sharing services and countless other apps displacing businesses that have operated the same way for sometimes hundreds of years. Yes, this story is about streaming music.
But the movement that started with Severe Tire Damage combined with other technologies to have a resounding impact on how we live our lives. It’s no wonder that their drummer, Mark Weiser, is widely considered to be the father of ubiquitous computing.
7/17/2019 • 10 minutes, 8 seconds
The PASCAL Programming Language
PASCAL was designed in 1969 by the Swiss computer scientist Niklaus Wirth and released in 1970, the same year Beneath the Planet of the Apes, Patton, and Love Story were released. The Beatles released Let It Be, Three Dog Night was ruling the airwaves with Mama Told Me Not To Come, and Pong was still a couple of years away. Wirth had been a PhD student at Berkeley in the early 1960s, at the same time Ken Thompson, co-inventor of Unix and co-creator of the Go programming language, was in school there. It’s not uncommon for a language to kick around for a decade or more gathering steam, but PASCAL quickly caught on. In 1983, PASCAL got legit and was standardized as ISO 7185. The next year Wirth would win the 1984 Turing Award. Perhaps he listened to When Doves Cry when he heard. Or maybe he watched Beverly Hills Cop, Indiana Jones, Gremlins, Red Dawn, The Karate Kid, Ghostbusters, or Terminator on his flight home. Actually, probably not. PASCAL is named after Blaise Pascal, the French philosopher and mathematician. As with many programmers, Pascal was lazy: he built the world’s first fully functional mechanical calculator because his dad made him do too many calculations to help pay the bills. 400 years later, we still need calculators here and there, to help us with our bills.
As with many natural scientists of the time, Blaise Pascal contributed to science and math in a variety of ways: Pascal’s law in hydrostatics, Pascal’s theorem in the emerging field of projective geometry, and important work on atmospheric pressure and vacuum, including rediscovering that atmospheric pressure decreases with height. He was a pioneer in the theory of probability. And while Indian and Chinese mathematicians had been using it for centuries, Pascal popularized what we now call Pascal’s triangle and was credited with Pascal’s identity. As with many in the 1600s he was deeply religious and dedicated the later part of his life to religious writings, including the Pensées, which helped shape the French Classical Period. Perhaps he wrote it while listening to Bonini or watching The History of Sir Francis Drake. The PASCAL programming language was built to teach students to program, but as with many tools students learn on, it grew in popularity as those students graduated from college throughout the 1970s and 1980s. I learned PASCAL in high school computer science in 1992. Yes, Kris Kross was making you Jump and Billy Ray Cyrus was singing Achy Breaky Heart the same year his daughter was born. I learned my first if, then, else, case, and while statements in PASCAL. PASCAL is a procedural programming language that supports structured data structures and structured programming. At the time I would write programs on notebook paper and type them in next time I had a chance to play with a computer. I also learned enumerations, pointers, type definitions, and sets. PASCAL also gave me my first exposure to integers, real numbers, chars, and booleans. I can still remember writing the word program at the top of a piece of paper, followed by a word to describe the program I was about to write. Then writing begin and end. Never forgetting the period after the end of course. The structures were simple.
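For anyone who never had the pleasure, here’s a reconstruction of the kind of program described above: the program header up top, begin and end around each block, and that all-important final period:

```pascal
{ A comment goes in braces }
program Hello;

procedure Greet;
begin
  writeln('Hello world')
end;

begin
  Greet
end.  { never forget the period after the final end }
```

Readable enough that you could write it on notebook paper and type it in later, which was rather the point.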
Instead of echo you would simply use the word write to write text to the screen, followed by hello world in parentheses, wrapped in single quotes. After all, there are special characters to deal with if you use a comma and an exclamation point in hello world. I also clearly remember wrapping my comments in {} because if you didn’t comment what you did, it was assumed you stole your code from Byte magazine. I also remember making my first procedure and how there was a difference between procedures and functions. The code was simple and readable. Later I would use AmigaPascal and hate life. PASCAL eventually branched out into a number of versions including Visual PASCAL, Instant PASCAL, and Turbo PASCAL. There are still live variants, including the Free Pascal compiler available at freepascal.org. PASCAL was the dominant language used in the early days of both Apple and Microsoft. So much so that most of the original Apple software was written in PASCAL, including Desk Accessories, which would later become Extensions. Perhaps the first awesome computer was the Apple II, where PASCAL was all over the place. Because developers knew PASCAL, it ended up being the main high-level language for the Lisa and then the Mac. In fact, some of the original Mac OS was hand-translated to assembly language from PASCAL. PASCAL wasn’t just for parts of the operating system. It was also used for a number of popular early programs, including Photoshop 1. PASCAL became object-oriented first with Lisa Pascal’s Clascal, then with Object PASCAL in 1985. That year Apple released MacApp, an object-oriented API for the classic Mac operating system. Apple stuck with Object PASCAL until the beginning of the end of mainline PASCAL in 1991, when it transitioned to C++ for System 7. MacApp would go on to burn a fiery death when Apple acquired NeXT. Perhaps it was due to Terminator 2 being released that year and developers figured things had gone full circle.
Or maybe it was because REM caused them to Lose Their Religion. PASCAL wasn’t just for Apple. Universities all over the world were using PASCAL, including the University of California San Diego, which introduced UCSD Pascal, a branch of Pascal-P2. The UCSD p-System was one of three operating systems you could run on the original IBM Personal Computer. Microsoft would then implement an Object Pascal compiler, which caught on with developers that wanted to get more than what BASIC offered. Around this time, people were actually making real money, and Borland released Turbo Pascal, making it cheap to grab big market share. Object PASCAL also begat Delphi, still used to write programs by people who refuse to change today. Wirth himself was fascinating. He not only wrote The Pascal User Manual and Report but also went on to write an article called Program Development by Stepwise Refinement, about how to teach programming, and something all computer science teachers and aficionados should read. His book Algorithms + Data Structures = Programs helped shape how I think of computers even today, because it turns out the more things change, the more they stay the same. He also coined Wirth’s law, which states that software is getting slower more rapidly than hardware becomes faster. Every time I see a beachball or hourglass on my laptop I think of that. This could initially be from his work on building a compiler for the ALGOL language, which resulted in ALGOL W, and watching that turn into the swamp thing that was ALGOL X and then ALGOL 68, which became so complex and difficult that writing good compilers was out of the question. Due to this, ALGOL languished, making room for PASCAL in the hearts of early programmers. While PASCAL is his most substantial contribution to computing, he also designed parts of ALGOL, along with Modula and Oberon, and did two sabbaticals at Xerox PARC, the first from 1976 to 1977 and the second from 1984 to 1985.
Here, he would have been exposed to a graphical operating system and the mouse, before Apple popularized them. Perhaps one of the most enduring legacies of PASCAL, though, is A. A is a computer programming language that was originally written as a hoax. Dennis Ritchie had just finished reading a National Lampoon parody of the Lord of the Rings called “Bored of the Rings,” and Unix, as a parody of Multics, was meant “to be as complex and cryptic as possible to maximize casual users' frustration levels.” Ken Thompson would go on to describe A this way: “Dennis and Brian worked on a warped version of Pascal, called 'A'. 'A' looked a lot like Pascal, but elevated the notion of the direct memory address (which Wirth had banished) to the central concept of the language. This was Dennis's contribution, and he in fact coined the term "pointer" as an innocuous sounding name for a truly malevolent construct.” Anyone who has gotten a null pointer exception should know that their pain is intentional. The prank evolved into B and then C. By the way, the hoax is a hoax. But there is a little truth to every lie. The truth here is that in many ways, C is the anti-PASCAL. I blame Berkeley in the 60s. But not for wasting your time with the hoax. For that I blame me. I mean, first I blame the creators of Unix, then me. PASCAL has all but been replaced by a multitude of languages starting with the word Visual, along with Objective-C, Java, Go, Ruby, Python, PHP, and other more “modern” languages. But it is still, as they said in Beneath the Planet of the Apes, “Another lovely souvenir from the 20th Century.”
7/13/2019 • 10 minutes, 8 seconds
A Brief History Of Time
Welcome to the History of Computing Podcast. Today we’re going to review A Brief History of Time - no, not that brief history of time, but instead how time has evolved in computing. We love old things still being used on this podcast. Time is important; so important that it’s epic! Or epoch, more specifically. The epoch is a date and time from which a computer measures the time on the system. Most operating systems derive their time from the number of seconds that have passed since January 1st, 1970, when the clock struck midnight, when time began - likely the Catch-22 that the movie was based on, released later that year. This cue is taken from the Unix epoch. Different systems use different times as their epoch. MATLAB uses January 0, 1 BC - which is all you need to know about MATLAB developers, really. COBOL used January 1, 1601, likely indicating that was the year COBOL was written. OK, so it isn’t - but I’m guessing it’s when many of the philosophies of the language were first conceived. Time must seem like it started on January 1st, 2001 to Apple’s Cocoa framework, which begins its epoch then. My least favorite would be AmigaOS, which started epoch time on January 1st, 1978 - nothing good happened in 1978. Jaws 2 and Halloween were released that year. Yuck. Well, Animal House was pretty good. But I could do without Boogie Oogie Oogie. And I could do without Andy Gibb’s Shadow Dancing. Disco died the next year. As did the soul of anyone that had to use an Amiga. Due to how many modern encryption protocols work, you want to keep time in sync between computers. A skew, or offset in that time, by even microseconds can impact the ability to decrypt data. This led to the Network Time Protocol, or NTP for short. NTP was designed by David L. Mills of the University of Delaware. It is a networking protocol that provides for clock synchronization between computer systems over standard data networks.
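You can see the epoch in action with a few lines of Python; the Cocoa offset below is just the number of seconds between Apple’s 2001 epoch and the Unix one:

```python
# Most systems count time as seconds since midnight UTC, January 1st, 1970.
import time
from datetime import datetime, timezone

now = time.time()  # seconds since the Unix epoch, as a float
print(int(now))

# Converting zero seconds back turns the bare counter into a calendar date
print(datetime.fromtimestamp(0, tz=timezone.utc))
# 1970-01-01 00:00:00+00:00

# Apple's Cocoa epoch (January 1st, 2001) is just an offset from Unix time
cocoa_offset = int(datetime(2001, 1, 1, tzinfo=timezone.utc).timestamp())
print(cocoa_offset)  # 978307200
```

Every other epoch in the list above works the same way: same counter, different zero point.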
NTP has been running since 1985, making it one of the oldest Internet protocols still in use today. It was originally defined in RFC 958. (You can see epoch seconds on your own machine with `date +%s`.) NTP has had a number of updates over the years, although they have slowed as it became more popular. NTP version 0 was released in 1985, the same year as the Goonies, Pale Rider, the Breakfast Club and, ironically, Back to the Future. Given that NTP was free, it’s also ironic that Dire Straits released Money for Nothing the same year it was released. Simple Minds, A-ha, and Tears for Fears ruled the airwaves that year, with Tears for Fears proving that Everybody Wants to Rule the World; but despite being free, NTP is the one on all computers, thus outlasting the rest and being the one that ended up ruling the world. Version 1 came in 1988, 2 in 1989, 3 in 1992, and NTPv4 was published as RFC 5905 in 2010, a careful process given how dependent we as an IT industry now are on NTP. To better understand how dependent we are, let’s look at the three main platforms: In Windows, you can just double-click the system clock and then click on the Internet Time tab. On the Mac, open System Preferences > Date & Time, which configures the /usr/libexec/timed launch daemon. And on Linux, open System > Admin > Time and Date. These screens allow you to enter an NTP server. The NIST Internet Time Service (ITS) provides 24 names of network time servers, and each vendor often operates their own, such as time.apple.com. Each machine then applies a time zone offset. You know Apple’s time servers because you can read them plain as day by default if you cat /private/etc/ntp.conf - it just outputs server time.apple.com. I’d tell you how to do it in Windows but it would blow your mind. OK, I’ll do it anyways: Just reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters and then read the value of the NtpServer field in the output. OK, not mind blowing. But what is mind blowing? The Big Bang is mind blowing.
Not the TV show; that’s not. NTP uses 64-bit timestamps. Those consist of a 32-bit portion used for seconds and a 32-bit portion used for a fraction of a second, meaning that the timestamp rolls over every 2^32 seconds, which is about 136 years. That rollover will first need to be dealt with by February 7th, 2036, since NTP counts from 1900. NTP moving to a 128-bit date format should take us until the next Big Bang, when this stuff won’t matter any more. Mills was an interesting cat. He got his PhD in Computer and Communication Sciences from the University of Michigan in 1971, where he worked on ARPA projects and wrote terminal software that provided connections to the IBM 360 mainframe. He also worked on the Exterior Gateway Protocol. He initially invented NTP in 1981 and was a professor in computer science at the University of Delaware from 1986 to 2008. He’s still an emeritus professor at the University of Delaware. In 1610 (a few years after the COBOL epoch), the English naval officer Samuel Argall named the Delaware River and Delaware Bay after the then governor of Virginia, Thomas West. West happened to be the 12th Baron De La Warr. Did you know that Delaware was the first state to ratify the Constitution, on December 7th, 1787? Delaware is the Diamond State, and the second smallest state in the Union. The state insect is a ladybug. Ryan Phillippe is probably more famous than NTP, even though he killed disco with his awful acting in Studio 54. Henry Heimlich is from Delaware. Hopefully you don’t need to use his infamous maneuver as often as NTP gets updated. Elisabeth Shue is also from Delaware. The Karate Kid was awesome. But that’s it. No one else of note. Joe Biden, Senator from Delaware from 1973-2009 and Vice President from 2009 to 2017 - he’s not from Delaware, he’s from Scranton. In case you’re curious, that’s not in Delaware.
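That rollover math is easy to check for yourself. A quick sketch in Python, including the offset between the NTP epoch (1900) and the Unix epoch (1970):

```python
# NTP's 64-bit timestamp: 32 bits of seconds, 32 bits of fraction.
# The seconds counter rolls over every 2**32 seconds.
rollover_seconds = 2 ** 32
rollover_years = rollover_seconds / (365.25 * 24 * 3600)
print(round(rollover_years, 1))  # 136.1

# NTP counts from January 1st, 1900, so era 0 ends in early 2036.
# The NTP-to-Unix offset is the 70 years (with 17 leap days) between
# the two epochs:
ntp_to_unix = (70 * 365 + 17) * 24 * 3600
print(ntp_to_unix)  # 2208988800
```

That 2208988800 constant shows up in just about every NTP client ever written.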
Following the retirement of Mills from the University of Delaware, the reference implementation is currently maintained as an open source project led by Harlan Stenn, who has submitted bug fixes and portability improvements to the NTP codebase since the 1980s. He’s been allowed to focus on time because of the Network Time Foundation, which can be found at https://www.nwtime.org. What’s next for NTP? For one, ratifying NTS. Network Time Security (NTS), still an IETF draft, lets users or servers authenticate to NTP servers. This involves a key exchange over TLS that protects against man-in-the-middle attacks, using standard PKI as well as a TLS handshake that then allows time synchronization via extension fields.
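For the curious, speaking basic NTP doesn’t take much. Here’s a minimal SNTP client sketch in Python; pool.ntp.org is just an example server, and in production you’d want a full NTP implementation (and soon, NTS) rather than this toy:

```python
# A minimal SNTP client sketch: build a 48-byte request with
# LI=0, version 4, mode 3 (client), then read the server's
# transmit timestamp and convert it to the Unix epoch.
import socket
import struct

NTP_TO_UNIX = 2208988800  # seconds between the 1900 and 1970 epochs

def build_request():
    packet = bytearray(48)
    packet[0] = (0 << 6) | (4 << 3) | 3  # LI=0, VN=4, Mode=3
    return bytes(packet)

def query(server="pool.ntp.org"):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2)
        s.sendto(build_request(), (server, 123))
        data, _ = s.recvfrom(48)
    # Transmit timestamp seconds live at bytes 40-43, big-endian
    ntp_seconds = struct.unpack("!I", data[40:44])[0]
    return ntp_seconds - NTP_TO_UNIX

print(len(build_request()))  # 48
```

Note that this toy trusts whatever comes back on the wire, which is exactly the gap NTS is meant to close.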
7/11/2019 • 9 minutes, 25 seconds
The Prehistory of the Computer
One of the earliest computing devices was the abacus. This number crunching device can first be found in use by Sumerians, circa 2700 BC. The abacus can be found throughout Asia, the Middle East, and India throughout ancient history. Don’t worry, the rate of innovation always speeds up as multiple technologies can be combined. Leonardo da Vinci sketched out the first known plans for a calculator. But it was the 17th century, or the early modern period in Europe, that gave us the Scientific Revolution. Names like Kepler, Leibniz, Boyle, Newton, and Hooke brought us calculus, telescopes, microscopes, and even electricity. The term computer is first found in 1613, describing a person that did computations. Wilhelm Schickard built the first calculating machine in 1623, which he described in a letter to Kepler. Opening the minds of humanity caused people like Blaise Pascal to theorize about vacuums, and he then did something very special: he built a mechanical calculator that could add and subtract numbers, do multiplication, and even division. And more important than building a prototype, he sold a few! His programming language was a lantern gear. It took him 50 prototypes and many years, but he presented the calculator in 1645, earning him a royal privilege in France for calculators. That’s feudal French for a patent. Leibniz added repetition to the mechanical calculator in his Step Reckoner. And he was a huge proponent of binary, although he didn’t use it in his mechanical calculator. Binary would become even more important later, when electronics came to computers. But as with many great innovations it took a while to percolate. In many ways, the age of enlightenment was taking the theories from the previous century and building on them. The early industrial revolution, though, was about automation. And so the mechanical calculator was finally ready for daily use in 1820 when another Frenchman, Thomas de Colmar, built the arithmometer, based on Leibniz’s design.
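Leibniz’s beloved binary is easy to play with today. A quick sketch in Python, using the year Pascal presented his calculator as the example number:

```python
# Any whole number is a sum of powers of two, which is all binary is:
# 1645 = 1024 + 512 + 64 + 32 + 8 + 4 + 1
print(bin(1645))              # 0b11001101101
print(int("11001101101", 2))  # 1645
```

Leibniz saw the elegance of this centuries before electronics made two-state circuits the natural home for it.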
A few years earlier, another innovation had occurred: memory. Memory came in the form of punch cards, an innovation that would go on to last well into the second half of the 20th century. The Jacquard loom was used to weave textiles. The punch cards controlled how rods moved and thus were the basis of the pattern of the weave. Punching cards was an early form of programming: you recorded a set of instructions onto a card and the loom performed them. The bash programming of today is similar. Charles Babbage expanded on the ideas of Pascal and Leibniz and added to mechanical computing, making the difference engine, the inspiration of many a steampunk. Babbage had multiple engineers building components for the engine, and after he scrapped his first, he moved on to the analytical engine, adding conditional branching, loops, and memory - and further complicating the machine. The engine borrowed the punch card tech from the Jacquard loom and applied that same logic to math. Ada Lovelace contributed an algorithm for computing Bernoulli numbers on the engine, giving us a glimpse into what an open source collaboration might some day look like. And she was in many ways the first programmer - and the daughter of Lord Byron and Anne Isabella Milbanke, a math whiz. She became fascinated with the engine and ended up becoming an expert at creating a set of instructions to punch onto cards, making her the first programmer of the analytical engine and putting her far before her time. In fact, there would be no programmer with her depth of understanding for another 100 years. Not to make you feel inadequate, but she was 27 in 1843. The engine was a bit too advanced for its time. While Babbage is credited as the father of computing because of his ideas, shipping is a feature. Having said that, it has been proven that if the build had been completed to specifications, the device would have worked. Sometimes the best of plans just can’t be operationalized unless you reduce scope. Babbage added scope. 
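That analogy can be made concrete. Each row of a punch card held one recorded instruction, and the loom read the deck top to bottom and carried each one out - which is exactly how a shell script runs. Here’s a minimal sketch; the “loom instruction” names are invented purely for illustration, not a real loom instruction set:

```shell
#!/bin/sh
# Each call below stands in for one row of a punch card: a single
# recorded instruction, performed in order. The instruction names
# are made up for illustration.
weave() {
  printf '%s\n' "$1"   # "perform" the instruction by announcing it
}

weave "raise odd-numbered rods"    # card row 1: which warp threads lift
weave "pass shuttle"               # card row 2: carry the weft across
weave "raise even-numbered rods"   # card row 3: the alternate threads lift
weave "pass shuttle"               # card row 4
```

Swap the deck of cards and you swap the pattern; swap the script and you swap the program. Same idea, a century and a half apart.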
Babbage was more than his engines, despite his troubles keeping contractors who could build such complex machinery. He looked to tree rings to reconstruct past climates, and he was a mathematician who worked with keys and ciphers. As with Isaac Newton 150 years earlier, the British government also allowed a great scientist/engineer to reform a political institution: the postal system. You see, he was also an early proponent of applying the scientific method to the management and administration of governmental, commercial, and industrial processes. He also got one of the first government grants for R&D to help build the difference engine, although he ended up putting some of his own money in there as well, of course. Babbage died in 1871 and thus ended computing. For a bit. The typewriter came in 1874, as parts kept getting smaller and people kept tinkering with ideas to automate all the things. Herman Hollerith filed for a patent in 1884 to use a machine to punch and count punched cards. He used that first in health care management and then in the 1890 census. He later formed the Tabulating Machine Company, in 1896. In the meantime, Julius E. Pitrap patented a computing scale in 1885. William S. Burroughs (not that one, the other one) formed the American Arithmometer Company in 1886. Sales exploded for these and they merged, creating the Computing-Tabulating-Recording Company. Thomas J. Watson, Sr. joined the company as president in 1914 and expanded business, especially outside of the United States. The name of the company was changed to International Business Machines, or IBM for short, in 1924. Konrad Zuse built the first electric computer from 1936 to 1938 in his parents’ living room. It was called the Z1. OK, so electric is a stretch, how about electromechanical… In 1936 Alan Turing proposed the Turing machine, which printed symbols on tape and simulated a human following a set of instructions. Maybe he accidentally found one of Ada Lovelace’s old papers. 
The first truly programmable electric computer came in 1943, with Colossus, built by Tommy Flowers to break German codes. The first truly digital computer came from Professor John Vincent Atanasoff and his grad student Cliff Berry at Iowa State University. The ABC, or Atanasoff-Berry Computer, took from 1937 to 1942 to build and was the first to compute with vacuum tubes. The ENIAC came from J. Presper Eckert and John Mauchly of the University of Pennsylvania, built from 1943 to 1946. At 1,800 square feet, with nearly ten times that many vacuum tubes, ENIAC weighed around 30 tons. ENIAC is often considered the first general-purpose digital computer because, unlike the ABC, it was fully functional and could be put to many kinds of problems. The Small-Scale Experimental Machine from Frederic Williams and Tom Kilburn at the University of Manchester came in 1948 and added the ability to store and execute a program from memory. The first such program was run by Tom Kilburn on June 21st, 1948. Up to this point, computing devices were being built in universities, with the exception of the Z1. But in 1950, Konrad Zuse sold the Z4, thus creating the commercial computer industry. IBM got into the business of selling computers in 1952 as well, basically outright owning the market until grunge killed the suit in the 90s. MIT pioneered magnetic-core RAM with Whirlwind in the mid-1950s and then went fully transistorized with the TX-0 in 1956. The PDP-1 was released in 1960 by Digital Equipment Corporation (DEC). This was the first minicomputer. My first computer was a DEC. Pier Giorgio Perotto introduced the first desktop computer, the Olivetti Programma 101, in 1964. HP began to sell the HP 9100A in 1968. All of this steam led to the first microprocessor, the Intel 4004, released in 1971. The first truly personal computer was released in 1975 by Ed Roberts, who was the first to call it that. It was the Altair 8800. The IBM 5100 was the first portable computer, released the same year. I guess it’s portable if 55 pounds is considered portable. 
And the end of ancient history came the next year, when the Apple I was developed by Steve Wozniak - which I’ve always considered the start of the modern era of computing.