The New Stack Makers is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For The New Stack Analysts podcast, please see https://soundcloud.com/thenewstackanalysts For The New Stack @ Scale podcast, please see https://soundcloud.com/thenewstackatscale For The New Stack Context podcast, please see https://soundcloud.com/thenewstackcontext Subscribe to TNS on YouTube at: https://www.youtube.com/c/TheNewStack
Internal Developer Platforms: Helping Teams Limit Scope
In this New Stack Makers podcast, recorded at KubeCon + CloudNativeCon North America, Ben Wilcock, a senior technical marketing architect for Tanzu, spoke with TNS editor-in-chief Heather Joslyn about the challenges organizations face when building internal developer platforms, particularly the issue of scope. He emphasized how difficult it is for platform engineering teams to select and integrate Kubernetes projects amid a plethora of options, and highlighted the complexity of tracking software updates, new features, and dependencies once those choices are made. He underscored the advantage of a standardized approach to software deployment, which prevents errors caused by diverse deployment mechanisms. Tanzu aims to simplify the adoption of platform engineering and internal developer platforms, offering a turnkey approach with the Tanzu Application Platform. The platform is designed to be flexible, malleable, and functional out of the box. Additionally, Tanzu has introduced the Tanzu Developer Portal, a focal point where developers can share information, helping teams make faster progress in platform engineering without integrating numerous open source projects.
Learn more from The New Stack about Tanzu and internal developer platforms:
VMware Unveils a Pile of New Data Services for Its Cloud
VMware Expands Tanzu into a Full Platform Engineering Environment
VMware Targets the Platform Engineer
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
1/31/2024 • 15 minutes, 23 seconds
How the Kubernetes Gateway API Beats Network Ingress
In this New Stack Makers podcast, Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. Stefaniak highlights the friction and incidents that arise when multiple teams work on the same ingress. NGINX has also introduced NGINX Gateway Fabric, an implementation of the Kubernetes Gateway API that serves as an alternative to network ingress. The Kubernetes Gateway API, proposed four years ago and recently made generally available, offers advantages such as extensibility. It allows policies to be referenced through custom resource definitions for better validation, avoiding the need for annotations. Each resource has an associated role, enabling clean application of role-based access control policies for enhanced security. While network ingress is prevalent and mature, the Kubernetes Gateway API is expected to find adoption in greenfield projects first. It has the potential to unite North-South and East-West traffic, offering a role-oriented API for comprehensive control over cluster traffic. The episode encourages exploring the Kubernetes Gateway API and engaging with the community to contribute to its development.
Learn more from The New Stack about NGINX and the open source Kubernetes Gateway API:
Kubernetes API Gateway 1.0 Goes Live, as Maintainers Plan for The Future
API Gateway, Ingress Controller or Service Mesh: When to Use What and Why
Ingress Controllers or the Kubernetes Gateway API? Which is Right for You?
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
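The role-oriented split Stefaniak and Osborn describe can be sketched with plain Python dicts standing in for Kubernetes manifests. The resource kinds and fields below follow the Gateway API (gateway.networking.k8s.io/v1); the resource names ("shared-gateway", "store-route") are hypothetical, and this is a conceptual illustration, not NGINX Gateway Fabric code:

```python
# Owned by the cluster operator: the shared entry point.
gateway = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "Gateway",
    "metadata": {"name": "shared-gateway", "namespace": "infra"},
    "spec": {
        "gatewayClassName": "nginx",
        "listeners": [{"name": "http", "port": 80, "protocol": "HTTP"}],
    },
}

# Owned by the application team: routing rules for one app, attached to
# the operator's Gateway via parentRefs rather than via annotations.
http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "store-route", "namespace": "store"},
    "spec": {
        "parentRefs": [{"name": "shared-gateway", "namespace": "infra"}],
        "rules": [
            {
                "matches": [{"path": {"type": "PathPrefix", "value": "/store"}}],
                "backendRefs": [{"name": "store-svc", "port": 8080}],
            }
        ],
    },
}

def attaches_to(route: dict, gw: dict) -> bool:
    """True if the route references the gateway as a parent."""
    return any(
        ref["name"] == gw["metadata"]["name"]
        and ref.get("namespace") == gw["metadata"]["namespace"]
        for ref in route["spec"]["parentRefs"]
    )

print(attaches_to(http_route, gateway))  # True
```

Because each resource maps to one persona, RBAC can grant app teams write access to HTTPRoutes only, leaving the shared Gateway under operator control.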
1/23/2024 • 15 minutes, 3 seconds
What You Can Do with Vector Search
TNS publisher Alex Williams spoke with Ben Kramer, co-founder and CTO of Monterey.ai, and Cole Hoffer, senior software engineer at Monterey.ai, to discuss how the company uses vector search to analyze user voices, feedback, reviews, bug reports, and support tickets from various channels and turn them into product development recommendations. Monterey.ai connects customer feedback to the development process, bridging customer support and leadership to align with user needs. Figma and Comcast are among the companies using this approach. In this interview, Kramer discussed the challenges of building products based on large language models (LLMs), the importance of diverse skills in AI web companies, and how Monterey employs Zilliz for vector search, leveraging Milvus, an open source vector database. Kramer highlighted Zilliz's flexibility, the underlying Milvus technology, and its choice of algorithms for semantic search. The decision to choose Zilliz was influenced by its performance in the company's use case, its privacy and security features, and its ease of integration into their private network. The cloud-managed solution and Zilliz's ability to meet their needs were crucial factors for Monterey.ai, given its small team and preference to avoid managing infrastructure.
Learn more from The New Stack about Zilliz and vector database search:
Improving ChatGPT’s Ability to Understand Ambiguous Prompts
Create a Movie Recommendation Engine with Milvus and Python
Using a Vector Database to Search White House Speeches
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
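At its core, the vector search that Monterey.ai relies on ranks stored embeddings by similarity to a query embedding. The toy sketch below uses made-up 3-dimensional vectors and a linear scan; a real system like Milvus uses model-generated embeddings with hundreds of dimensions and approximate nearest-neighbor indexes, so treat this as a conceptual illustration only:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Feedback snippets paired with hypothetical embeddings.
corpus = {
    "checkout button is broken": [0.9, 0.1, 0.0],
    "love the new dark mode": [0.0, 0.2, 0.9],
    "payment page crashes on submit": [0.8, 0.3, 0.1],
}

def search(query_vector, top_k=2):
    """Rank stored feedback by similarity to the query embedding."""
    ranked = sorted(
        corpus.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:top_k]]

# A query embedding that is "about" payment problems: the two
# payment-related snippets rank above the unrelated one.
print(search([0.85, 0.2, 0.05]))
```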
1/17/2024 • 25 minutes, 28 seconds
How Ethical Hacking Tricks Can Protect Your APIs and Apps
TNS host Heather Joslyn sits down with Ron Masas to discuss the trade-offs involved in creating fast, secure applications and APIs. He notes a common issue of neglecting documentation and validation, leading to vulnerabilities. Weak authorization is a recurring problem, with instances where changing an invoice ID could expose another user's data. Masas, an ethical hacker, highlights the risk posed by "zombie" APIs: applications that have fallen out of use but remain potential targets. He suggests investigating frameworks, checking default configurations, and maintaining robust logging to enhance security. Collaboration between developers and security teams is crucial: "security champions" within development teams and nuanced communication about vulnerabilities from security teams are essential elements of robust cybersecurity. The podcast also covers case studies involving TikTok and Digital Ocean, Masas's views on AI and development, and anticipated security challenges.
Learn more from The New Stack about Imperva and API security:
What Developers Need to Know about Business Logic Attacks
Why Your APIs Aren’t Safe — and What to Do about It
The Limits of Shift-Left: What’s Next for Developer Security
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
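The weak-authorization bug Masas describes, where changing an invoice ID exposes another user's data, is often called broken object-level authorization (or IDOR). The sketch below contrasts a vulnerable handler with a safe one; all names and data are hypothetical, not code from any system discussed in the episode:

```python
INVOICES = {
    101: {"owner": "alice", "total": 120.00},
    102: {"owner": "bob", "total": 45.50},
}

def get_invoice_vulnerable(invoice_id: int, current_user: str) -> dict:
    # BUG: any authenticated user can read any invoice just by
    # changing the ID in the request.
    return INVOICES[invoice_id]

def get_invoice_safe(invoice_id: int, current_user: str) -> dict:
    invoice = INVOICES.get(invoice_id)
    # Authorization check: the object must belong to the requester.
    if invoice is None or invoice["owner"] != current_user:
        raise PermissionError("not found")  # avoid leaking existence
    return invoice

print(get_invoice_vulnerable(102, "alice"))  # leaks bob's invoice
```

Note the safe version raises the same error for "missing" and "not yours", so an attacker cannot enumerate which IDs exist.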
1/10/2024 • 16 minutes, 20 seconds
2023 Top Episodes - What’s Platform Engineering?
Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast. This structure is important for individual contributors, Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users." This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.
Learn more from The New Stack about Platform Engineering and Humanitec:
Platform Engineering Overview, News, and Trends
The Hype Train Is Over. Platform Engineering Is Here to Stay
9 Steps to Platform Engineering Hell
1/3/2024 • 23 minutes, 44 seconds
2023 Top Episodes - The End of Programming is Nigh
Is the end of programming nigh? That's the big question posed in this episode, recorded earlier in 2023. It was very popular among listeners, and with the topic as relevant as ever, we wanted to wrap up the year by highlighting this conversation again. If you ask Matt Welsh, he'd say yes, the end of programming is upon us. As Richard MacManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming. Welsh joined us on The New Stack Makers to discuss his perspective on the end of programming and answer questions about the future of computer science, distributed computing, and more. Welsh is now the founder of fixie.ai, a platform his company is building to let businesses develop applications on top of large language models and extend them with different capabilities. For 40 to 50 years, programming language design has had one goal, Welsh said in the interview: make it easier to write programs. Still, programming languages are complex, and no amount of work is going to make them simple.
Learn more from The New Stack about AI and the future of software development:
Top 5 Large Language Models and How to Use Them Effectively
30 Non-Trivial Ways for Developers to Use GPT-4
Developer Tips in AI Prompt Engineering
12/27/2023 • 31 minutes, 59 seconds
The New Age of Virtualization
KubeVirt, a relatively new capability within Kubernetes, signifies a shift in the virtualization landscape, allowing operations teams to run KVM virtual machines nested in containers behind the Kubernetes API. This integration means the Kubernetes API now encompasses the concept of virtual machines, enabling VM-based workloads to operate seamlessly within a cluster. This development addresses the challenge of transitioning traditional virtualized environments into cloud native settings, where certain applications may resist containerization or require substantial investment to adapt. The emerging era of virtualization simplifies running virtual machines without worrying about the underlying infrastructure, presenting various opportunities and use cases. Noteworthy advantages include simplified migration of legacy applications without the need for containerization, reducing the associated costs. KubeVirt 1.1, discussed at KubeCon in Chicago by Red Hat's Vladik Romanovsky and Nvidia's Ryan Hallisey, introduces features like memory hotplug and vCPU hotplug, underscoring KubeVirt's stability. That stability now allows the project to implement features that were previously out of reach.
Learn more from The New Stack about KubeVirt and the Cloud Native Computing Foundation:
The Future of VMs on Kubernetes: Building on KubeVirt
A Platform for Kubernetes
Scaling Open Source Community by Getting Closer to Users
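What "virtual machines behind the Kubernetes API" means in practice is that KubeVirt adds a VirtualMachine custom resource, so a VM is declared like any other Kubernetes object. The sketch below shows such a manifest as a Python dict for illustration; the apiVersion and kind follow KubeVirt (kubevirt.io/v1), while the name and sizing are hypothetical:

```python
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    # The guest's virtual hardware, served by KVM inside a pod.
                    "cpu": {"cores": 2},
                    "resources": {"requests": {"memory": "2Gi"}},
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        "containerDisk": {"image": "quay.io/containerdisks/fedora:39"},
                    }
                ],
            }
        },
    },
}

# Because it is an ordinary API object, standard tooling (kubectl,
# controllers, GitOps) can manage VMs alongside containerized workloads.
print(vm["kind"], vm["spec"]["template"]["spec"]["domain"]["cpu"]["cores"])
```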
12/21/2023 • 16 minutes, 23 seconds
Kubernetes Goes Mainstream? With Calico, Yes
The Kubernetes landscape is evolving, shifting from the domain of visionaries and early adopters to a more mainstream audience. Tigera, represented by CEO Ratan Tipirneni at KubeCon North America in Chicago, recognizes the changing dynamics and the demand for simplified Kubernetes solutions. Tigera's open source Calico security platform has been updated with a focus on mainstream users, presenting a cohesive and user-friendly solution. The update encompasses five key capabilities: vulnerability scoring, configuration hardening, runtime security, network security, and observability. The aim is to give users a comprehensive view of their cluster's security through a zero-to-100 score, tracked over time. Tigera's recommendation engine suggests actions to improve overall security based on the risk profile, evaluating factors such as egress traffic controls and workload isolation within dynamic Kubernetes environments. Tigera emphasizes the importance of understanding the actual flow of data across the network, using empirical data and observed behavior to build accurate security measures rather than relying on projections. This approach addresses the evolving needs of customers who seek not just vulnerability scores but insights into runtime behavior for a more robust security profile.
Learn more from The New Stack about Tigera and Cloud Native Security:
Cloud Native Network Security: Who’s Responsible?
Turbocharging Host Workloads with Calico eBPF and XDP
3 Observability Best Practices for Cloud Native App Security
12/13/2023 • 20 minutes, 8 seconds
Hello, GitOps -- Boeing's Open Source Push
Boeing, with around 6,000 engineers, is emphasizing open source engagement by focusing on three main themes, according to Damani Corbin, who heads Boeing's Open Source office. He joined our host, Alex Williams, for a discussion at KubeCon + CloudNativeCon in Chicago. The first priority Corbin describes is simplifying the consumption of open source software for developers. Second, Boeing aims to facilitate developer contributions to open source projects, fostering involvement in communities like the Cloud Native Computing Foundation and the Linux Foundation. The third theme involves identifying opportunities for "inner sourcing" so that internally developed solutions can be shared across different groups. Boeing is actively working to break down barriers and encourage code reuse across the organization, promoting participation in open source initiatives. Corbin highlights the importance of separating business-critical components from those that can be shared with the community, prioritizing security and extending efforts to enhance open source security practices. The organization is consolidating its open source strategy by collaborating with legal and information security teams. Corbin emphasizes the goal of making open source involvement accessible and attractive, with a phased approach to encourage meaningful contributions and, eventually, to compensate engineers for open source work.
Learn more from The New Stack about Boeing and CNCF open source projects:
How Boeing Uses Cloud Native
How Open Source Has Turned the Tables on Enterprise Software
Scaling Open Source Community by Getting Closer to Users
Mercedes-Benz: 4 Reasons to Sponsor Open Source Projects
12/12/2023 • 19 minutes, 14 seconds
How AWS Supports Open Source Work in the Kubernetes Universe
At KubeCon + CloudNativeCon North America 2022, Amazon Web Services (AWS) revealed plans to mirror Kubernetes assets hosted on Google Cloud, addressing the Cloud Native Computing Foundation's (CNCF) egress costs. A year later, the project, led by AWS's Davanum Srinivas, redirects image requests to the nearest cloud provider, reducing egress costs for users. AWS's Todd Neal and Jonathan Innis discussed this on The New Stack Makers podcast, recorded at KubeCon North America 2023. Neal explained the registry's functionality, which lets users pull images directly from the respective cloud provider, avoiding egress costs. The discussion also highlighted AWS's recent open source contributions, including beta features in kubectl, the prerelease of containerd 2.0, and Microsoft's support for Karpenter on Azure. Karpenter, an AWS-developed Kubernetes cluster autoscaler, simplifies node group configuration, dynamically selecting instance types and availability zones based on running pods. The AWS team encouraged developers to contribute to Kubernetes ecosystem projects and join the sig-node CI subproject to enhance kubelet reliability. The conversation emphasized the benefits of open development for rapid feedback and community collaboration.
Learn more from The New Stack about AWS and Open Source:
Powertools for AWS Lambda Grows with Help of Volunteers
Amazon Web Services Open Sources a KVM-Based Fuzzing Framework
AWS: Why We Support Sustainable Open Source
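The mirroring idea described above amounts to routing each image pull to a mirror in the same cloud as the requester, so the bytes never cross a provider boundary (which is what incurs egress fees). The hostnames and client-detection logic below are hypothetical; the real community registry makes this decision server-side:

```python
# Map each cloud to a (hypothetical) in-cloud mirror; anything else
# falls back to the origin registry.
MIRRORS = {
    "aws": "aws-mirror.example.com",
    "gcp": "gcp-mirror.example.com",
}
DEFAULT = "origin.example.com"

def pick_registry(client_cloud: str) -> str:
    """Serve from the requester's own cloud when a mirror exists there."""
    return MIRRORS.get(client_cloud, DEFAULT)

print(pick_registry("aws"))      # aws-mirror.example.com
print(pick_registry("on-prem"))  # origin.example.com
```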
12/7/2023 • 17 minutes, 45 seconds
2024 Forecast: What Can Developers Expect in the New Year?
In the past year, developers have faced both promise and uncertainty, particularly in the realm of generative AI. Heath Newburn, global field CTO for PagerDuty, joins TNS host Heather Joslyn to talk about the impact AI and other trends will have on developers in 2024. Newburn anticipates a growing emphasis on DevSecOps in response to high-profile cyber incidents, noting a shift in executive attitudes toward security spending. The rise of automation-centric tools like Backstage signals a changing landscape in the link between development and operations tooling. Notably, there's a move from focusing on efficiency gains to achieving new outcomes, with organizations seeking innovative products rather than marginal improvements in coding speed. Newburn highlights the importance of experimentation, encouraging organizations to identify areas for trial and error and to learn swiftly from failures. The coming year is predicted to favor organizations capable of rapid experimentation and information gathering over perfection in code writing. Listen to the full podcast episode as Newburn further discusses his predictions for platform engineering, remote work, and the continued impact of generative AI.
Learn more from The New Stack about PagerDuty and trends in software development:
How AI and Automation Can Improve Operational Resiliency
Why Infrastructure as Code Is Vital for Modern DevOps
Operationalizing AI: Accelerating Automation, DataOps, AIOps
12/6/2023 • 22 minutes, 16 seconds
How to Know If You’re Building the Right Internal Tools
In this episode of The New Stack Makers, Rob Skillington, co-founder and CTO of Chronosphere, discusses the challenges engineers face in building tools for their organizations. Skillington argues that the "build or buy" framing oversimplifies the tooling question and that understanding a project's abstractions is crucial. Engineers should consider where to build and where to buy, creating solutions that address the entire problem. Skillington advises against short-term thinking, urging innovators to consider the long-term landscape. Drawing from his experience at Uber, Skillington highlights the importance of knowing the audience and customer base, even when they are colleagues. He shares a lesson learned while building a visualization platform for engineers at Uber, where treating user adoption as a key performance indicator upfront could have improved the project's outcome. Skillington also addresses "not invented here" syndrome, noting its prevalence in organizations like Microsoft and its potential impact on tool adoption. He suggests that younger companies, like Uber, may be more inclined to explore external solutions rather than building everything in-house. The conversation provides insight into Skillington's experiences and the considerations involved in developing internal tools and platforms.
Learn more from The New Stack about Software Engineering, Observability, and Chronosphere:
Cloud Native Observability: Fighting Rising Costs, Incidents
A Guide to Measuring Developer Productivity
4 Key Observability Best Practices
12/5/2023 • 20 minutes, 7 seconds
Hey Programming Language Developer -- Get Over Yourself
Jean Yang, founder of API observability company Akita Software, argues that programming languages should be shaped by software development needs and data rather than philosophical ideals. Yang, a former assistant professor at Carnegie Mellon University, believes programming tools and processes should be informed by actual use and data, prioritizing the developer experience over the language creator's beliefs. With a background in programming languages, Yang advocates a shift away from the outdated notion that language developers build only for themselves. In this discussion on The New Stack Makers, Yang underscores the importance of understanding developers' real needs, especially now that developer tools have grown into a full-time industry. She argues for a focus on UX design and product fundamentals in tool development, moving beyond the traditional mindset in which developer tools were side projects. Yang founded Akita to address the challenges of building reliable software systems in a world dominated by APIs and microservices. The company pivoted to API observability, recognizing the crucial role APIs play in making complex systems understandable. Yang's commitment to improving software correctness, and her belief in APIs as the key to abstraction and ease of monitoring, align with Postman's direction after acquiring Akita. Postman aims to serve developers worldwide, emphasizing the significance of APIs in complex systems.
Check out more episodes from The Tech Founder Odyssey series:
How Byteboard’s CEO Decided to Fix the Broken Tech Interview
A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem
How Teleport’s Leader Transitioned from Engineer to CEO
11/30/2023 • 26 minutes, 10 seconds
Docker CTO Explains How Docker Can Support AI Efforts
Docker CTO Justin Cormack reveals that Docker has been a go-to tool for data scientists in AI and machine learning for years, primarily in specialized areas like image processing and prediction models. However, the release of OpenAI's ChatGPT last year sparked a significant surge in Docker's popularity within the AI community. The focus shifted to large language models (LLMs), with growing interest in the retrieval-augmented generation (RAG) stack. Docker's collaboration with Ollama enables developers to run Llama 2 and Code Llama locally, simplifying the process of starting and experimenting with AI applications. Additionally, partnerships with Neo4j and LangChain provide enhanced support for storing and retrieving data for LLMs. Cormack emphasizes the simplicity of getting started locally, which sidesteps the GPU shortages in the cloud. Docker is also building an AI solution on its own data, aiming to help users Dockerize applications through an interactive notebook in Visual Studio Code. The tool leverages LLMs to analyze applications, suggest improvements, and generate Docker files tailored to specific languages and applications. Docker's integration with AI technologies demonstrates a commitment to making AI and Docker more accessible and user-friendly.
Learn more from The New Stack about AI and Docker:
Artificial Intelligence News, Analysis, and Resources
Will GenAI Take Jobs? No, Says Docker CEO
Debugging Containers in Kubernetes — It’s Complicated
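The RAG pattern mentioned above retrieves the documents most relevant to a question and feeds them to a language model as context. The sketch below is a deliberately minimal illustration: retrieval is a toy word-overlap score and the model call is left out entirely; a real stack would use a vector store and a local model (for example, one served by Ollama):

```python
DOCS = [
    "Llama 2 can be run locally with Ollama.",
    "Docker Compose can start an app's containers together.",
    "Neo4j is a graph database.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCS,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble the augmented prompt that would be sent to an LLM."""
    context = "\n".join(retrieve(question))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I run Llama 2 locally?"))
```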
11/28/2023 • 12 minutes, 28 seconds
What Does Open Mean in AI?
In this episode, Stefano Maffulli, executive director of the Open Source Initiative, discusses why AI needs a new definition of "open": AI differs significantly from open source software. The complexity arises from the unique nature of AI, particularly large language models and transformers, which challenge traditional copyright frameworks. Maffulli emphasizes the urgency of establishing a definition for open source AI and describes an ongoing effort to release a set of principles by year's end. The concept of "open" in the context of AI is undergoing a significant transformation, reminiscent of the early days of open source. The recent upheaval at OpenAI, resulting in the removal of CEO Sam Altman, reflects a profound shift in the technology community, prompting a reconsideration of what "open" means in AI. The conversation highlights the parallels between the current AI debate and the early days of software development, emphasizing the need for a cohesive approach to navigate the evolving landscape. Altman's ouster underscores a clash of belief systems within OpenAI, with a "safetyist" community advocating caution and transparency while Altman leans toward experimentation. The history of open source, with its focus on preserving trust over technical superiority, serves as a guide for defining "open" and "AI" in a rapidly changing environment.
Learn more from The New Stack about AI and Open Source:
Artificial Intelligence News, Analysis, and Resources
Open Source Development Threatened in Europe
The AI Engineer Foundation: Open Source for the Future of AI
11/22/2023 • 22 minutes, 39 seconds
Debugging Containers in Kubernetes
DockerCon showcased a commitment to enhancing the developer experience, with a particular focus on the challenge of debugging containers in Kubernetes. The newly launched Docker Debug offers a language-independent toolbox for debugging both local and remote containerized applications. By abstracting Kubernetes concepts like pods and namespaces, Docker aims to simplify debugging and shift the focus from container layers to the application itself. Our guest, Docker principal engineer Ivan Pedrazas, emphasized the need to eliminate unnecessary complexity in debugging, especially in Kubernetes, where developers grapple with unfamiliar concerns exposed by the API. Another Docker project, Tape, simplifies deployment by consolidating Kubernetes artifacts into a single package, streamlining the process for developers. The ultimate goal is to make it possible to debug slim containers with minimal dependencies, optimizing security and user experience in Kubernetes development. While progress is being made, bridging the gap between developer practices and platform engineering expectations remains an ongoing challenge.
Learn more from The New Stack about Kubernetes and Docker:
Kubernetes Overview, News, and Trends
Docker Rolls out 3 Tools to Speed and Ease Development
Will GenAI Take Jobs? No, Says Docker CEO
11/21/2023 • 15 minutes, 49 seconds
Integrating a Data Warehouse and a Data Lake
TNS host Alex Williams is joined by Florian Valeye, a data engineer at Back Market, to shed light on the evolving landscape of data engineering, focusing on Delta Lake and Valeye's contributions to open source communities. As a member of the Delta Lake community, Valeye discusses the intersection of data warehouses and data lakes, emphasizing the need for a unified platform that breaks down traditional barriers. Delta Lake, initially created by Databricks and now under the Linux Foundation, aims to improve reliability, performance, and quality in data lakes. Valeye explains how Delta Lake addresses the challenges posed by the separation of data warehouses and data lakes, emphasizing the importance of providing ACID transactions, real-time processing, and scalable metadata. Valeye's involvement in Delta Lake began as a response to challenges faced at Back Market, a global marketplace for refurbished devices. The platform manages large datasets, and Delta Lake proved to be a pivotal solution for optimizing ETL processes and facilitating communication between data scientists and data engineers. The conversation delves into Valeye's journey with Delta Lake, his introduction to the Rust programming language, and his role as a maintainer of the Rust-based library for Delta Lake. Valeye emphasizes Rust's importance in providing a high-level API with reliability and efficiency, offering a balanced approach for developers. Looking ahead, Valeye envisions Delta Lake evolving beyond traditional data engineering into a platform that seamlessly connects data scientists and engineers. He anticipates improvements in data storage optimization and envisions Delta Lake serving as a standard format for machine learning and AI applications. The conversation concludes with Valeye reflecting on his future contributions, expressing a passion for Rust programming and an eagerness to explore evolving projects in the open source community.
Learn more from The New Stack about Delta Lake and The Linux Foundation:
Delta Lake: A Layer to Ensure Data Quality
Data in 2023: Revenge of the SQL Nerds
What Do You Know about Your Linux System?
11/16/2023 • 20 minutes, 59 seconds
WebAssembly's Status in Computing
Liam Crilly, senior director of product management at NGINX, discussed the potential of WebAssembly (Wasm) in this recording from the Open Source Summit in Bilbao, Spain. With over three decades of experience, Crilly highlighted WebAssembly's promise of universal portability: build once, run anywhere across a network of devices. While Wasm is mature on the client side in browsers, server-side deployment is less developed, still lacking sufficient runtimes and toolchains. Crilly noted that WebAssembly acts as a powerful compiler target, enabling the generation of well-optimized instruction set code. Although it requires a virtual machine, WebAssembly's abstraction layer eliminates hardware-specific concerns and delivers near-native compute performance through additional layers of optimization.
Learn more from The New Stack about WebAssembly and NGINX:
WebAssembly Overview, News and Trends
Why WebAssembly Will Disrupt the Operating System
True Portability Is the Killer Use Case for WebAssembly
4 Factors of a WebAssembly Native World
11/14/2023 • 23 minutes, 40 seconds
PostgreSQL Takes a New Turn
Jonathan Katz, a principal product manager at Amazon Web Services, discusses the evolution of PostgreSQL in this episode of The New Stack Makers. He notes that PostgreSQL's uses have expanded significantly since its inception and now cover a wide range of applications and workloads. Initially considered niche, it faced competition from both open source and commercial relational database systems. Katz's involvement in the PostgreSQL community began as an app developer; he later contributed by organizing events. PostgreSQL originated from academic research at the University of California at Berkeley in the mid-1980s and became an open source project in 1994. In the mid-1990s, proprietary databases like Oracle, IBM DB2, and Microsoft SQL Server dominated the market, while open source alternatives like MySQL, MariaDB, and SQLite emerged. PostgreSQL 16 introduces logical replication from standby servers, enhancing scalability by offloading work from the primary server. The meticulous design process within the PostgreSQL community leads to stable and reliable features. Katz mentions the development of direct I/O as a long-term effort to reduce latency and improve write performance, although it will take several years to implement. Amazon Web Services has built Amazon RDS on PostgreSQL to simplify application development. The managed service handles operational tasks such as deployment, backups, and monitoring, allowing developers to focus on their applications. Amazon RDS supports multiple PostgreSQL releases, making it easier for businesses to manage and maintain their databases.
Learn more from The New Stack about PostgreSQL and AWS:
PostgreSQL 16 Expands Analytics Capabilities
Powertools for AWS Lambda Grows with Help of Volunteers
How Donating Open Source Code Can Advance Your Career
11/8/2023 • 21 minutes, 7 seconds
The Limits of Shift-Left: What’s Next for Developer Security
The practice of "shift left," which moves security concerns to the code level and increases developers' responsibility for security, is facing a backlash, with both developers and security professionals expressing concerns. Peter Klimek, director of technology at Imperva, discusses the reasons behind this backlash in this episode. Some organizations may have exhausted the benefits of shift left, and for many the main challenge isn't finding vulnerabilities but finding the time to address them. Security attacks now increasingly target business logic vulnerabilities rather than the dependency flaws that shift-left tools are better at identifying. Because business logic vulnerabilities are often tied to authorization decisions, they are harder to address through code-level tools. Attacks are also increasingly aimed at the front end, such as APIs and shopping carts. Klimek emphasizes the need for development and security teams to collaborate, and advocates using DORA metrics to assess the impact of security efforts on the development pipeline. Some organizations reach a point where the tools added to the development lifecycle become counterproductive, he notes; DORA metrics can help determine when this occurs and provide valuable insights for security teams.
Learn more from The New Stack about Developer Security and Imperva:
Why Your APIs Aren’t Safe — and What to Do about It
What Developers Need to Know about Business Logic Attacks
Are Your Development Practices Introducing API Security Risks?
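The DORA metrics Klimek recommends are four numbers computed from deployment and incident data: deployment frequency, lead time for changes, change failure rate, and time to restore service. The sketch below computes them from a fabricated deployment log; real teams would pull this data from their CI/CD and incident systems:

```python
from datetime import datetime

# Each entry: (deployed_at, commit_at, caused_failure, minutes_to_restore).
# The data is made up for illustration.
deployments = [
    (datetime(2024, 1, 1, 10), datetime(2023, 12, 31, 16), False, 0),
    (datetime(2024, 1, 2, 11), datetime(2024, 1, 1, 9), True, 45),
    (datetime(2024, 1, 4, 15), datetime(2024, 1, 3, 12), False, 0),
    (datetime(2024, 1, 5, 9), datetime(2024, 1, 4, 17), True, 30),
]

days_observed = 7
deploy_frequency = len(deployments) / days_observed  # deploys per day

# Average hours from commit to deploy.
lead_time = sum(
    (deployed - committed).total_seconds() / 3600
    for deployed, committed, _, _ in deployments
) / len(deployments)

failures = [d for d in deployments if d[2]]
change_failure_rate = len(failures) / len(deployments)
mttr = sum(d[3] for d in failures) / len(failures)  # minutes to restore

print(f"{deploy_frequency:.2f} deploys/day, {lead_time:.1f} h lead time, "
      f"{change_failure_rate:.0%} change failure rate, {mttr:.1f} min MTTR")
```

Tracking these over time shows whether a newly added security tool is slowing delivery (longer lead times, lower frequency) without improving failure rates.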
11/7/2023 • 22 minutes, 41 seconds
How AI and Automation Can Improve Operational Resiliency
Operational resiliency, as explained by Dormain Drewitz of PagerDuty, involves the ability to bounce back and recover from setbacks, not only technically but also in terms of organizational recovery. True resiliency means maintaining the willingness to take risks even after facing challenges. In a conversation with Heather Joslyn on The New Stack Makers podcast, Drewitz discussed the role of AI and automation in achieving operational resiliency, especially in a context where teams are under pressure to be more productive.

Automation, including generative AI code completion tools, is increasingly used to boost developer productivity. However, this may lead to shifting bottlenecks from developers to operations, creating new challenges. Drewitz emphasized the importance of considering the entire value chain and identifying areas where AI and automation can assist. For instance, automating repetitive tasks in incident response, such as checking APIs, closing ports, or database checks, can significantly reduce interruptions and productivity losses.

PagerDuty's AI-powered platform leverages generative AI to automate tasks and create runbooks for incident handling, allowing engineers to focus on resolving root causes and restoring services. This includes drafting status updates and incident postmortem reports, streamlining incident response and saving time. Having an operations platform that can generate draft reports at the push of a button simplifies the process, making it easier to review and edit without starting from scratch.

Learn more from The New Stack about AI, automation, incident response, and PagerDuty:
Operationalizing AI: Accelerating Automation, DataOps, AIOps
Three Ways Automation Can Improve Workplace Culture
Incident Response: Three Ts to Rule Them All
Four Ways to Win Executive Buy-In for Automation
11/3/2023 • 20 minutes, 52 seconds
Will GenAI Take Developer Jobs? Docker CEO Weighs In
In this episode, Scott Johnston, CEO of Docker, highlights the evolving role of developers, emphasizing their increasing importance in architectural decision-making and tool development for applications. This shift toward prioritizing a great developer experience and rapid tool development has led to substantial spending in the industry.

Johnston expressed confidence that integrating generative AI into the developer experience will drive business growth and expand the customer base. He downplayed concerns about AI taking jobs, explaining that it would alleviate repetitive tasks, enabling developers to focus on more complex problem-solving. Johnston likened this evolution to expanding bike lanes in a city leading to increased bike traffic, equating it to the development of more apps due to increased speed and efficiency.

In his talk with TNS host Alex Williams, Johnston emphasized that each advancement in programming languages and tools has expanded the developer market and driven greater demand for applications. Notably, the demand for over 750 million apps in the next two years, as reported by IDC, demonstrates the ever-increasing appetite for creative solutions from developers.

Overall, Johnston sees the integration of generative AI and increasing development velocity as a multifaceted expansion that benefits developers and meets growing demand for applications in the market.

Learn more from The New Stack about generative AI and Docker:
Generative AI News, Analysis, and Resources
Docker Launches GenAI Stack and AI Assistant at DockerCon
Docker Rolls out 3 Tools to Speed and Ease Development
11/2/2023 • 21 minutes, 27 seconds
How Powertools for AWS Lambda Grew via 40% Volunteers
This episode of The New Stack Makers was recorded on the road at the Linux Foundation’s Open Source Summit Europe in Bilbao, Spain. A pair of technologists from Amazon Web Services (AWS) join us to discuss the development of Powertools for AWS Lambda. Andrea Amorosi, a senior solutions architect at AWS, and Leandro Damascena, a specialist solutions architect, share insights into how Powertools evolved from an observability tool to support more advanced use cases like ensuring workload safety, batch processing, streaming data, and idempotency.

Powertools primarily supports Python, TypeScript, Java, and .NET. The latest feature, idempotency for TypeScript, was introduced to help customers achieve best practices for developing resilient and fault-tolerant workloads. By integrating these best practices during the development phase, Powertools reduces the need for costly re-architecting and rewriting of code.

The success of Powertools can be attributed to its strong open source community, which fosters collaboration and contributions from users. AWS ensures transparency by conducting all project activities in the open, allowing anyone to understand and influence feature prioritization and contribute in various ways. Furthermore, the project's international support team offers assistance in multiple languages and time zones.

A noteworthy aspect is that 40% of new Powertools features have been contributed by the community, providing contributors with valuable networking opportunities at a prominent tech giant like AWS. Overall, Powertools demonstrates how open source principles can thrive within a major corporation, offering benefits to both the company and the open source community.

Learn more from The New Stack about Powertools, Lambda, and Amazon Web Services:
AWS Offers a TypeScript Interface for Lambda Observability
How Donating Open Source Code Can Advance Your Career
Turn AWS Lambda Functions Stateful with Amazon Elastic File System
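Idempotency, one of the use cases discussed above, means a retried invocation with the same payload must not repeat its side effects. The decorator below is only a conceptual in-memory sketch of that behavior, not the Powertools API, which persists results in a durable store such as DynamoDB:

```python
import functools
import hashlib
import json

def idempotent(fn):
    """Return the cached result when the same payload is seen again,
    so retries don't repeat side effects (conceptual sketch only;
    real implementations persist this state outside the process)."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(event):
        # Derive a stable key from the payload's canonical JSON form.
        key = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()
        ).hexdigest()
        if key not in cache:
            cache[key] = fn(event)
        return cache[key]
    return wrapper

calls = []

@idempotent
def handler(event):
    calls.append(event)  # the side effect, e.g. charging a payment
    return {"charged": event["amount"]}

handler({"order": 1, "amount": 30})
handler({"order": 1, "amount": 30})  # retry: cached result, no new charge
print(len(calls))  # the side effect ran only once
```

The key design choice is hashing a canonical serialization of the payload, so retries with reordered JSON keys still match the original invocation.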
11/1/2023 • 17 minutes, 43 seconds
What Will Be Hot at KubeCon in Chicago?
KubeCon 2023 is set to feature three hot topics, according to Taylor Dolezal from the Cloud Native Computing Foundation. First, GenAI and large language models (LLMs) are taking the spotlight, particularly regarding their security and integration with legacy infrastructure. Platform engineering is also on the rise, with over 25 sessions at KubeCon Chicago focusing on its definition and how it benefits internal product teams by fostering a culture of product proliferation. Lastly, WebAssembly is emerging as a significant topic, with a dedicated day during the conference week. It is maturing and finding its place, potentially complementing containers, especially in edge computing scenarios. Wasm allows for efficient data processing before data reaches the cloud, adding depth to architectural possibilities.

Overall, these three trends are expected to dominate discussions and presentations at KubeCon NA 2023, offering insights into the future of cloud native technology.

See what came out of the last KubeCon event in Amsterdam earlier this year:
AI Talk at KubeCon
Don’t Force Containers and Disrupt Workflows
A Boring Kubernetes Release
10/31/2023 • 22 minutes, 1 second
How Will AI Enhance Platform Engineering and DevEx?
Digital.ai, an AI-powered DevSecOps platform, serves large enterprises such as financial institutions, insurance companies, and gaming firms. The primary challenge faced by these clients is scaling their DevOps practices across vast organizations. They aim to combine modern development methodologies like agile DevOps with the need for speed and intimacy with end users on a large scale.

This episode features a discussion between Wing To of Digital.ai and TNS host Heather Joslyn about platform engineering and the role of AI in enhancing automation. It delves into the dilemma of whether increased code production and release frequency driven by DevOps practices are inherently beneficial. Additionally, it explores the emerging challenge of AI-assisted development and how large enterprises are striving to realize productivity gains across their organizations.

Digital.ai is focused on incorporating AI into automation to assist developers in creating and delivering code while helping organizations derive more business value from their software in production. The company employs templates to capture and replicate key aspects of software delivery processes and uses AI to automate the rapid setup of developer environments and tooling. These efforts contribute to the concept of the internal developer platform, which consists of multiple toolsets for tasks like creating pipelines and setting up various components.

Learn more from The New Stack about platform engineering, DevSecOps, and Digital.ai:
Platform Engineering Overview, News, and Trends
SRE vs. DevOps vs. Platform Engineering
Meet the New DevSecOps
10/27/2023 • 20 minutes, 10 seconds
Why the Cloud Makes Forecasts Difficult and How FinOps Helps
Moving workloads to the cloud presents cost prediction challenges. Traditional setups with on-premises hardware offer predictability, but cloud costs are usage-based and granular. In this podcast episode, Matt Stellpflug, a senior FinOps specialist at ProsperOps, discusses the complexities of forecasting cloud expenses with TNS host Heather Joslyn.

Cloud users face fluctuating costs due to continuous deployments and changing workloads. There are additional expenses for data access and transfer. Stellpflug emphasizes the importance of establishing reference workloads and benchmarks for accurate forecasting.

Engineers play a vital role in FinOps initiatives since they ensure application availability and system integrity. Stellpflug suggests collaborating with engineering teams to identify essential metrics. He co-authored an "Engineer's Guide to Cloud Cost Optimization," highlighting the distinction between resource and rate optimization. Best practices involve addressing high-impact, low-risk areas first, engaging subject matter experts for complex issues, and maintaining momentum. This episode also provides further insights into implementing FinOps for effective cloud cost management.

Learn more from The New Stack about FinOps and ProsperOps:
FinOps Overview, News, and Trends
ProsperOps Wants to Automate Your FinOps Strategy
Engineer’s Guide to Cloud Cost Optimization: Manual DIY Optimization
Engineer’s Guide to Cloud Cost Optimization: Engineering Resources in the Cloud
Engineer’s Guide to Cloud Cost Optimization: Prioritize Cloud Rate Optimization
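As a rough sketch of the reference-workload approach Stellpflug describes, the snippet below forecasts monthly spend from benchmarked unit costs. The rates and workload numbers are invented for illustration and do not reflect any real provider's pricing:

```python
# Unit costs benchmarked from an existing reference workload
# (invented figures; real prices vary by service and region).
reference = {
    "cost_per_1k_requests": 0.04,
    "cost_per_gb_egress": 0.09,
}

def forecast_monthly_cost(requests_per_day, egress_gb_per_day, days=30):
    """Project monthly spend for a new workload by scaling the
    reference workload's benchmarked unit costs."""
    compute = requests_per_day / 1000 * reference["cost_per_1k_requests"]
    transfer = egress_gb_per_day * reference["cost_per_gb_egress"]
    return (compute + transfer) * days

# Forecast for a workload serving 500k requests/day with 50 GB/day egress.
print(round(forecast_monthly_cost(500_000, 50), 2))
```

The point of the exercise is the structure, not the numbers: once unit costs come from a measured reference workload, fluctuating usage becomes the only unknown in the forecast.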
10/26/2023 • 13 minutes, 32 seconds
How to Be a Better Ally in Open Source Communities
In her keynote address at the Linux Foundation's Open Source Summit Europe, Fatima Sarah Khalid emphasized that being an ally is more than just superficial gestures like wearing pronouns on badges or correctly pronouncing coworkers' names. True allyship involves taking meaningful actions to support and uplift individuals from underrepresented or marginalized backgrounds. This support is essential not only in obvious ways but also in everyday interactions, which collectively create a more inclusive community.

Open source communities typically lack diversity, with only a small percentage of women, non-binary contributors, and individuals from underrepresented backgrounds. Khalid stressed the importance of improving diversity and inclusion through various means, including using inclusive language, facilitating asynchronous communication to accommodate global contributors, and welcoming non-technical contributions such as documentation.

Khalid also provided insights on making open source events more inclusive, like welcoming newcomers and marginalized groups, providing quiet spaces and enforcing a code of conduct, and partnering newcomers with mentors. Moreover, she highlighted GitLab's unique approach to allyship within the organization, including the Ally Lab, which pairs employees from different backgrounds to learn about and understand each other's experiences.

To encourage the audience to embrace allyship, Khalid shared a set of commitments to keep in mind, such as educating oneself about the experiences of marginalized groups, speaking up against inappropriate behavior, using one's voice to amplify marginalized voices, donating to support such groups, and advocating for equity and justice through social networks and connections.
She also shared real-life examples of allyship, illustrating how meaningful actions can create positive change in communities.

Khalid's discussion with host Jennifer Riggins emphasizes the significance of meaningful, everyday actions to promote allyship in open source communities and organizations, ultimately contributing to a more diverse, inclusive, and equitable tech industry.

Learn more from The New Stack about open source, allyship, and GitLab:
Embracing Open Source for Greater Business Impact
Leadership and Inclusion in the Open Source Community
How Implicit Bias Impacts Open Source Diversity and Inclusion
Investing in the Next Generation of Tech Talent
10/25/2023 • 16 minutes, 37 seconds
Open Source Development Threatened in Europe
In a recent conversation at the Open Source Summit in Bilbao, Spain, Gabriel Colombo, the General Manager of Linux Foundation Europe and the Executive Director of the Fintech Open Source Foundation, discussed the potential impact of the Cyber Resilience Act (CRA) on the open source community. The conversation shed light on the challenges and opportunities that the CRA presents to open source and how individuals and organizations can respond.

The conversation began by addressing the Cyber Resilience Act and its significance. Colombo explained that while the Act is being touted as a measure to bolster cybersecurity and national security, it could have unintended consequences for the open source ecosystem, particularly in Europe. The Act, currently in the legislative process, aims to address cybersecurity concerns but could inadvertently hinder open source development and collaboration.

Jim Zemlin, the Executive Director of the Linux Foundation, had previously mentioned the importance of forks in open source development, emphasizing that they are a healthy aspect of the ecosystem. However, Colombo pointed out that the CRA could create a sense of unease, as it might deter people and companies from participating in open source projects or using open source software due to potential legal liabilities.

To grasp the implications of the CRA, Colombo explained some of its key provisions. The initial drafts of the Act proposed potential liability for individual developers, open source foundations, and package managers. This raised concerns about the open source supply chain's potential vulnerability and the distribution of liability.

As the Act evolves, the liability landscape has shifted somewhat. Individual developers may not be held liable unless they consistently receive donations from commercial companies.
However, for open source foundations, especially those accepting recurring donations from commercial entities, there remains a concern about potential liabilities and the need to conform to the CRA's requirements.

Colombo emphasized that this issue isn't limited to Europe. It could impact the entire global open source ecosystem and affect the ability of European developers and small to medium-sized businesses to participate effectively.

The conversation highlighted the challenges open source communities face when engaging with policymakers. Open source is not structured like traditional corporations or industry consortiums, making it more challenging to present a unified front. Additionally, the legislative process can be slow and complex, which may not align with the rapid pace of technology development.

The lack of proactive engagement from the European Commission and the absence of open source communities in the initial consultations on the Act are concerning. The understanding of open source, its nuances, and the role it plays in the broader software supply chain appears limited within policy-making circles.

What Can Be Done?

Gabriel Colombo stressed the importance of awareness and education. It is vital for individuals, businesses, and open source foundations to understand the implications of the CRA. The Linux Foundation and other organizations have launched campaigns to provide information and resources to help stakeholders comprehend the Act's potential impact.

Being vocal and advocating for open source within your network, organization, and through public affairs channels can also make a difference. Engagement with policymakers, especially as the Act progresses through the legislative process, is crucial.
Colombo encouraged businesses to emphasize the significance of open source in their operations and supply chains, making policymakers aware of how the CRA might affect their activities.

In the face of the Cyber Resilience Act, the open source community must unite and actively engage with policymakers. It's essential to educate and raise awareness about the potential impact of the Act and advocate for a balanced approach that strengthens cybersecurity without stifling open source innovation.

The Act's development is ongoing, and there is time for stakeholders to make their voices heard. With a united effort, the open source community can help shape the legislation to ensure that open source remains vibrant and resilient in the face of evolving cybersecurity challenges.

Learn more from The New Stack about open source and Linux Foundation Europe:
At Open Source Summit: Introducing Linux Foundation Europe
Making Europe's 'Romantic' Open Source World More Practical
Embracing Open Source for Greater Business Impact
10/19/2023 • 20 minutes, 18 seconds
How to Get Your Organization Started with FinOps
In this episode of The New Stack Makers podcast, Uma Daniel, a product manager at UST, discusses the current complexities in the global economy, marked by low unemployment except in the tech industry, high inflation, high interest rates, a volatile stock market, and the looming threat of recession. Amid these challenges, organizations are seeking ways to enhance their operational efficiency.

Daniel introduces the concept of FinOps, which goes beyond just managing cloud costs. Instead, it focuses on leveraging the cloud to generate revenue. This represents a cultural shift in many organizations, emphasizing the need for a mindset change across different departments, including business, finance, and procurement.

She dispels misconceptions, such as the belief that only certain teams should be involved in the FinOps process. Daniel stresses that it's a collaborative effort involving various teams, and it's best to adopt FinOps at the beginning of a cloud journey. Once an organization is already established in the cloud, implementing FinOps becomes more challenging.

To foster collaboration, Daniel suggests identifying team members willing to champion FinOps and forming cross-functional teams to lead the initiative. Regular committee meetings and the establishment of generic policies, such as project budgets, help control cloud spending.

This episode, hosted by Heather Joslyn, provides insights into how to initiate and implement a FinOps strategy and highlights common ways in which organizations waste cloud resources.

Learn more from The New Stack about FinOps and UST:
Cloud Cost-Unit Economics — A Modern Profitability Model
What Is FinOps? Understanding FinOps Best Practices for Cloud
Very Large Enterprises Need a Different Approach to FinOps
10/18/2023 • 23 minutes, 13 seconds
What’s Next in Building Better Generative AI Applications?
Since the release of OpenAI's ChatGPT in late 2022, various industries have been actively exploring its applications. Madhukar Kumar, CMO of SingleStore, discussed his experiments with large language models (LLMs) in this podcast episode with TNS host Heather Joslyn. He mentioned a specific LLM called Gorilla, which is trained on APIs and can generate API calls for specific tasks. Kumar also talked about SingleStore Now, an AI conference, where they plan to teach attendees how to build generative AI applications from scratch, focusing on enterprise applications.

Kumar highlighted a limitation of current LLMs: they are "frozen in time" and cannot provide real-time information. To address this, a method called retrieval augmented generation (RAG) has emerged, and SingleStore is using it to keep LLMs updated. In this approach, a user query is first matched against up-to-date enterprise data to provide context, and the LLM is then asked to generate an answer based on that context. This method aims to prevent the generation of factually incorrect responses and relies on storing data as vectors for efficient real-time processing, which SingleStore enables.

This strategy ensures that LLMs can provide current and contextually accurate information, making AI applications more reliable and responsive for enterprises.

Learn more from The New Stack about LLMs and SingleStore:
Top 5 Large Language Models and How to Use Them Effectively
Using ChatGPT for Questions Specific to Your Company Data
6 Reasons Private LLMs Are Key for Enterprises
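The retrieval augmented generation flow described above can be sketched in a few lines. Everything here is a toy stand-in: the three-dimensional "embeddings" are made up and the LLM is a stub that echoes its prompt, but the retrieve-then-generate shape matches the description:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy "vector store": enterprise documents with made-up embeddings.
docs = [
    ("Q3 revenue grew 12% year over year.", [0.9, 0.1, 0.0]),
    ("The VPN requires multi-factor authentication.", [0.0, 0.8, 0.2]),
]

def rag_answer(question, question_vec, llm):
    # Step 1: retrieve the stored document most similar to the query.
    context = max(docs, key=lambda d: cosine(d[1], question_vec))[0]
    # Step 2: ask the LLM to answer grounded in the retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)

# Stand-in LLM that simply echoes its prompt, so the flow is runnable.
result = rag_answer("How did revenue change?", [1.0, 0.0, 0.0], lambda p: p)
print("Q3 revenue" in result)  # the relevant document was retrieved
```

Because the model only sees retrieved, current data as context, its answer can stay up to date even though its weights are "frozen in time."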
Observability in multi-cloud environments is becoming increasingly complex, as highlighted by Martin Mao, CEO and co-founder of Chronosphere. This challenge has two main components: a rise in customer-facing incidents, which demand significant engineering time for debugging, and the ineffectiveness and high cost of existing tools. These issues are creating a problematic return on investment for the industry.

Mao discussed these observability challenges on The New Stack Makers podcast with host Heather Joslyn, emphasizing the need to help teams prioritize alerts and encouraging a shift-left approach to security responsibility among developers. With the adoption of distributed cloud architectures, organizations are not only dealing with a surge in data but also facing a cultural shift toward DevOps, where developers are expected to be more accountable for their software in production.

Historically, operations teams handled software in production, but in the cloud native world, developers must take on these responsibilities themselves. Many current observability tools were designed for centralized operations teams, which creates a gap in addressing developer needs.

Mao suggests that cloud native observability tools should empower developers to run and maintain their software in production, providing insights into the complex environments they work in. Moreover, observability tools can assist developers in understanding the intricacies of their software, such as its dependencies and operational aspects.

To streamline the data obtained from observability efforts and manage costs, Chronosphere introduced the "Observability Data Optimization Cycle." This framework starts with establishing centralized governance to set budgets for teams generating data. The goal is to optimize data usage to extract value without incurring unnecessary costs.
This approach applies financial operations (FinOps) concepts to the observability space, helping organizations tackle the challenges of cloud native observability.

Learn more from The New Stack about observability and Chronosphere:
Observability Overview, News and Trends
4 Key Observability Best Practices
Top Ways to Reduce Your Observability Costs
Top 4 Factors for Cloud Native Observability Tool Selection
10/11/2023 • 22 minutes, 4 seconds
At Run Time: Driving Outcomes with a Platform Engineering Team
Platform engineering is gaining prominence due to the need for faster application deployment, which directly impacts business velocity. Valentina Alaria, Senior Director of Product at VMware, emphasizes that not all organizations pursuing platform engineering have the same goals, context, or pain points, so solutions must be tailored to each organization's specific needs. Some focus on rapid onboarding for junior developers, while others aim to reduce complexity and friction or to support larger development teams with fewer operational staff.

Platform engineering aims to streamline collaboration between developers and operations engineers. Developers want portable code and the ability to focus on coding without worrying about production requirements. Operations engineers and platform teams seek a seamless environment for deploying applications in different contexts.

Successful platform engineering initiatives involve strong collaboration models, fostering a cooperative approach rather than a siloed one. The goal is to create applications and value for the organization by facilitating effective interaction between developers and operations engineers.

This podcast episode, hosted by Alex Williams of TNS, also delves into VMware Tanzu's latest tools for supporting platform engineering.

Learn more from The New Stack about platform engineering and VMware Tanzu:
Platform Engineering Overview, News and Trends
6 Patterns for Platform Engineering Success
A Guide to Open Source Platform Engineering
Streamline Platform Engineering with Kubernetes
10/5/2023 • 30 minutes, 8 seconds
How One Open Source Project Derived from Another’s Limits
ByConity is an open source project that emerged from ByteDance's use of ClickHouse, an open source database system, to address its growing data volume. ByConity focuses on enhancing the separation of compute and storage, improving multitenancy support, and optimizing query performance in cloud native environments.

Vini Jaiswal, a principal developer advocate at ByteDance, TikTok's parent company, highlights the power of open source in fostering innovation and collaboration. She shares her personal experience of leveraging open source to solve problems quickly and efficiently. She emphasizes the importance of getting involved in open source, even for those who might be hesitant, and suggests starting by identifying a pain point and making small contributions.

ByConity's architecture, which separates compute and storage, offers benefits like preventing data lake corruption, read and write separation, elasticity, and scalability. Jaiswal also mentions her previous experience with open source during her time at Citibank, where she realized how open source accelerated digital transformations.

Throughout the conversation, Jaiswal underscores the strength of open source communities in collectively addressing challenges. She encourages listeners to embrace open source and start contributing, emphasizing how even small contributions can lead to significant impacts over time.

The episode also delves into Jaiswal's involvement with other open source projects, such as PyTorch, and explores the intersection of open source and generative AI.

Learn more from The New Stack about open source and cloud native environments:
What Is 'Cloud Native' (and Why Does It Matter)?
Cloud Native Ecosystem News and Resources
How to Build an Open Source Community
10/4/2023 • 28 minutes, 49 seconds
The Golden Path to Platform Engineering
Along with discussing the emergence and ascension of platform engineering in this episode, we also discuss the role that Humanitec plays in helping organizations establish platforms for developers, as well as Backstage, a popular open source internal developer portal that Spotify developed for its own developers.

An IDP, our guest Kaspar Von Grünberg explained, is a standardized interface for developers to build applications using a golden path of vetted tools and libraries, allowing a high degree of efficiency both for the developers themselves and for the engineers who support them. An IDP can include an integration and delivery plane, a continuous integration registry, a platform orchestrator, observability tools, and a resource plane.

"How you're consuming this is a little bit up to the individual preference of the user, and what the platform team has configured for you. So we're seeing some teams like to use a user interface and some teams like to use code-based interactions," Von Grünberg explained.

In some ways, an IDP is reminiscent of the platform-as-a-service packages of a decade ago. They also were designed to help developer efficiency, though devs chafed at the limited number of tools they were allowed to use in those walled gardens. That was a mistake, Von Grünberg said: those platforms required developers to use a small set of pre-defined tools.

"We don't want to get back to those times, which is why we want to provide sensible defaults," Von Grünberg said. A good IDP will provide developers with "golden paths," or "paved roads" as Netflix calls them.

"Developers can stay on those paths if they want," Von Grünberg said. They can enjoy the security defaults and service-level agreements (SLAs) from the engineers.
But developers are also free to leave the path and make low-level configurations on their own as well. "Good platform engineering is never about covering all the use cases," he said.

Learn more from The New Stack about platform engineering and Humanitec:
Platform Engineering Overview, News, and Trends
How to Pave Golden Paths That Actually Go Somewhere
Build Your IDP at Light Speed with a Platform Reference Architecture
9/27/2023 • 15 minutes, 9 seconds
Don't Listen to a Vendor About AI, Do the DevOps Redo
In this episode of The New Stack Makers, technologist and author John Willis emphasized caution when considering AI solutions from vendors. He advised against blindly following vendor recommendations for "one-size-fits-all" AI products, likening it to discouraging learning Java in the past in favor of purchasing a product.

Willis stressed that DevOps serves as an example of how human expertise, not just products, solves problems. He urged C-level executives to first understand AI's intricacies and then make informed purchasing decisions, suggesting a "DevOps redo" to encourage experimentation and collaboration, similar to the early days of the DevOps movement.

Willis highlighted that early adopters of DevOps, like successful banks, heavily invested in developing their human capital. He cautioned against hasty product purchases, as the AI landscape is rife with startups that may quickly disappear or be acquired by larger companies.

Instead, Willis advocated for educating teams on effective data management techniques, including retrieval augmentation, to fine-tune large language models. He emphasized the need for data cleansing to build robust data pipelines and prevent LLMs from generating undesirable code or sensitive information.

According to Willis, the process becomes enjoyable when done correctly, especially for companies using LLMs at scale with retrieval augmentation. To ensure success, he suggested adding governance and structure, including content moderation and red-teaming of data, which vendors may not prioritize in their offerings.

Learn more from The New Stack about DevOps and AI:
AIOps: Is DevOps Ready for an Infusion of Artificial Intelligence?
How to Build a DevOps Engineer in Just 6 Months
Power up Your DevOps Workflow with AI and ChatGPT
9/21/2023 • 33 minutes, 17 seconds
How Apache Flink Delivers for Deliveroo
Deliveroo, a prominent food delivery company, relies on Apache Flink, a distributed processing engine, to enhance its three-sided marketplace connecting delivery drivers, restaurants, and customers. Seeking to improve real-time data streaming and gain insights into customer behavior, Deliveroo transitioned to Flink after comparing it to alternatives like Apache Spark and Kafka Streams. Flink, with feature parity with their previous platform, offered stability and scalability. They initially experimented with Flink on Kubernetes but turned to the Amazon Managed Service for Apache Flink (MSF) for enhanced support and maintenance.

Engineers from Deliveroo, Felix Angell and Duc Anh Khu, emphasized the need for flexibility in data modeling to accommodate their fast-paced product development. However, flexibility can be complex, often requiring data model adjustments. They expressed the desire for a self-serve configuration feature in MSF, allowing easy customization of low-level settings and auto-scaling based on application metrics. The move to Flink and MSF has empowered Deliveroo to focus on core responsibilities like continuous integration and delivery while efficiently managing its data processing needs.

Learn more from The New Stack about Apache Flink and AWS:
Kinesis, Kafka and Amazon Managed Service for Apache Flink
Apache Flink for Real Time Data Analysis
Apache Flink for Unbounded Data Streams
9/20/2023 • 20 minutes, 38 seconds
A Microservices Outcome: Testing Boomed
Over the past five to ten years, the testing of microservices has seen significant growth. This surge in testing can be attributed to the increasing adoption of microservices and Kubernetes, which signify a shift away from monolithic application architectures. Bruno Lopes, a leader at Kubernetes company incubator Kubeshop, noted this trend. Kubeshop has initiated six Kubernetes projects, including TestKube, a Kubernetes native testing framework led by Lopes.
This rise in testing is making it more accessible to a wider audience and is enhancing the developer experience through automation. Developers now have more time to focus on innovation rather than manual testing. However, there is often a disconnect between development and testing, as developers move quickly, outpacing organizational adaptation to modern testing methods.
Lopes emphasized the importance of testing before production deployment and advocated for creating production-resembling testing environments that allow for rapid deployment without waiting for manual tests. This approach is particularly critical for Site Reliability Engineering (SRE) teams who need to respond quickly to issues and minimize downtime for customers. In some cases, it's necessary to run tests within Kubernetes itself, a concept that may take time for companies to fully embrace as the developer experience continues to improve.
Learn more from The New Stack about Kubernetes, Testing and TestKube:
Testkube: A Cloud Native Testing Framework for Kubernetes
Top 5 Challenges in Modern Kubernetes Testing
Why You Should Start Testing in the Cloud Native Way
9/15/2023 • 21 minutes, 45 seconds
Kinesis, Kafka and Amazon Managed Service for Apache Flink
Apache Flink is an open-source framework and distributed processing engine designed for data analytics. It excels at handling tasks such as data joins, aggregations, and ETL (Extract, Transform, Load) operations. Moreover, it supports advanced real-time techniques like complex event processing.
In this episode, Deepthi Mohan and Nagesh Honnalii from AWS discussed Apache Flink and the Amazon Managed Service for Apache Flink (MSF) with our host, Alex Williams. MSF is a service that caters to customers with varying infrastructure preferences. Some prefer complete control, while others want AWS to handle all infrastructure-related aspects.
Use cases for MSF can be grouped into three categories. First, there's streaming ETL, which involves tasks like log aggregation for later auditing. Second, it supports real-time analytics, enabling customers to create dashboards for tasks like fraud detection. Third, it handles complex event processing, where data from multiple sources is joined and aggregated to extract meaningful insights.
The origins of MSF trace back to the evolution of real-time data services within AWS. In 2013, AWS introduced Amazon Kinesis, while the open-source community developed Apache Kafka. These services paved the way for MSF by highlighting the need for real-time data processing.
To provide more flexibility, AWS launched Kinesis Data Analytics in 2016, allowing customers to write code in JVM-based languages like Java and Scala. In 2018, AWS decided to incorporate Apache Flink into its Kinesis Data Analytics offering, leading to the birth of MSF.
Today, thousands of customers use MSF, and AWS continues to enhance its offerings in the real-time data processing space, including the launch of Amazon MSK (Managed Streaming for Apache Kafka). To align with its foundation on Flink, AWS rebranded Kinesis Data Analytics for Apache Flink to Amazon Managed Service for Apache Flink, making it clearer for customers.
Learn more from The New Stack about AWS and Apache Flink:
Apache Flink for Real Time Data Analysis
Apache Flink for Unbounded Data Streams
3 Reasons Why You Need Apache Flink for Stream Processing
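The real-time analytics use case above usually means windowed aggregation: bucketing an unbounded stream into fixed time windows and counting or summing per key. Here is a minimal pure-Python sketch of a tumbling-window count, the pattern a Flink or MSF job would express with its windowing APIs; the event data is made up for illustration.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    # Group (timestamp_ms, key) events into fixed, non-overlapping windows,
    # the basic building block of streaming ETL and real-time dashboards.
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1000, "login"), (1500, "login"), (2500, "login"), (2600, "logout")]
result = tumbling_window_counts(events, window_ms=1000)
print(result[(1000, "login")])  # two logins fell in the [1000, 2000) window
```

A real Flink job adds what this sketch omits: distributed execution, event-time watermarks for late data, and fault-tolerant state.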
9/12/2023 • 27 minutes, 7 seconds
What You Can Expect from a Developer Conference These Days
Modern developer conferences like the upcoming Infobip Shift Conference in Croatia are centered around themes. At this particular event for developers, you can expect a lot of focus to be on the developer experience and artificial intelligence (AI).
Ivan Burazin, Chief Development Experience Officer at Infobip, joined us on the show and emphasized that developers spend a substantial portion of their time not coding, often losing 50 to 70% of their productive hours to non-coding activities such as setting up environments, running tests, and building code. This highlights the importance of improving the developer experience to enhance productivity.
The developer experience has both internal and external dimensions. Externally, it impacts customer experience, while internally, it influences development velocity. A better developer experience translates to faster and more efficient coding.
The Shift Conference will feature talks on six stages, one of which will focus on the developer experience, addressing its internal and external aspects. Additionally, AI will take center stage at another segment of the conference.
Although there may not be an abundance of true AI experts taking the stage, the focus will be on how individuals and companies can leverage AI to create products and services. It's recognized that AI will play a pivotal role in the future of every industry, and the conference aims to explore practical applications and strategies for integrating AI into various businesses.
Overall, the Shift Conference aims to address the challenges developers face in optimizing their productivity and explore the growing importance of AI in shaping the future of businesses and products.
Learn more from The New Stack about the developer experience and Infobip Shift:
7 Principles and 10 Tactics to Make You a 10x Developer
The Challenges of Marketing Software Tools to Developers
A Guide to Better Developer Experience
9/6/2023 • 24 minutes, 41 seconds
Apache Flink for Real Time Data Analysis
This episode delves into Apache Flink, a versatile platform for executing both batch and real-time streaming data analysis tasks. This session marks the beginning of a three-part series unveiling Amazon Web Services' (AWS) new managed service built on Flink. Future episodes will explore this service in detail and examine customer experiences.
The podcast features insights from Danny Cranmer, a principal engineer at AWS and an Apache Flink PMC and Committer, along with Hong Teoh, a software development engineer at AWS.
Flink stands out as a high-level framework for defining data analytics jobs, accommodating both batch and streaming data sets. It offers APIs for building analysis jobs in various languages, including Java, Python, and SQL. Flink also provides a distributed job execution engine with fault tolerance and horizontal scaling capabilities.
One prominent use case is Extract-Transform-Load (ETL), where raw data is swiftly processed for specific workloads. Flink excels in delivering low-latency transformations for unbounded data streams. Additionally, Flink supports event-driven applications, responding immediately to triggers such as user requests for weather data.
Flink ensures exactly-once processing, critical for scenarios like financial transactions. It employs checkpoints to maintain data integrity in case of node failures.
The podcast also touches on AWS's role in supporting the open-source Flink project and the future outlook for this powerful data processing framework.
Learn more from The New Stack about Apache Flink:
3 Reasons Why You Need Apache Flink for Stream Processing
Apache Flink for Unbounded Data Streams
8 Real-Time Data Best Practices
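The checkpointing idea behind Flink's exactly-once guarantee can be shown with a toy model: periodically snapshot the stream offset together with the operator state, and on failure resume from that snapshot so no record is counted twice or dropped. This is a simplified illustration of the concept, not Flink's actual barrier-based checkpoint protocol.

```python
def process(stream, checkpoint_every=2):
    # Toy checkpointed processing: persist (offset, state) periodically so a
    # restart can resume without double-counting or losing records.
    state, checkpoint = 0, (0, 0)
    for i, value in enumerate(stream):
        state += value
        if (i + 1) % checkpoint_every == 0:
            checkpoint = (i + 1, state)  # durable snapshot of progress + state
    return state, checkpoint

def recover(stream, checkpoint):
    # Resume from the snapshot instead of reprocessing from the beginning.
    offset, state = checkpoint
    for value in stream[offset:]:
        state += value
    return state

stream = [1, 2, 3, 4, 5]
total, ckpt = process(stream)
print(recover(stream, ckpt) == total)  # recovery reproduces the exact total
```

Flink does this at scale by flowing checkpoint barriers through the dataflow graph and snapshotting every operator consistently, which is what makes exactly-once state updates possible across a cluster.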
9/5/2023 • 23 minutes, 52 seconds
The First Thing to Tell an LLM
In an interview with The New Stack, renowned technologist Adrian Cockcroft discussed the process of fine-tuning Large Language Models (LLMs) through prompt engineering. Cockcroft, known for his roles at Netflix and Amazon Web Services, explained how to obtain tailored programming advice from an LLM. By crafting specific prompts, like asking the model to provide code in the style of a certain expert programmer, such as Java's James Gosling, users can guide the AI's output.
Prompt engineering involves setting up conversations to bias the AI's responses. These prompts are becoming more advanced with plugins and loaded information that shape the model's behavior before use. Cockcroft highlighted the concept of fine-tuning, where models are adapted beyond what a prompt can contain. Companies are incorporating vast amounts of their internal data, like wiki pages and corporate documents, to train the model to understand their specific domain and processes.
Cockcroft pointed out the efficacy of ChatGPT within certain tasks, illustrated by his experience using it for data analysis and programming assistance. He also discussed the growing need for improved results from LLMs, which has led to the demand for vector databases. These databases store word meanings as vectors with associated weights, enabling fuzzy matching for enhanced information retrieval from LLMs. In essence, Cockcroft emphasized the multifaceted process of shaping and optimizing LLMs through prompt engineering and fine-tuning, reflecting the evolving landscape of AI-human interactions.
Learn more from The New Stack about LLMs and Prompt Engineering:
Top 5 Large Language Models and How to Use Them Effectively
The Pros (And Con) of Customizing Large Language Models
Prompt Engineering: Get LLMs to Generate the Content You Want
Developer Tips in AI Prompt Engineering
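The "set up the conversation to bias the responses" technique Cockcroft describes is typically done by placing a system message before the user's request, following the message-list shape common to chat-completion APIs. A small sketch of building such a prompt; the persona and task strings are illustrative, and no particular vendor API is assumed.

```python
def style_prompt(task, expert, language):
    # Bias the model before the user request: the first (system) message
    # sets persona and constraints; the user message carries the actual task.
    return [
        {"role": "system",
         "content": f"You are an expert {language} programmer. "
                    f"Write code in the style of {expert}. "
                    "Prefer small, well-named functions."},
        {"role": "user", "content": task},
    ]

messages = style_prompt("Implement a bounded queue.", "James Gosling", "Java")
print(messages[0]["role"])  # the biasing instruction comes first
```

This list would then be passed to whatever chat model is in use; fine-tuning, by contrast, bakes such behavior into the weights so it no longer has to fit in the prompt.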
8/30/2023 • 28 minutes, 49 seconds
So You Want to Learn DevOps
TechWorld with Nana is one of the most popular resources for people looking to get into or advance a DevOps career. Nana Janashia, the creator of TechWorld with Nana, is a DevOps trainer and consultant who joined us to discuss why DevOps is needed now more than ever and why this is the perfect time to begin a career in DevOps.
Host Alex Williams and Nana go over the key concepts of DevOps. Then they talk about how the complexity of tools can sidetrack and complicate the learning process for those new to DevOps, and why focusing on concepts rather than tools is the way to go. Before wrapping up the conversation, they also cover the best ways for people who are new to DevOps to get involved.
Nana's journey into DevOps commenced during her time as an engineer in Austria, where she began exploring Kubernetes. As inquiries from colleagues poured in, she recognized her knack for demystifying complex topics, catalyzing her passion for teaching. Viewers attest to switching to DevOps careers after watching her videos.
Throughout the conversation, we learned how people can discover the world of DevOps through TechWorld with Nana as an expert guide. With a large YouTube audience, online courses, workshops, and corporate training, Nana has empowered countless individuals in advancing their DevOps expertise. The six-month boot camps from TechWorld with Nana encompass a comprehensive curriculum, starting with fundamentals and culminating in hands-on programming abilities, Python automation, configuration management, and Prometheus-based monitoring.
Nana underscores that DevOps, still a relatively nascent profession, suffers from role ambiguity both among engineers and within companies aspiring to implement it. This confusion stems from differing workflows and environments when engineers switch jobs. Nana's insights bring clarity to these challenges, acknowledging the evolving chaos of the DevOps culture and its driving force for innovation in managing intricate distributed technologies.
Learn more about DevOps from TNS, Roadmap (our sister site), and TechWorld with Nana:
TechWorld with Nana - DevOps Bootcamp
TechWorld with Nana - DevSecOps Bootcamp
DevOps Learning Roadmap
DevOps News, Trends, and Analysis
8/24/2023 • 29 minutes, 36 seconds
Open Source AI and The Llama 2 Kerfuffle
Explore the complex intersection of AI and open source with insights from experts in this illuminating discussion. Amanda Brock, CEO of OpenUK, reveals the challenges in labeling AI as open source amidst legal ambiguities. The dialogue, led by TNS host Alex Williams, delves into the evolution of open source licensing, its departure from traditional models, and the complications arising from applying open source principles to AI, which encompasses sensitive data governed by privacy laws.
The focus turns to "Llama 2," a contentious example where Meta labeled their language model as open source, sparking confusion. Notable guests Erica Brescia, Managing Director at Redpoint Ventures, and Steven Vaughan-Nichols, founder of Open Source Watch, weigh in on this topic. Brock emphasizes that AI's complexity prevents it from aligning with the Open Source Definition, necessitating a clear distinction between open innovation and open source.
Amidst these debates, the Open Source Initiative (OSI) is crafting a new definition tailored for AI, sparking anticipation and discussion about its implications. The necessity for an evolved understanding of open source and its licenses is underscored, as the rapid evolution of technology challenges established norms. The journey concludes with reflections on vendors transitioning from open source licenses to the Server Side Public License (SSPL) due to cloud-related considerations, raising questions about the future of open source in a dynamically changing tech landscape.
Learn more from The New Stack about open source and AI:
Open Source May Yet Eat Google's and OpenAI's AI Lunch
Open Source Movement Emerging in AI To Counter Greed
How AI Can Learn from the Struggles of Open Source
8/18/2023 • 35 minutes, 19 seconds
PromptOps: How Generative AI Can Help DevOps
Discover how large language models and generative AI are revolutionizing DevOps with PromptOps. The company, initially known as CtrlStack, introduces its unique process engine that comprehends human requests, reads knowledge bases, and generates code on the fly to accomplish tasks. Dev Nag, the CEO, explains how PromptOps saves users time and money by automating routine operations in this podcast episode with The New Stack.
Dev Nag is joined by GK Brar, PromptOps' founding engineer, and our host Joab Jackson as they delve into the concept of generative AI and its potential benefits for DevOps. Traditionally, DevOps tasks often involve repetitive troubleshooting and reporting, making automation essential. PromptOps specializes in intent matching, understanding nuanced requests and providing the right solutions.
Notably, PromptOps employs generative AI offline to prepare for automating common actions and enhancing the user experience. Unlike others, PromptOps aims beyond simple enhancements. It aspires to transform the entire DevOps landscape by leveraging this groundbreaking technology.
Tune in to the podcast to gain deeper insights into the transformative approach that PromptOps brings to DevOps thanks to the power and possibilities of generative AI.
Learn more from The New Stack about DevOps and PromptOps:
DevOps News, Trends, Analysis and Resources
How to Use ChatGPT for IT Security Audit
What We Learned from Building a Chatbot
8/11/2023 • 12 minutes, 57 seconds
Where Does WebAssembly Fit in the Cloud Native World?
In this episode, Matt Butcher, CEO of Fermyon Technologies, discusses the potential impact of the component model on WebAssembly (Wasm) and its integration into the cloud-native landscape. WebAssembly is a binary instruction format enabling code to run anywhere, written in developers' preferred languages. The component model aims to provide a common way for WebAssembly libraries to express their needs and connect with other modules, reducing the barriers and maintenance of existing libraries. Butcher believes this model could be a game changer, allowing new languages to compile to WebAssembly and utilize existing libraries seamlessly.
WebAssembly also shows promise in delivering on the long-awaited potential of serverless computing. Unlike traditional virtual machines and containers, WebAssembly boasts a rapid startup time and addresses various developer challenges. Butcher states that developers have been eagerly waiting for a platform with these characteristics, hinting at a potential resurgence of serverless. He clarifies that WebAssembly is not a "Kubernetes killer" but can coexist with container technologies, evident from the Kubernetes ecosystem's interest in supporting WebAssembly.
The episode explores further developments in WebAssembly and its potential to play a central role in the cloud-native ecosystem.
Learn more from The New Stack about WebAssembly and Fermyon Technologies:
WebAssembly Overview, News, and Trends
WebAssembly vs. Kubernetes
Fermyon Cloud: Save Your WebAssembly Serverless Data Locally
8/3/2023 • 27 minutes, 24 seconds
The Cloud Is Under Attack. How Do You Secure It?
Building and deploying applications in the cloud offers significant advantages, primarily driven by the scalability it provides. Developers appreciate the speed and ease with which cloud-based infrastructure can be set up, allowing them to scale rapidly as long as they have the necessary resources. However, the very scale that makes cloud computing attractive also poses serious risks.
The risk lies in the potential for developers to make mistakes in application building, which can lead to widespread consequences when deployed at scale. Cloud-focused attacks have seen a significant increase, tripling from 2021 to 2022, as reported in the Cloud Risk Report by CrowdStrike.
The challenges in securing the cloud are exacerbated by its relative novelty, with organizations still learning about its intricacies. The newer generation of adversaries is adept at exploiting cloud weaknesses and finding ways to attack multiple systems simultaneously. Cultural issues within organizations, such as the tension between security professionals and developers, can further complicate cloud protection.
To safeguard cloud infrastructure, best practices include adopting the principle of least privilege, regularly evaluating access rights, and avoiding hard-coding credentials. Ongoing hygiene and assessments are crucial in ensuring that access levels are appropriate and minimizing the risk of cloud-focused attacks.
Overall, understanding and addressing the risks associated with cloud deployments is vital as cloud-native adversaries grow increasingly sophisticated. Implementing proper security measures, along with staying up to date on runtime security and avoiding misconfigurations, is essential in safeguarding cloud-based applications and data.
Elia Zaitsev of CrowdStrike joined TNS host Heather Joslyn for this conversation on the heels of the release of their Cloud Risk Report.
Learn more from The New Stack about cloud security and CrowdStrike:
Cloud-Focused Attacks Growing More Frequent, More Brazen
5 Best Practices for DevSecOps Teams to Ensure Compliance
What Is DevSecOps?
7/28/2023 • 25 minutes, 27 seconds
Platform Engineering Not Working Out? You're Doing It Wrong.
In this episode of The New Stack Makers, Purnima Padmanabhan, a senior vice president at VMware, discusses three common mistakes organizations make when trying to move faster in meeting customer needs. The first mistake is equating application modernization with solely moving to the cloud, often resulting in a mere lift and shift of applications without reaping the full benefits. The second mistake is a lack of automation, particularly in operations, which hinders the development process's speed. The third mistake involves adding unnecessary complexity by adopting new technologies or procedures, which slows down developers.
As a solution, Padmanabhan introduces the concept of platform engineering, which not only accelerates development but also reduces toil for operations engineers and architects. However, many organizations struggle with implementing it effectively, as they often approach platform engineering in fragmented ways, investing in separate components without fully connecting them.
To succeed in adopting platform engineering, Padmanabhan emphasizes the need for a mindset shift. The platform team must treat platform engineering as a continuously evolving product rather than a one-time delivery, ensuring that service-level agreements are continuously met and regularly updating and improving features and velocity. The episode discusses the benefits of a well-implemented "golden path" for entire organizations and provides insights on how to start a platform engineering team.
Learn more from The New Stack about Platform Engineering and VMware:
Platform Engineering Overview, News and Trends
Platform Engineers: Developers Are Your Customers
Open Source Platform Engineering: A Decade of Cloud Foundry
7/27/2023 • 25 minutes, 30 seconds
What Developers Need to Know About Business Logic Attacks
In this episode of The New Stack Makers, Peter Klimek, director of technology in the Office of the CTO at Imperva, discusses the vulnerability of business logic in a distributed, cloud-native environment. Business logic refers to the rules and processes that govern how applications function and how users interact with them and other systems. Klimek highlights the increasing attacks on APIs that exploit business logic vulnerabilities, with 17% of attacks on APIs in 2022 coming from malicious bots abusing business logic.
The attacks on business logic take various forms, including credential stuffing attacks, carding (testing stolen credit cards), and newer forms like influence fraud, where algorithms are manipulated to deceive platforms and users. Klimek emphasizes that protecting business logic requires a cross-functional approach involving developers, operations engineers, security, and fraud teams.
To enhance business logic security, Klimek recommends conducting a threat modeling exercise within the organization, which helps identify potential risk vectors. Additionally, he suggests referring to the Open Web Application Security Project (OWASP) website's list of automated threats as a checklist during the exercise.
Ultimately, safeguarding business logic is crucial in securing cloud-native environments, and collaboration among various teams is essential to effectively mitigate potential threats and attacks.
More from The New Stack, Imperva, and Peter Klimek:
Why Your APIs Aren’t Safe — and What to Do about It
Zero-Day Vulnerabilities Can Teach Us About Supply-Chain Security
GraphQL APIs: Greater Flexibility Breeds New Security Woes
7/26/2023 • 20 minutes, 36 seconds
Why Developers Need Vector Search
In this episode of The New Stack Makers podcast, the focus is on the challenges of handling unstructured data in today's data-rich world and the potential solutions offered by vector databases and vector searches. The use of relational databases is limited when dealing with text, images, and voice data, which makes it difficult to uncover meaningful relationships between different data points.
Vector databases, which facilitate vector searches, have become increasingly popular for addressing this issue. They allow organizations to store, search, and index data that would be challenging to manage in traditional databases. Semantic search and Large Language Models have sparked interest in vector databases, providing developers with new possibilities.
Beyond standard applications like information search and recommendation bots, vector searches have also proven useful in combating copyright infringement. Social media companies like Facebook have pioneered this approach by using vectors to check copyrighted media uploads.
Vector databases excel at finding similarities between data objects, as they operate in vector spaces and perform approximate nearest neighbor searches, sacrificing a bit of accuracy for increased efficiency. However, developers need to understand their specific use cases and the scale of their applications to make the most of vector databases and search.
Frank Liu, the director of operations at Zilliz, advised listeners to educate themselves about vector databases, vector search, and machine learning to leverage the existing ecosystem of tools effectively. One notable indexing strategy for vectors is Hierarchical Navigable Small Worlds (HNSW), a graph-based algorithm created by Yury Malkov, a distinguished software engineer at VerSE Innovation, who also joined us along with Nils Reimers of Cohere.
It's crucial to view vector databases and search as additional tools in the developer's toolbox rather than replacements for existing database management systems or document databases. The ultimate goal is to build applications focused on user satisfaction, not just optimizing clicks. To delve deeper into the topic and explore the gaps in current tooling, check out the full episode.
Listen on Podurama
Learn more about vector databases at thenewstack.io
Vector Databases: What Devs Need to Know about How They Work
Vector Primer: Understand the Lingua Franca of Generative AI
How Large Language Models Fuel the Rise of Vector Databases
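At its core, a vector search answers "which stored vectors are closest to this query vector?" The exact version is a brute-force scan like the sketch below; indexes such as HNSW exist to approximate this same result in far less time on large collections, trading a little accuracy for speed. The items and toy embeddings here are illustrative.

```python
import math

def nearest(query, vectors, k=1):
    # Exact k-nearest-neighbor scan by cosine similarity. HNSW and similar
    # graph indexes approximate this ranking without touching every vector.
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return sorted(vectors, key=lambda item: cos(query, item[1]), reverse=True)[:k]

vectors = [("cat photo", [0.9, 0.1, 0.0]),
           ("dog photo", [0.8, 0.2, 0.1]),
           ("invoice scan", [0.0, 0.1, 0.9])]
hits = nearest([0.85, 0.15, 0.05], vectors, k=2)
print([name for name, _ in hits])  # the two animal photos, not the invoice
```

This is the "fuzzy matching" a vector database provides: similar content lands near the query in vector space even when no keywords match.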
7/18/2023 • 27 minutes, 34 seconds
How Byteboard’s CEO Decided to Fix the Broken Tech Interview
Sargun Kaur, co-founder of Byteboard, aims to revolutionize the tech interview process, which she believes is flawed and ineffective. In an interview with The New Stack for our Tech Founder Odyssey podcast series, Kaur compared assessing technical skills during interviews to evaluating the abilities of basketball star Steph Curry by asking him to draw plays on a whiteboard instead of watching him perform on the court. Kaur, a former employee of Symantec and Google, became motivated to change the interview process after a talented engineer she had coached failed a Google interview due to its impractical format.
Kaur believes that traditional tech interviews overly emphasize theoretical questions that do not reflect real-world software engineering tasks. This not only limits the talent pool but also leads to mis-hires, where approximately one in four new employees is unsuitable for their roles or teams. To address these issues, Kaur co-founded Byteboard in 2018 with Nicole Hardson-Hurley, another former Google employee. Byteboard offers project-based technical interviews, adopted by companies like Dropbox, Lyft, and Robinhood, to enhance the efficiency and fairness of their hiring processes. In recognition of their work, Kaur and Hardson-Hurley received Forbes magazine's "30 Under 30" award for enterprise technology.
Kaur's journey into the tech industry was unexpected, considering her initial disinterest in her father's software engineering career. However, exposure to programming and shadowing a female engineer at Microsoft sparked her curiosity, leading her to study computer science at the University of California, Berkeley. Overcoming initial challenges as a minority in the field, Kaur eventually joined Google as an engineer, content with the work environment and mentorship she received. However, her dissatisfaction with the interview process prompted her to apply to Google's Area 120 project incubator, leading to the creation of Byteboard. Kaur's experience with Byteboard's development and growth taught her valuable lessons about entrepreneurship, the power of founders in fundraising meetings, and the potential impact of AI on tech hiring processes.
Check out more episodes in The Tech Founder Odyssey series:
A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem
How Teleport’s Leader Transitioned from Engineer to CEO
How 2 Founders Sold Their Startup to Aqua Security in a Year
7/13/2023 • 37 minutes, 14 seconds
A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem
Shanea Leven, co-founder and CEO of CodeSee, shared her journey as a tech founder in an episode of the Tech Founder Odyssey podcast series. Despite coming to programming later than many of her peers, Leven always had a creative spark and a passion for making things. She initially pursued fashion design but taught herself programming in college and co-founded a company building custom websites for book authors. This experience eventually led her to a job at Google, where she worked in product development.
While at Google, Leven realized the challenge of deciphering legacy code and onboarding developers to it. Inspired by a presentation by Bret Victor, she came up with the idea for CodeSee, a developer platform that helps teams understand and review code bases more effectively. She started working on CodeSee in 2019 as a side project, but it soon received venture capital funding, allowing her to quit her job and focus on the startup full-time.
Leven candidly discussed the challenges of juggling a day job and a startup, particularly after receiving funding. She also shared advice on raising money from venture capitalists and building a company culture.
Listen to the full episode and check out more installments from The Tech Founder Odyssey:
How Teleport’s Leader Transitioned from Engineer to CEO
How 2 Founders Sold Their Startup to Aqua Security in a Year
How Solvo’s Co-Founder Got the ‘Guts’ to Be an Entrepreneur
7/7/2023 • 29 minutes, 25 seconds
5 Steps to Deploy Efficient Cloud Native Foundation AI Models
In deploying cloud-native sustainable foundation AI models, there are five key steps outlined by Huamin Chen, an R&D professional at Red Hat's Office of the CTO. The first two steps involve using containers and Kubernetes to manage workloads and deploy them across a distributed infrastructure. Chen suggests employing PyTorch for programming and Jupyter Notebooks for debugging and evaluation, with Docker community files proving effective for containerizing workloads.
The third step focuses on measurement and highlights the use of Prometheus, an open-source tool for event monitoring and alerting. Prometheus enables developers to gather metrics and analyze the correlation between foundation models and runtime environments.
Analytics, the fourth step, involves leveraging existing analytics while establishing guidelines and benchmarks to assess energy usage and performance metrics. Chen emphasizes the need to challenge assumptions regarding energy consumption and model performance.
Finally, the fifth step entails taking action based on the insights gained from analytics. By optimizing energy profiles for foundation models, the goal is to achieve greater energy efficiency, benefitting the community, society, and the environment.
Chen underscores the significance of this optimization for a more sustainable future.
Learn more at thenewstack.io
PyTorch Takes AI/ML Back to Its Research, Open Source Roots
PyTorch Lightning and the Future of Open Source AI
Jupyter Notebooks: The Web-Based Dev Tool You've Been Seeking
Know the Hidden Costs of DIY Prometheus
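The measurement step usually means exposing metrics in Prometheus's plain-text exposition format from a /metrics endpoint that Prometheus scrapes. A minimal sketch of rendering one such sample follows; the metric name and labels are invented for illustration, and a real workload would use an official Prometheus client library rather than hand-formatting strings.

```python
def format_metric(name, labels, value, help_text):
    # Render one sample in Prometheus's plain-text exposition format,
    # the payload a /metrics endpoint serves for Prometheus to scrape.
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return (f"# HELP {name} {help_text}\n"
            f"# TYPE {name} gauge\n"
            f"{name}{{{label_str}}} {value}")

line = format_metric("model_energy_joules",                      # hypothetical metric
                     {"pod": "bert-infer-0", "node": "gpu-1"},   # hypothetical labels
                     42.5,
                     "Estimated energy use of a model-serving pod.")
print(line)
```

Once scraped, samples like this are what the analytics step queries to correlate a model's energy profile with its runtime environment.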
6/29/2023 • 16 minutes, 27 seconds
A Good SBOM is Hard to Find
The concept of a software bill of materials (SBOM) aims to provide consumers with information about the components inside a software product, enabling better assessment of potential security issues. Justin Hutchings, Senior Director of Product Management at GitHub, emphasizes the importance of SBOMs and their potential to facilitate patching without relying solely on the vendor. He spoke with Alex Williams in this episode of The New Stack Makers.
Creating a comprehensive SBOM poses challenges. Each software package is unique, such as an Android application that combines the developer's code with numerous open-source dependencies obtained through Maven packages. The SBOM should ideally serve as a machine-readable inventory of all these dependencies, enabling developers to evaluate their security.
Hutchings notes that many SBOMs fall short in being fully machine-readable, and the vulnerability landscape is even more problematic. To achieve the standards Hutchings envisions, several actions are necessary. For instance, certain programming languages make it difficult to inspect build contents, while the lack of a centralized distribution point for dependencies in languages like C and C++ complicates the enumeration and standardization of machine-readable names and versions. Addressing these issues across the entire software supply chain is imperative.
SBOMs hold potential for enhancing software security, but the current state of implementation and machine-readability needs improvement, particularly concerning diverse programming languages and dependency management.
Learn more at thenewstack.io
Creating a 'Minimum Elements' SBOM Document in 5 Minutes
Enhance Your SBOM Success with SLSA
How to Create a Software Bill of Materials
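Concretely, a machine-readable SBOM is structured data a scanner can parse, for example a CycloneDX JSON document listing each dependency with a package URL (purl) that tools can match against vulnerability databases. The sketch below builds a minimal CycloneDX-style document for one Maven dependency; it shows the general shape only and is not a spec-complete SBOM.

```python
import json

def minimal_sbom(components):
    # Build a minimal CycloneDX-style SBOM: a machine-readable inventory of
    # dependencies, each identified by a package URL (purl) that scanners
    # can match against vulnerability databases.
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": "library", "name": name, "version": version,
             "purl": f"pkg:maven/{group}/{name}@{version}"}
            for group, name, version in components
        ],
    }

sbom = minimal_sbom([("com.squareup.okhttp3", "okhttp", "4.12.0")])
print(json.dumps(sbom, indent=2))
```

The centralized-naming problem Hutchings raises is visible here: Maven gives every dependency a canonical group/name/version to put in the purl, while C and C++ components often have no such registry-backed identity to record.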
6/22/2023 • 25 minutes, 40 seconds
The Developer's Career Path: Discover's Approach
Angel Diaz, Vice President of Technology, Capabilities, and Innovation at Discover Financial Services, spoke with TNS host Alex Williams at the Open Source Summit in Vancouver, BC. Diaz emphasizes the importance of learning and collaboration among software engineers. He leads The Discover Technology Academy, a community of 15,000 engineers, which he describes as a place where craftsmen come together rather than an ivory tower institution.

Developers and engineers at Discover define and develop processes for software development. They start their journey by contributing atomic elements of knowledge, such as articles, blogs, videos, and tutorials, and then democratize that knowledge. Open source principles, communities, guilds, and established practices play a vital role in their work and discovery process.

Discover's developer experience revolves around the concept of the golden path, which goes beyond consuming content and includes aspects like code, automation, and setting up development environments. Pair programming and a cultural approach to learning are also incorporated into Discover's talent system.

Diaz highlights that Discover's work extends beyond their financial services company, as they share their knowledge and open source work with the external community through platforms like technology.discovered.com. This enables engineers to gain merit badges, such as maintainers or contributors, and showcase their expertise on professional platforms like LinkedIn.

Learn more at thenewstack.io:
The Future of Developer Careers
Platform Engineer vs Software Engineer
How Donating Open Source Code Can Advance Your Career
6/21/2023 • 14 minutes, 26 seconds
The Risks of Decomposing Software Components
The Linux Foundation's Open Source Security Foundation (OSSF) is addressing the challenge of timely software component updates to prevent security vulnerabilities like Log4J. In an interview with Alex Williams of The New Stack at the Open Source Summit in Vancouver, Omkhar Arasaratnam, the new general manager of OSSF, and Brian Behlendorf, CTO of OSSF, discuss the importance of making software secure from the start and the need for rapid response when vulnerabilities occur.

In this conversation, they highlight the significance of software bills of materials (SBOMs), which provide a complete list of software components and supply chain relationships. SBOMs offer data that can aid decision-making and enable reputation tracking of repositories. The interview also touches on the issues with package managers and the quantification of software vulnerability risks. Overall, the goal is to improve the efficiency and effectiveness of software component updates and leverage data to enhance security in enterprise and production environments.

Learn more from The New Stack:
Creating a 'Minimum Elements' SBOM Document in 5 Minutes
Enhance Your SBOM Success with SLSA
6/14/2023 • 19 minutes, 20 seconds
How Apache Airflow Better Manages ML Pipelines
Apache Airflow is an open-source platform for building machine learning pipelines. It allows users to author, schedule, and monitor workflows, making it well-suited for tasks such as data management, model training, and deployment. In a discussion on The New Stack Makers, three technologists from Amazon Web Services (AWS) highlighted the improvements and ease of use in Apache Airflow.

Dennis Ferruzzi, a software developer at AWS, is working on updating Airflow's logging and metrics backend to the OpenTelemetry standard. This update will provide more granular metrics and better visibility into Airflow environments. Niko Oliveria, a senior software development engineer at AWS, focuses on reviewing and merging pull requests as a committer/maintainer for Apache Airflow. He has worked on making Airflow a more pluggable architecture through the implementation of AIP-51. Raphaël Vandon, also a senior software engineer at AWS, is contributing to performance improvements and leveraging async capabilities in AWS Operators, which enable seamless interactions with AWS.

The simplicity of Airflow is attributed to its Python base and the operator ecosystem contributed by companies like AWS, Google, and Databricks. Operators are like building blocks, each designed for a specific task, and can be chained together to create workflows across different cloud providers. The latest version, Airflow 2.6, introduces sensors that wait for specific events and notifiers that act based on workflow success or failure. These additions aim to simplify the user experience. Overall, the growing community of contributors continues to enhance Apache Airflow, making it a popular choice for building machine learning pipelines.

Check out the full article on The New Stack:
How Apache Airflow Better Manages Machine Learning Pipelines
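The "building blocks chained together" idea can be sketched in plain Python. Airflow declares task dependencies with the `>>` operator; the toy `Task` class below mimics only that chaining idea and is not Airflow's actual implementation — the task names and functions are invented for illustration:

```python
# Toy illustration of chaining operator-like building blocks with ">>",
# in the spirit of Airflow DAG definitions. NOT Airflow's API: a
# self-contained sketch of the dependency-chaining idea.

class Task:
    def __init__(self, name, fn):
        self.name, self.fn, self.downstream = name, fn, []

    def __rshift__(self, other):
        # `a >> b` marks b as downstream of a; returning `other`
        # lets chains read naturally: a >> b >> c
        self.downstream.append(other)
        return other

    def run(self, value):
        value = self.fn(value)
        for task in self.downstream:
            value = task.run(value)
        return value

extract = Task("extract", lambda _: [3, 1, 2])   # pretend data pull
train = Task("train", sorted)                    # stand-in for training
publish = Task("publish", lambda xs: xs[-1])     # ship the best result

extract >> train >> publish   # declare the pipeline
print(extract.run(None))      # → 3
```

Real Airflow operators add scheduling, retries, and cross-provider integrations on top of this same declarative shape.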
6/8/2023 • 17 minutes, 3 seconds
Generative AI: What's Ahead for Enterprises?
In this episode featuring Nima Negahban, CEO of Kinetica, the potential impact of generative AI tools like ChatGPT on businesses and organizations is discussed. Negahban highlights the transformative potential of generative AI when combined with data analytics. One use case he mentions is an "Alexa for all your data," where real-time queries can be made about store performance or product underperformance in specific weather conditions. This could provide organizations with a new level of visibility into their operations.

Negahban identifies two major challenges in the generative AI space. The first is security, especially when using internal data to train AI models. The second challenge is ensuring accuracy in AI outputs to avoid misleading information. However, he emphasizes that generative AI tools, such as GitHub Copilot, can bring a new expectation of efficiency and innovation for developers.

The future of generative AI in the enterprise involves discovering how to orchestrate these models effectively and leverage them with organizational data. Negahban mentions the growing interest in vector search and vector database capabilities to generate embeddings and perform embedding search. Kinetica's processing engine, coupled with OpenAI technology, aims to enable ad hoc querying against natural language without extensive data preparation, indexing, or engineering.

Check out the episode to hear more about how the integration of generative AI and data analytics presents exciting opportunities for businesses and organizations, providing them with powerful insights and potential for creativity and innovation.

Read more about Generative AI on The New Stack:
Is Generative AI Augmenting Our Jobs, or About to Take Them?
Generative AI: How to Choose the Optimal Database
How Will Generative AI Change the Tech Job Market?
Generative AI: How Companies Are Using and Scaling AI Models
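The embedding-search idea Negahban mentions reduces to ranking stored vectors by similarity to a query vector. This minimal sketch uses cosine similarity over tiny, made-up 3-dimensional vectors; real systems use learned embeddings with hundreds of dimensions and a vector database rather than a Python dict:

```python
import math

# Rank stored vectors by cosine similarity to a query vector.
# The vectors and dataset names here are toy stand-ins.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

store = {
    "store_sales_q3": [0.9, 0.1, 0.0],
    "weather_daily":  [0.1, 0.9, 0.2],
    "inventory":      [0.5, 0.5, 0.5],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "how did stores perform?"

best = max(store, key=lambda name: cosine(store[name], query))
print(best)  # → store_sales_q3
```

A vector database does the same ranking at scale, with approximate-nearest-neighbor indexes so the search stays fast over millions of embeddings.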
6/7/2023 • 19 minutes, 27 seconds
Don't Force Containers and Disrupt Workflows
In this episode of The New Stack Makers from KubeCon EU 2023, Rob Barnes, a senior developer advocate at HashiCorp, discusses how their networking service, Consul, allows users to incorporate containers or virtual machines into their workflows without imposing container usage. Consul, an early implementation of service mesh technology, offers a full-featured control plane with service discovery, configuration, and segmentation functionalities. It supports various environments, including traditional applications, VMs, containers, and orchestration engines like Nomad and Kubernetes.

Barnes explains that Consul can dictate which services can communicate with each other based on rules. By leveraging these capabilities, HashiCorp aims to make users' lives easier and software more secure. Barnes emphasizes that there are misconceptions about service mesh, with some assuming it is exclusively tied to container usage. He clarifies that service mesh adoption should be flexible and meet users wherever they are in their technology stack. The future of service mesh lies in educating people about its role within the broader context and addressing any knowledge gaps.

Join Rob Barnes and our host, Alex Williams, in exploring the evolving landscape of service mesh and understanding how it can enhance workflows.

Find out more about HashiCorp or the biggest news from KubeCon on The New Stack:
HashiCorp Vault Operator Manages Kubernetes Secrets
How HashiCorp Does Site Reliability Engineering
A Boring Kubernetes Release
5/25/2023 • 12 minutes, 37 seconds
AI Talk at KubeCon
What did software engineers at KubeCon say about how AI is coming up in their work? That's a question we posed to Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, at KubeCon in Amsterdam. Dolezal said AI did come up in conversation.

"I think that when it's come to this, typically with KubeCons, and other CNCF and LF events, there's always been one or two topics that have bubbled to the top," Dolezal said.

At its core, AI surfaces a data issue for users that correlates to data-sharing issues, said Dolezal in this latest episode of The New Stack Makers.

Read more about AI and Kubernetes on The New Stack:
3 Important AI/ML Tools You Can Deploy on Kubernetes
Flyte: An Open Source Orchestrator for ML/AI Workflows
Overcoming the Kubernetes Skills Gap with ChatGPT Assistance
5/24/2023 • 16 minutes, 49 seconds
A Boring Kubernetes Release
Kubernetes release 1.27 is boring, says Xander Grzywinski, a senior product manager at Microsoft. It's a stable release, Grzywinski said on this episode of The New Stack Makers from KubeCon Europe in Amsterdam.

"It's reached a level of stability at this point," said Grzywinski. "The core feature set has become more fleshed out and fully realized."

The release has 60 total features, Grzywinski said. The features in 1.27 are solid refinements of features that have been around for a while. It's helping Kubernetes be as stable as it can be. Examples? It has a better developer experience, Grzywinski said. Storage primitives and APIs are more stable.
5/22/2023 • 15 minutes, 3 seconds
How Teleport’s Leader Transitioned from Engineer to CEO
The mystery and miracle of flight sparked Ev Kontsevoy's interest in engineering as a child growing up in the Soviet Union.

"When I was a kid, when I saw like airplane flying over, I was having a really hard time not stopping and staring at it until it's gone," said Kontsevoy, co-founder and CEO of Teleport, in this episode of the Tech Founders Odyssey podcast series. "I really wanted to figure out how to make it fly."

Inevitably, he said, the engineering path led him to computers, where he was thrilled by the power he could wield through programming. "You're a teenager, no one really listens to you yet, but you tell a computer to go print number 10 ... and then you say, do it a million times. And the stupid computer just prints 10 million. You feel like a magician that just bends like machines to your will."

In this episode of the series, part of The New Stack Makers podcast, Kontsevoy discussed his journey to co-founding Teleport, an infrastructure access platform, with TNS co-hosts Colleen Coll and Heather Joslyn.
5/4/2023 • 33 minutes, 35 seconds
Developer Tool Integrations with AI -- The AWS Approach
Developer tool integration and AI differentiate workflows to achieve that "fluid" state developers strive for in their work. Amazon CodeCatalyst and Amazon CodeWhisperer exemplify how developer workflows are accelerating and helping to create these fluid states. That's a big part of the story we hear from Harry Mower, director of AWS DevOps Services, and Doug Seven, director of Software Development for AWS CodeWhisperer, from our recording in Seattle earlier in April for this week's AWS Developer Innovation Day.

CodeCatalyst serves as an end-to-end integrated DevOps toolchain that provides developers with everything they need to go from planning through to deployment, Mower said. CodeWhisperer is an AI coding companion that generates whole-line and full-function code recommendations in an integrated development environment (IDE). CodeWhisperer is part of the IDE, Seven said. The acceleration is two-fold: CodeCatalyst speeds the end-to-end integration process, and CodeWhisperer accelerates writing code through generative AI.
4/27/2023 • 21 minutes, 20 seconds
CircleCI CTO on How to Quickly Recover From a Malicious Hack
Just as everyone was heading out to the New Year's holidays last year, CTO Rob Zuber got a surprise of a most unwelcome sort. A customer alerted CircleCI to suspicious GitHub OAuth activity. Although the scope of the attack appeared limited, there was still no telling if other customers of the DevOps-friendly continuous integration and continuous delivery platform were impacted.

This notification kicked off a deeper review by CircleCI's security team with GitHub, and they rotated all GitHub OAuth tokens on behalf of their customers. On January 4, the company also made the difficult but necessary decision to alert customers of this "security incident," asking them to immediately rotate any and all stored secrets and review internal logs for any unauthorized access.

In this latest episode of The New Stack Makers podcast, we discuss with Zuber the attack and how CircleCI responded. We also talk about what other companies should do to avoid the same situation, and what to do should it happen again.
4/20/2023 • 23 minutes, 43 seconds
What Are the Next Steps for Feature Flags?
Feature flags, the toggles in software development that allow you to turn certain features on or off for certain customers or audiences, offer release management at scale, according to Karishma Irani, head of product at LaunchDarkly. But they also help unleash innovation, as she told host Heather Joslyn of The New Stack in this episode of The New Stack Makers podcast. And that points the way to a future where the potential for easy testing can inspire new features and products, Irani said.

"We've observed that when the risk of releasing something is lowered, when the risk of introducing bugs in production or breaking something is reduced, is lowered, our customers feel organically motivated to be more innovative and think about new ideas and take risks," she said.
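The core mechanic behind a percentage rollout can be sketched in a few lines. This is a hypothetical illustration of the idea, not LaunchDarkly's API: the flag name and rollout numbers are invented, and real feature-management systems add targeting rules, audit logs, and instant kill switches on top.

```python
import hashlib

# Minimal sketch of a percentage-rollout feature flag: hash the
# flag + user to a stable bucket in [0, 100), then compare against
# the rollout percentage. Deterministic, so a user's experience
# doesn't flicker between requests.

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

user = "user-42"
print(flag_enabled("new-checkout", user, 100))  # → True  (full rollout)
print(flag_enabled("new-checkout", user, 0))    # → False (flag off)
```

Because the bucket is stable, dialing the percentage from 0 to 100 gradually exposes the feature to more users, which is what lowers the release risk Irani describes.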
4/12/2023 • 27 minutes, 45 seconds
KubeCon + CloudNativeCon EU 2023: Hello Amsterdam
Hoi, Europe and beyond! Once again it is time for cloud native enthusiasts and professionals to converge and discuss cloud native computing in all its efficiency and complexity. The Cloud Native Computing Foundation's KubeCon+CloudNativeCon 2023 is being held later this month in Amsterdam, April 18 - 21, at the RAI Convention Centre.

In this latest edition of The New Stack podcast, we spoke with two of the event's co-chairs who helped define this year's themes for the show, which is expected to draw over 9,000 attendees: Aparna Subramanian, Shopify's Director of Production Engineering for Infrastructure, and Cloud Native Infra and Security Enterprise Architect Frederick Kautz.
4/5/2023 • 25 minutes, 9 seconds
The End of Programming is Nigh
Is the end of programming nigh? If you ask Matt Welsh, he'd say yes. As Richard MacManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.

Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more. Welsh is now the founder of fixie.ai, a platform his company is building to let businesses develop applications on top of large language models and extend them with different capabilities.

For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview. Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.
3/29/2023 • 31 minutes, 42 seconds
How 2 Founders Sold Their Startup to Aqua Security in a Year
Speed is a recurring theme in this episode of The Tech Founder Odyssey. Also, timing. Eilon Elhadad and Eylam Milner, who met while serving in the Israeli military, discovered that source code leak was a hazardous side effect of businesses' need to move fast and break things in order to stay competitive.

"Every new business challenge leads to a new technological solution," said Elhadad in this episode of The New Stack's podcast series. "The business challenge was to deliver product faster to the business; the solution was to build off the supply chain. And then it leads to a new security attack surface."

Discovering this problem, and finding a solution to it, put Milner and Elhadad in the right place at the right time — just as the tech industry was beginning to rally itself to deal with this issue and give it a name: software supply chain security. It led them to co-found Argon Security, which was acquired by Aqua Security in late 2021, Elhadad told The New Stack, a year after Argon started.
3/22/2023 • 23 minutes, 13 seconds
Why Your APIs Aren’t Safe — and What to Do About It
Given the vulnerability of so many systems, it's not surprising that cyberattacks on applications and APIs increased 82% in 2022 compared to the previous year, according to a report released this year by Imperva's global threat researchers. What might rattle even the most experienced technologists is the sheer scale of those attacks. Digging into the data, Imperva, an application and data security company, found that the largest layer seven, distributed denial of service (DDoS) attack it mitigated during 2022 involved — you might want to sit down for this — more than 3.9 million API requests per second.

"Most developers, when they think about their APIs, they're usually dealing with traffic that's maybe 1,000 requests per second, not too much more than that. Twenty thousand, for a larger API," said Peter Klimek, director of technology at Imperva, in this episode of The New Stack Makers podcast. "So, to get to 3.9 million, it's really staggering."

Klimek spoke to Heather Joslyn of TNS about the special challenges of APIs and cybersecurity and steps organizations can take to keep their APIs safe. The episode was sponsored by Imperva.
3/21/2023 • 24 minutes, 33 seconds
Unix Creator Ken Thompson to Keynote Scale Conference
The 20th Annual Southern California Linux Expo (SCALE) runs Thursday through Sunday at the Pasadena Convention Center in Pasadena, Calif., featuring keynotes from notables such as Ken Thompson, the creator of Unix, said Ilan Rabinovitch, one of the co-founders and conference chair, on this week's edition of The New Stack Makers.

"Honestly, most of the speakers we've had, you know, we got at SCALE in the early days, we just, we, we emailed them and said: 'Would you come to speak at the event?' We ran a call for proposals, and some of them came in as submissions, but a lot of it was just cold outreach. I don't know if that succeeded because that's the state of where the community was at the time and there wasn't as much demand, or just out of sheer dumb luck. I assure you, it wasn't skill or any sort of network that we like, we just, you know, we just we managed to, we managed to do that. And that's continued through today. When we do our call for papers, we get hundreds and hundreds of submissions, and that makes it really hard to choose from."

Thompson, who turned 80 on February 4 (Happy Birthday, Mr. Thompson), created Unix at Bell Labs. He worked with people like Robert Griesemer and Rob Pike on developing the Go programming language and other projects over the years, including Plan 9, UTF-8, and more.

Rabinovitch is pretty humble about the keynote speakers the conference attracts. He and the conference organizers scoured the internet and found Thompson's email; Thompson said he'd love to join them. That's also how they attracted Lawrence Lessig, the creator of the Creative Commons license, who spoke at SCALE12x in 2014 about the legal sides of open source, content sharing, and free software.

"I wish I could say we have this very deep network of connections," Rabinovitch said. "It's just, these folks are surprisingly approachable, despite, you know, even after years and years of doing amazing work."
SCALE is the largest community-run open source and free software conference in North America, with roots befitting an event that started with a group of college students wanting to share their learnings about Linux. Rabinovitch was one of those college students, attending UCSB, the University of California, Santa Barbara.

"A lot of the history of SCALE comes from the LA area, back when open source was still relatively new and Linux was still fairly hard to get up and running," Rabinovitch said. "There were LUGs (Linux User Groups) on every corner. I think we had like 25 LUGs in the LA area at one point. And so there was a vibrant open source community."

Los Angeles's freeways and traffic made it difficult to get the open source community together. So they started LUGFest. They held the day-long event at a Nortel building until the telco went belly up. So, as open source people tend to do, they decided to scale, so to speak, the community gatherings. And so SCALE came to be – led by students like Rabinovitch. The conference started with a healthy community of 200 to 250 people. By the pandemic, 3,500 people were attending.

For more about SCALE, listen to the full episode of The New Stack Makers wherever you get your podcasts.
3/8/2023 • 19 minutes, 34 seconds
How Solvo’s Co-Founder Got the ‘Guts’ to Be an Entrepreneur
When she was a student in her native Israel, Shira Shamban was a self-proclaimed "geek." But, unusually for a tech company founder and CEO, not a computer geek. Shamban was a science nerd, with her sights set on becoming a doctor. But first, she had to do her state-mandated military service. And that's where her path diverged. In the military, she was not only immersed in computers but spent years working in intelligence; she stayed in the service for more than a decade, eventually rising to become head of an intelligence sector for the Israeli Defense Forces. At home, she began building her own projects to experiment with ideas that could help her team.

"So that kind of helped me not to be intimidated by technology, to learn that I can learn anything I want by myself," said Shamban, co-founder of Solvo, a company focused on data and cloud infrastructure security. "And the most important thing is to just try out things that you learn."

To date, Solvo has raised about $11 million through investors like Surround Ventures, Magenta Venture Partners, TLV Partners and others. In this episode of The New Stack Makers podcast series The Tech Founder Odyssey, Shamban talked to Heather Joslyn and Colleen Coll of TNS about her journey.

In-Person Teamwork

Shamban opted to stay in the technology world, nurturing a desire to eventually start her own company. It was during a stint at Dome9, a cloud security company, that she met her future Solvo co-founder, David Hendri — and built a foundation for entrepreneurship. "After that episode, I got the guts," she said. "Or I got stupid enough." Hendri, now Solvo's chief technology officer, struck Shamban as having the right sensibility to be a partner in a startup. At Dome9, she said, "very often, I used to stay up late in the office, and I would see him as well.
So we'd grab something to eat." Their casual conversations quickly revealed that Hendri was often staying late to troubleshoot issues that were not his or his team's responsibility, but simply things that someone needed to fix. That sense of ownership, she realized, "is exactly the kind of approach one would need to bring to the table in a startup."

The mealtime chats that started Solvo have carried over into its current organizational culture. The company employs 20 people; workers based in Tel Aviv are expected to come to the office four days a week. Hendri and Shamban started their company in the auspicious month of March 2020, just as the Covid-19 pandemic started. While many companies have moved to all-remote work, Solvo never did. "We knew we wanted to sit together in the same room, because the conversations you have over a cup of coffee are not the same ones that you have on a chat, and on Slack," the CEO said. "So that was our decision. And for a long time, it was an unpopular decision."

As the company scales, finding employees who align with its culture can make recruiting tricky, Shamban said. "It's not only about your technical expertise, it's also about what kind of person you are," she said. "Sometimes we found very professional people that we didn't think would make a good fit to the culture that we want to build. So we did not hire them." And that held even in the boom times, when it was really hard to hire engineers. "These were tough decisions. But we had to make them, because we knew that building a culture is easier, in a way, than fixing a culture."

Listen to the full episode to hear more about Shamban's journey.
3/1/2023 • 28 minutes, 20 seconds
Ambient Mesh: No Sidecar Required
At Cloud Native Security Con, we sat down with Solo.io's Marino Wijay and Jim Barton, who discussed how service mesh technologies have matured, especially now with the removal of sidecars in Ambient Mesh, which Solo.io developed with Google. Ambient Mesh is a new proxy architecture that, according to the Solo.io site, "moves the proxy to the node level for mTLS and identity." It also allows policy enforcement to manage Layer 7 security filters and policies.

A sidecar is a mini-proxy, a mini-firewall, like an all-in-one router, said Wijay, who does developer relations and advocacy for Solo. A sidecar receives instructions from an upstream control plane. "Now, one of the things that we started to realize with different workloads and different patterns of communication is that not all these workloads need a sidecar or can take advantage of the sidecar," Wijay said. "Some better operate without the sidecar."

Ambient Mesh reflects the maturity of service mesh and the difference between day one and day two operations, said Barton, a field engineer with Solo. "Day one operations are a lot about understanding concepts, enabling developers, initial configurations, that sort of thing," Barton said. "The community is really much more focused (and Ambient Mesh is a good example of this) on day two concerns. How do I scale this? How do I make it perform in large environments? How can I expand this across clusters, clusters in multiple zones in multiple regions, that sort of thing? Those are the kinds of initiatives that we're really seeing come to the forefront at this point."

With the maturity of service mesh comes the users. In the context of security, that means the developer security operations person, Barton said. It's not the developer's job to connect services. Their job is to build out the services.
"It's up to the platform operator, or DevSecOps engineers, to create that fundamental plane or foundation where you can deploy your services, and then provide the security on top of it," Barton said. The engineers then have to configure it and think it through. "How do I know who's doing what and who's talking to who, so that I can start forming my zero trust posture?" Barton said.
2/22/2023 • 14 minutes, 22 seconds
2023 Hotness: Cloud IDEs, Web Assembly, and SBOMs
Here's a breakdown of what we cover:

Cloud IDEs will mature as GitHub's Codespaces platform gains acceptance through its integration into the GitHub service. Other factors include new startups in the space, such as GitPod, which offers a secure, cloud-based IDE, and Uptycs, which uses telemetry data to lock down developer environments. "So I think you'll, you're just gonna see more people exposed to it, and they're gonna be like, 'holy crap, this makes my life a lot easier.'"

FinOps reflects the more stringent views on managing costs, focusing on the efficiency of resources that a company provides for developers. The focus also translates to the GreenOps movement, with its emphasis on efficiency.

Software bills of materials (SBOMs) will continue to mature, with Sigstore as the project with the fastest expected adoption. Witness, from Telemetry Project, is another project. The SPDX community has been at the center of the movement for over a decade, since before people cared about it.

GitOps and OpenTelemetry: This year, KubeCon submissions on GitOps topics were super high. OpenTelemetry is the second most popular project in the CNCF, behind Kubernetes.

Platform engineering is hot. Aniszczyk cites Backstage, a CNCF project, as one he is watching. It has a healthy plugin extension ecosystem and a corresponding large community. People make fun of Jenkins, but Jenkins is likely going to be around as long as Linux because of its plugin community. Backstage is going along that same route.

WebAssembly: "You will probably see an uptick in edge cases, like smaller deployments as opposed to full-blown cloud-based workloads." WebAssembly will mix with containers and VMs. "It's just the way that software works."

Kubernetes is part of today's distributed fabric. Linux is now everywhere, and Kubernetes is going through the same evolution: into airplanes, cars, and fast-food restaurants. "People are going to focus on the layers up top, not necessarily, like, the core Kubernetes project itself. It's going to be all the cool stuff built on top."
2/16/2023 • 19 minutes, 4 seconds
Generative AI: Don't Fire Your Copywriters Just Yet
Everyone in the community was surprised last year by ChatGPT, a web service that responded to any and all user questions with surprising fluidity. ChatGPT is a variant of the powerful GPT-3 large language model created by OpenAI, a company backed by Microsoft. It is still a demo, though it is pretty clear that this type of generative AI will be rapidly commercialized. Indeed, Microsoft is embedding the generative AI in its Bing search service, and Google is building a rival offering. So what are smaller businesses to do to ensure their messages are heard by these machine learning giants?

For this latest podcast from The New Stack, we discussed these issues with Ryan Johnston, chief marketing officer for Writer. Writer has enjoyed early success in generative AI technologies. The company's service is dedicated to a single mission: making sure its customers' content adheres to the guidelines set in place. This can include features such as ensuring the language in the copy matches the company's own designated terminology, making sure that a piece of content covers all the required topic points, or even that a press release has quotes that are not out of scope with the project mission itself. In short, the service promises "consistently on-brand content at scale," Johnston said. "It's not taking away my creativity. But it is doing a great job of figuring out how to create content for me at a faster pace, [content] that actually sounds like what I want it to sound like."

For our conversation, we first delved into how the company was started, its value proposition ("What is it used for?"), and what role AI plays in the company's offering. We also delve a bit into the technology stack Writer deploys to offer these services, as well as what material Writer may require from customers themselves to make the service work.
For the second part of our conversation, we turn our attention to how other companies (those that are not search giants) can get their message across in the land of large language models, and maybe even find a few new sources of AI-generated value along the way. And for those public-facing businesses dealing with Google and Bing, we chat about how they should refine their search engine optimization (SEO) strategies to be best represented in these large models.

One point to consider: while AI can generate a lot of pretty convincing text, you still need a human in the loop to oversee the results, Johnston advised. "We are augmenting content teams and copywriters to do what they do best, just even better. So we're scaling the mundane parts of the process that you may not love. We are helping you get a first draft on paper when you've got writer's block," Johnston said. "But at the end of the day, our belief is there needs to be a great writer in the driver's seat. [You] should never just be fully reliant on AI to produce things that you're going to immediately take to market."
2/9/2023 • 23 minutes, 29 seconds
Feature Flags are not Just for Devs
The story goes something like this: There's this marketing manager who is trying to time a launch. She asks the developer team when the service will be ready. The dev team says maybe a few months. Let's say three months from now in April. The marketing manager begins prepping for the release. The dev team releases the service the following week. It's not an uncommon occurrence. Edith Harbaugh is the co-founder and CEO of LaunchDarkly, a company she launched in 2014 with John Kodumal to solve these problems with software releases that affect organizations worldwide. Today, LaunchDarkly has 4,000 customers and $100 million in annual recurring revenue. We interviewed Harbaugh for our Tech Founder Odyssey series on The New Stack Makers about her journey and LaunchDarkly's work. The interview starts with this question about the timing of dev releases and the relationship between developers and other constituencies, particularly the marketing organization. LaunchDarkly is the number one feature management company, Harbaugh said; its mission is to provide services to launch software in a measured, controlled fashion. Harbaugh and Kodumal, the CTO, founded the company on the premise that developing and releasing software is arduous. "You wonder whether you're building the right thing," said Harbaugh, who has worked as both an engineer and a product manager. "Once you get it out to the market, it often is not quite right. And then you just run this huge risk of how do you fix things on the fly." Feature flagging was a technique that a lot of software companies used. Harbaugh worked at TripIt, a travel service, where they used feature flags, as did companies such as Atlassian, where Kodumal had developed software. "So the kernel of LaunchDarkly, when we started in 2014, was to make this technique of feature flagging into a movement called feature management, to allow everybody to build better software faster, in a safer way."
LaunchDarkly allows companies to release features at whatever granularity an organization wants, allowing a developer to push a release into production in different pieces at different times, Harbaugh said. So, a marketing organization can turn a feature on even after the developer team has released it into production. "So, for example, if we were running a release, and we wanted somebody from The New Stack to see it first, the marketing person could turn it on just for you." Harbaugh describes herself as a huge geek, but she also connects with geeks and non-geeks alike in a rare way. She and Kodumal took a concept used effectively by developers and transformed it into a service that provides feature management for a broader customer base, like the marketer wanting to push releases out in a granular way for an East Coast launch, pre-programmed with feature flags the previous day from the company office in San Francisco. The idea is novel, and like many intelligent, technical founders, Harbaugh has taken a journey that reflects her place today. She's a leader in the space and a fun person to talk to, so we hope you enjoy this latest episode in our tech founder series from The New Stack Makers.
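The targeting Harbaugh describes boils down to conditional checks against a flag configuration. Here is a minimal sketch of the idea, not LaunchDarkly's SDK; all names are invented. It shows the two behaviors from her example: turning a flag on for one specific user, and a deterministic percentage rollout.

```python
import hashlib

# Hypothetical feature-flag store: per-user targeting plus percentage rollout.
class FeatureFlags:
    def __init__(self):
        self.flags = {}  # name -> {"enabled": bool, "allow": set, "rollout": int}

    def define(self, name, enabled=False, allow=None, rollout=0):
        self.flags[name] = {"enabled": enabled,
                            "allow": set(allow or []),
                            "rollout": rollout}

    def is_enabled(self, name, user_id):
        flag = self.flags.get(name)
        if flag is None:
            return False          # unknown flags default to off
        if flag["enabled"]:
            return True           # globally on
        if user_id in flag["allow"]:
            return True           # explicitly targeted user (e.g. a reviewer)
        # Deterministic percentage rollout: hash the user into a 0-99 bucket.
        bucket = int(hashlib.sha256(f"{name}:{user_id}".encode()).hexdigest(), 16) % 100
        return bucket < flag["rollout"]

flags = FeatureFlags()
# Turn a feature on just for one early viewer, as in Harbaugh's example.
flags.define("new-checkout", allow={"the-new-stack"}, rollout=0)
print(flags.is_enabled("new-checkout", "the-new-stack"))  # True
print(flags.is_enabled("new-checkout", "someone-else"))   # False
```

Hashing the user ID (rather than random sampling) means a user stays in the same bucket across requests, so a gradual rollout is stable for each person.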
2/2/2023 • 26 minutes, 45 seconds
Port: Platform Engineering Needs a Holistic Approach
By now, almost everyone agrees platform engineering is probably a good idea: an organization builds an internal developer platform to empower coders and speed application releases. So, for this latest edition of The New Stack podcast, we spoke with one of the pioneers in this space, Zohar Einy, CEO of Port, to see how platform engineering could work in your organization. TNS Editor Joab Jackson hosted this conversation. Port offers what it claims is the world's first low-code platform for developers. With Port, an organization can build a software catalogue of approved tools, import its own data model, and set up workflows. Developers can consume all the resources they need through a self-service catalogue, without needing to know how to set up a complex application, like Kubernetes. The DevOps and platform teams themselves maintain the platform. Application owners aren't the only potential users of a self-service catalogue, Einy points out in our conversation. DevOps and system administration teams can also use the platform. A DevOps team can set up automations "to make sure that [developers are] using the platform with the right mindset that fits with their organizational standards in terms of compliance, security, and performance aspects." Even machines themselves could benefit from a self-service platform, for those who are looking to automate deployments as much as possible. Einy offered an example: A CI/CD process could create a build process on its own. If it needs to check the maturity level of some tool, it can do so through an API call. If the tool is not adequately certified, the developer is notified; but if all the tools are sufficiently mature, then the automated process can finish the build without further developer intervention. Another process that could be automated is the termination of permissions when their deadline has passed.
Think about an early-warning system for expired digital certificates. "So it's a big driver both for cost reduction and security best practices," Einy said.
Too Many Choices, Not Enough Code
But what about developer choice? Won't developers feel frustrated when barred from using the tools they are most fond of? That freedom to use any tool available is what led us to the current state of overcomplexity in full-stack development, Einy responded. This is why the role of "full-stack developer" seems like an impossible job, given all the possible permutations at each layer of the stack. Like the artist who finds inspiration in a limited palette, the developer should be able to find everything they need in a well-curated platform. "In the past, when we talked about 'you-build-it-you-own-it', we thought that the developer needs to know everything about anything, and they have the full ownership to choose anything that they want. And they got sick of it, right, because they needed to know too much," Einy said. "So I think we are getting into a transition where developers are OK with getting what they need with a click of a button because they have so much work on their own." In this conversation, we also discussed measuring success, the role of access control in DevOps, and the open source Backstage platform and its recent inclusion of paid plug-ins. Give it a listen!
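The automated gate Einy describes can be sketched in a few lines. This is a hypothetical illustration, not Port's API: the catalogue lookup below stands in for the API call a CI/CD job would make, and the maturity levels and tool names are invented.

```python
# Invented maturity ladder and catalogue; in practice a CI step would query
# the internal developer platform's API instead of this in-memory dict.
MATURITY = {"experimental": 0, "incubating": 1, "certified": 2}
CATALOGUE = {"postgres-operator": "certified", "new-cache-tool": "experimental"}

def check_tool(tool, required="certified"):
    """Return (may_proceed, message) for one tool in the build."""
    level = CATALOGUE.get(tool, "experimental")   # unknown tools are untrusted
    if MATURITY[level] >= MATURITY[required]:
        return True, f"{tool} is {level}: build may proceed"
    return False, f"{tool} is only {level}: notify the developer"

ok, msg = check_tool("postgres-operator")
print(ok, msg)   # True: the build finishes without developer intervention
ok, msg = check_tool("new-cache-tool")
print(ok, msg)   # False: the developer is notified instead
```

The point of the pattern is that the policy (what "sufficiently mature" means) lives in the platform, while the CI pipeline only asks a yes/no question.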
1/25/2023 • 21 minutes, 27 seconds
Platform Engineering Benefits Developers, and Companies Too
In this latest episode of The New Stack Makers podcast, we delve more deeply into the emerging practice of platform engineering. The guests for this show are Aeris Stewart, community manager at platform orchestration provider Humanitec, and Michael Galloway, an engineering leader at infrastructure software provider HashiCorp. TNS Features Editor Heather Joslyn hosted this conversation. Although the term has been around for several years, platform engineering caught the industry's attention in a big way last September, when Humanitec published a report that identified how widespread the practice was quickly becoming, citing its use by Nike, Starbucks, GitHub and others. Right after the report was released, Stewart provided an analysis for TNS arguing that platform engineering solved many of the issues that another practice, DevOps, was struggling with. "Developers don’t want to do operations anymore, and that’s a bad sign for DevOps," Stewart wrote. The post stirred a great deal of conversation about the success of DevOps. Platform engineering is "a discipline of designing and building tool chains and workflows that enable developer self-service," Stewart explained. The purpose is to give the developers in your organization a set of standard tools that will allow them to do their job — write and fix apps — as quickly as possible. The platform provides the tools and services "that free up engineering time by reducing manual toil and cognitive load," Galloway added. But platform engineering also has an advantage for the business itself, Galloway elaborated. With an internal developer platform in place, a business can scale up with "reliability, cost efficiency and security," he said. Before HashiCorp, Galloway was an engineer at Netflix, and there he saw the benefits of platform engineering for both developers and the business itself. "All teams were enabled to own the entire lifecycle from design to operation.
This is really central to how Netflix was able to scale," Galloway said. A platform engineering team created a set of services that made it possible for Netflix engineers to deliver code "without needing to be continuous delivery experts." The conversation also touched on the challenges of implementing platform engineering, and the metrics you should use to quantify its success. And because platform engineering is a new discipline, we also discussed education and community. Humanitec's debut PlatformCon drew over 6,000 attendees last June (and PlatformCon 2023 has just been scheduled for June). There is also a platform engineering Slack channel, which has drawn over 8,000 participants thus far. "I think the community is playing a really big role right now, especially as a lot of organizations' awareness of platform engineering is just starting," Stewart said. "There's a lot of knowledge that can be gained by building a platform that you don't necessarily want to learn the hard way."
1/18/2023 • 24 minutes, 31 seconds
What’s Platform Engineering? And How Does It Support DevOps?
Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast. This structure is important for individual contributors, Grünberg said, as well as backend engineers: “If you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users.” This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept. This episode was sponsored by Humanitec.
The Limits of ‘You Build It, You Run It’
The notion of “you build it, you run it” — first coined by Werner Vogels, chief technology officer of [sponsor_inline_mention slug="amazon-web-services-aws" ]Amazon,[/sponsor_inline_mention] in a 2006 interview — established that developers should “own” their applications throughout their entire lifecycle. But, Grünberg said, that may not be realistic in an age of rapidly proliferating microservices and multiple, distributed deployment environments. “The scale that we're operating today is just totally different,” he said. “The applications are much more complex.” End-to-end ownership, he added, is “a noble dream, but unfair towards the individual contributor. We're asking developers to do so much at once. And then we're always complaining that the output isn't there or not delivering fast enough.
But we're not making it easy for them to deliver.” Creating a “golden path” — through the creation by platform teams of internal developer platforms (IDPs) — can not only free developers from unnecessary cognitive load, Grünberg said, but also help make their code more secure and standardized. For Ops engineers, he said, the adoption of platform engineering can also help free them from doing the same tasks over and over. “If you want to know whether it's a good idea to look at platform engineering, I recommend go to your service desk and look at the tickets that you're receiving,” Grünberg said. “And if you have things like, ‘Hey, can you debug that deployment?’ and ‘Can you spin up in a moment all these repetitive requests?’ that's probably a good time to take a step back and ask yourself, ‘Should the operations people actually spend time doing these manual things?’”
The Biggest Fallacies about Platform Engineering
For organizations that are interested in adopting platform engineering, the Humanitec CEO attacked some of the biggest misconceptions about the practice. Chief among them: failing to treat the platform as a product, in the same way a company would begin creating any product, by starting with research into customer needs. “If you think about how we would develop a software feature, we wouldn't be sitting in a room and taking some assumptions and then building something,” he said. “We would go out to the user, and then actually interview them and say, ‘Hey, what's your problem? What's the most pressing problem?’” Other fallacies embraced by platform engineering newbies, he said, are “visualization” — the belief that all devs need is another snazzy new dashboard or portal to look at — and believing the platform team has to go all-in right from the start, scaling up a big effort immediately.
Such an effort, he said, is “doomed to fail.” Instead, Grünberg said, “I'm always advocating for starting really small, come up with what's the lowest common tech denominator. Is that containerization with EKS? Perfect, then focus on that." And don’t forget to give special attention to those early adopters, so they can become evangelists for the product. “Make them fans, prioritize the right way, and then show that to other teams as a, ‘Hey, you want to join in? OK, what's the next cool thing we could build?’” Check out the entire episode for much more detail about platform engineering and how to get started with it.
1/11/2023 • 23 minutes, 24 seconds
What LaunchDarkly Learned from 'Eating Its Own Dog Food'
Feature flags — the on/off toggles, written in conditional statements, that allow organizations greater control over the user experience once code has been deployed — are proliferating and growing more complex, and demand robust feature management, said Karishma Irani, head of product at LaunchDarkly, in this episode of The New Stack Makers. In a November survey by LaunchDarkly, which queried more than 1,000 DevOps professionals, 69% of participants said that feature flags are “must-have, mission-critical and/or high priority” for their organizations. “Feature management, we believe, is a modern practice that's becoming more and more common with companies that want to deploy more frequently, innovate faster, and just keep a healthy engineering team,” Irani said. The idea of feature management, Irani said, is to “maximize value while minimizing risk.” LaunchDarkly uses its own software, she said, and eating its own dog food, as the saying goes, has paid off in gaining insights into user needs. As part of LaunchDarkly’s virtual conference Trajectory in November, Irani joined Heather Joslyn, features editor of The New Stack, for a wide-ranging conversation about the latest developments in feature management. This episode of Makers was sponsored by LaunchDarkly.
Automating Approvals
As an example of the benefits of having first-hand knowledge of how their company's products are used, Irani pointed to an internal project in mid-2022. When the company migrated from [sponsor_inline_mention slug="mongodb" ]MongoDB[/sponsor_inline_mention] to CockroachDB, it used new capabilities in its Feature Workflows product, which allow users to define a workflow that can schedule the gradual release of a feature flag for a future date and time, and automate approval requests. “All of these async processes around approvals schedules, they're critical to releasing software, but they do slow you down and add more potential for manual error or human error,” Irani said.
“And so our goal with Feature Workflows was to essentially automate the entire process of a feature release.”
Overhauling Experimentation
This past June, the company also revised its Experimentation offering, she said. Led by James Frost, LaunchDarkly’s head of experimentation, the team did “a complete overhaul of our stats engine, they enhanced the integration path of our customers’ existing data sets and metrics,” Irani said. “They redesigned our UX and codified experimentation best practices into the product itself.” For instance, a new metric import API helps prevent the problem of multiple teams or users within a company using different tools for A/B and other experiments. It “significantly cuts down on manual duplicate work when importing metrics for experimentation,” said Irani. “So you can get set up faster.” Another addition to the Experimentation product is a sample ratio mismatch test, she said, so “you can be confident that all of your experiments are correctly allocating traffic to each variant.” These innovations, along with new capabilities in the company’s Core Flagging Platform, are in general availability. On the horizon, and now available through LaunchDarkly’s early access program, is Accelerate, which lets users track and visualize key engineering metrics, such as deployment frequency, release frequency, lead time for code changes, and flag coverage. “I'm sure you've caught on already,” Irani said, “but a few of these are DORA metrics, which obviously are extremely critical to our users.” Check out the entire episode for more details on what’s new from LaunchDarkly and the problems that innovators in the feature management space still need to solve.
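A sample ratio mismatch test checks whether an experiment's actual traffic split is plausibly the one you configured. Here is a from-first-principles sketch of the statistic involved; this is not LaunchDarkly's implementation. It uses a chi-squared goodness-of-fit test, where 3.841 is the critical value for one degree of freedom at the 5% significance level.

```python
# Sample ratio mismatch (SRM) check for a two-variant experiment.
def srm_check(observed, expected_ratios, critical=3.841):  # df=1, alpha=0.05
    total = sum(observed)
    # Chi-squared statistic: sum of (observed - expected)^2 / expected.
    chi2 = sum((obs - total * ratio) ** 2 / (total * ratio)
               for obs, ratio in zip(observed, expected_ratios))
    return chi2, chi2 > critical  # True means allocation is likely broken

# A 50/50 experiment: 5,050 vs. 4,950 users is within chance...
print(srm_check([5050, 4950], [0.5, 0.5]))   # chi2 = 1.0, mismatch: False
# ...but 5,500 vs. 4,500 is far outside it.
print(srm_check([5500, 4500], [0.5, 0.5]))   # chi2 = 100.0, mismatch: True
```

When the flag fires, the usual conclusion is not that the experiment result is wrong but that the assignment mechanism itself is biased, which invalidates downstream analysis.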
1/4/2023 • 28 minutes, 37 seconds
Hazelcast and the Benefits of Real Time Data
In this latest podcast from The New Stack, we interview Manish Devgan, chief product officer for Hazelcast, which offers a real-time stream processing engine. This interview was recorded at KubeCon+CloudNativeCon, held last October in Detroit. "'Real time' means different things to different people, but it's really a business term," Devgan explained. In the business world, time is money, and the more quickly you can make a decision, using the right data, the more quickly you can take action. Although we have many "batch-processing" systems, the data itself rarely comes in batches, Devgan said. "A lot of times I hear from customers that are using a batch system, because those are the things which were available at the time. But data is created in real time: sensors, your machines, espionage data, or even customer data, right when customers are transacting with you."
What is a Real Time Data Processing Engine?
A real-time data processing engine can analyze data as it is coming in from the source. This is different from traditional approaches that store the data first, then analyze it later. Bank loans are an example of this approach. With a real-time data processing engine in place, a bank can offer a loan to a customer using an automated teller machine (ATM) in real time, Devgan suggested. "As the data comes in, you can actually take action based on the context of the data," he argued. Such a loan app may combine real-time data from the customer alongside historical data stored in a traditional database. Hazelcast can combine historical data with real-time data to make workloads like this possible. In this interview, we also debated the merits of Kafka, the benefits of using a managed service rather than running an application in-house, Hazelcast's users, and features in the latest release of the Hazelcast platform.
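The ATM example can be sketched as a tiny stream job: instead of storing events and analyzing them later in a batch, each event is joined with historical context and acted on as it arrives. This is only an illustration of the pattern, not Hazelcast's API; the customer names, balances, and threshold are invented.

```python
# Invented historical store; in a real system this would be a database or
# an in-memory data grid holding each customer's profile.
HISTORY = {"alice": {"avg_balance": 5200}, "bob": {"avg_balance": 140}}

def on_atm_event(event):
    """Decide on each event in the stream, while the customer is at the ATM."""
    profile = HISTORY.get(event["customer"], {"avg_balance": 0})
    if event["type"] == "withdrawal" and profile["avg_balance"] > 1000:
        return f"offer pre-approved loan to {event['customer']}"
    return "no action"

stream = [{"customer": "alice", "type": "withdrawal"},
          {"customer": "bob", "type": "withdrawal"}]
for event in stream:   # in a real engine this loop is the long-running job
    print(on_atm_event(event))
```

The contrast with batch is timing: a nightly batch job could reach the same decision, but only after the customer has left the ATM.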
12/28/2022 • 14 minutes, 31 seconds
Hachyderm.io, from Side Project to 38,000+ Users and Counting
Back in April, Kris Nóva, now principal engineer at GitHub, started creating a server on Mastodon as a side project in her basement lab. Then in late October, Elon Musk bought Twitter for an eye-watering $44 billion, began cutting thousands of jobs at the social media giant and made changes that alienated longtime users. And over the next few weeks, usage of Nóva’s hobby site, Hachyderm.io, exploded. “The server started very small,” she said on this episode of The New Stack Makers podcast. “And I think like, one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens, a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.” Though the rate at which new users are joining Hachyderm has slowed in recent days, Nóva said, it stood at more than 38,000 users as of Dec. 20. Hachyderm.io is still run by a handful of volunteers, who also handle content moderation. Nóva is now seeking nonprofit status for it with the U.S. Internal Revenue Service, with the intention of building a new organization around Hachyderm. This episode of Makers, hosted by Heather Joslyn, TNS features editor, recounts Hachyderm’s origins and the challenges involved in scaling it as Twitter users from the tech community gravitated to it. Nóva and Joslyn were joined by Gabe Monroy, chief product officer at DigitalOcean, which has helped Hachyderm cope with the technical demands of its growth spurt.
HugOps and Solving Storage Issues
Suddenly having a social media network to “babysit” brings numerous challenges, including the technical issues involved in a rapid scale-up. Monroy and Nóva worked on Kubernetes projects when both were employed at Microsoft, “so we’re all about that horizontal distribution life.” But the Mastodon application’s structure proved confounding.
“Here I am operating a Ruby on Rails monolith that's designed to be vertically scaled on a single piece of hardware,” Nóva said. “And we're trying to break that apart and run that horizontally across the rack behind me. So we got into a lot of trouble very early on by just taking the service itself and starting to decompose it into microservices.” Storage also rapidly became an issue. “We had some non-enterprise but consumer-grade SSDs. And we were doing on the order of millions of reads and writes per day, just keeping the Postgres database online. And that was causing cascading failures and cascading outages across our distributed footprint, just because our Postgres service couldn't keep up.” DigitalOcean helped with the storage issues; the site now uses a data center in Germany, whose servers DigitalOcean manages. (Previously, its servers had been living in Nóva’s basement lab.) Monroy, a longtime friend of Nóva's, was an early Hachyderm user and reached out when he noticed problems on the site, such as when he had difficulty posting videos and noticed other people complaining about similar problems. “This is a ‘success failure’ in the making here; the scale of this is sort of overwhelming,” Monroy said. “So I just texted Nóva, ‘Hey, what's going on? Anything I could do to help?’ In the community, we like to talk about the concept of HugOps, right? When people are having issues on this stuff, you reach out, try and help. You give a hug. And so, that was all I did. Nóva is very crisp and clear: This is what I got going on. These are the issues. These are the areas where you could help.”
Sustaining ‘the NPR of Social Media’
One challenge in particular has nudged Nóva to seek nonprofit status: operating costs. “Right now, I'm able to just kind of eat the cost myself,” she said. “I operate a Twitch stream, and we're taking the proceeds of that and putting it towards operating the service.” But that, she acknowledges, won’t be sustainable as Hachyderm grows.
“The whole goal of it, as far as I'm concerned, is to keep it as sustainable as possible,” Nóva said. “So that we're not having to offset the operating costs with ads or marketing or product marketing. We can just try to keep it as neutral and, frankly, boring as possible — the NPR of social media, if you could imagine such a thing.” Check out the full episode for more details on how Hachyderm is scaling and plans for its future, and Nóva and Monroy’s thoughts about the status of Twitter. Feedback? Find me at @hajoslyn on Hachyderm.io.
12/22/2022 • 26 minutes, 32 seconds
Automation for Cloud Optimization
During the pandemic, many organizations sped up their move to the cloud — without fully understanding the costs, both human and financial, they would pay for the convenience and scalability of a digital transformation. “They really didn’t have a baseline,” said Mekka Williams, principal engineer at Spot by NetApp, in this episode of The New Stack Makers podcast. “And so those first cloud bills, I'm sure, were shocking, because you don't get a cloud bill when you run on your on-premises environment, or even your private cloud, where you've already paid the cost for the infrastructure that you're using.” What’s especially worrisome is that many of those costs are simply wasted, Williams said. “Most of the containerized applications running in Kubernetes clusters are running underutilized,” she said. “And anything that's underutilized in the cloud equates to waste. And if we want to be really lean and clean and use resources in a very efficient manner, we have to have a really good cloud strategy in order to do that.” This episode of The New Stack Makers, hosted by Heather Joslyn, TNS features editor, focused on CloudOps, which in this case stands for “cloud operations.” (It can also stand for “cloud optimization,” but more about that later.) The conversation was sponsored by Spot by NetApp.
Automation for Cloud Optimization
Many organizations that moved quickly to the cloud during the dog days of the pandemic have begun to revisit the decisions they made and update their strategies, Williams said. “We see some organizations that are trying to modernize their applications further, to make better use of the services that are available in the cloud,” she said. “The cloud is getting more complex as they grow and mature in their journey. And so they're looking for ways to simplify their operations. And, as always, keep their costs down.
Keep things simple for their DevOps and SRE teams, to not incur additional technical debt, but still make the best use of their cloud, wherever they are.” Automation holds the key to CloudOps — both definitions — according to Williams. For starters, it makes teams more efficient. “The fewer tasks your workforce has to perform manually, the more time they have to spend focused on business logic and being innovative,” Williams said. “Automation also helps you with repeatability. And it's less error-prone, and it helps you standardize. Really good automation simplifies your environment greatly.” Automating repetitive tasks can also help prevent your site reliability engineers (SREs) from burning out, she said. Practicing “good data hygiene,” Williams said, also helps contain costs and reduce toil: “Making sure you're using the right tier of data, making sure you're not over-provisioned on the type of storage you need. You don't need to pay top dollar for high-performing storage if it's just backup data that doesn't get accessed that often.” Such practices are “good to know on-premises, but imperative to know when you're in the cloud,” she said, in order to reduce waste. During this episode, Williams pointed to solutions in the Spot by NetApp portfolio that use automation to help make the most of cloud infrastructure, such as its flagship product, Elastigroup, which takes advantage of excess capacity to scale workloads. In June, Spot by NetApp acquired Instaclustr, a solution for managing open source database and streaming technologies. The company recognizes the growing importance of open source for enterprises. “We're paying attention to trends for cloud applications,” Williams said, “and we're growing the portfolio to address the needs that are top of mind for those customers.” Check out the entire episode to learn more about CloudOps.
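The "underutilized equals waste" point is simple arithmetic: compare what each workload requests (and is billed for) against what it actually uses. The sketch below is illustrative only; the workload names and figures are made up, and real tooling would pull these numbers from cluster metrics rather than a hard-coded list.

```python
# Made-up workloads: CPU cores requested vs. cores actually used.
workloads = [
    {"name": "api",    "cpu_requested": 4.0, "cpu_used": 0.8},
    {"name": "worker", "cpu_requested": 2.0, "cpu_used": 1.6},
]

def utilization_report(workloads):
    """Percent of requested CPU each workload actually uses."""
    report = {}
    for w in workloads:
        pct = 100 * w["cpu_used"] / w["cpu_requested"]
        report[w["name"]] = round(pct)
    return report

print(utilization_report(workloads))  # {'api': 20, 'worker': 80}
```

In this toy example the `api` service pays for five times the CPU it uses; everything above its 20% utilization is the waste Williams describes, since the cloud bills for the request, not the usage.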
12/20/2022 • 22 minutes, 47 seconds
Redis Looks Beyond Cache Toward Everything Data
Redis, best known as a data cache or real-time data platform, is evolving into much more, Tim Hall, chief of product at the company, told The New Stack in a recent TNS Makers podcast. Redis is an in-memory, or memory-first, database, which means the data lands there, and people use it for both caching and persistence. These days the company offers a number of flexible data models, and one of the brand promises of Redis is that developers can store data in the form they're working with it. So as opposed to a SQL database, where you might have to turn your data structures into columns and tables, you can store the data structures you're working with directly into Redis, Hall said.
Primary Database?
“About 40% of our customers today are using us as a primary database technology,” he said. “That may surprise some people if you're sort of a classic Redis user and you knew us from in-memory caching; you probably didn't realize we added a variety of mechanisms for persistence over the years.” Meanwhile, to store the data, Redis does write it to disk behind the scenes while keeping a copy in memory. So if there's any sort of failure, Redis can recover the data off of disk, replay it into memory, and get you back up and running. That mechanism has been around for about half a decade now. Yet Redis is playing what Hall called the ‘long game’, particularly in terms of continuing to reach out to developers and showing them what the latest capabilities are. “If you look at the top 10 databases on the planet, they've all moved into the multimodel category. And Redis is no different from that perspective,” Hall said. “So if you look at Oracle, it was traditionally a relational database; Mongo is traditionally a JSON document store only; and obviously Redis is a key-value store. We've all moved down the field now. Now, why would we do that?
We're all looking to simplify the developer’s world, right?” Yet each vendor is really trying to leverage its core differentiation and expand out from there. And the good news for Redis is that speed is its core differentiation. “Why would you want a slow data platform? You don't,” Hall said. “So the more that we can offer those extended capabilities for working with things like JSON, or we just launched a data structure called t-digest that people can use, and we've had support for Bloom filter, which is a probabilistic data structure; all of these things kind of expand our footprint. We're saying if you need speed, and reducing latency and having high interactivity is your goal, Redis should be your starting point. If you want some esoteric edge-case functionality where you need to manipulate JSON in some very strange way, you probably should go with Mongo. I probably won't support that for a long time. But if you're just working with the basic data structures, you need to be able to query, you need to be able to update your JSON document. Those straightforward use cases we support very, very well, and we support them at speed and scale.”
Customer View
As a Redis customer, Alain Russell, CEO at Blackpepper, a digital e-commerce agency in Auckland, New Zealand, said his firm has undergone the same transition. “We started off with Redis as a cache, that helped us speed up traditional data that was slower than we wanted it,” he said. “And then we went down a cloud path a couple of years ago. Part of that migration included us becoming, you know, what's deemed as ‘cloud native.’ And we started using all of these different data stores and data structures, and dealing with all of them is actually complicated. You know, and from a developer perspective, it can be a bit painful.” So, Blackpepper started looking for how to make things simpler, but also keep its platform very fast, and it looked at the Redis Stack.
“And honestly, it filled all of our needs in one platform. And we're kind of in this path at the moment; we were using the basics of it. And we're very early on in our journey, right? We're still learning how things work and how to use it properly. But we also have a big list of things that we're using other data stores for, traditional data, and working out, okay, this will be something that we will migrate, you know, because we use persistence heavily now in Redis.” Twenty-year-old Blackpepper works predominantly with traditional retailers and helps them in their omnichannel journey. Commercial vs. Open Source Hall said there are three modes of access to the Redis technology: the Redis open source project; the Redis Stack, which the company recommends developers start with today; and Redis Enterprise Edition, which is available as software or in the cloud. “It's the most popular NoSQL database on the planet six years running,” Hall said. “And people love it because of its simplicity.” Meanwhile, it takes effort to maintain both the commercial product and the open source effort. Hall, who has worked at Hortonworks and InfluxData, said, “Not every open source company is the same in terms of how you make decisions about what lands in your commercial offering and what lands in open source, and where the contributions come from and who's involved.” For instance, “if there was something that somebody wanted to contribute that was going to go against our commercial interest, we probably would not merge that,” Hall said. Redis was run by project founder Salvatore Sanfilippo for many, many years, and he was the sole arbiter of what landed and what did not land in Redis itself. Then, over the last couple of years, Redis created a core steering committee. 
It's made up of one individual from AWS, one individual from Alibaba, and three Redis employees, who look after the contributions coming in from the Redis open source community members who want to contribute. “And then we reconcile what we want from a commercial-interest perspective, either upstream, or things that, frankly, may have been commoditized and that we want to push downstream into the open source offering,” Hall said. “And so the thing that you're asking about is sort of my core existential challenge all the time: figuring out where we're going from a commercial perspective. What do we want to land there first? And how can we create a conveyor belt of commercial opportunity that keeps us in business as a software company, creating differentiation as potential competitors show up? And then over time, making sure that those things that do become commoditized, or maybe are not as differentiating anymore, I want to release those to the open source community. But this upstream/downstream kind of challenge is something that we're constantly working through.” Blackpepper was an open source Redis user initially; it had started its journey using Memcached to speed up data, then migrated to Redis when it moved to the AWS cloud, Russell said. Listen to the Podcast The Redis TNS Makers podcast goes on to look at the use of AI/ML in the platform, the acquisition of RESP.app, the importance of JSON and RediSearch, and where Redis is headed in the future.
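Hall's description of Redis persistence (writes land on disk behind the scenes while a copy stays in memory, and after a failure the data is replayed off disk back into memory) can be sketched in a few lines. This is a toy illustration, not Redis's actual append-only-file implementation; the `TinyStore` class and its JSON log format are invented for the example.

```python
# Conceptual sketch of memory-first storage with an append-only log:
# reads come from memory, every write also lands on disk, and a restart
# rebuilds memory by replaying the log.
import json
import os
import tempfile

class TinyStore:
    def __init__(self, log_path):
        self.log_path = log_path
        self.mem = {}                      # in-memory copy, serves all reads
        if os.path.exists(log_path):       # crash recovery: replay the log
            with open(log_path) as f:
                for line in f:
                    cmd = json.loads(line)
                    self.mem[cmd["key"]] = cmd["value"]

    def set(self, key, value):
        # persist first, then update memory
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
        self.mem[key] = value

    def get(self, key):
        return self.mem.get(key)

path = os.path.join(tempfile.mkdtemp(), "aof.log")
store = TinyStore(path)
# Store the data structure as you work with it, no columns or tables
store.set("user:1", {"name": "Ada", "tier": "premium"})

recovered = TinyStore(path)    # simulate a restart: state rebuilt from disk
print(recovered.get("user:1")["name"])  # -> Ada
```

Real Redis does far more (snapshotting, fsync policies, log rewriting), but the replay-into-memory recovery path follows the same shape.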
12/14/2022 • 40 minutes, 40 seconds
Couchbase’s Managed Database Services: Computing at the Edge
Let’s say you’re a passenger on a cruise ship. Floating in the middle of the ocean, far from reliable Wi-Fi, you wear a device that lets you into your room and that discreetly tracks your moves from the bar to the dinner table to the pool, delivering your drink order wherever you are. You can buy sunscreen or toothpaste or souvenirs in the ship’s stores without touching anything. If you’re a Carnival Cruise Lines passenger, this is reality right now, in part because of the company’s partnership with Couchbase, according to Mark Gamble, product and solutions marketing director at Couchbase. Couchbase provides a cloud native NoSQL database technology that's used to power applications for customers including Carnival but also Amadeus, Comcast, LinkedIn and Tesco. In Carnival’s case, Gamble said, “they run an edge data center on their ships to power their Ocean Medallion application, which they are super proud of. They use it a lot in their ads, because it provides a personalized service, which is a differentiator for them to their customers.” In this episode of The New Stack Makers, Gamble spoke to Heather Joslyn, features editor of TNS, about edge computing, 5G, and Couchbase Capella, its Database as a Service (DBaaS) offering for enterprises. This episode of Makers was sponsored by Couchbase. 5G and Offline-First Apps The goal of edge computing, Gamble told our podcast audience, is to bring data and compute closer to the applications that consume them. This speeds up data processing, he said, “because data doesn't have to travel all the way to the cloud and back.” But it also has other benefits. “This serves to make applications more reliable, because local data processing sort of removes internet slowness and outages from the equation,” he said. The innovation of 5G networks has also had a big impact on reducing latency and increasing uptime, Gamble said. 
“To compare with 4G, things like the average round-trip data travel time between the device and the cell tower is like 15 milliseconds. And with 5G, that latency drops to like two milliseconds. And 5G can support, they say, a million devices within a third-of-a-mile radius, way more than what's possible with 4G.” But 5G, Gamble said, “really requires edge computing to realize its full potential.” Increasingly, he said, Couchbase hears interest from its customers in building “offline-first” applications, which can run even in Wi-Fi dead zones. The use cases, he said, are everywhere: “When I pass a fast food restaurant, it's starting to become more common, where you'll see that, instead of just a box you're talking to, there's a person holding a tablet, and they walk down the line, and they're taking orders. And as they come closer to the restaurant, it syncs up with the kitchen. They find that just a better, more efficient way to serve customers. And so it becomes a competitive differentiator for them.” As part of its Capella product, Couchbase recently announced Capella App Services, a new capability for mobile developers: a fully managed backend designed for mobile, Internet of Things (IoT) and edge applications. “Developers use it to access and sync data between the Database as a Service and their edge devices, as well as it handles authenticating and managing mobile and edge app users,” he said. Used in conjunction with Couchbase Lite, a lightweight, embedded NoSQL database for mobile and IoT devices, Capella App Services synchronizes data between the backend and edge devices. Even for workers in remote areas, “eventually, you have to make sure that data updates are shared with the rest of the ecosystem,” Gamble said. 
“And that's what App Services is meant to do, as connectivity allows — so during network disruptions, in areas with no internet, apps will still continue to operate.” Check out the rest of the conversation to learn more about edge computing and the challenges Gamble thinks still need to be addressed in that space.
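The offline-first pattern Gamble describes (keep working locally during an outage, then share data updates with the rest of the ecosystem when connectivity returns) can be illustrated with a small sketch. This is not Capella App Services' actual sync protocol; the `Replica` and `Backend` classes and the last-write-wins-by-timestamp rule are simplifying assumptions for the example.

```python
# Toy offline-first sync: an edge device queues writes while offline and
# flushes them to the backend when connectivity returns, resolving
# conflicts with a simple last-write-wins timestamp rule.

class Replica:
    def __init__(self):
        self.docs = {}       # key -> (timestamp, value)
        self.pending = []    # writes made while offline

    def write(self, key, value, ts, online, backend=None):
        self.docs[key] = (ts, value)
        if online and backend is not None:
            backend.merge(key, ts, value)
        else:
            self.pending.append((key, ts, value))  # Wi-Fi dead zone: queue it

    def sync(self, backend):
        # connectivity restored: push queued writes, then pull newer docs
        for key, ts, value in self.pending:
            backend.merge(key, ts, value)
        self.pending.clear()
        for key, (ts, value) in backend.docs.items():
            if key not in self.docs or self.docs[key][0] < ts:
                self.docs[key] = (ts, value)

class Backend(Replica):
    def merge(self, key, ts, value):
        # keep whichever version carries the newest timestamp
        if key not in self.docs or self.docs[key][0] < ts:
            self.docs[key] = (ts, value)

backend = Backend()
tablet = Replica()   # the server's order-taking tablet, out of range
tablet.write("order:42", {"item": "burger"}, ts=1, online=False)
tablet.write("order:42", {"item": "burger", "fries": True}, ts=2, online=False)
tablet.sync(backend)  # back in range: the kitchen now sees the full order
print(backend.docs["order:42"][1])
```

Production systems use richer conflict resolution than last-write-wins (revision trees, merge callbacks), but the queue-then-sync flow is the core of the offline-first idea.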
12/7/2022 • 25 minutes, 46 seconds
Open Source Underpins A Home Furnishings Provider’s Global Ambitions
Wayfair describes itself as “the destination for all things home: helping everyone, anywhere create their feeling of home.” It provides an online platform for buying home furniture, outdoor decor and other furnishings, and it supports its suppliers so they can use the platform to sell their home goods, explained Natali Vlatko, global lead of the open source program office (OSPO) and senior software engineering manager at Wayfair, the featured guest in Detroit during KubeCon + CloudNativeCon North America 2022. “It takes a lot of technical work behind the scenes to kind of get that going,” Vlatko said. This is especially true as Wayfair scales its operations worldwide. The infrastructure must be highly distributed, relying on containerization, microservices, Kubernetes and, especially, open source to get the job done. “We have technologists throughout the world, in North America and throughout Europe as well,” Vlatko said. “And we want to make sure that we are utilizing cloud native and open source, not just as technologies that fuel our business, but also as the ways that are great for us to work in now.” Open source has served as a “great avenue” for creating and offering technical services, and to accomplish that, Vlatko amassed the requisite talent, she said, assembling a small team of engineers to focus on platform work, advocacy, community management and, internally, on license compliance. About five years ago, when Vlatko joined Wayfair, the company had yet to go “full tilt into going all cloud native,” Vlatko said. Wayfair had a hybrid mix of on-premises and cloud infrastructure. After decoupling from a monolith into a microservices architecture, “that journey really began where we understood the really great benefits of microservices and got to a point where we thought, ‘okay, this hybrid model for us actually would benefit our microservices being fully in the cloud,’” Vlatko said. 
In late 2020, Wayfair made the decision to “get out of the data centers” and shift operations to the cloud, a move completed in October, Vlatko said. The company culture is such that engineers have room to experiment without major fear of failure, by doing a lot of development work in a sandbox environment. “We've been able to create sandbox environments that are close to our production environments, so that experimentation in sandboxes can occur. Folks can learn as they go without actually fearing failure or fearing a mistake,” Vlatko said. “So, I think experimentation is a really important aspect of our own learning and growth for cloud native. Also, coming to great events like KubeCon + CloudNativeCon and other events [has been helpful]. We're hearing from other companies who've done the same journey and process and are learning from the use cases.”
12/1/2022 • 16 minutes, 3 seconds
ML Can Prevent Getting Burned For Kubernetes Provisioning
In the rush to create, provision and manage Kubernetes, proper resource provisioning is often left out. According to StormForge, a company paying, for example, a million dollars a month for cloud computing resources is likely wasting $6 million a year on Kubernetes resources that are left unused. The reasons for this are manifold and can vary: DevOps teams may estimate too conservatively or too aggressively, or simply overspend on resource provisioning. In this podcast with StormForge’s Yasmin Rajabi, vice president of product management, and Patrick Bergstrom, CTO, we look at how to properly provision Kubernetes resources and the associated challenges. The podcast was recorded live in Detroit during KubeCon + CloudNativeCon North America 2022. Almost ironically, the most commonly used Kubernetes resources can even complicate the ability to optimize resources for applications. The processes typically involve Kubernetes resource requests and limits, and predicting how the resources might impact quality of service for pods. Developers deploying an application on Kubernetes often need to set CPU requests, memory requests and other resource limits. “They are usually like, ‘I don't know — whatever was there before, or whatever the default is,’” Rajabi said. “They are in the dark.” Sometimes developers might use their favorite observability tool and say, “‘we look where the max is, and then take a guess,’” Rajabi said. “The challenge is, if you start from there when you start to scale that out — especially for organizations that are using horizontal scaling with Kubernetes — is that then you're taking that problem and you're just amplifying it everywhere,” Rajabi said. 
“And so, when you've hit that complexity at scale, taking a second to look back and say, ‘how do we fix this?’ — you don't want to just arbitrarily go reduce resources, because you have to look at the trade-off of how that impacts your reliability.” The process then becomes very hit-or-miss. “That's where it becomes really complex, when there are so many settings across all those environments, all those namespaces,” Rajabi said. “It's almost a problem that can only be solved by machine learning, which makes it very interesting.” But before organizations learn the hard way about not automating the optimization of Kubernetes deployments and management, many resources — and costs — go to waste. “It's one of those things that becomes a bigger and bigger challenge, the more you grow as an organization,” Bergstrom said. Many StormForge customers are deploying into thousands of namespaces and thousands of workloads. “You are suddenly trying to manage each workload individually to make sure it has the resources and the memory that it needs,” Bergstrom said. “It becomes a bigger and bigger challenge.” The process should actually be pain-free when ML is properly implemented. Through StormForge’s partnership with Datadog, it is possible to apply ML to collected historical data, Bergstrom explained. “Then, within just hours of us deploying our algorithm into your environment, we have machine learning that's used two to three weeks’ worth of data to train, that can then automatically set the correct resources for your application. This is because we know what the application is actually using,” Bergstrom said. “We can predict the patterns, and we know what it needs in order to be successful.”
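To make the contrast concrete: the "look where the max is, and then take a guess" approach bakes one-off spikes into every request, while a recommendation driven by historical usage tracks what the workload typically needs. The sketch below is a deliberate oversimplification (StormForge trains ML on weeks of metrics; this just takes a percentile of hypothetical samples with a headroom factor, and the numbers are invented).

```python
# Simplified rightsizing: recommend resource requests from the 90th
# percentile of historical usage plus headroom, instead of the all-time max.

def recommend_requests(cpu_samples_mcores, mem_samples_mib, headroom=1.2):
    """Return suggested requests based on p90 of observed usage."""
    def p90(samples):
        s = sorted(samples)
        return s[int(0.9 * (len(s) - 1))]   # nearest-rank 90th percentile
    return {
        "cpu_m": round(p90(cpu_samples_mcores) * headroom),
        "memory_mi": round(p90(mem_samples_mib) * headroom),
    }

# Hypothetical usage samples for one workload (millicores / MiB)
cpu = [120, 135, 140, 150, 155, 160, 170, 180, 190, 800]   # one brief spike
mem = [300, 310, 320, 330, 335, 340, 350, 360, 370, 380]

rec = recommend_requests(cpu, mem)
print(rec)             # the p90-based request ignores the one-off 800m spike
print(max(cpu) * 1.2)  # the "guess from the max" number: 960m, 4x too big
```

Multiply a gap like that across thousands of namespaces and workloads and the waste Bergstrom describes follows directly; it is also why doing this by hand for each workload does not scale.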
11/30/2022 • 15 minutes, 49 seconds
What’s the Future of Feature Management?
Feature management isn’t a new idea, but lately it’s a trend that’s picked up speed. Analysts like Forrester and Gartner have cited adoption of the practice as being, respectively, “hot” and “the dominant approach to experimentation in software engineering.” A study released in November found that 60% of the 1,000 software and IT professionals surveyed started using feature flags only in the past year, according to the report, sponsored by LaunchDarkly, the feature management platform, and conducted by Wakefield Research. At the heart of feature management are feature flags, which give organizations the ability to turn features on and off without having to redeploy an entire app. Feature flags allow organizations to test new features and control things like access to premium versions of a customer-facing service. An overall feature management practice that includes feature flags allows organizations “to release progressively any new feature to any segment of users, any environment, any cohort of customers in a controlled manner that really reduces the risk of each release,” said Ravi Tharisayi, senior director of product marketing at LaunchDarkly, in this episode of The New Stack Makers podcast. Tharisayi talked to The New Stack’s features editor, Heather Joslyn, about the future of feature management, on the eve of the company’s latest Trajectory user conference. This episode of Makers was sponsored by LaunchDarkly. Streamlining Management, Saving Money The participants in the new survey worked at companies of at least 200 employees, and nearly all of those that use feature flags — 98% — said they believe the flags save their organizations money and demonstrate a return on investment. Furthermore, 70% said that their company views feature management as either a mission-critical or a high-priority investment. Fielding the annual survey, Tharisayi said, has offered a window into how organizations are using feature flags. 
Fifty-five percent of customers in the 2022 survey said they use feature flags as long-term operational controls — for API rate limiting, for instance, to prioritize certain API calls in high-traffic situations. The second most common use, the survey found — cited by 47% of users — was for entitlements: “managing access to different types of plans, premium plans versus other plans, for example,” Tharisayi said. “This is really a powerful capability because of this ability to allow product managers or other personas to manage who has access to certain features, to certain plans, without having to have developers be involved,” he said. “Previously, that required a lot of developer involvement.” Experimentation, Metrics, Cultural Shifts LaunchDarkly, Tharisayi said, has been investing in and improving its platform’s experimentation and measurement capabilities: “At the core of that is this notion that experimentation can be a lot more successful when it's tightly integrated to the developer workflow.” As an example, he pointed to CCP Games, maker of the gaming platform EVE Online, which serves millions of players. “They were recently thinking through how to evolve their recommendation engine, because they wanted this engine to recommend actions for their gamers that will hopefully increase their ultimate North Star metric,” its tracking of how much time gamers spend with their games. By using LaunchDarkly’s platform, CCP was able to run A/B tests and increase gamers’ session lengths and engagement. “So that's the kind of capability that we think is going to be an increasing priority,” Tharisayi said. As feature management matures and standardizes, he pointed to the adoption of DevOps as both a model and a cautionary tale. 
“When it comes to cultural shifts, like DevOps or feature management, that require teams to work in a different way, oftentimes there can be early success with a small team,” Tharisayi said. “But then there can be some cultural and process barriers as you're trying to standardize to the team level and multi-team level, before figuring out the kinks in deploying it at an organization-wide level.” He added: “That's one of the trends that we observed a little bit in this survey, is that there are some cultural elements to getting success at scale with something like feature management, and the opportunity as an industry to support organizations as they're making that quest to standardize a practice like this, like any other cultural practice.” Check out the full episode for more on the survey and on what’s next for feature management.
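The mechanics behind a progressive rollout, releasing a feature "to any segment of users, any cohort of customers in a controlled manner," can be sketched simply. This is an illustrative pattern, not LaunchDarkly's actual evaluation algorithm: hash the user key into a stable bucket, so each user gets a consistent answer for a given rollout percentage and the cohort only grows as the percentage is raised.

```python
# Minimal percentage-rollout flag check: deterministic bucketing by
# hashing the flag and user keys, no redeploy needed to change exposure.
import hashlib

def flag_enabled(flag_key, user_key, rollout_percent):
    digest = hashlib.sha256(f"{flag_key}:{user_key}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # stable bucket in [0, 100)
    return bucket < rollout_percent

# Release a hypothetical "new-checkout" feature to 25% of users.
users = [f"user-{i}" for i in range(1000)]
enabled = [u for u in users if flag_enabled("new-checkout", u, 25)]
print(len(enabled))   # roughly a quarter of the users

# The same user always gets the same answer, and raising the rollout to
# 50% keeps everyone who already had the feature.
assert all(flag_enabled("new-checkout", u, 25) for u in enabled)
assert all(flag_enabled("new-checkout", u, 50) for u in enabled)
```

Because the bucket is derived from the keys rather than stored state, any service that evaluates the flag reaches the same verdict for a user, which is what makes kill switches and gradual ramps safe.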
DETROIT — Rob Skillington’s grandfather was a civil engineer, working in an industry that, over more than a century, developed the processes and know-how that enabled the creation of buildings, bridges and roads. “A lot of those processes matured to a point where they could reliably build these things,” said Skillington, co-founder and chief technology officer at Chronosphere, an observability platform. “And I think about observability as that same maturity of engineering practice. When it comes to building software that actually is useful in the world, it is this process that helps you actually achieve the deployment and operation of these large-scale systems that we use every day.” Skillington spoke about the evolution of observability, and his company’s recent donation of an open source project to Prometheus, in this episode of The New Stack Makers podcast. Heather Joslyn, features editor of TNS, hosted the conversation. This On the Road edition of The New Stack Makers was recorded at KubeCon + CloudNativeCon North America, in the Motor City. The episode was sponsored by Chronosphere. A Donation to the Prometheus Project Helping observability practices grow as mature and reliable as the civil engineering rules that help build sturdy skyscrapers is a tough task, Skillington suggested. In the cloud era, he said, “you have to really prepare the software for a whole set of runtime environments. And so the challenges around that is really about making it consistent, well understood and robust.” At KubeCon in late October, Chronosphere and PromLabs (founded by Julius Volz, creator of Prometheus) announced that they had donated their open source project PromLens to Prometheus, the open source monitoring and alerting project. The donation is a way of placing a bet on a tool that integrates well with Kubernetes. 
“There's this real yearning for essentially a standard that can be built upon by everyone in the industry, when it comes to these core primitives, essentially,” Skillington said. “And Prometheus is one of those primitives. We want to continue to solidify that as a primitive that stands the test of time.” “We can't build a self-driving car if we're always building a different car,” he added. PromLens builds Prometheus queries in a sort of integrated development environment (IDE), Skillington said. It also makes it easier for more people in an organization to create queries and understand the meaning and seriousness of alerts. The PromLens tool breaks queries into a visual format, and allows users to edit them through a UI. “Basically, it's kind of like a What You See Is What You Get editor, or WYSIWYG editor, for Prometheus queries,” Skillington said. “Some of our customers have tens of thousands of these alerts to find in PromQL, which is the query language for Prometheus,” he noted. “Having a tool like an integrated development environment — where you can really understand these complex queries and iterate faster on, setting these up and getting back to your day job — is incredibly important.” Check out the full episode for more on PromLens and the current state of observability.
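To see why an IDE-style, WYSIWYG breakdown helps, consider how even a modest PromQL query nests: `sum by (job) (rate(http_requests_total[5m]))` is an aggregation over a rate over a raw counter selector. The sketch below is not PromLens itself; it is a hypothetical illustration of rendering such a query as the layered tree a visual editor would show.

```python
# Model a nested PromQL query as a tree and print one layer per level,
# the way a visual query editor breaks an expression into parts.

class Node:
    def __init__(self, label, child=None):
        self.label = label
        self.child = child

    def render(self, depth=0):
        lines = ["  " * depth + self.label]
        if self.child:
            lines += self.child.render(depth + 1)
        return lines

query = Node("sum by (job)",              # outer aggregation across jobs
         Node("rate(...[5m])",            # per-second rate over a 5m window
          Node("http_requests_total")))   # the raw counter selector

print("\n".join(query.render()))
```

With tens of thousands of alerts defined in PromQL, seeing each expression decomposed this way, and editing any one layer through a UI, is the workflow PromLens aims to support.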
11/23/2022 • 15 minutes, 31 seconds
How Boeing Uses Cloud Native
In this latest podcast from The New Stack, we spoke with Ricardo Torres, chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon + CloudNativeCon last month, Torres speaks about Boeing's use of open source software, as well as its adoption of cloud native technologies. While we may think of Boeing as an airplane manufacturer, it would be more accurate to think of the company as a large-scale system integrator, one that uses a lot of software. So, like other large-scale companies, Boeing sees a distinct advantage in maintaining good relations with the open source community. "Being able to leverage the best technologists out there in the rest of the world is of great value to us strategically," Torres said. This strategy allows Boeing to "differentiate on what we do as our core business rather than having to reinvent the wheel all the time on all of the technology." Like many other large companies, Boeing has created an open source office to better work with the open source community. Although Boeing is primarily a consumer of open source software, it still wants to work with the community. "We want to make sure that we have a strategy around how we contribute back to the open source community, and then leverage those learnings for inner sourcing," he said. Boeing also manages how it uses open source internally, keeping tight controls on the supply chain of the open source software it uses. "As part of the software engineering organization, we partner with our internal IT organization to look at our internet traffic and assure nobody's going out and downloading directly from an untrusted repository or registry. And then we host instead — we have approved sources internally." 
It's not surprising that Boeing, which deals with a lot of government agencies, embraces the practice of using software bills of materials (SBOMs), which provide a full listing of the components used in a software system. In fact, the company has been working to extend the comprehensiveness of SBOMs, according to Torres. "I think one of the interesting things now is the automation," he said of SBOMs. "And so we're always looking to beef up the heuristics, because a lot of the tools are relatively naïve, in that they trust that the dependencies that are specified are actually representative of everything that's delivered. And that's not good enough for a company like Boeing. We have to be absolutely certain that what's there is exactly what we expected to be there." Cloud Native Computing While Boeing builds many systems that reside in private data centers, the company is also increasingly relying on the cloud. Earlier this year, Boeing signed agreements with the three largest cloud service providers (CSPs): Amazon Web Services, Microsoft Azure and Google Cloud Platform. "A lot of our cloud presence is about our development environments. And so, you know, we have cloud-based software factories that are using a number of CNCF and CNCF-adjacent technologies to enable our developers to move fast," Torres said.
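The gap Torres describes, tools trusting that declared dependencies match what actually shipped, comes down to a set comparison: what the SBOM claims versus what a scan of the delivered artifact finds. This sketch is a simplified illustration of that check (the component names, versions, and the idea of a pre-scanned artifact list are invented for the example, not Boeing's tooling).

```python
# Compare an SBOM's declared components against components actually found
# in a delivered artifact, flagging drift in both directions.
declared_sbom = {
    ("openssl", "3.0.8"),
    ("zlib", "1.2.13"),
}
found_in_artifact = {            # e.g. produced by scanning the built image
    ("openssl", "3.0.8"),
    ("zlib", "1.2.13"),
    ("libcurl", "7.88.1"),       # shipped, but never declared
}

undeclared = found_in_artifact - declared_sbom   # in the build, not the SBOM
missing = declared_sbom - found_in_artifact      # in the SBOM, not the build

print("undeclared components:", sorted(undeclared))
print("missing components:", sorted(missing))
```

A naïve SBOM generator only ever reports the first set as empty because it derives the document from the declarations themselves; verifying against the artifact is what "beefing up the heuristics" buys.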
11/23/2022 • 12 minutes, 4 seconds
Case Study: How Dell Technologies Is Building a DevRel Team
DETROIT — Developer relations, or DevRel to its friends, is not only a coveted career path but also essential to helping developers learn and adopt new technologies. That guidance is a matter of survival for many organizations. The cloud native era demands new skills and new ways of thinking about developers’ and engineers’ day-to-day jobs. At Dell Technologies, it meant responding to the challenges faced by its existing customer base, which is “very Ops centric — server admins, system admins,” according to Brad Maltz of Dell. With the rise of the DevOps movement, “what we realized is our end users have been trying to figure out how to become infrastructure developers,” said Maltz, the company’s senior director of DevOps portfolio and DevRel. “They've been trying to figure out how to use infrastructure as code, Kubernetes, cloud, all those things.” “And what that means is we need to be able to speak to them where they want to go, when they want to become those developers. That’s led us to build out a developer relations program ... and in doing that, we need to grow out the community, and really help our end users get to where they want to.” In this episode of The New Stack’s Makers podcast, Maltz spoke to Heather Joslyn, TNS features editor, about how Dell has, since August, been busy creating a DevRel team to aid its enterprise customers seeking to adopt DevOps as a way of doing business. This On the Road edition of Makers, recorded at KubeCon + CloudNativeCon North America in the Motor City, was sponsored by Dell Technologies. Recruiting Influencers Maltz, an eight-year veteran of Dell, has moved quickly in assembling his team, with three hires made by late October and a fourth planned before year’s end. That’s lightning fast, especially for a large, established company like Dell, which was founded in 1984. “There's two ways of building a DevRel team,” he said. 
“One way is to actually kind of go and try to homegrow people on the inside and get them more presence in the community. That's the slower road. But we decided we have to go and find industry influencers that believe in our cause, that believe in the problem space that we live in. And that's really how we started this: we went out to find some very, very strong top talent in the industry and bring them on board.” In addition to spreading the DevOps solutions gospel at conferences like KubeCon, Maltz’s vision for the team is currently focused on social media and building out a website, developer.dell.com, which will serve as the landing page for the company’s DevRel knowledge, including links to community, training, how-to videos and an API marketplace. In building the team, the company made an unorthodox choice. “We decided to put DevRel into product management on the product side, not marketing,” Maltz said. “The reason we did that was we want the DevRel folks to really focus on community contributions, education, all that stuff. But while they're doing that, their job is to bring the data back from those discussions they're having in the field to product management, to enable our tooling to be able to satisfy some of those problems that they're bringing back, so we can start going full circle.” Facing the Limits of ‘Shift Left’ The roles that Dell’s DevRel team is focusing on in the DevOps culture are site reliability engineers (SREs) and platform engineers. These not only align with its traditional audience of Ops engineers but reflect a reality Dell is seeing in the wider tech world. “The reality is, application developers don't want to shift left; they don't want to operate. They want somebody else to take it, and they want to keep developing,” Maltz said. 
“Where DevOps has transitioned for us is, how do we help those people that are kind of that operator turning into infrastructure developer fit into that DevOps culture?” The rise of platform engineering, he suggested, is a reaction to the endless choices of tools available to developers these days. “The notion is developers in the wild are able to use any tool on any cloud with any language, and they can do whatever they want. That's hard to support,” he said. “That's where DevOps got introduced, and was to basically say, ‘Hey, we're gonna put you into a little bit of a box, just enough of a box that we can start to gain control and get ahead of the game.’ The platform engineering team, in this case, they're the ones in charge of that box.” But all of that, Maltz said, doesn’t mean that “shift left” — giving devs greater responsibility for their applications — is dead. It simply means most organizations aren’t ready for it yet: “That will take a few more years of maturity within these DevOps operating models, and other things that are coming down the road.” Check out the full episode for more from Maltz, including new solutions from Dell aimed at platform engineers and SREs, and collaborations with Red Hat OpenShift.
11/22/2022 • 13 minutes, 32 seconds
Kubernetes and Amazon Web Services
Cloud giant Amazon Web Services manages the largest number of Kubernetes clusters in the world, according to the company. In this podcast recording, AWS Senior Engineer Jay Pipes discusses AWS' use of Kubernetes, as well as the company's contributions to the Kubernetes code base. The interview was recorded at KubeCon North America last month. The Difference Between Kubernetes and AWS Kubernetes is an open source container orchestration platform. AWS is one of the largest providers of cloud services. In 2021, the company generated $61.1 billion in revenue worldwide. AWS provides a commercial Kubernetes service, called the Amazon Elastic Kubernetes Service (EKS). It simplifies the Kubernetes experience by providing a managed control plane and worker nodes. In addition to providing a commercial Kubernetes service, AWS supports the development of Kubernetes by dedicating engineers to work on the open source project. "It's a responsibility of all of the engineers in the service team to be aware of what's going on in the upstream community, to be contributing to that upstream community, and making it succeed," Pipes said. "If the upstream open source projects upon which we depend are suffering or not doing well, then our service is not going to do well. And by the same token, if we can help those upstream projects be successful, that means our service is going to be more successful." What is Kubernetes in AWS? In addition to EKS, AWS has a number of other tools to help Kubernetes users. One is Karpenter, an open source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. Karpenter provides more fine-grained scaling capabilities compared with Kubernetes' built-in Cluster Autoscaler, Pipes said. Instead of using Cluster Autoscaler, Karpenter uses AWS' own Fleet API, which offers superior scheduling capabilities. 
Another tool for Kubernetes users is cdk8s, an open source software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and rich object-oriented APIs. It is similar to the AWS Cloud Development Kit (CDK), which helps users deploy applications using AWS CloudFormation, but instead of the output being a CloudFormation template, the output is a YAML manifest that can be understood by Kubernetes. AWS and Kubernetes In addition to providing open source development help to Kubernetes, AWS has offered to help defray the considerable expense of hosting the Kubernetes development and deployment process. Currently, the Kubernetes upstream build process is hosted on Google Cloud Platform, and its artifact registry is hosted in Google's container registry, totaling about 1.5TB of storage. AWS alone was paying $90,000 to $100,000 a month in egress costs just to pull the Kubernetes code into AWS-hosted infrastructure, Pipes said. AWS has been working on a mirror of the Kubernetes assets that would reside on the company's own cloud servers, thereby eliminating the Google egress costs typically borne by the Cloud Native Computing Foundation. "By doing that we completely eliminate the egress costs out of Google data centers and into AWS data centers," Pipes said.
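The cdk8s idea, defining Kubernetes objects with ordinary code and serializing them to a manifest rather than hand-writing YAML, can be shown in miniature. The sketch below is a hand-rolled stand-in, not the cdk8s API; the `deployment` helper and its arguments are invented for the example, and it emits JSON (also a valid Kubernetes manifest format) where cdk8s would emit YAML.

```python
# Generate a Kubernetes Deployment manifest from a plain function call,
# so replicas, labels, and images are parameters instead of copy-paste.
import json

def deployment(name, image, replicas=2):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

The payoff of the real framework is the same as CDK's for CloudFormation: loops, functions, type checks, and reusable abstractions replace thousands of lines of duplicated manifest text.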
11/17/2022 • 30 minutes, 42 seconds
Case Study: How SeatGeek Adopted HashiCorp’s Nomad
LOS ANGELES — Kubernetes, the open source container orchestrator, may have a big footprint in the cloud native world, but some organizations are doing just fine without it. Take, for example, SeatGeek, which runs a mobile application that serves as a primary and secondary market for event tickets. For cloud infrastructure, the 12-year-old company’s workloads — which include non-containerized applications — have largely run on Amazon Web Services. A few years ago, it turned to HashiCorp’s Nomad, a scheduler built for running apps whether they’re containerized or not. “In the beginning, we had a platform that an engineer would deploy something to, but it was very constrained. We could only give them a certain number of options that they could use; it was a very static experience,” said Jose Diaz-Gonzalez, a staff engineer at SeatGeek, in this episode of The New Stack Makers podcast. “If they wanted to scale, an application required manual toil on the platform team side, and then they can do some work. And so for us, we wanted to expose more of the platform to engineers and allow them to have more control over what it is that they were shipping, how that runtime environment was executed, and how they scale their applications.” This On the Road episode of Makers, recorded here during HashiConf, HashiCorp’s annual user conference, featured a case study of SeatGeek’s adoption of Nomad and the HashiCorp Cloud Platform. The conversation was hosted by Heather Joslyn, features editor of TNS. This episode was sponsored by HashiCorp.
Nomad vs. Kubernetes: Trade-Offs
SeatGeek essentially runs the back office for ticket sales for its partners, including Broadway productions and NFL teams like the Dallas Cowboys, providing them with “something like a software as a service,” said Diaz-Gonzalez. “All of those installations, they're single tenant, but they run roughly the same way for every single customer. 
And then on the consumer side we run a ton of different services and microservices and that sort of thing.” Though the workloads run in different languages or on different frameworks, he said, they are essentially homogeneous in their deployment patterns; SeatGeek deploys to Windows and Linux containers on the enterprise side, and to Linux on the consumer side, and deploys to both the U.S. and European Union regions. It began using Nomad to give developers more control over their applications; previously, the deployment experience had been very constrained, Diaz-Gonzalez said, resulting in what he called “a very static experience.” “To scale an application required manual toil on the platform team side, and then they can do some work,” he said. “And so for us, we wanted to expose more of the platform to engineers and allow them to have more control over what it is that they were shipping, how that runtime environment was executed and how they scale their applications.” Now, he said, SeatGeek uses Nomad “to provide basically the entire orchestration layer for our deployments.” Foregoing Kubernetes (K8s) does have its drawbacks. The cloud native ecosystem is largely built around products meant to run with K8s, rather than Nomad. The ecosystem built around HashiCorp’s product is “a much smaller community. If we need support, we lean heavily on HashiCorp Enterprise. And they're willing, on the support team, to answer questions. But if we need support on making some particular change, or using some certain feature, we might be one of the few people starting to use that feature.” “That said, it's much easier for us to manage and support Nomad and its integration with the rest of our platform, because it's so simple to run.” To learn more about SeatGeek’s cloud journey and the challenges it faced — such as dealing with security and policy — check out the full episode.
11/16/2022 • 13 minutes, 19 seconds
OpenTelemetry Properly Explained and Demoed
The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the "telemetry" — that fuel modern observability tools, with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to the folks who are new to Kubernetes (the majority of KubeCon attendees during the past years) and those who are just getting started with observability? Austin Parker, head of developer relations at Lightstep, and Morgan McLean, director of product management at Splunk, discuss during this podcast at KubeCon + CloudNativeCon 2022 how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more. At this juncture in DevOps history, there has been considerable hype around observability for developers and operations teams. More recently, much attention has been given to combining the different observability solutions in use through a single interface, and to that end, OpenTelemetry has emerged as a key standard. DevOps teams today need OpenTelemetry because they typically work with a lot of different data sources for observability processes, Parker said. “If you want observability, you need to transform and send that data out to any number of open source or commercial solutions, and you need a lingua franca to be consistent. Every time I have a host, or an IP address, or any kind of metadata, consistency is key and that's what OpenTelemetry provides.” Additionally, as a developer or an operator, OpenTelemetry serves to instrument your system for observability, McLean said. 
“OpenTelemetry does that through the power of the community working together to define those standards and to provide the components needed to extract that data among hundreds of thousands of different combinations of software and hardware and infrastructure that people are using,” McLean said. Observability and OpenTelemetry, while conceptually straightforward, do require a learning curve. To that end, the OpenTelemetry project has released a demo to help. It is intended both to help users better understand cloud native development practices and to test out OpenTelemetry, as well as Kubernetes, observability software and more, the project’s creators say. The OpenTelemetry Demo v1.0 general release is available on GitHub and on the OpenTelemetry site. The demo helps with learning how to add instrumentation to an application to gather metrics, logs and traces for observability. There is extensive guidance for open source projects like Prometheus for Kubernetes monitoring and Jaeger for distributed tracing, and the demo shows how to get acquainted with tools such as Grafana for creating dashboards. The demo also extends to scenarios in which failures are created and OpenTelemetry data is used for troubleshooting and remediation. The demo was designed for the beginner- or intermediate-level user, and can be set up to run on Docker or Kubernetes in about five minutes. “The demo is a great way for people to get started,” Parker said. “We've also seen a lot of great uptake from our commercial partners as well, who have said ‘we'll use this to demo our platform.’”
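To make the "instrument your system" idea above concrete, here is a minimal sketch of what trace instrumentation does at its core: record named spans with durations and parent/child relationships. This is deliberately not the OpenTelemetry SDK (which adds context propagation, attributes, samplers, and exporters on top); it is a toy, stdlib-only stand-in showing the shape of the data the demo teaches you to collect.

```python
import time
from contextlib import contextmanager

spans = []   # collected telemetry, standing in for an exporter backend
_stack = []  # names of currently open spans, for parent tracking

@contextmanager
def span(name):
    """Record a named span with its duration and parent span."""
    parent = _stack[-1] if _stack else None
    _stack.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _stack.pop()
        spans.append({"name": name, "parent": parent,
                      "duration_s": time.perf_counter() - start})

# Nested spans model a request fanning out into sub-operations.
with span("checkout"):
    with span("charge-card"):
        time.sleep(0.01)

print(spans)
```

Inner spans finish first, so the completed-span list ends with the root; a real tracing backend reassembles the tree from the parent links, which is exactly what you see in a Jaeger trace view.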
11/15/2022 • 18 minutes, 16 seconds
The Latest Milestones on WebAssembly's Road to Maturity
DETROIT — Even in the midst of hand-wringing at KubeCon + CloudNativeCon North America about how the global economy will make it tough for startups to gain support in the near future, the news about a couple of young WebAssembly-centric companies was bright. Cosmonic announced that it had raised $8.5 million in a seed round led by Vertex Ventures. And Fermyon Technologies unveiled both funding and product news: a $20 million Series A led by Insight Partners (which also owns The New Stack) and the launch of Fermyon Cloud, a hosted platform for running WebAssembly (Wasm) microservices. Both Cosmonic and Fermyon were founded in 2021. “A lot of people think that Wasm is this maybe up-and-coming thing, or it's just totally new thing that's out there in the future,” noted Bailey Hayes, a director at Cosmonic, in this episode of The New Stack Makers podcast. But the future is already here, she said: “It's one of technology's best-kept secrets, because you're using it today, all over. And many of the applications that we use day-to-day — Zoom, Google Meet, Prime Video — I mean, it really is everywhere. The thing that's going to change for developers is that this will be their compilation target in their build file.” In this On the Road episode of Makers, recorded at KubeCon here in the Motor City, Hayes and Kate Goldenring, a software engineer at Fermyon, spoke to Heather Joslyn, TNS’ features editor, about the state of WebAssembly. This episode was sponsored by the Cloud Native Computing Foundation (CNCF).
Wasm and Docker, Java, Python
WebAssembly, the roughly five-year-old binary instruction format for a stack-based virtual machine, is designed to execute binary code on the web and lets developers bring the performance of languages like C, C++, and Rust to web development. At Wasm Day, a co-located event that preceded KubeCon, support for a number of other languages — including Java, .NET, Python and PHP — was announced. 
At the same event, Docker also revealed that it has added Wasm as a runtime that developers can target; that feature is now in beta. Such steps move WebAssembly closer to fulfilling its promise to devs that they can “build once, run anywhere.” “With Wasm, developers shouldn't need to know necessarily that it's their compilation target,” said Hayes. But, she added, “what you do know is that you're now able to move that Wasm module anywhere, in any cloud. The same one that you built on your desktop, that might be on Windows, can go and run on an ARM Linux server.” Goldenring pointed to the findings of the CNCF’s “mini survey” of WebAssembly users, released at Wasm Day, as evidence that the technology’s use cases are proliferating quickly. “Even though WebAssembly was made for the web, the number one response — it was a little over 60% — said serverless,” she noted. “And then it said the edge, and then it said web development, and then it said IoT, and the use cases just keep going. And that's because it is this incredibly powerful, portable target that you can put in all these different use cases. It's secure, it has instant startup time.”
Worlds and Warg Craft
The podcast guests talked about recent efforts to make it easier to use Wasm and to share and reuse code, including the development of the component model, which proponents hope will simplify how WebAssembly works outside the browser. Goldenring and Hayes discussed efforts now under construction, including “world” files and Warg, a package registry for WebAssembly. (Hayes co-presented at Wasm Day on the work being done on WebAssembly package management, including Warg.) A world file, Hayes said, is a way of defining your environment. "One way to think of it is like .profile, but for Wasm, for a component. 
And so it tells me what types of capabilities I need for my Wasm module to run successfully, so the runtime can read that and give me the right stuff.” And as for Warg, Hayes said: “It's really a protocol and a set of APIs, so that we can slot it into existing ecosystems. A lot of people think of it as us trying to pave over existing technologies. And that's really not the case. The purpose of Warg is to be able to slot right in, so that you continue working in your current developer environment and experience, using the packages that you're used to, but get all of the advantages of the component model, which is this new specification we've been working on" at the W3C's WebAssembly Working Group. Goldenring added another finding from the CNCF survey: “Around 30% of people wanted better code reuse. That's a sign of a more mature ecosystem. So having something like Warg is going to help everyone who's involved in the server side of the WebAssembly space.” Listen to the full conversation to learn more about WebAssembly and how these two companies are tackling its challenges for developers.
11/10/2022 • 16 minutes, 9 seconds
Zero Trust Security and the HashiCorp Cloud Platform
Organizations are now, almost by default, becoming multi-cloud operations. No cloud service offers the full breadth of what an enterprise may need, and enterprises find themselves using more than one service, often inadvertently. HashiCorp is one company preparing enterprises for the challenges of managing more than a single cloud, through the use of a coherent set of software tools. To learn more, we spoke with Megan Laflamme, HashiCorp director of product marketing, at the HashiConf user conference, for this latest episode of The New Stack Makers podcast. We talked about zero trust computing, the importance of identity, and the general availability of HashiCorp Boundary. "In the cloud operating model, the [security] perimeter is no longer static, and you move to a much more dynamic infrastructure environment," she explained.
What is the HashiCorp Cloud Platform?
The HashiCorp Cloud Platform (HCP) is a fully managed platform offering HashiCorp software, including Consul, Vault, and other services, all connected through HashiCorp Virtual Networks (HVN). Through a web portal or via Terraform, HCP can manage log-ins, access control, and billing across multiple cloud assets. The HashiCorp Cloud Platform now offers single sign-on, reducing much of the headache of signing into multiple applications and services.
What is HashiCorp Boundary?
Boundary is the client that enables this “secure remote access” and is now generally available to users of the platform. It is a remote access client that manages fine-grained authorizations through trusted identities, and it provides session connection, establishment, and credential issuance and revocation. "With Boundary, we enable a much more streamlined workflow for permitting access to critical infrastructure, where we have integrations with cloud providers or service registries," Laflamme said. 
HCP Boundary is a fully managed version of HashiCorp Boundary that runs on the HashiCorp Cloud. With Boundary, the user signs on once, and everything else is handled beneath the floorboards, so to speak. Identities for applications, networks, and people are handled through HashiCorp Vault and HashiCorp Consul, and every action is authorized and documented. Boundary authenticates and authorizes users by drawing on existing identity providers (IdPs) such as Okta, Azure Active Directory, and GitHub. Consul authenticates and authorizes access between applications and services. This way, networks aren’t exposed, and there is no need to issue and distribute credentials. Dynamic credential injection for user sessions is done with HashiCorp Vault, which injects single-use credentials for passwordless authentication to the remote host.
What is Zero Trust Security?
With zero trust security, users are authenticated at the service level, rather than through a centralized firewall, which becomes increasingly infeasible in multicloud designs. In the industry, there is a shift “from high trust IP-based authorization in the more static data centers and infrastructure, to the cloud, to a low trust model where everything is predicated on identity,” Laflamme explained. This approach does require users to sign on to each individual service, in some form, which can be a headache for those (e.g., developers and system engineers) who sign on to a lot of apps in their daily routine.
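The single-use, short-lived credential pattern described above can be sketched in a few lines. This is a hypothetical, in-memory broker, not Vault's actual API: it only shows why the pattern is safe — a token that expires quickly and is revoked the moment it is redeemed leaves nothing long-lived to steal or replay.

```python
import secrets
import time

class CredentialBroker:
    """Toy sketch of single-use, time-limited credential issuance
    (the idea behind Vault's dynamic secrets; names are invented)."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._live = {}  # token -> (identity, expiry timestamp)

    def issue(self, identity):
        token = secrets.token_urlsafe(32)  # unguessable random token
        self._live[token] = (identity, time.time() + self.ttl)
        return token

    def redeem(self, token):
        # Single use: the token is removed the moment it is presented.
        entry = self._live.pop(token, None)
        if entry is None:
            return None  # unknown, already used, or revoked
        identity, expiry = entry
        return identity if time.time() < expiry else None

broker = CredentialBroker(ttl_seconds=3600)
t = broker.issue("deploy-bot")
assert broker.redeem(t) == "deploy-bot"  # first use succeeds
assert broker.redeem(t) is None          # replay is rejected
```

A leaked token in this model is worthless after one use or one TTL window, which is the property that lets Boundary and Vault avoid distributing standing credentials at all.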
11/9/2022 • 13 minutes, 55 seconds
How Do We Protect the Software Supply Chain?
DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to Aeva Black, an open source veteran of 25 years. “And now we're playing catch-up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less-than-ideal practices have taken root in the past five years. We're trying to help educate everybody now.” Chris Short, senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody's job, it's nobody's job,” said Short, founder of the DevOps'ish newsletter. “We've gone through this evolution: just develop secure code, and you'll be fine,” he said. “There's no such thing as secure code. There are errors in the underlying languages sometimes … There's no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.” Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast. Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City. This podcast episode was sponsored by AWS.
‘Trust, but Verify’
For our podcast guests, “trust, but verify” is a slogan more organizations need to live by. A lot of the security problems that plague the software supply chain, Black said, stem from companies — especially smaller organizations — “just pulling software directly from upstream. They trust a build someone's published, they don't verify, they don't check the hash, they don't check a signature, they just download a Docker image or binary from somewhere and run it in production.” That practice, Black said, “exposes them to anything that's changed upstream. 
If upstream has a bug or a network error in that repository, then they can't update as well.” Organizations, they said, should maintain an internal staging environment where they can verify code retrieved from upstream before pushing it to production — or rebuild it, in case a vulnerability is found, and push the fix back upstream. That build environment should also be firewalled, Short added: “Create those safeguards of, ‘Oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen.’” Being able to rebuild code that has vulnerabilities to make it more secure — or even being able to identify what’s wrong, and quickly — are skills that not enough developers have, the podcast guests noted. More automation is part of the solution, Short said. But, he added, by itself it's not enough. “Continuous learning is what we do here as a job," he said. "If you're kind of like, this is my skill set, this is my toolbox, and I'm not willing to grow past that, you’re setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, ‘I need to change something across my entire environment. How do I do that?’”
GitBOM and the ‘Signal-to-Noise Ratio’
As both Black and Short said during our conversation, there’s no such thing as perfectly secure code. And even such highly touted tools as software bills of materials, or SBOMs, fall short of giving teams all the information they need to determine code’s safety. “Many projects have dependencies 10, 20, 30 layers deep,” Black said. “And so if your SBOM only goes one or two layers, you just don't have enough information to know if there's a vulnerability five or 10 layers down.” Short brought up another issue with SBOMs: “There's nothing you can act on. 
The biggest thing for Ops teams or security teams is actionable information.” While Short applauded recent efforts to improve user education, he said he’s pessimistic about the state of cybersecurity: “There’s not a lot right now that's getting people actionable data. It's a lot of noise still, and we need to refine these systems well enough to know that, like, just because I have Bash doesn't necessarily mean I have every vulnerability in Bash.” One project aimed at addressing the situation is GitBOM, a new open source initiative. “Fundamentally, I think it’s the best bet we have to provide really high-fidelity signal to defense teams,” said Black, who has worked on the project and produced a white paper on it this past January. GitBOM — the name will likely be changed, Black said — takes the underlying technology that Git relies on, using a hash table to track changes in a project's code over time, and reapplies it to track the supply chain of software. The technology is used to build a hash table connecting all of the dependencies in a project, building what GitBOM’s creators call an artifact dependency graph. “We have a team working on a couple of proofs of concept right now,” Black said. “And the main effect I'm hoping to achieve from this is a small change in every language and compiler … then we can get traceability across the whole supply chain.” In the meantime, Short said, there’s plenty of room for broader adoption of the best practices that currently exist. “Security vendors, I feel like, need to do a better job of moving teams in the right direction as far as action,” he said. At DevOps Chicago this fall, Short said, he ran an open space session in which he asked participants for their pain points related to working with containers. “And the whole room admitted to not using least privilege, not using policy engines that are available in the Kubernetes space,” he said. 
“So there's a lot of complexity that we’ve got to help people understand the need for it, and how to implement it.” Listen to the whole podcast to learn more about the state of software supply chain security.
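The Git-style hashing idea behind the artifact dependency graph described above can be shown in miniature. This is a simplified sketch, not the actual GitBOM gitoid format: each artifact's identifier commits to both its own content and its dependencies' identifiers, so a change anywhere deep in the chain changes every downstream identifier — exactly the high-fidelity signal defenders want when a vulnerability sits many layers down.

```python
import hashlib

def artifact_id(content: bytes, dep_ids=()):
    """Content-address an artifact plus its dependency set
    (simplified illustration of GitBOM's idea, not its real format)."""
    h = hashlib.sha256()
    h.update(content)
    for dep in sorted(dep_ids):  # sort so the dependency *set* is what matters
        h.update(dep.encode())
    return h.hexdigest()

# A toy two-level supply chain: an app that depends on a library.
libc = artifact_id(b"libc source")
app_v1 = artifact_id(b"app source", [libc])

# Patching the dependency changes the app's identifier too, even though
# the app's own source is untouched - deep changes are always visible.
patched_libc = artifact_id(b"libc source, patched")
app_v2 = artifact_id(b"app source", [patched_libc])
assert app_v1 != app_v2
```

Contrast this with a shallow SBOM: here, matching a single top-level identifier against a database of known-vulnerable identifiers implicitly checks the entire transitive closure of dependencies.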
11/8/2022 • 21 minutes, 14 seconds
Ukraine Has a Bright Future
Ukraine has a bright future. It will soon be time to rebuild, but rebuilding requires more than the resources needed to construct a hydroelectric plant or a hospital. It involves software and an understanding of how to use it. Ihor Dvoretskyi, developer advocate at the Cloud Native Computing Foundation (CNCF), and Dima Zakhalyavko, board member at Razom for Ukraine, came to KubeCon in Detroit to discuss the push to provide training materials for Ukrainians as they rebuild from the destruction caused by Russia's invasion. Razom, a nonprofit, amplifies the voices of Ukrainians in the United States and helps with humanitarian efforts and IT training. Razom formed before Russia's 2014 invasion of the Crimean peninsula of Ukraine, Zakhalyavko said. Since the full-scale invasion earlier this year, Razom has had an understandable increase in donations and volunteers helping in its efforts. Razom provides individual first aid kits for soldiers, tourniquets, and medical supplies, but also IT training: materials to train the next generation of IT professionals, translated into Ukrainian. The Linux Foundation and the Cloud Native Computing Foundation (CNCF) are working with Razom for Ukraine on its Project Veteranius to provide access to technology education for Ukrainian veterans, their families, and Ukrainians in need. "We've realized that basically, we can benefit from the Linux Foundation training portfolio, including the most popular courses like the intro to Linux, or intro to Kubernetes, that can be pretty much easily translated to Ukrainian," Dvoretskyi said. "And in this way, we'll be able to offer the educational materials in their native language." Ukraine has a pretty bright future. "We just need to get through these difficult times," Dvoretskyi said. "But in the future, it's clear the tech industry in Ukraine is growing. And people are needed for that." Every effort matters, Dvoretskyi said. 
"A strong, democratic Ukraine – that's essentially the vision – a European country, a truly European country, that is whole in terms of territorial integrity," Zakhalyavko said. "The future is in technology. And if we can help enable that – in any case, I think that's a win for Ukraine and the world. Technology can make the world a better place."
11/4/2022 • 15 minutes, 40 seconds
Redis is not just a Cache
Redis is not just a cache. It is used in the broader cloud native ecosystem, fits into many service-oriented architectures, and simplifies the deployment and development of modern applications, according to Madelyn Olson, a principal engineer at AWS, during an interview on The New Stack Makers at KubeCon North America in Detroit. Olson said that people often have a primary backend database or some other workflow that takes a long time to run, and they store the intermediate results in Redis, which provides lower latency and higher throughput. "But there are plenty of other ways you can use Redis," Olson said. "One common way is what I like to call a data projection API. So you basically take a bunch of different sources of data, maybe a Postgres database, or some other type of database like Cassandra, and you project that data into Redis. And then you just pull from the Redis instance. This is a really great use case for low latency applications." Redis creator Salvatore Sanfilippo's approach provides a lesson in how to contribute to open source, which Olson recounted in our interview. Olson said Sanfilippo was the only maintainer with write permissions for the project, which meant contributors would have to engage quite a bit to get a response from him. So Olson did what open source contributors do when they want to get noticed: she "chopped wood and carried water," a phrase that in open source circles means taking care of the unglamorous tasks that need attention. That helped Sanfilippo scale himself a bit and helped Olson get involved in the project. It is daunting to get into open source development work, Olson said. A new contributor will face people with a lot more experience and may be afraid to open issues. But if a contributor has a use case and helps with documentation or a bug, then most open source maintainers are willing to help. "One big problem throughout open source is, they're usually resource constrained, right?" Olson said. 
"Open source is oftentimes a lot of volunteers. So they're usually very willing to get more people to help with the project." What's it like now working at AWS on open source projects? Things have changed a lot since Olson joined the company in 2015, she said. APIs were proprietary back in those days; today, it's almost the opposite of how it used to be. Keeping something internal now requires approval, Olson said, and internal differentiation is not needed. For example, open source Redis is what matters most, with AWS providing the managed service on top.
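The "data projection" pattern Olson describes above can be sketched simply: merge rows from several systems of record into one low-latency key-value view, keyed by the lookup the application actually performs. In this illustrative stdlib-only sketch a plain dict stands in for Redis, and the source rows are invented sample data.

```python
# Two hypothetical systems of record, standing in for Postgres and Cassandra.
postgres_rows = [{"user_id": 1, "name": "Ada"}, {"user_id": 2, "name": "Bo"}]
cassandra_rows = [{"user_id": 1, "last_seen": "2022-11-01"}]

# Project both sources into one view keyed by user_id. In production the
# writes would go to Redis (e.g. HSET user:1 ...) instead of a dict.
projection = {}
for row in postgres_rows:
    projection.setdefault(row["user_id"], {}).update(name=row["name"])
for row in cassandra_rows:
    projection.setdefault(row["user_id"], {}).update(last_seen=row["last_seen"])

# Reads now hit only the projection, never the upstream databases.
print(projection[1])  # {'name': 'Ada', 'last_seen': '2022-11-01'}
```

The trade-off is the usual one for derived views: the projection must be refreshed (or invalidated) when a source changes, in exchange for reads that never touch the slower upstream stores.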
11/3/2022 • 15 minutes, 37 seconds
Case Study: How BOK Financial Managed Its Cloud Migration
LOS ANGELES — When you’re deploying a business-critical application to the cloud, it’s nice to not need the “war room” you’ve assembled to troubleshoot Day 1 problems. When BOK Financial, a financial services company that’s been moving apps to the cloud over the last three years, was launching its largest application on the cloud, its engineers supported it with a “war room type situation, monitoring everything,” according to BOK’s Andrew Rau. “After the first day, the system just scaled like it was supposed to … and they're like, ‘OK, I guess we don't need this anymore.’” In this On the Road episode of The New Stack’s Makers podcast, Rau, BOK’s vice president and manager, cloud services, offered a case study of his organization’s cloud journey over the past few years, and the role HashiCorp’s Vault and Cloud Platform played in it. Rau spoke to Heather Joslyn, features editor of The New Stack, about the challenges of moving a very traditional organization in a highly regulated industry to the cloud while maintaining tight security and resilience. This episode of Makers, recorded in October at HashiConf in Los Angeles, was sponsored by HashiCorp.
Upskilling for ‘Everything as Code’
In late 2019, Rau said, BOK Financial deployed one small application to the cloud, an initial step on its digital transformation journey. It’s been building out its cloud infrastructure ever since, and soon ran into the limits of each cloud provider’s native tooling. “Where we struggled was we didn't want to deploy and manage our clouds in different ways,” he said. “We didn't want our cloud engineers to know just one cloud provider, and their technology and their tech stack. So that's when we really started looking at how else can we do this. And that's when Terraform was a great option for us.” In 2020, BOK Financial began using HashiCorp’s open source Terraform to automate the creation of cloud infrastructure. “We made a conscious effort to really focus on automation,” Rau said. 
“We didn't want to do things manually, which is really that traditional data center way, how we've done things for decades.” In tandem with adopting Terraform, BOK Financial’s teams began using GitOps processes for CI/CD. But doing “everything as code,” as Rau put it, “required a lot of upskilling for some of our staff, because they've never done version control or automation capabilities. So in addition to learning Terraform, and these other cloud concepts, they had to learn all of that.” The challenge, though, has been worth it: “It's really empowered us to move a lot faster, and give our application teams the ability to deploy at their pace, versus waiting on other teams.”
Seeking Automated Security
It took about a year, Rau said, to get BOK Financial’s developers comfortable using Terraform, largely because many were new to version control procedures and strategies. Because the company works in a highly regulated industry, handling customers’ financial data, security is of utmost importance. “We had user credentials for our clouds, and we had them separated out based on the type of deployment that [developers] were doing,” said Rau. “But it wasn't easy for us to rotate those credentials on a frequent basis. And so we really felt the need to make these short, limited tokens, no more than an hour, for that deployment. And so that's where we looked at Vault.” HashiCorp’s secrets storage and management tool proved an easy add-on to Terraform. “That's really given us the ability to have effectively no long-lived credentials out there,” Rau said. “And secure our environment even more.” And because BOK’s teams don’t want to manage Vault and its complexities themselves, the company has opted for the HashiCorp Cloud Platform to manage it. For other organizations on a cloud native journey, Rau recommended taking time to do things right. “We went back to rework some things periodically, because we learned something too late,” he said. 
Also, he advised, keep stakeholders in the loop: “You need to stay in front of the communication with business partners, IT leaders, that it's going to take longer to set this up. But once you do, it's incredible.” Check out the podcast to learn more about BOK Financial's cloud native transformation.
11/2/2022 • 13 minutes, 34 seconds
Devs and Ops: Can This Marriage Be Saved?
DETROIT — Are we still shifting left? Is it realistic to expect developers to take on the burdens of security and infrastructure provisioning, as well as writing their applications? Is platform engineering the answer to saving the DevOps dream? Bottom line: Do Devs and Ops really talk to each other — or just passive-aggressively swap Jira tickets? These are some of the topics explored by a panel, “Devs and Ops People: It’s Time for Some Kubernetes Couples Therapy,” convened by The New Stack at KubeCon + CloudNativeCon North America, here in the Motor City, on Thursday. Panelists included Saad Malik, chief technology officer and co-founder of Spectro Cloud; Viktor Farcic, developer advocate at Upbound; Liz Rice, chief open source officer at Isovalent; and Aeris Stewart, community manager at Humanitec. The latest TNS pancake breakfast was hosted by Alex Williams, The New Stack’s founder and publisher, with Heather Joslyn, TNS features editor, fielding questions from the audience. The event was sponsored by Spectro Cloud.
Alleviating Cognitive Load for Devs
A big pain point in the DevOps structure — the marriage of frontend and backend in cross-functional teams — is that all devs aren’t necessarily willing or able to take on all the additional responsibilities demanded of them. A lot of organizations have “copy-pasted this one-size-fits-all approach to DevOps,” said Stewart. “If you look at the tooling landscape, it is rapidly growing, not just in terms of the volume of tools, but also the complexity of the tools themselves,” they said. “And developers are in parallel expected to take over an increasing amount of the software delivery process. And all of this, together, is too much cognitive load for them.” This situation also has an impact on operations engineers, who must help alleviate developers’ burdens. 
“It’s causing a lot of inefficiencies of these organizations,” they added, “and a lot of the same inefficiencies that DevOps was supposed to get rid of.” Platform engineering — in which operations engineers provide devs with an internal developer platform that abstracts away some of the complexity — is “a sign of hope,” Stewart said, for organizations for whom DevOps is proving tough to implement. The concept behind DevOps is “about making teams self-sufficient, so they have full control of their application, right from the idea until it is running in production,” said Farcic. But, he added, “you cannot expect them to have 17 years of experience in Kubernetes, and AWS and whatnot. And that's where platforms come in. That's how other teams, who have certain expertise, provide services so that those … developers and operators can actually do the work that they're supposed to do, just as operators today are using services from AWS to do their work. So what AWS is to Ops, to me, that's what internal developer platforms are to application developers.”
Consistency vs. Innovation
Platform engineering has been a hot topic in DevOps circles (and at KubeCon) but the definition remains a bit fuzzy, the panelists acknowledged. (“In a lot of organizations, ‘platform engineering’ is just a fancy new way of saying ‘Ops,’” said Rice.) The audience served up questions to the panel about the limits of the DevOps model and how platform engineering fits into that discussion. One audience member asked about balancing the need to provide a consistent platform to an organization’s developers while also allowing devs to customize and innovate. Malik said that both consistency and innovation are possible in a platform engineering structure.
“An organization will decide where they want to be able to provide that abstraction,” he said, adding, “When they think about where they want to be as a whole, they could think about, Hey, when we provide our platform, we're going to be providing everything from security to CI/CD from GitHub, from repository management, this is what you will get if you use our IDP or platform itself.” But “there are going to be unique use cases,” Malik added, such as developers who are building a new blockchain technology or running WebAssembly. “I think it's okay to give those development teams the ability to run their own platform, as long as you tell them, these are the areas that you have to be responsible for,” he said. “You're responsible for your own security, your own backup, your own retention capabilities.” One audience member mentioned “Team Topologies,” a 2019 engineering management book by Manuel Pais and Matthew Skelton, and asked the panel if platform engineering is related to DevOps in that it’s more of an approach to engineering management than a destination. “Platform engineering is in the budding stage of its evolution,” said Stewart. “And right now, it's really focused on addressing the problems that organizations ran into when they were implementing DevOps.” They added, “I think as we see the community come together more and get more best practices about how to develop platforms, you will see it become more than just a different approach to DevOps and become something more distinct. But I don't think it's there quite yet.” Check out the full panel discussion to hear more from our DevOps “counseling session.”
11/1/2022 • 42 minutes, 9 seconds
Latest Enhancements to HashiCorp Terraform and Terraform Cloud
What is Terraform?
Terraform is HashiCorp’s flagship software. The open source tool provides a way to define IT resources — such as monitoring software or cloud services — in human-readable configuration files. These files, which serve as blueprints, can then be used to automatically provision the systems themselves. Kubernetes deployments, for instance, can be streamlined through Terraform. "Terraform basically translates what was codified in your configuration, and provisions it to that desired end state," explained Meghan Liese, HashiCorp vice president of product and partner marketing, in this podcast and video recording, recorded at the company's user conference, HashiConf 2022, held this month in Los Angeles. In this interview, Liese discusses the latest enhancements to Terraform and Terraform Cloud, a managed service offering that is part of the HashiCorp Cloud Platform.
Why Should Developers be Interested in Terraform?
Typically, DevOps teams or system administrators use Terraform to provision infrastructure, but there is also growing interest in allowing developers to do it themselves, in a self-service fashion, Liese explained. Multicloud skills are in short supply, concluded the 2022 HashiCorp State of Cloud Strategy Survey, so making the provisioning process easier could bring more developers on board, the company reckons. A Terraform self-service model, which was introduced earlier this year, could “cut down on the training an organization would need to do to get developers up to speed on using the infrastructure-as-code software,” Liese said. In this “no code” setup, developers can pick from a catalog of no-code-ready modules, which can be deployed directly to workspaces. No need to learn the HCL configuration language. And administrators will no longer have to answer the same “how-do-I-do-this-in-HCL?” queries.
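As a minimal sketch of the "blueprint" idea Liese describes (the provider, resource names, and AMI ID below are illustrative placeholders, not taken from the episode), a Terraform configuration file declares a desired end state and lets the tool provision it:

```hcl
# Hypothetical desired state: one small EC2 instance.
# Terraform compares this blueprint against real infrastructure
# and provisions whatever is missing or different.
resource "aws_instance" "app" {
  ami           = "ami-00000000000000000" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-app"
  }
}
```

Running `terraform apply` against a file like this creates the instance; re-running it when nothing has changed makes no modifications, which is the declarative behavior described in the quote above.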
The new console interface aims to greatly expand the use of Terraform. The company has been offering self-service options for a while, by way of an architecture that allows modules to be reused through the private registry for Terraform Cloud and Terraform Enterprise.
What is the Moved Code Block and Why is it Important?
The recent release of Terraform 1.3 came with the promise to greatly reduce the amount of code HCL jockeys must manage, through the improvement of the moved code block. Actually, moved has been available since Terraform 1.1, but some kinks were worked out for this latest release. What moved does is provide the ability to refactor resources within a Terraform configuration file, moving large code blocks off as separate modules, where they can be discovered through a public or private registry.
What is Continuous Validation?
With the known state of a system captured in Terraform, it is a short step to check that the actual running system is identical to the desired state captured in HCL. Many times “drift” can occur, as administrators, or even the apps themselves, make changes to the system. Especially in regulated environments, such as hospitals, it is essential that a system is in a correct state. Earlier this year, HashiCorp added Drift Detection to Terraform Cloud to continuously check infrastructure state, detect changes, provide alerts and offer remediation if that option is chosen. Now another update, Continuous Validation, expands these checks to include user assertions, or post-conditions, as well. One post-condition may be something like ensuring that certificates haven’t expired. If they do, the software can alert the admin to update the certs. Another condition might check for new container images, which may have been updated in response to a security patch.
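As a rough sketch of the refactoring pattern described above (the resource and module names here are hypothetical), a moved block tells Terraform that a resource's address has changed, so the tool updates its state record in place instead of destroying and recreating the resource:

```hcl
# Hypothetical refactor: a top-level resource has been relocated
# into a reusable "web_server" module.
moved {
  from = aws_instance.web                   # old address, top level
  to   = module.web_server.aws_instance.web # new address, inside the module
}
```

On the next `terraform plan`, Terraform treats this as a rename in state rather than a destroy-and-create, which is what makes moving large code blocks into registry modules safe.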
10/26/2022 • 17 minutes, 52 seconds
How ScyllaDB Helped an AdTech Company Focus on Core Business
GumGum is a company whose platform serves up online ads related to the context in which potential customers are already shopping or searching. (For instance: it will send ads for Zurich restaurants to someone who’s booked travel to Switzerland.) To handle that granular targeting, it relies on its proprietary machine learning platform, Verity. “For all of our publishers, we send a list of URLs to Verity,” according to Keith Sader, GumGum’s director of engineering. “Verity goes in and basically categorizes those URLs as different [Interactive Advertising Bureau] categories. So the IAB has tons of taxonomies, based on autos, based upon clothing, based upon entertainment. And then that's how we do our targeting.” Verity’s targeting data is stored in DynamoDB, but the rest of GumGum’s data is stored in managed MySQL, and its daily tracking data is stored in ScyllaDB, a database designed for data-intensive applications. Scylla, Sader said, helps his company avoid serving audiences the same ads over and over again, by keeping track of which ads customers have already seen. “That’s where Scylla comes into the picture for us,” he said. “Scylla is our rate limiter on ad serving.” In this episode of The New Stack’s Makers podcast, Sader and Dor Laor, CEO and co-founder of ScyllaDB, told how GumGum has used ScyllaDB to shift more IT resources to its core business and keep it from repeating ads to audiences that have already seen them, no matter where they travel. This case study episode of Makers, hosted by Heather Joslyn, TNS features editor, was sponsored by ScyllaDB.
‘Where Do We Spend Our Limited Funds?’
Before adding ScyllaDB to its stack, Sader said, “We had a Cassandra-based system that some very smart people put in. But Cassandra relies upon you to have an engineering staff to support it. “That’s great.
But like many types of systems, managing Cassandra databases is not really what our business makes money at.” GumGum was hosting its Cassandra database, installed on Amazon Web Services, by itself — and the drain on resources brought the company’s teams to a crossroads, Sader said. “Where do we spend our limited funds? Do we spend it on Cassandra maintenance? Or do we hire someone to do it for us? And that’s really what determined the switch away from a sort of self-installed, self-managed Cassandra to another provider.” A core issue for GumGum, Sader said, was making sure that it wasn’t over-serving consumers, even as they moved around the globe. “If you see an ad in one place, we need to make sure, if you fly across the country, you don’t see it again,” he said. That’s an issue Cassandra solved for his company, he said. Because ScyllaDB is a drop-in replacement for Apache Cassandra, it also helped prevent over-serving in all regions of the globe — thus preventing GumGum from losing money. In addition to managing its database for GumGum and other customers, Laor said that an advantage ScyllaDB brings is an “always on” guarantee. “We have a big legacy of infrastructure that's supposed to be resilient,” he said. “For example, every implementation of ours has configurable consistency, so you can have multiple replicas.” Laor added, “Many, many times organizations have multiple data centers. Sometimes it's for disaster recovery, sometimes it's also to shorten the latency and be closer to the client.” Replica databases located in data centers that are geographically distributed, he said, protect against failure in any one data center.
Seeing Results
Bringing ScyllaDB to GumGum was not without challenges, both Sader and Laor said. When ScyllaDB is added to an organization’s stack, Laor said, it likes to start with as small a deployment as possible. “But in the GumGum case, all of these clients were new processes,” Laor said.
“So hundreds or thousands of processes, all trying to connect to the database, it's really a connection storm.” Scylla’s team created a private version of its database to work on the problem and eventually solved it: “We had to massage the algorithm and make sure that all of the [open source] code committers upstream are summing it up.” It ultimately designed an admission control mechanism that measures the amount of parallel requests the distributed database is handling, and slows down requests that arrive for the first time from a new process. “We tried to have the complexity on our end,” Laor said. GumGum has seen the results of handing off that complexity and toil to a managed database. “We have pretty much reduced our entire operations effort with Scylla to almost nothing,” Sader said. He added, “We're coming into our busy point of the year; ads really get picked up in Q4. So we reach out and we go, ‘Hey, we need more nodes in these regions, can you make that happen for us?’ They go, ‘Yep.’ Give us the things, we pay the money. And it happens.” In 2021, Sader said, “we increased our volume by probably 75% plus 50% over our standard. The toughest thing to do in this industry is make things look easy. And Scylla helped us make ad serving look easy.” Check out the podcast to get more detail about GumGum’s move to a managed database.
10/20/2022 • 26 minutes, 51 seconds
Terraform's Best Practices and Pitfalls
Wix is a cloud-based development site for making HTML5 websites and mobile sites with drag-and-drop tools. It is suited for the beginning user or the advanced developer, said Hila Fish, senior DevOps engineer for Wix, in an interview for The New Stack Makers at HashiCorp’s HashiConf Global conference in Los Angeles earlier this month. Our questions for Fish focused on Terraform, the open source infrastructure-as-code software tool:
How has Terraform evolved in use since Fish started using it in 2018?
How does Wix make the most of Terraform to scale its infrastructure?
What are some best practices Wix has used with Terraform?
What are some pitfalls to avoid with Terraform?
What is the approach to scaling across teams and avoiding refactoring, to keep the integrations elegant and working?
Fish started using Terraform in an ad-hoc manner back in 2018. Over time she has learned how to use it for scaling operations. “If you want to scale your infrastructure, you need to use Terraform in a way that will allow you to do that,” Fish said. Terraform can be used ad hoc to create a machine as a resource, but scale comes with enabling infrastructure that allows engineers to develop templates that get reused across many servers. “You need to use it in a way that will allow you to scale up as much as you can,” Fish said. Fish said best practices start with how the Terraform code base is structured. Much of it comes down to the teams and how Terraform gets implemented. Engineers each have their own way of working; standard practices can help. In onboarding new teams, a structured code base can be beneficial: new teams onboard and use modules already in the code base. And what are some of the pitfalls of using Terraform? We get to that in the recording, along with more about integrations, why Wix is still on version 0.13, and some new capabilities for developers to use Terraform.
Users have historically needed to learn the HashiCorp configuration language (HCL) to use Terraform. At Wix, Fish said, the company is implementing Terraform on the backend with a UI that developers can use without needing to learn HCL.
10/19/2022 • 14 minutes, 14 seconds
How Can Open Source Help Fight Climate Change?
DUBLIN — The mission of Linux Foundation Energy — a collaborative, international effort by power companies to help move the world away from fossil fuels — has never seemed more urgent. In addition to the increased frequency and ferocity of extreme weather events like hurricanes and heat waves, the war between Russia and Ukraine has oil-dependent countries looking ahead to a winter of likely energy shortages. “I think we need to go faster,” said Benoît Jeanson, an enterprise architect at RTE, the French electricity transmission system operator. He added, “What we are doing with the Linux Foundation Energy is really something that will help for the future, and we need to go faster and faster.” For this On the Road episode of The New Stack’s Makers podcast, recorded at Open Source Summit Europe here, we were joined by two guests who work in the power industry and whose organizations are part of LF Energy. In addition to Jeanson, this episode featured Jonas van den Bogaard, a solution architect and open source ambassador at Alliander, an energy network company that provides energy transport and distribution to a large part of the Netherlands. Van den Bogaard also serves on the technical advisory council of LF Energy. Heather Joslyn, features editor of TNS, hosted this conversation.
18 Open Source Projects
LF Energy, started in 2018, now includes 59 member organizations, including cloud providers Google and Microsoft, enterprises like General Electric, and research institutions like Stanford University. It currently hosts 18 open source projects; the podcast guests encouraged listeners to check them out and contribute to them. Among them: OpenSTEF, automated machine learning pipelines that deliver accurate forecasts of the load on the energy grid 48 hours ahead of time. “It gives us the opportunity to take action in time to prevent the maximum grid capacity [from being] reached,” said van den Bogaard. “That’s going to prevent blackouts and that sort of thing.
And also, another side: it makes us able to add renewable energies to the grid.” Jeanson said that the open source projects aim to cover “every level of the stack. We also have tools that we want to develop at the substation level, in the field.” Among them: OperatorFabric. Written in Java and based on the Spring framework, OperatorFabric is a modular, extensible platform for systems operators, including several features aimed at helping utility operators. It helps operators coordinate the many tasks and alerts they need to keep track of by aggregating notifications from several applications into a single screen. “Energy is of importance for everyone,” said van den Bogaard. “And especially moving to cleaner and renewable energy is key for us all. We have great minds all around the world. And I really believe that we can achieve that. The best way to do that is to combine the efforts of all those great minds. Open source can be a great enabler of that.”
Cultural Education Needed
But persuading decision-makers in the power industry to participate in building the next generation of open source solutions can be a challenge, van den Bogaard acknowledged. “You see that the energy domain has been there for a long time, and has been quite stable, up to like 10 years ago,” he said. In such a tradition-bound culture, change is hard. In the cloud era, he added, a lot of organizations “need to digitalize and focus more on IT, and those capabilities are new. And also, open source, for that matter, is also a very new concept.” One obstacle to the energy industry taking more advantage of open source tools, Jeanson noted, is security: “Some organizations still see open source to be a potential risk.” Getting them on board, he said, requires education and training. He added, “Vendors need to understand that open source is an opportunity that they should not be afraid of. That we want to do business with them based on open source. We just need to accelerate the momentum.”
Check out the whole episode to learn more about LF Energy’s work.
10/18/2022 • 12 minutes, 49 seconds
KubeCon+CloudNativeCon 2022 Rolls into Detroit
It's that time of the year again, when cloud native enthusiasts and professionals assemble to discuss all things Kubernetes. KubeCon+CloudNativeCon 2022 is being held later this month in Detroit, October 24-28. In this latest edition of The New Stack Makers podcast, we spoke with Priyanka Sharma, general manager of the Cloud Native Computing Foundation — which organizes KubeCon — and CERN computer engineer and KubeCon co-chair Ricardo Rocha. For this show, we discussed what we can expect from the upcoming event. This year, there will be a focus on Kubernetes in the enterprise, Sharma said. "We are reaching a point where Kubernetes is becoming the de facto standard when it comes to container orchestration. And there's a reason for it. It's not just about Kubernetes. Kubernetes spawned the cloud native ecosystem, and the heart of the cloud native movement is building fast, resilient, observable software that meets customer needs. So ultimately, it's making you a better provider to your customers, no matter what kind of business you are." Of this year's topics, security will be a big theme, Rocha said. Technologies such as Falco and Cilium will be discussed. Linux kernel add-on eBPF is popping up in a lot of topics, especially around networking. Observability and hybrid deployments also weigh heavily on the agenda. "The number of solutions [around hybrid] is quite large, so it's interesting to see what people come up with," he said. In addition to KubeCon itself, this year there are a number of co-located events, held during or before the conference itself. Some are hosted by CNCF, while others are hosted by other companies, such as Canonical. They include Network Application Day, BackstageCon, CloudNative eBPF Day, CloudNativeSecurityCon, CloudNative WASM Day, Data-on-Kubernetes Day, EnvoyCon, gRPCConf, KNativeCon, Spinnaker Summit, Open Observability Day, Cloud Native Telco Day, Operator Day, and the Continuous Delivery Summit, among others.
What's amazing is not only the number of co-located events, but the high quality of talks being held there. "Co-located events are a great way to know what's exciting to folks in the ecosystem right now," Sharma said. "Cloud native has really become the scaffolding of future progress. People want to build on cloud native, but have their own focus areas." WebAssembly (WASM) is a great example of this. "In the beginning, you wouldn't have thought of WebAssembly as part of the cloud native narrative, but here we are," Sharma said. "The same thinking from professionals who conceptualized cloud native in the beginning are now taking it a step further." "There's a lot of value in co-located events, because you get a group of people for a longer period in the same room, focusing on one topic," Rocha said. Other topics discussed in the podcast include the choice of Detroit as a conference hub, the fun activities that CNCF have planned in between the technical sessions, surprises at the keynotes, and so much more! Give it a listen.
10/13/2022 • 27 minutes, 1 second
Armon Dadgar on HashiCorp's Practitioner Approach
Armon Dadgar and Mitchell Hashimoto are long-time open source practitioners, and it's that practitioner focus they established as core to their approach when they started HashiCorp about ten years ago. Today, HashiCorp is a publicly traded company. Before they started HashiCorp, Dadgar and Hashimoto were students at the University of Washington. Through college and afterward, they cut their teeth on open source, learning how to build software in open source communities. HashiCorp's business is an outgrowth of the two founders' work as practitioners in those communities, said Dadgar, co-founder and CTO of HashiCorp, in an interview at the HashiConf conference in Los Angeles earlier this month. Both of them wanted to recreate the asynchronous collaboration that they loved so much about the open source projects they worked on as practitioners, Dadgar said. They knew that they did not want bureaucracy or a hard-to-follow roadmap. Dadgar cited Terraform as an example of their approach. Terraform is HashiCorp's open source infrastructure-as-code software tool, and it reflects the company's model of controlling its core while providing a good user experience. That experience goes beyond community development and into the application architecture itself. "If you're a weekend warrior, and you want to contribute something, you're not gonna go read this massively complicated codebase to understand how it works, just to do an integration," Dadgar said. "So instead, we built a very specific integration surface area for Terraform." The integration is about 200 lines of code, Dadgar said. They call this their core-plus-plugin model, with a prescriptive scaffold, examples of how to integrate, and the SDK. This "golden path" to integration is how the company has developed a program that today has about 2,500 providers. The HashiCorp open source model relies on its core and plugin model. On Twitter, one person asked why HashiCorp isn't a proprietary company.
Dadgar referred to HashiCorp's open source approach when asked that question in our interview. "Oh, that's an interesting question," Dadgar said. "You know, I think it'd be a much harder company to scale. And what I mean by that is, if you take a look at like a Terraform community or Vault – there's thousands of contributors. And that's what solves the integration problem. Right? And so if you said, we were proprietary, hey, how many engineers would it take to build 2,000 Terraform integrations? It'd be a whole lot more people than we have today. And so I think fundamentally, what open source helps you solve is the fact that, you know, modern infrastructure has this really wide surface area of integration. And I don't think you can solve that as a proprietary business." "I don't think we'd be able to have nearly the breadth of integration. We could maybe cover the core cloud providers. But you'd have 50 Terraform providers, not 2,500 Terraform providers."
10/12/2022 • 17 minutes, 8 seconds
Making Europe’s ‘Romantic’ Open Source World More Practical
DUBLIN — Europe's open source contributors, according to The Linux Foundation's first-ever survey of them, released in September, are driven more by idealism than their American counterparts. The data showed that social reasons for contributing to open source projects were more often cited by Europeans than by Americans, who were more likely to say they participate in open source for professional advancement. A big part of Gabriele (Gab) Columbro's mission as general manager of the new Linux Foundation Europe will be to marry Europe's "romantic" view of open source to greater commercial opportunities, Columbro told The New Stack's Makers podcast. This On the Road episode of Makers, recorded in Dublin at Open Source Summit Europe, was hosted by Heather Joslyn, TNS's features editor. Columbro, a native of Italy who also heads FINOS, the fintech open source foundation, recalled his own roots as an individual contributor to the Apache project, and cited what he called "a very grassroots, passion, romantic aspect of open source" in Europe. By contrast, he noted, "there is definitely a much stronger commercial ecosystem in the United States. But the reality is that those two, you know, natures of open source are not alternatives." Columbro said he sees advantages in both the idealistic and the practical aspects of open source, along with the notion in the European Union and other countries in the region that the Internet and the software that supports it have value as shared resources. "I'm really all about marrying sort of these three natures of open source: the individual-slash-romantic nature, the commercial dynamics, and the public sector sort of collective value," he said.
A 'Springboard' for Regional Projects
Europe sits thousands of miles away from the headquarters of the FAANG tech behemoths — Facebook, Apple, Amazon, Netflix and Google. (Columbro, in fact, is still based in Silicon Valley, though he says he plans to return to Europe at some point.)
For individual developers, he said, Linux Foundation Europe will help give regional projects increased visibility and greater access to potential contributors. Contributing a project to Linux Foundation Europe, he said, is "a powerful way to potentially supercharge your project." He added, "I think any developer should consider this as a potential springboard platform for the technology, not just to be visible in Europe, but then hopefully, beyond." The European organization's first major project, the OpenWallet Foundation, will aim to help create a template for developers to build digital wallets. "I find it very aligned with the vision of the Linux Foundation, that is about not only creating successful open source projects but defining new markets and new commercial ecosystems around these open source projects." It's also, Columbro added, "very much aligned with the sort of vision of Europe of creating a digital commons, based on open source, whereby they can achieve a sort of digital independence."
Europe's Turmoil Could Spark Innovation
As geopolitical and economic turmoil roils several nations in Europe, Columbro suggested that open source could see a boom if the region's companies start cutting costs. He places his hopes on open source collaboration to help reconcile some differences. "Certainly I do believe that open source has the potential to bring parties together," Columbro said. Also, he noted, "generally we see open source and investment in open source to be counter-cyclical with the trends of investments in proprietary software ... in other words, when there is more pressure to reduce costs, or to, you know, reduce the workforce. That's when people are forced to look more seriously about ways to actually collaborate while still maintaining throughput and efficiency. And I think open source is the prime way to do so." Listen to this On the Road episode of Makers to learn more about Linux Foundation Europe.
10/11/2022 • 17 minutes, 18 seconds
After GitHub, Brian Douglas Builds a ‘Saucy’ Startup
Brian Douglas was “the Beyoncé of GitHub.” He jokingly crowned himself with that title during his years at that company, where he advocated for open source and a more inclusive community supporting it. His work there eventually led to his new startup, Open Sauced. Like the Queen Bey, Douglas’ mission is to empower a community. In his case, he’s seeking to support the open source community. With his former employer, GitHub, serving 4 million developers worldwide, the potential size of that audience is huge. In this episode of The Tech Founder Odyssey podcast, he shared why empowerment and breaking down barriers to make anyone “awesome” in open source were the motivation behind his startup journey. Beyoncé “has a superfan group, the Beyhive, that will go to bat for her,” Douglas pointed out. “So if Beyoncé makes a country song, the Beyhive is there supporting her country song. If she starts doing the house music, which is her latest album, [they] are there to the point where, like, you cannot say bad stuff about her,” he pointed out. “So what I’m focused on is having a strong community and having strong ties.” Open Sauced, which launched in June, seeks to build an open source intelligence platform to help companies stay competitive. Its aim is to give more potential open source contributors the information they need to get started with projects, and to help maintain those projects over time. The conversation was co-hosted by Colleen Coll and Heather Joslyn of The New Stack.
Web 2.0 ‘Opened the World’
Douglas’ introduction to tech started as a kid “cutting his teeth” on a Packard Bell and a shared computer at the community center inside the apartment complex where he grew up, outside of Tampa, Florida. “I don't know what computer was in there, but it ran DOS,” he said. “And I got to play, like, Wolfenstein and eventually Duke Nukem and stuff like that.
So that was my first sort of, like, touch of a computer, and I actually knew what I was doing.” With his MBA in finance, the last recession, in 2008, left only sales jobs available. But Douglas always knew he wanted to “build stuff.” “I've always been like a copy-and-paste [person] and loved playing DOS games,” he told The New Stack. “I eventually [created] a pretty nice MySpace profile. Then someone told me, ‘Hey, you know, you could actually build apps now.’ And post-Web 2.0, people have frameworks and Rails and Django. You just have to run a couple scripts, and you've got a web page live; put that on Heroku, or another server, and you're good. And that opened the world.” Open Sauced began as a side project when he was director of developer advocacy at GitHub; he started working on the project full time in June, after about two years of tinkering with it. Douglas didn’t grow up with money, he said, so moving from being an employee to the risky life of a CEO seeking funding prompted him to create his own comprehensive strategy. This included content creation (including a podcast, The Secret Sauce), other marketing, and shipping frontend code. GitHub was very supportive of him spinning off Open Sauced as an independent startup, with colleagues assisting in refining his pitches to venture capital investors to raise funds. “At GitHub, they have inside of their employment contract a moonlight clause,” Douglas said. Which means, he noted, because the company is powered by open source, “basically, whatever you work on, as long as you're not competing directly against GitHub, rebuilding it from the ground up, feel free to do whatever you need to do moonlight.”
Support for Blacks in Tech
Open Sauced will also continue Douglas’ efforts to increase representation of Blacks in tech and open pathways to level up their skills, similar to his work at GitHub with the Employee Resource Group (ERG) the Blacktocats.
“The focus there was to make sure that people had a home, like a community of belonging,” he said. “If you're a Black employee at GitHub, you have a space, and it was very helpful with things like 2020, during George Floyd. It was the community [in which] we all supported each other during that situation.” Douglas’ mission to rid people of the effects of imposter syndrome and champion anyone interested in open source makes him sound more like an open source “whisperer” than a Beyoncé. Whatever the title, his iconic pizza brand — the company’s web address is “opensauced.pizza” — was his version, he said, of creating album cover art before forming the band. His podcast’s tagline urges listeners to “stay saucy.” His plan for doing that at Open Sauced is to encourage new open source contributors. “It's nice to know that projects can now opt in … but as a first-time contributor, where do I start? We can show you, ‘Hey, this project had five contributions, they're doing a great job. Why don't you start here?’”
10/7/2022 • 33 minutes, 49 seconds
The AWS Open Source Strategy
Amazon Web Services would not be what it is today without open source. "I think it starts with sustainability," said David Nalley, head of open source and marketing at AWS, in an interview at the Open Source Summit in Dublin for The New Stack Makers. "And this really goes back to the origin of Amazon Web Services. AWS would not be what it is today without open source." Long-term support for open source is one of three pillars of the organization's open source strategy. AWS builds and innovates on top of open source and will maintain that approach for its innovation, customers, and the larger digital economy. "And that means that there's a long history of us benefiting from open source and investing in open source," Nalley said. "But ultimately, we're here for the long haul. We're going to continue making investments. We're going to increase our investments in open source." Customers' interest in open source is the second pillar of the AWS open source strategy. "We feel like we have to make investments on behalf of our customers," Nalley said. "But the reality is our customers are choosing open source to run their workloads on." The third pillar focuses on advocating for open source in the larger digital economy. Notable is how much AWS's presence in the market played a part in Paul Vixie's decision to join the company. Vixie, an Internet pioneer, is now vice president of security and an AWS distinguished engineer who was also interviewed for The New Stack Makers podcast at the Open Source Summit. Nalley carries his own recognizable importance in the community: he is president of the Apache Software Foundation, one of the world's most essential open source foundations. The importance of the three-pillar strategy shows in many of the projects that AWS supports. AWS recently donated $10 million to the Open Source Security Foundation (OpenSSF), part of the Linux Foundation.
AWS is a significant supporter of the Rust Foundation, which supports the Rust programming language and ecosystem, with a particular focus on the maintainers who govern the project. Last month, Meta (formerly Facebook) unveiled the PyTorch Foundation, which the Linux Foundation will manage. AWS is on the governing board.
10/5/2022 • 14 minutes, 24 seconds
Paul Vixie: Story of an Internet Hero
Paul Vixie grew up in San Francisco. He dropped out of high school in 1980. He worked on the first Internet gateways at DEC and, from there, started the Internet Software Consortium (ISC), establishing Internet protocols, particularly the Domain Name System (DNS). Today, Vixie is one of the few dozen in the technology world with the title "distinguished engineer," working at Amazon Web Services as vice president of security, where he believes he can make the Internet a safer place: as safe as the world was before the Internet emerged. "I am worried about how much less safe we all are in the Internet era than we were before," Vixie said in an interview at the Open Source Summit in Dublin earlier this month for The New Stack Makers podcast. "And everything is connected, and very little is understood. And so, my mission for the last 20 years has been to restore human safety to pre-internet levels. And doing that at scale is quite the challenge. It'll take me a lifetime." So why join AWS? He spent decades establishing the ISC. He started a company called Farsight, which came out of ISC. He sold Farsight in November of last year when conversations began with AWS. Vixie thought about his mission to restore human safety to pre-internet levels when AWS asked a question that changed the conversation and led him to his new role. "They asked me what is now, in retrospect, an obvious question: 'AWS hosts probably the largest share of the digital economy that you're trying to protect. Don't you think you can complete your mission by working to help secure AWS?'" Vixie said. "The answer is yes. In fact, I feel like I'm going to get more traction now that I can focus on strategy and technology and not also operate a company on the side. And so it was a very good win for me, and I hope for them." Interviewing Vixie was such an honor. It's people like Paul who made so much possible for anyone who uses the Internet.
Just think of that for a minute -- anyone who uses the Internet has people like Paul to thank. Thanks, Paul -- you are a hero to many. Here's to your next run at AWS.
9/28/2022 • 28 minutes, 39 seconds
Deno's Ryan Dahl is an Asynchronous Guy
Ryan Dahl is the co-founder and creator of Deno, a runtime for JavaScript, TypeScript, and WebAssembly based on the V8 JavaScript engine and the Rust programming language. He is also the creator of Node.js. We interviewed Dahl for The New Stack Technical Founder Odyssey series. "Yeah, so we have a JavaScript runtime," Dahl said. "It's pretty similar, in essence, to Node. It executes some JavaScript, but it's much more modern." The Deno project started four years ago, Dahl said. He recounted how writing code helped him rethink how he developed Node. Dahl wrote a demo of a modern, server-side JavaScript runtime. He didn't think it would go anywhere, but sure enough, it did. People got pretty interested in it. Deno has "many, many" components, which serve as its foundation. It's written in Rust and C++ with a different type of event loop library. Deno has non-blocking IO, as does Node. Dahl has built his work on the use of asynchronous technologies, and that belief system carries over into how he manages the company. Dahl is an asynchronous guy and runs his company in such a fashion. As an engineer, Dahl learned that he does not like to be interrupted by meetings. The work should be as asynchronous as possible to avoid interruptions. Deno, the company, started during the pandemic, Dahl said. Everyone is remote. They pair program a lot and focus on short, productive conversations. That's an excellent way to socialize and look deeper into problems. How is it for Dahl to go from programming to CEO? "I'd say it's relatively challenging," Dahl said. "I like programming a lot. Ideally, I would spend most of my time in an editor solving programming problems. That's not really what the job of being a CEO is." Dahl said there's a lot more communication as the CEO operates on a larger scale. Engineering teams need management to ensure they work together effectively, deliver features and solve problems for developers. Overall, Dahl takes it one day at a time.
He has no fundamental theory of management. He's just trying to solve problems as they come. "I mean, my claim to fame is like bringing asynchronous sockets to the mainstream with nonblocking IO and stuff. So, you know, asynchronous is deeply embedded in what I'm thinking about. When it comes to company organization, asynchronous means that we have rotating meeting schedules to adapt to people in different time zones. We do a lot of meeting recordings. So if you can't make it for whatever reason -- you're not in the right time zone, or, you know, you're picking up your kids, whatever -- you can go back and watch the recording. So we basically record every meeting, and we try to keep the meetings short. I think that's important because nobody wants to watch hours and hours of videos. And we use chats a lot. Chat and email are forms of asynchronous communication where you don't need to meet with people one on one. And yeah, I guess the other aspect of that is just keeping meetings to a minimum. There are a few situations where you really need to get everybody in the room, but I try to avoid that as much as possible, because I think that really disrupts the flow of a lot of people working."
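Dahl's recurring theme, non-blocking I/O, means a single thread can service many tasks because it never waits idly on any one of them. A minimal sketch of the idea, in Python's asyncio purely for compactness (Deno itself expresses the same model through JavaScript promises and async/await):

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Stand-in for a non-blocking network call: while this task
    # "waits", the event loop is free to run other tasks.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # Both simulated requests run concurrently on one thread.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
# Total wall time is roughly 0.1s (the slower task), not 0.2s (the sum),
# which is the whole point of the non-blocking model.
print(results)
```
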
9/27/2022 • 20 minutes, 37 seconds
How Can Open Source Sustain Itself Without Creating Burnout?
The whole world uses open source, but as we’ve learned from the Log4j debacle, “free” software isn’t really free. Organizations and their customers pay for it when projects aren’t frequently updated and maintained. How can we support open source project maintainers — and how can we decide which projects are worth the time and effort to maintain? “A lot of people pick up open source projects, and use them in their products and in their companies without really thinking about whether or not that project is likely to be successful over the long term,” Dawn Foster, director of open source community strategy at VMware’s open source program office (OSPO), told The New Stack’s audience during this On the Road edition of The New Stack Makers podcast. In this conversation recorded at Open Source Summit Europe in Dublin, Ireland, Foster elaborated on the human cost of keeping open source software maintained, improved and secure — and how such projects can be sustained over the long term. The conversation, sponsored by Amazon Web Services, was hosted by Heather Joslyn, features editor at The New Stack.
Assessing Project Health: the ‘Lottery Factor’
One of the first ways to evaluate the health of an open source project, Foster said, is the “lottery factor”: “It's basically, if one of your key maintainers for a project won the lottery and retired on a beach tomorrow, could the project continue to be successful?” “And if you have enough maintainers and you have the work spread out over enough people, then yes. But if you're a single maintainer project and that maintainer retires, there might not be anybody left to pick it up.” Foster is on the governing board for a project called Community Health Analytics Open Source Software — CHAOSS, to its friends — that aims to provide some reliable metrics to judge the health of an open source initiative.
The metrics CHAOSS is developing, she said, “help you understand where your project is healthy and where it isn't, so that you can decide what changes you need to make within your project to make it better.” CHAOSS uses tooling like Augur and GrimoireLab to help get notifications and analytics on project health. And it’s friendly to newcomers, Foster said. “We spend...a lot of time just defining metrics, which means working in a Google Doc and thinking about all of the different ways you might possibly measure something — something like, are you getting a diverse set of contributors into your project from different organizations, for example.”
Paying Maintainers, Onboarding Newbies
It’s important to pay open source maintainers in order to help sustain projects, she said. “The people that are being paid to do it are going to have a lot more time to devote to these open source projects. So they're going to tend to be a little bit more reliable, just because they're going to have a certain amount of time that's devoted to contributing to these projects.” Not only does paying people help keep vital projects going, but it also helps increase the diversity of contributors, “because by paying people salaries to do this work in open source, you get people who wouldn't naturally have time to do that. “So in a lot of cases, this is women who have extra childcare responsibilities. This is people from underrepresented backgrounds who have other commitments outside of work,” Foster said. “But by allowing them to do that within their work time, you not only get healthier, longer sustaining open source projects, you get more diverse contributions.” The community can also help bring in new contributors by providing solid documentation and easy onboarding for newcomers, she said.
“If people don't know how to build your software, or how to get a development environment up and running, they're not going to be able to contribute to the project.” And showing people how to contribute properly can help alleviate the issue of burnout for project maintainers, Foster said: “Any random person can file issues and bug maintainers all day, in ways that are not productive. And, you know, we end up with maintainer burnout...because we just don't have enough maintainers," said Foster. “Getting new people into these projects and participating in ways that are eventually reducing the load on these horribly overworked maintainers is a good thing.” Listen or watch this episode to learn more about maintaining open source sustainability.
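Foster's "lottery factor" lends itself to a concrete calculation. CHAOSS defines its metrics far more carefully; the following is only a toy sketch of the underlying idea, not the project's implementation: given each contributor's commit count, find the smallest group of top contributors who account for at least half of all commits. A result of 1 means the project hinges on a single person.

```python
def lottery_factor(commits_by_author: dict, threshold: float = 0.5) -> int:
    """Smallest number of top contributors whose commits together
    reach `threshold` of all commits. Lower means more fragile."""
    total = sum(commits_by_author.values())
    covered, count = 0, 0
    # Walk contributors from most to least active until the
    # cumulative share of commits crosses the threshold.
    for commits in sorted(commits_by_author.values(), reverse=True):
        covered += commits
        count += 1
        if covered / total >= threshold:
            break
    return count

# One dominant maintainer: the project fails the "retired on a beach" test.
print(lottery_factor({"alice": 90, "bob": 5, "carol": 5}))   # 1
# Work spread evenly across four maintainers: a healthier spread.
print(lottery_factor({"a": 25, "b": 25, "c": 25, "d": 25}))  # 2
```
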
9/22/2022 • 17 minutes, 36 seconds
Charity Majors: Taking an Outsider's Approach to a Startup
In the early 2000s, Charity Majors was a homeschooled kid who’d gotten a scholarship to study classical piano performance at the University of Idaho. “I realized, over the course of that first year, that music majors tended to still be hanging around the music department in their 30s and 40s,” she said. “And nobody really had very much money, and they were all doing it for the love of the game. And I was just like, I don't want to be poor for the rest of my life.” Fortunately, she said, it was pretty easy at that time to jump into the much more lucrative tech world. “It was buzzing, they were willing to take anyone who knew what Unix was,” she said of her first tech job, running computer systems for the university. Eventually, she dropped out of college, she said, “made my way to Silicon Valley, and I’ve been here ever since.” Majors, co-founder and chief technology officer of the six-year-old Honeycomb.io, an observability platform company, told her story for The New Stack’s podcast series, The Tech Founder Odyssey, which spotlights the personal journeys of some of the most interesting technical startup creators in the cloud native industry. It’s been a busy year for her and the company she co-founded with Christine Yen, a colleague from Parse, a mobile application development company that was bought by Facebook. In May, O’Reilly published “Observability Engineering,” which Majors co-wrote with George Miranda and Liz Fong-Jones. In June, Gartner named Honeycomb.io as a Leader in the Magic Quadrant for Application Performance Monitoring and Observability. Thus far Honeycomb.io, now employing about 200 people, has raised just under $97 million, including a $50 million Series C funding round it closed in October, led by Insight Partners (which owns The New Stack). This Tech Founder Odyssey conversation was co-hosted by Colleen Coll and Heather Joslyn of TNS. 
‘Rage-Driven Development’
Honeycomb.io grew from efforts at Parse to solve a stubborn observability problem: systems crashed frequently, and rarely for the same reasons each time. “We invested a lot in the last generation of monitoring technology, we had all these dashboards, we have all these graphs,” Majors said. “But in order to figure out what's going on, you kind of had to know in advance what was going to break.” Once Parse was acquired by Facebook, Majors, Yen and their teams began piping data into a Facebook tool called Scuba, which “was aggressively hostile to users,” she recalled. But, “it did one thing really well, which is let you slice and dice in real time on dimensions that have very high cardinality,” meaning those that contain lots of unique terms. This set it apart from the then-current monitoring technologies, which were built around assessing low cardinality dimensions. Scuba allowed Majors’ organization to gain more control over its reliability problem. And it got her and Yen thinking about how a platform tool could analyze high cardinality data about system health in real time. “Everything is a high cardinality dimension now,” Majors said. “And [with] the old generation of tools, you hit a wall really fast and really hard.” And so, Honeycomb.io was created to build that platform. “My entire career has been rage-driven development,” she said. “Like: sounds cool, I'm gonna go play with that. This isn't working — I'm gonna go fix it from anger.”
A Reluctant CEO
Yen now holds the CEO role at Honeycomb.io, but Majors wound up with the job for roughly the first half of the company’s life. Did Majors like being the boss? “Hated it,” she said. “Constitutionally what you want in a CEO is someone who is reliable, predictable, dependable, someone who doesn't mind showing up every Tuesday at 10:30 to talk to the same people. “I am not structured.
I really chafe against that stuff.” However, she acknowledged, she may have been the right leader in the startup’s beginning: “It was a state of chaos, like we didn't think we were going to survive. And that's where I thrive.” Fortunately, in Honeycomb.io’s early days, raising money wasn’t a huge challenge, due to its founders’ background at Facebook. “There were people who were coming to us, like, do you want $2 million for a seed thing? Which is good, because I've seen the slides that we put together, and they are laughable. If I had seen those slides as an investor, I would have run the other way.” The “pedigree” conferred on her by investors due to her association with Facebook didn’t sit comfortably with her. “I really hated it,” she said. “Because I did not learn to be a better engineer at Facebook. And part of me kind of wanted to just reject it. But I also felt this like responsibility on behalf of all dropouts, and queer women everywhere, to take the money and do something with it. So that worked out.” Majors, a frequent speaker at tech conferences, has established herself as a thought leader in not only observability but also engineering management. For other women, people of color, or people in the tech field with an unconventional story, she advised “investing a little bit in your public speaking skills, and making yourself a bit of a profile. Being externally known for what you do is really helpful because it counterbalances the default assumptions that you're not technical or that you're not as good.” She added, “if someone can Google your name plus a technology, and something comes up, you're assumed to be an expert. And I think that that really works to people's advantage.” Majors had a lot more to say about how her outsider perspective has shaped the way she approaches hiring, leadership and scaling up her organization. Check out this latest episode of the Tech Founder Odyssey.
9/21/2022 • 34 minutes, 17 seconds
How Idit Levine’s Athletic Past Fueled Solo.io‘s Startup
Idit Levine’s tech journey originated in an unexpected place: a basketball court. As a seventh grader in Israel, playing in hoops tournaments definitely sparked her competitive side. “I was basically going to compete with all my international friends for two minutes without parents, without anything,” Levine said. “I think it made me who I am today. It’s really giving you a lot of confidence to teach you how to handle situations … stay calm and still focus.” Developing that calm and focus proved an asset during Levine’s subsequent career in professional basketball in Israel, and when she later started her own company. In this episode of The Tech Founder Odyssey podcast series, Levine, founder and CEO of Solo.io, an application networking company with a $1 billion valuation, shared her startup story. The conversation was co-hosted by Colleen Coll and Heather Joslyn of The New Stack. After finishing school and service in the Israeli Army, Levine was still unsure of what she wanted to do. She noticed her brother and sister’s fascination with computers. Soon enough, she recalled, “I picked up a book to teach myself how to program.” It was only a matter of time before she found her true love: the cloud native ecosystem. “It's so dynamic, there's always something new coming. So it's not boring, right? You can assess it, and it's very innovative.” Moving from one startup company to the next, then on to bigger companies including Dell EMC, where she was chief technology officer of the cloud management division, Levine was happy seeking experiences that challenged her technically. “And at one point, I said to myself, maybe I should stop looking and create one.”
Learning How to Pitch
Winning support for Solo.io demanded that the former hoops player acquire an unfamiliar skill: how to pitch. Levine’s company started in her current home of Boston, and she found raising money in that environment more of a challenge than it would be in, say, Silicon Valley.
It was difficult to get an introduction without a connection, she said: “I didn't understand what pitches even were, but I learned how … to tell the story. That helped out a lot.” Founding Solo.io was not about coming up with an idea to solve a problem at first. “The main thing at Solo.io, and I think this is the biggest point, is that it's a place for amazing technologists, to deal with technology, and, beyond the top of innovation, figure out how to change the world, honestly,” said Levine. Even when the focus is software, she believes it’s eventually always about people. “You need to understand what's driving them and make sure that they're there, they are happy. And this is true in your own company. But this is also [true] in the ecosystem in general.” Levine credits the company’s success – Solo.io has a renewal rate of 98.9% – to its ability to establish amazing relationships with customers, using an engagement model that treats customers much like users in an open source community. “We’re working together to build the product.” Throughout her journey, she has carried the idea of a team: in her early beginnings in basketball, in how she established a “no politics” office culture, and even in the way she involves her family with Solo.io. As for the ever-elusive work/life balance, Levine called herself a workaholic, but suggested that her journey has prepared her for it: “I trained really well. Chaos is a part of my personal life.” She elaborated, “I think that one way to do this is to basically bring the company to [my] personal life. My family was really involved from the beginning and my daughter chose the logos. They’re all very knowledgeable and part of it.”
9/16/2022 • 34 minutes, 22 seconds
From DB2 to Real-Time with Aerospike Founder Srini Srinivasan
Aerospike Founder Srini Srinivasan had just finished his Ph.D. at the University of Wisconsin when he joined IBM and worked under Don Haderle, the creator of DB2, the first commercial relational database management system. Haderle became a major influence on Srinivasan when he started Aerospike, a real-time data platform, and to this day is an advisor to the company. "He was the first one I went back to for advice as to how to succeed," Srinivasan said in the most recent episode of The New Stack Makers series, "The Tech Founder Odyssey." A young, ambitious engineer, impatient with a pace at IBM he considered slow, Srinivasan met with Haderle, who told him to go, challenge himself, and try new things that might be uncomfortable; Srinivasan left IBM to join a startup. Today, Srinivasan seeks a balance between research and product development, similar to the approach he learned at IBM -- the balance between what is very hard and what is impossible. Technical startup founders find themselves with complex technical problems all the time, and Srinivasan talked about the inspiration to solve those problems. But what does inspiration even mean? It is a complex topic to parse, and can come across as almost trivial or superficial to discuss. Srinivasan said inspiration becomes relevant when it is part of the work and of how one honestly faces that work. Inspiration is honesty. "Because once one is honest, you're able to get the trust of the people you're working with," Srinivasan said. "So honesty leads to trust. Once you have trust, I think there can be a collaboration because now people don't have to worry about watching their back. You can make mistakes, and then you know that it's a trusted group of people. And they will, you know, watch your back. And then, with a team like that, you can now set goals that seem impossible. But with the combination of honesty and trust and collaboration, you can lead the team to essentially solve those hard problems.
And in some cases, you have to be honest enough to realize that you don't have all the skills required to solve the problem, and you should be willing to go out and get somebody new to help you with that." Srinivasan applies the same principles of honesty to Aerospike's software development. How does that manifest in the work Aerospike does? It leads to all kinds of insights about Unix, Linux, systems technologies, and everything built on top of that infrastructure. And that's the work Srinivasan enjoys so much – building foundational technology that may take years to complete but that, over time, proves important, scalable, and high-performing.
9/8/2022 • 28 minutes, 25 seconds
The Stone Ages of Open Source Security
Ask a developer how they got into programming, and you learn so much about them. In this week's episode of The New Stack Makers, Chainguard founder Dan Lorenc said he got into programming halfway through college while studying mechanical engineering. "I got into programming because we had to do simulations and stuff in MATLAB," Lorenc said. "And then I switched over to Python because it was similar, and we didn't need those licenses. And then I was like, oh, this is much faster than, you know, ordering parts and going to the machine shop and reserving time. So I got into it that way." It was three or four years ago that Lorenc got into the field of open source security. "Open source security and supply chain security weren't buzzwords back then," Lorenc said. "Nobody was talking about it. And I kind of got paranoid about it." Lorenc worked on the Minikube open source project at Google, where he first saw how insecure it could be to work on open source projects. In the interview, he talks about the threats he saw in that work. It seemed odd to Lorenc: the state of the art for open source security was not state of the art at all. It was the Stone Age. Lorenc said it felt weird to build the first release of Minikube without anyone raising questions about security. "But I mean, this is like a 200 megabyte Go binary that people were just running as root on their laptops across the Kubernetes community," Lorenc said. "And nobody had any idea what I put in there, if it matched the source on GitHub or anything. So that was pretty terrifying. And that got me paranoid about the space and kind of went down this long rabbit hole that eventually resulted in starting Chainguard." Today, the world is burning down, and that's good for a security startup like Chainguard. "Yeah, we've got a mess of an industry to tackle here," Lorenc said.
"If you've been following the news at all, it might seem like the software industry is burning on fire or falling down or anything because of all of these security problems. It's bad news for a lot of folks, but it's good news if you're in the security space." Good news, yes ,but how does it fit into a larger story? "Right now, one of our big focuses is figuring out how do we explain where we fit into the bigger landscape," Lorenc. said. "Because the security market is massive and confusing and full of vendors, putting buzzwords on their websites, like zero trust and stuff like that. And it's pretty easy to get lost in that mess. And so figuring out how we position ourselves, how we handle the branding, the marketing, and making it clear to prospective customers and community members, everything exactly what it is we do and what threats our products mitigate, to make sure we're being accurate there. And conveying that to our customers. That's my big focus right now."
8/30/2022 • 26 minutes, 23 seconds
Curating for the SRE Through Lessons Learned at Google News
In the early 1990s, many kids got into programming video games. Tina Huang enjoyed developing her GeoCities site, but not making games. Huang loved automating her website. "It is not a lie to say that what got me excited about coding was automation," said Huang, co-founder of Transposit, in this week's episode of The New Stack Makers as part of our Tech Founder Odyssey series. "Now, you're probably going to think to yourself: 'what middle school kid likes automation?'" Huang loved the idea of automating mundane tasks with a bit of code so she did not have to hand-type everything – just like Rosie the Robot from The Jetsons, the robot people actually want: there to fold your laundry, but not to take the joy away from the things people like to do. Huang is like many of the founders we interview. Her job can be what she wants it to be. But Huang also has to take care of everything that needs to get done. All the work comes down to what the Transposit site says on the home page: Bring calm to the chaos. Through connected workflows, give TechOps and SREs visibility, context, and actionability across people, processes, and APIs. The statement reflects her own experience in using automation to provide high-quality information. "I've always been swimming upstream against the tide when I worked at companies like Google and Twitter, where, you know, the tagline for Google News back then was 'News by Robots,'" Huang said. "The ideal in their mind was how do you get robots to do all the news reporting. And that is funny because now I think we have a different opinion. But at the time, it was popular to think news by robots would be more factual, more democratic." Huang worked on a project at Google exploring how to use algorithms to do a first pass of curation, with human editors then going in to add that human touch to the news. The work reflected her love for long-form journalism and that human touch to information. Transposit offers a similar next level of integration. Any RSS fans out there?
Huang has a love/hate relationship with RSS. She loves it for what it can feed, but if the feed is not filtered, then it becomes overwhelming. That inundation happens when multiple integrations, from Slack and other sources, for example, start to layer on top of one another. "And suddenly, you're inundated with information because it was information designed for consumption by machines, not at the human scale," Huang said. "You need that next layer of curation on top of it. Like how do you allow people to annotate that information?" Providing a choice in subscriptions can help. But at what level? And that's one of the areas that Huang hopes to tackle with Transposit.
8/24/2022 • 30 minutes, 27 seconds
A Technical Founder's Story: Jake Warner on Cycle.io
Welcome to the first in our series on The New Stack Makers about technical founders, those engineers who have moved from engineering jobs to running a company of their own. What we want to know is what that's like for the founder. How is it to be an engineer turned entrepreneur? We like to ask technologists about their first computer or when they started programming. We always find a connection to what the engineer does today. It's these kinds of questions you will hear us ask in the series to get more insight into everything that happens when the engineer is responsible for the entire organization. We've listened to feedback about what people want from this series. Here is one of the replies we received to my tweet asking for feedback about the new series:
"If they have kids, how much work is taken on by their SO? Lots of technical founders are only able to do what they do because their partner is lifting a lot in the background — they hardly ever get the credits tho" — Anaïs Urlichs ☀️ (@urlichsanais) August 4, 2022
I host the first four interviews. The New Stack's Colleen Coll and Heather Joslyn will co-host the following shows we run in the series. We interviewed Cycle.io Founder Jake Warner for the first episode in the series about how he went from downloading a virus on an inherited Windows 95 machine as a 10-year-old to leading a startup. "You know, I had to apologize to my Dad for needing to do a full reinstall on the family computer," Warner said. "But it was the fact that someone through just the use of a file could cause that much damage that started making me wonder, wow, there's a lot more to this than I thought." Warner was never much of a gamer. He preferred the chat rooms and conversation to actually playing StarCraft, and he met people in those chat rooms who likewise preferred talking about the game to playing it.
He became friends with a group that liked playing games over the network Starcraft hosted, the kinds of games kids play all the time. They were also learning about firewalls so they could attack each other virtually between chat rooms. “And because of that, that got me interested in all kinds of firewalls and security things, which led to getting into programming,” Warner said. “And so, I guess, to get back to your question: it started with a game, but very quickly became a lot more than that.” Now Warner is leading Cycle, which he and his colleagues have built from the ground up. For a long time, they marketed Cycle as a container orchestrator. Now they call Cycle a platform for building platforms, fittingly reminiscent of the kid who was drawn to the community around a game more than the game itself. There is one orchestrator that enterprise engineers know well, and that's Kubernetes. Warner and his team realized that Cycle is different from a container orchestrator. So how to change the message? Knowing what to do is the challenge of any founder, and that's a big aspect of what we will explore in our series on technical founders. We hope you enjoy the interviews. Please send feedback and your questions; they are always invaluable and serve as a way to draw thoughtful perspectives from the founders we interview.
8/17/2022 • 26 minutes, 59 seconds
Rethinking Web Application Firewalls
Web Application Firewalls (WAFs) first emerged in the late 1990s as web server attacks became more common. Today, in the context of cloud native technologies, there’s an ongoing rethinking of how a WAF should be applied. No longer is it solely static applications sitting behind a WAF, said Ratan Tipirneni, president and CEO of Tigera, in this episode of The New Stack Makers. “With cloud native applications and a microservices distributed architecture, you have to assume that something inside your cluster has been compromised,” Tipirneni said. “So just sitting behind a WAF doesn't give you adequate protection; you have to assume that every single microservice container is almost open to the internet, metaphorically speaking. So then the question is: how do you apply WAF controls?” Today’s WAF has to be workload-centric, Tipirneni said. In his view, every workload has to have its own WAF: when a container launches, the WAF control is automatically spun up. That way, even if something inside a cluster is compromised or exposes some of its services to the internet, it doesn't matter, because the workload is protected, Tipirneni said. So how do you apply this level of security? You have to think in terms of a workload-centric WAF.

The Scenario

Vulnerabilities are so numerous now, and cloud native applications have such large attack surfaces, that there is no way to mitigate vulnerabilities using traditional means, Tipirneni said. “It's no longer sufficient to throw out a report that tells you about all the vulnerabilities in your system,” Tipirneni said. “Because that report is not actionable. People operating the services are discovering that the amount of time and effort it takes to remediate all these vulnerabilities is incredible, right? So they're looking for some level of prioritization in terms of where to start.” And the onus is on the user to mitigate the problem, Tipirneni said.
Those customers have to think about the blast radius of the vulnerability and its context in the system. The second part is how to manage the attack surface. In this world of cloud native applications, customers are discovering very quickly that trying to protect every single thing, when everything has access to everything else, is an almost impossible task, Tipirneni said. What’s needed is a way for users to control how microservices talk to each other, with permissions set for intercommunication. In some cases, specific microservices should not be talking to each other at all. “So that is a highly leveraged activity and security control that can stop many of these attacks,” Tipirneni said. Even after all of that, the user still has to assume that attacks will happen, mainly because there's always the threat of an insider attack. In that situation, the search is for patterns of anomalous behavior at the process level, the file system level or the system call level, to determine the baseline for standard behavior that can then tell the user how to identify deviations, Tipirneni said. Then it’s a matter of trying to tease out signals that are indicators of either an attack or a compromise. “Maybe a simpler use case of that is to constantly be able to monitor, and monitor at runtime, for known bad hashes or files or binaries that are known to be bad,” Tipirneni said. The real challenge for companies is setting up the architecture to make microservices secure, and there are a number of vectors the market may take. In the recording, Tipirneni talks about the evolution of the WAF, the importance of observability, and better ways to establish context between the services a company has deployed and the overall systems it has architected. “There is no single silver bullet,” Tipirneni said. “You have to be able to do multiple things to keep your application safe inside cloud native architectures.”
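Tipirneni's simplest example, continuously checking files and binaries against known-bad hashes, can be sketched in a few lines. This is an illustrative stand-in, not Tigera's implementation; the denylist digest and file paths below are hypothetical, and a real system would populate the list from a threat-intelligence feed and hook into runtime file events.

```python
import hashlib

# Hypothetical denylist of SHA-256 digests of known-bad binaries;
# real systems pull these from threat-intelligence feeds.
KNOWN_BAD_SHA256 = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of a byte string (e.g. a binary's contents)."""
    return hashlib.sha256(data).hexdigest()

def scan(files: dict[str, bytes]) -> list[str]:
    """Return the paths whose contents hash to a known-bad digest."""
    return [path for path, data in files.items()
            if sha256_of(data) in KNOWN_BAD_SHA256]
```

Hash matching only catches known-bad artifacts; it complements, rather than replaces, the baseline-and-deviation anomaly detection described above.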
8/9/2022 • 27 minutes, 19 seconds
Passage: A Passwordless Service with Biometrics
Passage adds device-native biometric authentication to websites, allowing passwordless security on devices with or without Touch ID. In this episode of The New Stack Makers, Passage co-founders Cole Hecht and Anna Pobletts talk about how the service lets developers offer users biometric login. Hecht and Pobletts have worked in product security for many years, and the recurring problem is always password-based security, for which there really is no great solution, Pobletts said. Multifactor authentication adds security, but the user experience is lacking. Magic links, adaptive MFA and other techniques add a bit of improvement but don't strike a great balance between user experience and security. “Whereas biometrics is the only option we've ever seen that gives you both great security and great user experience right out of the box,” Pobletts said. The goal for Hecht and Pobletts: offer developers what is challenging to implement themselves, a passwordless service with a high security level and a great user experience. Passage is built on WebAuthn, a web protocol that lets a developer connect websites with browsers and various devices through the authenticators on those devices, Pobletts said. “So that could be anything right now,” Pobletts said. “It's things like fingerprint readers and face identification. But in the future, it could be voice identification, or it could be, you know, your presence and things like that; it could be all sorts of stuff in the future. But ultimately, your device is generating a cryptographic key pair and storing the private key in the TPM of your device. The cool thing about this protocol is that your biometric data never leaves your device; it's a huge win for privacy. Not Passage, not your browser; no one ever actually sees your fingerprint data in any way.” It’s cryptographically secure under the hood, with Passage as the platform on top, Pobletts said. WebAuthn is designed for single devices, Pobletts said.
A developer authenticated one fingerprint, for example, to one device. But that does not work well on the internet, where a user may have a phone, a tablet and a computer. Passage coordinates and orchestrates between different devices to give an easy experience. “So in my case, I have an iPhone, I do Face ID,” said Hecht, demonstrating the service. “And then I'm going to be signed in on both devices automatically. So that's a great way to give every user access to the site no matter what device they're on.” With Passage, the biometric is added to any device a user adds, Hecht said; Passage handles the multidevice orchestration. Use cases? “FinTech people like the security properties of it; they like that cool, shiny user experience that they want to deliver to their end users,” Hecht said. And then there is any website or business that cares about conversions: people who want signups, who measure success by the number of users registering and creating accounts. “Passage has a really nice story for that because we cut out so much friction around those conversion points.”
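The flow Pobletts describes, where the device generates a key pair, shares only the public key, and signs fresh server challenges, can be sketched with a toy signature scheme. To be clear, this is not Passage's or WebAuthn's actual cryptography: real authenticators use hardware-backed keys and standardized algorithms such as ES256, and the tiny Schnorr-style group below is deliberately insecure. It only illustrates why the biometric, which merely unlocks the on-device private key, never has to leave the device.

```python
import hashlib
import secrets

# Toy group parameters (NOT secure): p = 2q + 1 with q prime,
# g generates the order-q subgroup of squares mod p.
P, Q, G = 1019, 509, 4

def _h(r: int, msg: bytes) -> int:
    """Hash the commitment and message down to an exponent mod Q."""
    digest = hashlib.sha256(f"{r}|".encode() + msg).digest()
    return int.from_bytes(digest, "big") % Q

class Authenticator:
    """Stands in for the device: the private key never leaves it."""
    def __init__(self):
        self._x = secrets.randbelow(Q - 1) + 1   # private key, kept on-device
        self.public_key = pow(G, self._x, P)     # only this is shared

    def sign(self, challenge: bytes) -> tuple[int, int]:
        k = secrets.randbelow(Q - 1) + 1
        r = pow(G, k, P)
        e = _h(r, challenge)
        s = (k + self._x * e) % Q
        return r, s

def server_verify(public_key: int, challenge: bytes, sig: tuple[int, int]) -> bool:
    """The server checks g^s == r * y^e (mod p); it never sees the private key."""
    r, s = sig
    e = _h(r, challenge)
    return pow(G, s, P) == (r * pow(public_key, e, P)) % P
```

At registration the server stores only `public_key`; at each login it sends a fresh random challenge and verifies the returned signature, which is why neither the biometric nor the private key ever travels.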
8/2/2022 • 11 minutes, 21 seconds
What Does Kubernetes Cost You?
In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Webb Brown, CEO and co-founder of Kubecost, talked with The New Stack about opening up the black box of how much Kubernetes is really costing. Whether we’re talking about cloud costs in general or the costs specifically associated with Kubernetes, the problem teams complain about is lack of visibility. This is a cliche complaint about AWS, but it gets even more complicated once Kubernetes enters the picture. “Now everything’s distributed, everything’s shared,” Brown said. “It becomes much harder to understand and break down these costs. And things just tend to be way more dynamic.” The ability of pods to spin up and down is a key advantage of Kubernetes and brings resilience, but it also makes it harder to understand how much it costs to run a specific feature. And costs aren’t just about money, either. Even with unlimited money, cost information can reveal performance, reliability or availability issues. “Our founding team was at Google working on infrastructure monitoring. We view costs as a really important part of this equation, but only one part of the equation, which is you’re really looking at the relationship between performance and cost,” Brown said. “Even with unlimited budget, you would still care about resourcing and configuration, because it can really impact reliability and availability of your services.”
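The "everything's shared" problem Brown describes comes down to apportioning a node's price across the pods packed onto it. One common simplification, sketched below with made-up prices and CPU requests only, is to split the node's hourly cost in proportion to each pod's resource requests; real tools such as Kubecost also weigh memory, GPUs, storage and idle capacity, so treat this as an illustration of the idea rather than any vendor's actual formula.

```python
def allocate_node_cost(node_cost_per_hour: float,
                       pod_cpu_requests: dict[str, float]) -> dict[str, float]:
    """Split one node's hourly cost across its pods, proportional to CPU requests.

    The whole node price is spread over whatever is requested; real tools
    also account for memory, GPUs and idle (unrequested) capacity.
    """
    total = sum(pod_cpu_requests.values())
    if total == 0:
        return {pod: 0.0 for pod in pod_cpu_requests}
    return {pod: node_cost_per_hour * cpu / total
            for pod, cpu in pod_cpu_requests.items()}

# Hypothetical node costing $0.40/hour, running three pods
# that request 2, 1 and 1 CPUs respectively.
costs = allocate_node_cost(0.40, {"checkout": 2.0, "search": 1.0, "cart": 1.0})
# "checkout" bears half the cost (~$0.20/hour); the others ~$0.10 each.
```

Because pods churn constantly, cost tools re-run this kind of allocation over short time windows and aggregate by namespace, label or team, which is what turns raw billing data into per-feature visibility.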
7/27/2022 • 12 minutes, 27 seconds
Open Technology, Financial Sustainability and the Importance of Community
In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Amanda Brock, CEO and founder of OpenUK, talked with The New Stack about revenue models for open source and how those fit into building a sustainable project. Funding an open source project has to be part of the sustainability question — open source requires humans to contribute, and those humans have bills to pay and risk burnout if the open source project is a side gig after their full-time job. Those aren’t the only expenses a project might accrue, either — there might be cloud costs, for example. Brock says there are essentially eight categories of funding models for open source, of which really only two or three have proven successful: support, subscription and open core.

So how do we define open core, exactly? “You get different kinds of open core businesses: one that is driven very much by the needs of the company, and one that is driven by the needs of the open source project and community,” Brock said. In other words, sometimes the project exists to drive revenue, and sometimes the revenue exists to support the project — a subtle distinction, but it’s easy to see how one orientation or the other could change a company’s relationship with open source.

Are both types really open source? For Brock, it all comes down to community. “It’s the companies that have proper community that are really open source to me,” she said. “That’s where you’ve got a proper project with a real community, the community is not entirely based off of your employees.”
7/19/2022 • 12 minutes, 33 seconds
What Can the Tech Community Do to Protect Its Trans Members?
AUSTIN, TEX. — In one of the most compelling keynote addresses at The Linux Foundation’s Open Source Summit North America, held here in June, Aeva Black, a veteran of the open source community, said that a friend of theirs recently commented, “I feel like all the trans women I know on Twitter are software developers.” There’s a reason for that, Black said. It’s called “survivor bias”: The transgender software developers the friend knows on Twitter are only a small sample of the trans kids who survived into adulthood, or didn’t get pushed out of mainstream society. “It's a pretty common trope, at least on the internet: trans women are all software developers, we all have high-paying jobs, we're on TikTok or on Twitter. And that's really a sampling bias, the transgender people who have the privilege to be loud,” said Black, in this On the Road episode of The New Stack Makers podcast. Black, whose keynote alerted conference attendees to how the rights of transgender individuals are under attack around the United States, and the role tech can play, currently works in Microsoft Azure's Office of the Chief Technology Officer and holds seats on the board of the Open Source Initiative and on the OpenSSF's Technical Advisory Council. In this episode of Makers, they unpacked the keynote’s themes with Heather Joslyn, TNS features editor. Black cited Pew Research Center data, released in June, reporting that 5% of Americans under 30 identify as transgender or nonbinary — roughly the same percentage that have red hair. The Pew study, and the latest "Stack Overflow Developer Survey," reveal that younger people are more likely than their elders to claim a transgender or nonbinary identity. Failure to accept these people, Black said, could have an impact on open source work, and tech work more generally.
“If you're managing a project, and you want to attract younger developers who could then pick it up and carry on the work over time, you need to make sure that you're welcoming of all younger developers,” they said.

Rethinking Codes of Conduct

Codes of conduct, must-haves for meetups, conferences and open source projects over the past few years, are too often thought of as tools for punishment, Black said in their keynote. For Makers, they advocated thinking of those codes as tools for community stewardship. As a former member of the Kubernetes Code of Conduct Committee, Black pointed out that “80% of what we did … while I served wasn't punishing people. It was stepping in when there was conflict, when people, you know, stepped on someone else's toes, accidentally offended somebody. Like, ‘OK, hang on, let's sort this out.’ So it was much more stewardship, incident response, mediation.” LGBT people are currently the targets of new legislation in several U.S. states. The tech world and its community leaders should protect community members who may be vulnerable in this new political climate, Black said. “The culture of a community is determined by the worst behavior its leaders tolerate. We have to understand, and it's often difficult to do so, how our actions impact those who have less privilege than us, the most marginalized in our community,” they said. For example: “When thinking of where to host a conference, think about the people in one's community, even those who may be new contributors. Will they be safe in that location?” Listen to the episode to hear more of The New Stack’s conversation with Black.
7/13/2022 • 10 minutes, 9 seconds
What’s Next in WebAssembly?
AUSTIN, TEX. — What’s the future of WebAssembly — Wasm, to its friends — the binary instruction format for a stack-based virtual machine that allows developers to build in their favorite programming language and run their code anywhere? For Matt Butcher, CEO and founder of Fermyon Technologies, the future of Wasm lies in running it outside of the browser and inside of everything, from proxy servers to video games. And, he added, “the really exciting part is being able to run it in the cloud, as a cloud service alongside virtual machines and containers.” For this On the Road episode of The New Stack Makers podcast, Butcher was interviewed by Heather Joslyn, features editor of TNS. With key programming languages like Ruby, Python and C# adding support for WebAssembly’s new capabilities, Wasm is gaining critical mass, Butcher said. “What we're talking about now is the realization of the potential that's been around in WebAssembly for a long time. But as people get excited, and open source projects start to adopt it, then what we're seeing now is like the beginning of the tidal wave.” But before widespread adoption can happen, Butcher said, there’s still work to be done in preparing the environment for the next wave of Wasm: cloud computing. Along with other members of the Bytecode Alliance, such as Cosmonic, Fastly and Intel, Fermyon is working to improve the developer experience and environment this year. The next step, he added, is to “start to build this first wave of applications that really highlight where it can happen for us.” The rise of Wasm represents a new era in cloud native technology, Butcher noted. “We love containers. Many of us have been involved in the Kubernetes ecosystem for years and years. I built Helm originally; that's still, in a way, my baby. But also we're excited because now we're finding solutions to some problems that we didn't see get solved in the container ecosystem. 
And that's why we talk about it as sort of like the next wave.”

Wasm and a ‘Frictionless’ Dev Experience

Fermyon introduced its “frictionless” WebAssembly platform in June here at The Linux Foundation’s Open Source Summit North America. The platform, built on technologies including HashiCorp’s Nomad and Consul, enables the writing of microservices and web applications. Fermyon’s open source tool, Spin, helps developers push apps from their local dev environments onto the Fermyon platform. One aspect of Wasm’s future that Butcher highlighted in our Makers discussion is how it can be scalable while remaining light on the cloud resources it consumes. “Along with creating this great developer experience in a secure platform, we're also going to help people save money on their cloud costs, because cloud costs have just kind of ballooned out of control,” he said. “If we can be really mindful of the resources we use, and help the developer understand what it means to write code that can be nimble, and can be light on resource usage … The real objective is to make it so when they write code, it just happens to have those characteristics.” For those interested in taking WebAssembly for a spin, Fermyon has created an online game called Finicky Whiskers, intended to show how microservices can be reimagined with Wasm.
7/12/2022 • 13 minutes, 32 seconds
What Makes Wasm Different
VALENCIA, Spain — WebAssembly (Wasm) is among the hottest topics under the CNCF project umbrella. In this episode of The New Stack Makers podcast, recorded on the show floor of KubeCon + CloudNativeCon Europe 2022, Liam Randall, CEO and co-founder of Cosmonic, and Colin Murphy, senior software engineer at Adobe, discuss why Wasm’s future looks bright. A quintessential feature of Wasm is that it functions at a CPU level, not unlike Java or Flash. This means, Randall said, that Wasm “can run anywhere.” “Everybody can start using Wasm, which functionally works like a tiny CPU. You can even put WebAssembly inside other applications.” The fact that Wasm has a binary format (the .wasm file format) and can run close to the CPU, as C or C++ code does, makes it highly portable. “WebAssembly really is exciting because it gives us two fundamental things that are truly amazing: One is portability across a diverse set of CPUs and architectures, and even portability into other places, like into a web browser,” said Randall. “It also gives us a security model that's portable, and works the same across all of those different landscape settings.” This portability makes Wasm an excellent candidate for edge applications. Its inference capabilities for machine learning (ML) at the edge are particularly promising for workloads distributed across many different applications, Murphy said. Wasm is also particularly apt for collaboration in ML edge and other applications. “Collaborative experiences are what WebAssembly is really perfectly in position for," he continued. In many ways, the name “WebAssembly” is not intuitively reflective of its meaning. “WebAssembly is neither web nor assembly — so, it's a somewhat awkwardly named technology, but a technology that is worth looking into,” Randall said. “There are incredible opportunities for your internal teams to transform the way they do business, to save costs and be more secure, by adopting this new standard.”
7/7/2022 • 16 minutes, 23 seconds
The Social Model of Open Source
In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Julia Ferraioli, open source technical leader at Cisco’s open source programs office, spoke with The New Stack about some alternative ways to define what is and is not “open source.” When someone says something is “technically” open source, it’s usually to be snarky about a project that meets the legal criteria for open source but doesn’t follow its spirit. Ferraioli doesn’t think that the “classic” open source project, like Kubernetes or Linux, is the only valid model for open source. She gives the example of a research project: the code might be open sourced specifically so that others can see it and reproduce the results themselves. However, for the research to remain valid, it can’t accept any contributions. “It’s no less open source than others,” Ferraioli said about the hypothetical research project. “If you break things down by purpose, it’s not always that you’re trying to build the robust community.” The social model of open source, Ferraioli says, is about understanding the different use cases for open source, as well as providing a framework for determining appropriate success metrics based on a project’s motivations. And if you’re just doing a project with friends for laughs, well, quantifying fun isn’t going to be easy.
7/6/2022 • 11 minutes, 45 seconds
What’s the State of Open Source Security? Don’t Ask.
AUSTIN, TEX. — How safe is the open source software that virtually every organization uses? You might not want to know, according to the results of a survey released by The Linux Foundation and Snyk, a cloud native cybersecurity company, at the foundation’s annual Open Source Summit North America, held here in June. Forty-one percent of the more than 500 organizations surveyed don’t have high confidence in the security of the open source software they use, according to the research. Only half of participating companies said they have a security policy that addresses open source. Furthermore, it takes more than double the number of days — 98 — to fix a vulnerability compared to what was reported in the 2018 version of the survey. The research was conducted at the request of the Open Source Security Foundation (OpenSSF), a project of The Linux Foundation. For this On the Road episode of The New Stack Makers, Steve Hendrick, vice president of research at The Linux Foundation, and Matt Jarvis, director of developer relations at Snyk, were interviewed by Heather Joslyn, features editor at TNS. Despite the alarming statistics, Jarvis cautioned against treating all vulnerabilities as four-alarm fires. “Having a kind of zero-vulnerability target is probably unrealistic, because not all vulnerabilities are treated equal,” Jarvis said. Some “vulnerabilities” may not necessarily be a risk in your particular environment; it’s best to focus on the most critical threats to your network, applications and data. One bright spot in the new report: Nearly one in four respondents said they’re looking for resources to help them keep their open source software — and all that depends on it — safe. Perhaps even more relevant to vendors: 62% of survey participants said they are looking to use more intelligent security-focused tools. “There's a lot from a process standpoint that they are responsible for,” said Hendrick. 
“But they were very quick to jump on the bandwagon and say, we want the vendor community to do a better job at providing us tools, that makes our life a lot easier. Because I think everybody recognizes that solving the security problem is going to require a lot more effort than we're putting into it today.”

Jumping on the ‘SBOM Bandwagon’

Many organizations still seem confused about which of their open source software’s dependencies are direct and which are transitive (dependencies of dependencies), Hendrick said. One of the best ways to clarify things, he said, “is to get on the SBOM bandwagon.” Understanding an open source tool’s software bill of materials, or SBOM, is “going to give you great understanding of the components, it's going to give you usability, it's going to give you trust, you're gonna be able to know that the components are nonfalsified,” Hendrick said. “And so that's all absolutely key from the standpoint of being able to deal with the whole componentization issue that is going on everywhere today.” Additional results from the research, in which core project maintainers discussed their best practices, will be released in the third quarter of 2022. Listen to the podcast to learn more about the report’s results and what The Linux Foundation is doing to help upskill the IT workforce in cybersecurity.
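Hendrick's point about direct versus transitive dependencies is exactly what an SBOM makes mechanical: given the component list and its dependency edges, you can walk the graph and classify everything beneath your top-level packages. A minimal sketch follows; the dictionary shape and component names are invented for illustration (real SBOM formats such as SPDX and CycloneDX encode the same relationships, with much richer metadata).

```python
def classify_dependencies(sbom: dict) -> dict[str, str]:
    """Label every component reachable from the root as 'direct' or 'transitive'.

    `sbom["dependencies"]` maps each component to the components it depends on;
    `sbom["root"]` names the application's own entry. (This shape is invented
    for illustration, not taken from a real SBOM standard.)
    """
    graph = sbom["dependencies"]
    direct = set(graph.get(sbom["root"], []))
    labels: dict[str, str] = {}
    stack = list(direct)
    while stack:
        name = stack.pop()
        if name in labels:            # already classified (handles cycles)
            continue
        labels[name] = "direct" if name in direct else "transitive"
        stack.extend(graph.get(name, []))
    return labels

# Hypothetical app: two direct libraries, one of which pulls in log4j-core.
sbom = {
    "root": "my-app",
    "dependencies": {
        "my-app": ["web-framework", "json-lib"],
        "web-framework": ["log4j-core"],
    },
}
labels = classify_dependencies(sbom)
```

Run on the sample above, the walk flags `log4j-core` as transitive, which is precisely the kind of hidden exposure that made the Log4j response so slow for teams without SBOMs.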
7/5/2022 • 15 minutes, 48 seconds
A Boom in Open Source Jobs Is Here. But Who Will Fill Them?
AUSTIN, TEX. — Forty-one percent of organizations in a new survey said they expect to increase hiring for open source roles this year. But the study, released in June by The Linux Foundation and the online learning platform edX during the foundation’s Open Source Summit North America, also found that 93% of employers surveyed said they struggle to find the talent to fill those roles. At the Austin summit, The New Stack’s Makers podcast sat down with Hilary Carter, vice president for research at The Linux Foundation, who oversaw the study. She was interviewed for this On the Road edition of Makers by Heather Joslyn, features editor at The New Stack. “I think it's a very good time to be an open source developer; I think they hold all the cards right now,” Carter said. “And the fact that demand outstrips supply is nothing short of favorable for open source developers, to carry a bit of a big stick and make more demands and advocate for their improved work environments, for increased pay.” But even sought-after developers are feeling a bit anxious about keeping pace with the cloud native ecosystem’s constant growth and change. The open source jobs study found that roughly three out of four open source developers said they need more cybersecurity training, up from about two-thirds in the 2021 version of the report. “Security is the problem of the day that I think the whole community is acutely aware of, and highly focused on, and we need the talent, we need the skills,” Carter said. 
“And we need the resources to come together to solve the challenge of creating more secure software supply chains.” Carter also told the Makers audience about the role open source program offices, or OSPOs, can play in nurturing in-house open source talent, the impact a potential recession may (or may not) have on the tech job market, and new surveys in the works at The Linux Foundation to essentially map the open source community outside of North America. The first of these, a study of Europe’s open source communities, is slated to be released in September at Open Source Summit Europe in Dublin. Linux Foundation Research is currently fielding its annual survey of OSPOs, and is also working with the Cloud Native Computing Foundation on its annual survey of cloud native adoption trends.
7/1/2022 • 12 minutes, 51 seconds
Economic Uncertainty and the Open Source Ecosystem
In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Matt Yonkovit, head of open source at Percona, shared his thoughts on how economic uncertainty could affect the open source ecosystem. Open source, of course, is free. So what role does the economy play in whether open source software is contributed to, downloaded and used in production? “Generally, open source is considered a bit recession-proof,” Yonkovit said. But that doesn’t mean that things won’t change. Over the past several years, the number of open source companies has increased dramatically, and the amount of funding sloshing around in the ecosystem has been huge. That might change. And if the funding situation does change? “I think the big differentiator for a lot of people in the open source space is going to be the communities,” Yonkovit said. When we talk about having “backing,” it’s usually in reference to financial investors, but in open source the backing of a community is just as important. In the absence of deep pockets, a community of people who believe in the project can help it survive — and show that the idea is really solid. If you look back at the history of open source, Yonkovit said, it’s about people having an idea that inspires other people to contribute to make it a reality. Sometimes those ideas aren’t commercially viable, even in the best of times — even if they do get widespread adoption. The only thing that’s changing now is that financial investors are going to be a bit more picky, making sure the projects they fund aren’t just inspirational ideas but also commercially viable.
6/30/2022 • 14 minutes, 22 seconds
Inside a $150 Million Plan for Open Source Software Security
AUSTIN, TEX. — Everyone uses open source software, and it’s become increasingly apparent that not nearly enough attention has been paid to its security. In a survey released by The Linux Foundation and Snyk at the foundation’s Open Source Summit in Austin, Texas, this month, 41% of organizations said they aren’t confident in the security of the open source software they use. At the Austin event, The New Stack’s Makers podcast sat down with Brian Behlendorf, general manager of the Open Source Security Foundation (OpenSSF), to talk about a new plan to attack the problem from multiple angles. He was interviewed for this On the Road edition of Makers by Heather Joslyn, features editor at The New Stack. Behlendorf, who has led OpenSSF since October and serves on the boards of the Electronic Frontier Foundation and the Mozilla Foundation, cited the discovery of the Log4j vulnerabilities late in 2021, and other recent security “earthquakes,” as key turning points. “I think the software industry this year really woke up to not only the fact these earthquakes were happening,” he said, “but how it's getting more and more expensive to recover from them.” The Open Source Security Mobilization Plan sprang from an open source security summit in May. It identifies 10 areas that will be targeted for attention, according to the report published by OpenSSF and The Linux Foundation:

- Security education
- Risk assessment
- Digital signatures, such as through the open source Sigstore project
- Memory safety
- Incident response
- Better scanning
- Code audits
- Data sharing
- Improved software supply chains
- Software bills of materials (SBOMs) everywhere

The price tag for these initiatives over the initial two years is expected to total $150 million, Behlendorf told our Makers audience. The plan was sparked by queries from the White House about the various initiatives underway to improve open source software security — what they would cost, and the time frame the solution-builders had in mind. 
“We couldn't really answer that without being able to say, well, what would it take if we were to invest?” Behlendorf said. “Because most of the time we sit there, we wait for folks to show up and hope for the best.” The ultimate price tag, he said, was much lower than he expected it would be. Various member organizations within OpenSSF have pledged funding. “The 150 was really an estimate. And these plans are still being refined,” Behlendorf said. But by stating specific steps and their costs, he feels confident that interested parties will follow through when it comes time to make good on those pledges. Listen to the podcast to get more details about the Open Source Security Mobilization Plan.
6/28/2022 • 12 minutes, 59 seconds
Counting on Developers to Lead Vodafone’s Transformation Journey
British telecommunications provider Vodafone, which owns and operates networks in over 20 countries and is on a journey to become a tech company focused on digital services, plans to hire thousands of software engineers and developers who can help put the company on the cloud native track and open up its network through APIs. In this episode of The New Stack Makers podcast, recorded at MongoDB World 2022 in New York City, Lloyd Woodroffe, global product manager at Vodafone, shares how the company is working with MongoDB on the development of a Telco as a Service (TaaS) platform to help its engineers increase their software development velocity and drive adoption of best-practice automation within DevSecOps pipelines. Alex Williams, founder of The New Stack, hosted this podcast. Vodafone has built a backbone to keep the business resilient and scalable, but one thing it is looking to do now is innovate and give its developers the freedom and flexibility to develop creatively. “The TaaS platform – which is the product we’re building – is essentially a developer-first framework that allows developers at Vodafone to build things that you think could help the business grow. But because we’re an enterprise, we need security and financial assurance, and TaaS is the framework that allows us to do it in a way that gives developers the tools they need but also the security we need,” said Woodroffe. The idea of reuse as part of an inner sourcing model is key as Vodafone scales. The company’s key initiative, ‘one source,’ enables its developers to incorporate such a strategy: “We have a single repository across all our markets and teams where you can publish your code, and other teams from other countries can take that code, reuse it, and implement it into their applications,” said Woodroffe. 
“In terms of outsourcing to the community, our engineers want to start productizing APIs and build new, innovative applications, which we'll see in a bit,” he added. “The TaaS developer platform that we’re building with MongoDB acts as our service registry for the platform. When you provision the tools for the developer, we register the organizations, the cost center and guardrails that we’ve set up from a security and finance perspective,” said Woodroffe. “Then we provision MongoDB for the developers to use as their database of choice.” “What we'll see ultimately, as the developer has access to these tools [TaaS] and products more, is they'll be able to build new innovations that can be utilized through our network via APIs,” Woodroffe said.
6/21/2022 • 13 minutes, 27 seconds
Pulumi Pursues Polyglotism to Expand Impact of DevOps
VALENCIA – The goal of DevOps was to break down silos between software development and operations. A side effect has been the blurring of lines between dev and ops, for better or for worse: the role of the software developer keeps expanding, causing cognitive overload and burnout. This is why the developer tooling market has exploded with tools that automate and assist developers right when and where they need to build, in whatever language they already know. In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, Matty Stratton, staff developer advocate at Pulumi, talks about the now nearly universal practice of Infrastructure as Code and its impact on both dev and ops teams. Earlier this May, Pulumi released updates that took the platform closer to becoming a truly polyglot way to enforce cloud best practices, including support for the full Java ecosystem; YAML; Crosswalk for Amazon Web Services (AWS) in all Pulumi languages; and deploying the AWS Cloud Development Kit (CDK) in all Pulumi languages. These are significant updates because they dramatically expand the languages available in this low-code way of creating, deploying and managing infrastructure on any cloud. "A lot of times, in Infrastructure as Code, we're using a domain-specific language or a config file. We call it Infrastructure as Code and are not actually writing any code. So I like to think about Pulumi as Infrastructure as Software." For Stratton, that means writing Pulumi code in a general-purpose programming language, like TypeScript, Python, Go, the .NET languages, or now Java. 
"The great thing about that is, not only do you maybe already know this programming language, because that's the language you use to build your applications, but you're able to use all the things that a programming language has available to it, like conditionals, and loops, and packages, and testing tools, and an IDE [integrated development environment] and a whole ecosystem. So that makes it a lot more powerful, and gives us a lot of great abstractions we can use," he continued. Pulumi now follows the low-code development trend where, Stratton says, "We're enabling people to solve a problem with just enough tech" – but specifically in their common coding language, to limit the tool onboarding needed. This is attractive not only to new customers but also as a way to expand Pulumi adoption across organizations without much change to the way teams already work; it simply makes it easier to work together. "I've been part of the DevOps community for a long time. And all that I want to see out of DevOps and all of this work is: how do we collaborate better together? How do we be more cross-functional?"
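Stratton's point about loops and conditionals is easy to picture. The sketch below is deliberately not the Pulumi SDK (the resource names and fields are invented); it only illustrates how a general-purpose language replaces copy-pasted config with ordinary code:

```python
# Illustrative only: plain-Python stand-in for "Infrastructure as Software".
# One function stamps out per-environment resource definitions, with a
# conditional tweak for production. Names and fields here are hypothetical.

def bucket_definitions(environments):
    """Build one storage-bucket definition per environment."""
    buckets = []
    for env in environments:  # a loop replaces copy-pasted config blocks
        buckets.append({
            "name": f"app-logs-{env}",
            # a conditional replaces a second, near-duplicate template
            "versioning": env == "prod",
        })
    return buckets

defs = bucket_definitions(["dev", "staging", "prod"])
```

In an actual Pulumi program the dicts would be resource constructors in TypeScript, Python, Go, a .NET language, or Java, but the loop and the conditional look just the same.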
6/21/2022 • 17 minutes, 5 seconds
Unlocking the Developer
Proper tooling is perhaps the primary key to unlocking developer productivity. With the right tools and frameworks, developers can be productive in minutes versus having to toil over boilerplate code. And as data-hungry use cases such as AI and machine learning emerge, data tooling is becoming paramount. This was evident at the recent MongoDB World conference in New York City, where TNS founder and publisher Alex Williams recorded this episode of The New Stack Makers podcast featuring Peggy Rayzis, senior director of developer experience at Apollo GraphQL; Lee Robinson, vice president of developer experience at Vercel; Ian Massingham, vice president of developer relations and community at MongoDB; and Søren Bramer Schmidt, co-founder and CEO of Prisma, discussing how their companies’ offerings help unlock developer productivity. Apollo GraphQL and Supergraphs: Apollo GraphQL unlocks developers by helping them build supergraphs, Rayzis said. A supergraph is a unified network of a company's data services and capabilities, accessible via a consistent and discoverable place that any developer can reach with a GraphQL query. GraphQL is a query language for communicating about data. “And what's really great about the supergraph is even though it's unified, it's very modular and incrementally adoptable. So you don't have to rewrite all of your backend systems and APIs,” she said. “What's really great about the supergraph is you can connect your legacy infrastructure, like your relational databases, and connect that to a more modern stack, like MongoDB Atlas, for example, or even connect it to a mainframe, as we've seen with some of our customers. And it brings that together in one place that can evolve over time. 
And we found that it just makes developers so much more productive, helps them shave months off of their development time and create experiences that were impossible before.” Vercel: Strong Defaults: Meanwhile, Robinson touted the virtues of Next.js, Vercel’s popular React-based framework, which provides developers with the tools and the production defaults to make a fast web experience. The goal is to enable frontend developers to move from an idea to a global application in seconds. Robinson said he believes it’s important for a tool or framework to have good, strong defaults, but also to be extensible, so that developers can make changes without having to fully eject out of the tool they're using – customizing without leaving their framework, library or tool of choice. “If you can provide that great experience for the 90% use case by default, but still allow maybe the extra 10% power developer who needs to modify something without having to just rewrite from scratch, you can go pretty far,” he said. Data Tooling: When it comes to data tooling, MongoDB is trying to help developers manipulate and work with data in a more productive and effective way, Massingham said. One of the ways MongoDB does this is through the provision of first-party drivers, he said. The company offers 12 different programming language drivers for MongoDB, covering everything from Rust to Java, JavaScript, Python, etc. “So, as a developer, you’re importing a library into your environment,” Massingham said. “And then rather than having to construct convoluted SQL statements – essentially learning another language to interact with the data in your database or data store – you're going to manipulate data idiomatically using objects or whatever other constructs are normal within the programming language that you're using. 
It just makes it way simpler for developers to interact with the data that's stored in MongoDB versus interacting with data in a relational database.” MongoDB and Prisma: Bramer Schmidt said that while a truism in software engineering holds that code moves fast and data moves slow, we are now starting to see more innovation in the data tooling space. “And Mongo is a great example of that,” he said. “Mongo is a database that is much nicer to use for developers; you can express more different data constructs, and Mongo can handle things under the hood.” Moreover, Prisma is also innovating around the developer experience for working with data, making it easier for developers to build applications that rely on data, and to do so faster, Bramer Schmidt said. “The way we do that in Prisma is we have the tooling introspect your database. It will go and sample documents in MongoDB, and then generate a schema based on that, and then it will pull that information into your development environment, such that when you write queries, you will get autocompletion, and the IDE will tell you if you're making a mistake,” he said. “You will have that confidence in your environment instead of having to look at the documentation, trying to remember what fields are where or how to do things. So that is increasing the confidence of the developer, enabling them to move faster.”
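Massingham's contrast between SQL strings and idiomatic objects is the heart of the driver experience: in MongoDB's drivers, a query filter is a native data structure, such as a Python dict. The toy in-memory matcher below is not the real driver API; it is only a sketch of that query-as-data idea, supporting equality plus `$gt`/`$lt` operators:

```python
# Toy illustration (not the actual MongoDB driver): a filter is a plain
# dict, optionally with operators like {"$gt": 30}, instead of a SQL
# string assembled by hand. Only equality, $gt and $lt are handled here.

def matches(doc, query):
    """Return True if a document satisfies a MongoDB-style filter dict."""
    for field, cond in query.items():
        if isinstance(cond, dict):  # operator form, e.g. {"$gt": 30}
            for op, value in cond.items():
                if op == "$gt" and not doc.get(field, 0) > value:
                    return False
                if op == "$lt" and not doc.get(field, 0) < value:
                    return False
        elif doc.get(field) != cond:  # plain equality form
            return False
    return True

people = [{"name": "Ada", "age": 36}, {"name": "Grace", "age": 29}]
adults_over_30 = [p for p in people if matches(p, {"age": {"$gt": 30}})]
```

A real driver sends the same dict shape over the wire, which is why the query "feels like" the host language rather than a second language embedded in strings.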
"Developers aren't cryptographers. We can only do so much security training, and frankly, they shouldn't have to make hard choices about this encryption mode or that encryption mode. It should just, like, work," said Kenneth White, a security principal at MongoDB, explaining the need for MongoDB's new Queryable Encryption feature. In this latest edition of The New Stack Makers podcast, we discuss MongoDB's new end-to-end client-side encryption, which allows an application to query an encrypted database while keeping the queries in transit encrypted – an industry first, according to the company. White discussed this technology in depth with TNS publisher Alex Williams, in a conversation recorded at MongoDB World, held last week in New York. MongoDB has offered the ability to encrypt and decrypt documents since MongoDB 4.2, though this release is the first to allow an application to query the encrypted data. Developers with no expertise in encryption can write apps that use this capability on the client side, and the capability itself (available in preview mode for MongoDB 6.0) adds no noticeable overhead to application performance, the company claims. Data remains encrypted at all times, even in memory and in the CPU; the keys never leave the application and cannot be accessed by the server. Nor can the database or cloud service administrator look at the raw data. For organizations, queryable encryption greatly expands the utility of using MongoDB for all sorts of sensitive and secret data. Customer service reps, for instance, could use it to help customers with issues involving sensitive data, such as Social Security numbers or credit card numbers. In this podcast, White also spoke about the considerable engineering effort required to make this technology possible – and to make it easy for developers to use. 
"In terms of how we got here, the biggest breakthroughs weren't cryptography; they were the engineering pieces, the things that make it so that you can scale to do key management, to do indexes that really have these kinds of capabilities in a practical way," White said. It was necessary to serve a user base that needs maximum scalability in its technologies. Many have "monster workloads," he noted. "We've got some customers that have over 800 shards, meaning 800 different physical servers around the world for one system. I mean, that's massive," he said. "So a lot of the engineering over the last year and a half [has been] to sort of translate those math and algorithm techniques into something that's practical in the database."
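MongoDB's Queryable Encryption rests on far more sophisticated structured-encryption techniques, but the basic puzzle of matching data the server cannot read can be sketched with a keyed hash, sometimes called a blind index. In this toy illustration (emphatically not MongoDB's design; the key handling and field names are invented), the client stores an HMAC of each value next to the ciphertext, and equality queries compare HMACs, so the server never sees plaintext:

```python
import hashlib
import hmac

# Hypothetical client-held key; in a real system keys live in a key vault
# and never reach the server.
SECRET_KEY = b"client-side-key"

def blind_index(value: str) -> str:
    """Keyed hash of a value; the server stores and compares only this."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

# The "server" stores blind indexes alongside opaque ciphertext (elided).
server_rows = [
    {"ssn_idx": blind_index("123-45-6789"), "ciphertext": b"..."},
    {"ssn_idx": blind_index("987-65-4321"), "ciphertext": b"..."},
]

# Client-side equality query: hash the plaintext, match on the hash.
needle = blind_index("123-45-6789")
hits = [row for row in server_rows if row["ssn_idx"] == needle]
```

This only supports equality and leaks match patterns, which is exactly the kind of trade-off MongoDB's engineering work on indexes and key management was aimed at improving upon.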
6/16/2022 • 17 minutes, 23 seconds
Simplifying Cloud Native Application Development with Ballerina
For the past six years, WSO2 has been developing Ballerina, an open source programming language that streamlines the writing of new services and APIs. It aims to simplify the process of using, combining, and creating network services, and of getting highly distributed applications to work together toward a determined outcome. In this episode of The New Stack Makers podcast, Eric Newcomer, chief technology officer of WSO2, discusses how the company created a new programming language from the ground up, and its plans for Ballerina to become a predominant cloud native language. Darryl Taft, news editor of The New Stack, hosted this podcast. Founded on the idea that development involving integration had become too hard, Ballerina was created for programming in highly distributed environments. “Cloud computing is an evolution of distributed computing, of integration. You're talking about microservices and APIs that need to talk to each other in the cloud,” said Newcomer. “And what Ballerina does is it thinks about what functions outside of the program need to be talked to,” he added. Developers can easily pick up Ballerina to create cloud applications. The language design is informed by TypeScript and JavaScript, but with some additional capabilities, Newcomer said. “Developers can create records and schemas for JSON payloads in and out to support the APIs for cloud, mobile or web apps, and it has concurrency for concurrent processing of multiple calls, transaction control, but in a very familiar syntax, like TypeScript or JavaScript.” WSO2 is using Ballerina in the company’s low-code-like offering, Choreo, which includes features such as the ability to create diagrams. “The long-time challenge in the industry is how do you represent your programming code in a graphical form. [Sanjiva Weerawarana, founder of WSO2] has solved this problem by putting into the language syntax elements from which you can create diagrams. 
And he did it in such a way that you can edit the diagram and create code,” said Newcomer. Engineering for the cloud requires a programming language that can reengineer applications to achieve autoscaling, resiliency, and independent agility, said Newcomer, and WSO2 is continuing to push its work forward to tackle this challenge. “We're thinking Choreo is going to help us because it's leveraging the magic of Ballerina to help people get their job done faster. Once they see that, they'll see Ballerina and get the benefits of it,” Newcomer said.
6/7/2022 • 32 minutes, 17 seconds
The Future of Open Source Contributions from KubeCon Europe
VALENCIA – Open source code is part of at least 70% of enterprise stacks. Yet a lot of open source contributors are still unpaid volunteers. Even more than tech as a whole, the future of open source relies on the community. Unless you're among the top-tier funded open source projects, your sustainability relies on building a community – whether you want to or not – and cultivating project leadership to help recruit new maintainers – whether you want to hand over the reins or not. That's where the Technical Advisory Group (TAG) on Contributor Strategy comes in, acting as maintainer relations for the Cloud Native Computing Foundation (CNCF). In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, we talk to Dawn Foster, VMware's director of open source community strategy; Josh Berkus, Red Hat's Kubernetes community manager; Catherine Paganini, head of marketing and community at Buoyant, the creator of Linkerd; and Deepthi Sigireddi, a software engineer at PlanetScale and a maintainer of Vitess. Foster and Berkus are the co-chairs of the Contributor Strategy TAG; Linkerd and Vitess are both CNCF graduated projects. Each guest brought unique experience in open source contribution and leadership to a conversation about the contributor experience, sustainability, governance, and guidance. With 65% of KubeCon EU attendees at a CNCF event for the first time – albeit still during a pandemic – the signal for the future of open source is uncertain: it could mean a burst of newcomer interest, or a dwindling interest in long-term contributions. CNCF executive director Priyanka Sharma even noted in her keynote that contributions to the foundation's biggest project, Kubernetes, have grown stagnant. “I see it as a positive thing. I think it's always good to get some new blood into the community. 
And I think, you know, the projects are working to do whatever they can to get new contributors,” Foster said. But it's not just about how many contributors there are; it's about who they are. One thing that was glaringly apparent at the event was the lack of diversity, with the vast majority of the 7,000 KubeCon EU participants being young, white men. This isn't surprising: open source still depends on a lot of voluntary work, which naturally excludes those most marginalized within the tech industry and society. It's why, according to GitHub's State of the Octoverse, open source sees only about 4% women and nonbinary contributors, and only about 2% of contributors from the African continent. If open source is such an integral part of tech's future, that future is being built with more inequity than ever before. “The barrier to entry to open source right now is having free time. And to do free work? Yes, and let's face it, women still do a lot of childcare, a lot of housework, much more than men do, and they have less free time.” Sigireddi continued that there are other factors that discourage those widely underrepresented in tech from participating, including “not having role models, not seeing people who look like you; the communities tend to have in-jokes [and other] things that are cultural, which minorities may not be able to relate to.” Most open source code, while usually forked globally, exists in English only. One message throughout KubeCon EU was that if a company relies on an open source project, it should pay some of its staff to contribute to and support that project, because its business may depend on it. This in turn would help bring open source closer to even the still abysmal diversity statistics of the tech industry as a whole. “I think from an ecosystem perspective, companies paying people to do the work on open source makes a big difference,” Foster said. “At VMware, we pay lots of people who work primarily on upstream open source projects. 
And I think that does help us get more diversity into the community, because then people can do it as part of their regular day jobs.” Encouraging contributors who are underrepresented in open source to speak up and become more visible representatives of projects is another way to attract more diverse contributors. Berkus said the Contributor Strategy TAG had a meeting at KubeCon EU with a group of primarily Italian women who have started an inclusiveness effort, beginning with things like speaker coaching and placement. “It turns out that a lot of things that you need to do to have more diverse contributors are things you actually needed to do anyway, just to make things better for all new contributors,” Berkus explained. Indeed, welcoming new open source contributors – at all levels, and in both technical and non-technical roles – is an important focus of the TAG. Paganini, along with colleague Jason Morgan, is co-author of the CNCF Landscape Guide, which acts as a welcome to the massive, overwhelming cloud native landscape. What she has found is that people will use an open source technology, but they will contribute to it because of the community. “We see a lot of projects really focusing on code and docs, which of course is the basics, but people don't come for the technology per se. You can have the best technology, it's amazing, and people are super excited, but if the community isn't there, if they don't feel welcome,” they won't stick around, Paganini said. “People want to be part of a tribe, right?” Then, once you've successfully recruited and onboarded your community, you've got to work to not only retain but promote from within. All this and more is jam-packed into this lively discussion that cannot be missed! More on open source diversity and inclusion efforts: Beat Affinity Bias with Open Source Diversity and Inclusion Open Source Communities Need More Safe Spaces and Codes of Conducts. Now. 
WTF is Wrong with Open Source Communities Look Past the Bros, and Concerns About Open Source Inclusion Remain How to Give and Receive Technical Help in Open Source Communities Navigating the Messy World of Open Source Contributor Data How to Find a Mentor and Get Started in Open Source
6/1/2022 • 18 minutes, 30 seconds
Simplifying Kubernetes through Automation
VALENCIA, SPAIN — Managing the cloud virtual machines (VMs) your containers run on. Running data-intensive workloads. Scaling services in response to spikes in traffic, but doing so in a way that doesn’t jack up your organization’s cloud spend. Kubernetes (K8s) seems so easy at the beginning, but it brings challenges that ratchet up complexity as you go. The cloud native ecosystem is filling up with tools aimed at making these challenges easier on developers, data scientists and ops engineers. Increasingly, automation is the secret sauce helping teams and their companies work faster, safer and more productively. In this special On the Road edition of The New Stack Makers podcast, recorded at KubeCon + CloudNativeCon EU, we unpacked some of the ways automation helps simplify Kubernetes. We were joined by a trio of guests from Spot.io by NetApp: Jean-Yves “JY” Stephan, senior product manager for Ocean for Apache Spark, along with Gilad Shahar and Yarin Pinyan, product manager and product architect, respectively, for Spot.io. Until recently, Stephan noted, Apache Spark, the open source unified analytics engine for large-scale data processing, couldn’t be deployed on Kubernetes. “So all these regular software engineers were getting the cool technology with Kubernetes, cloud native solutions,” he said. “And the big data engineers, they were stuck with technologies from 10 years ago.” Spot.io, he said, lets Apache Spark run atop Kubernetes: “It’s a lot more developer friendly, it’s a lot more flexible and it can also be more cost effective.” The company’s Ocean CD, expected to be generally available in August, is aimed at solving another Kubernetes problem, said Pinyan: canary deployments. 
“Previously, if you were running normal VMs, without Kubernetes, it was pretty easy to do canary deployments, because you had to scale up a VM and then see if the new version worked fine on it, and then gradually scale the others,” he said. “In Kubernetes, it’s pretty complex, because you have to deal with many pods and deployments.” In enterprises, where DevOps and SRE team members are likely serving multitudes of developers, automating as much toil as possible for devs is essential, said Shahar. For instance, Spot.io’s tools allow users to “break the configuration into parts,” he said, giving developers whatever share of responsibility for the config is deemed best for their use case. “We try to design our solutions in a way that will allow the DevOps [team] to set things once and basically provide pre-baked solutions for the developers,” he said. “Because the developer, at the end of the day, knows best what their application will require.”
6/1/2022 • 14 minutes, 32 seconds
One of Europe’s Largest Telcos’ Cloud Native Journey
Telecoms are not necessarily associated with adopting new-generation technologies. However, Deutsche Telekom has invested considerably in cloud native environments, creating and supporting Kubernetes clusters to support its operations infrastructure. In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, DevOps engineers Christopher Dziomba and Samy Nitsche of Deutsche Telekom discuss how one of Europe’s largest telecom providers made the shift to cloud native. Deutsche Telekom obviously didn’t start from scratch. It had decades’ worth of telecom infrastructure and networks that all needed to be integrated into the new world of Kubernetes. This involved a lot of “discussion with the other teams,” Dziomba said. “We had to work together [with other departments] to see how we wanted to manage legacy integration and, especially, policy and process integration,” Dziomba said. As it turned out, many of the existing services Deutsche Telekom offered were conducive to integration into the distributed Kubernetes infrastructure. “It was suited to be deployed on something like Kubernetes,” Dziomba said. “The decision was also made to build the Kubernetes platform ourselves inside Deutsche Telekom, and not to buy one. This really facilitated the move towards cloud native infrastructure.” The shift also heavily involved the vendors that were “coming from the old route,” Nitsche said. “It's sometimes a challenge to make sure that the application is really also cloud native, and to make sure it can use all the benefits Kubernetes offers.”
6/1/2022 • 16 minutes, 41 seconds
OpenTelemetry Gets Better Metrics
OpenTelemetry is defined by its creators as a collection of APIs used to instrument, generate, collect and export telemetry data for observability. This data takes the form of metrics, logs and traces, and the project has emerged as a popular one within the CNCF. For this interview, we delve deeper into OpenTelemetry and its metrics support, which has just become generally available. The specifications for the metrics protocol are designed to connect metrics to other signals, to provide a migration path from OpenCensus so that its users can move to OpenTelemetry, and to work with existing metrics-instrumentation protocols and standards, including, of course, Prometheus. In this episode of The New Stack Makers podcast, recorded on the show floor of KubeCon + CloudNativeCon Europe 2022 in Valencia, Spain, Morgan McLean, director of product management at Splunk; Ted Young, director of developer education at LightStep; and Daniel Dyla, senior open source architect at Dynatrace, discussed how OpenTelemetry is evolving, and the magic of observability in general for DevOps.
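At its simplest, the metrics signal records numeric measurements tagged with attributes, and the SDK aggregates them (for a counter, into per-attribute-set sums) before export. The sketch below is not the OpenTelemetry API; it is only a toy model of what a counter instrument does, with invented metric and attribute names:

```python
from collections import defaultdict

# Toy model of a metrics "counter" instrument (not the real OpenTelemetry
# API): each measurement carries attributes, and the SDK aggregates
# measurements into sums per attribute set before exporting them.

class ToyCounter:
    def __init__(self, name):
        self.name = name
        self.sums = defaultdict(int)

    def add(self, value, attributes=None):
        # Freeze the attribute dict into a hashable key, as real SDKs do.
        key = tuple(sorted((attributes or {}).items()))
        self.sums[key] += value

requests = ToyCounter("http.server.requests")
requests.add(1, {"route": "/home", "status": 200})
requests.add(1, {"route": "/home", "status": 200})
requests.add(1, {"route": "/login", "status": 500})
```

An exporter would then periodically ship these aggregated sums to a backend such as Prometheus, which is the interoperability the metrics specification is designed to preserve.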
5/25/2022 • 20 minutes, 11 seconds
Living with Kubernetes After the 'Honeymoon' Ends
Nearly seven years after Google released Kubernetes, the open source container orchestrator, into an unsuspecting world, 5.6 million developers worldwide use it. But that number, from the latest Cloud Native Computing Foundation (CNCF) annual survey, masks a lot of frustration. Kubernetes (K8s) can make life easier for the organization that adopts it — after first making it a lot harder. And as it scales, it can create an unending cadence of triumph and challenge. In other words: it’s complicated. At KubeCon + CloudNativeCon EU in Valencia, Spain, last week, a trio of experts — Saad Malik, chief technology officer and co-founder of Spectro Cloud; Bailey Hayes, principal software engineer at SingleStore; and Fabrizio Pandini, a staff engineer at VMware — joined Alex Williams, founder and publisher of The New Stack, and me for a livestream event.
5/25/2022 • 49 minutes, 30 seconds
Kubernetes and the Cloud Native Community
The pandemic significantly accelerated the adoption of Kubernetes and cloud native environments as a way to accommodate the surge in remote workers and other infrastructure constraints. Since then, organizations with cloud native infrastructure already in place have retained those investments, having found cloud native well worth maintaining. Meanwhile, Kubernetes adoption continues on an upward curve. And yet, needless to say, challenges remain. In this context, we look at the status of cloud native adoption, and of Kubernetes in particular, compared to a year ago. In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, we discussed these themes, along with the state of Kubernetes and its community, with James Laverack, staff solutions engineer at Jetstack and a member of the Kubernetes release team, and Christoph Blecker, site reliability engineer at Red Hat and a member of the Kubernetes steering committee.
5/25/2022 • 15 minutes, 42 seconds
Go Language Fuels Cloud Native Development
Go was created at Google in 2007 to improve programming productivity in an era of multicore networked machines and large codebases. Since then, engineering teams across Google, and across the industry, have adopted Go to build products and services at massive scale; in the Cloud Native Computing Foundation, over 75% of projects are written in the language. In this episode of The New Stack Makers podcast, Steve Francia, head of product for the Go language at Google, an alumnus of MongoDB and Docker, and a Drupal board member, discusses the programming language, the new features in Go 1.18, and why Go continues on a path of accelerated adoption with developers. Darryl Taft, news editor of The New Stack, hosted this podcast. In the State of Developer Ecosystem 2021 survey, Go ranked in the top five languages that developers planned to adopt, and it continues to be one of the fastest-growing languages. According to Francia, it was created to see whether a new systems programming language could be built that compiles quickly, with security as a top focus. With developers coming and going at Google, the simplicity and scalability of the language enabled many to contribute across several projects at any given time. “The influence that separates Go from most languages is the experience of the creators behind it, who all came to build it with their collective experience,” Francia said. Today, “Go is influencing a lot of the mainstream languages. Elements of it can be found in a tool that formats everyone’s source code to be identical and more readable. Since then, a lot of languages have adopted that same practice,” said Francia. “And then there’s Rust. Go and Rust are on parallel tracks, and we're learning from each other. There's also a new language called V that has recently been open sourced, which is the first major language inspired by Go,” Francia said. The latest release, Go 1.18, was the language’s biggest yet. 
“It included four major features, each of which you could build a release around,” said Francia. In this release, “generics is the biggest change to the Go language, and has been in the works for 10 years,” Francia added. “Because we knew that generics have the potential to make a language more complicated, we spent a long time going through different proposals,” he said. Fuzzing, workspaces and performance improvements were three other features in this version of Go. “From improving our documentation and learning – you can go to go.dev/learn/ to get the latest resources – we’re really focused on the broad view of the developer experience,” Francia said. “And in the future, we're seeing not our team so much as the community taking Go in new ways,” he added.
5/17/2022 • 30 minutes, 48 seconds
Svelte and the Future of Front-end Development
First released in 2016, the Svelte Web framework has steadily gained popularity as an alternative approach to building Web applications, one that prides itself on being more intuitive (and less verbose) than the current framework du jour, Facebook's React. You could say that it reaches back to the era before the web app — when desktop and server applications were compiled — to make the web app easier to develop and more enjoyable to use. In this latest episode of The New Stack Makers podcast, we interview the creator of Svelte himself, Rich Harris. Harris started out not as a web developer but as a journalist, and created the framework to do immersive web journalism, so we were interested in that story. In addition to delving into history, we also discussed the current landscape of Web frameworks, the Web's Document Object Model, the way React.js updates variables, the value of TypeScript, and the importance of SvelteKit. We also chatted about why Vercel, where Harris now works maintaining Svelte, wants to make a home for the framework. TNS Editor Joab Jackson hosted this conversation. Below are a few excerpts from our conversation, edited for brevity and clarity. So set the stage for us. What was the point that inspired you to create Svelte? To fully tell the story, we need to go way back into the mists of time, back to when I started programming. My background is in journalism. And about a decade ago, I was working in a newsroom at a financial publication in London. I was very inspired by some of the interactive journalism that was being produced at places like The New York Times, but also the BBC and the Guardian and lots of other news organizations, where they were using Flash and, increasingly, JavaScript to tell these data-rich interactive stories that couldn't really be done any other way. And to me, this felt like the future of journalism; it was using the full power of the web platform as a storytelling medium in a way that just hadn't been done before. 
And I was very excited about all that, and I wanted a piece of it. So I started learning JavaScript with the help of some friends, and discovered that it's really difficult. Particularly if you're doing things that have a lot of interactivity. If you're managing lots of state that can be updated in lots of different ways, you end up writing what is often referred to as spaghetti code. And so I started building a toolkit, really, for myself. And this was a project called Ractive, short for interactive, something out of a Neal Stephenson book, in fact. And it actually got a little bit of traction; it was never huge, but you know, it was my first foray into open source, and it got used in a few different places. And I maintained that for some years, and eventually I left that company and joined the Guardian in the U.K. And we used Ractive to build interactive pieces of journalism there. I transferred to the U.S. to continue at the Guardian in New York, and we used Ractive quite heavily there as well. After a while, though, it became apparent that, you know, as with many frameworks of that era, it had certain flaws. A lot of these frameworks were built for an era in which desktop computing was prevalent. And we were now firmly in this age of mobile-first web development. And these frameworks weren't really up to the task, primarily because they were just too big, too bulky and too slow. And so in 2016, I started working on what was essentially a successor to that project. And we chose the name Svelte because it has all the right connotations: it's elegant, it's sophisticated. And the idea was to basically provide the same kind of development experience that people were used to, but change the way that translated into the experience end users have when they run it in the browser. It did this by adopting techniques from the compiler world. 
The code that you write doesn't need to be the code that actually runs in the browser. Svelte was really one of the first frameworks to lean into the compiler paradigm. And as a result, we were able to do things with much less JavaScript, and in a way that was much more performant, which is very important if you're producing these kinds of interactive stories that typically involve a lot of data and a lot of animation. Can you talk a bit more about the compiler aspect? How does that work with a web application or web page? So, you know, browsers run JavaScript. And nowadays they can run WASM, too. But JavaScript is the language that you need to write stuff in if you want to have interactivity on a web page. But that doesn't mean that you need to write JavaScript. If you can design a language that allows you to describe user interfaces in a more natural way, then the compiler can turn that intention into the code that actually runs. And so you get all the benefits of declarative programming, but without the drawbacks that historically have accompanied that. There is this trade-off that historically existed: the developer wants to write this nice, state-driven declarative code, and the user doesn't want to have to wait for this bulky JavaScript framework to load over the wire, and then to do all of this extra work to translate your declarative intentions into what actually happens within the browser. And the compiler approach basically allows you to square that circle. It means that you get the best of both worlds: you're maximizing the developer experience without compromising on the user experience. Stupid question: As a developer, if I'm writing JavaScript code, at least initially, how do I compile it? So pretty much every web app has a build step. 
It is possible to write web applications that do not involve a build step: you can just write JavaScript, and you can write HTML, and you can import the JavaScript into the HTML and you've got a web app. But that approach really doesn't scale, much as some people will try and convince you otherwise. At some point, you're going to have to have a build step so that you can use libraries that you've installed from NPM, or so that you can use things like TypeScript to optimize your JavaScript. And so Svelte fits into your existing build step. If you have components that are written in Svelte files (it's literally a .svelte extension), then during the build step those components will get transformed into JavaScript files. Svelte seemed to take off right around the time we heard complaints about Angular.js. Did the frustrations around Angular help the adoption of Svelte? Svelte hasn't been a replacement for Angular, because Angular is a full-featured framework. It wants to own the entirety of your web application, whereas Svelte is really just a component framework. So on the spectrum, you have things that are very focused on individual components, like React and Vue.js and Svelte. And at the other end of the spectrum, you have frameworks like Angular and Ember. And historically, you had to do the work of taking your component framework and figuring out how to build the rest of the application, unless you were using one of these full-featured frameworks. Nowadays, that's less true, because we have things like Next.js and Remix. And we on the Svelte team are currently working on SvelteKit, which is the answer to that question of how do I actually build an app with this? I would attribute the growth in popularity of Svelte to different forces. Essentially, what happened is it trundled along with a small but dedicated user base for a few years. 
And then in 2019, we released version three of the framework, which really rethought the authoring experience: the syntax that you use to write components, and the APIs that are available. Around that time, I gave a couple of conference talks around it. And that's when it really started to pick up steam. Now, of course, we're growing very rapidly, and we're consistently at the top of developer-happiness surveys. And so now a lot of people are aware of us, but we're still a very tiny framework compared to the big dogs like React and Vue. You have said that part of the Svelte mission has been to make web development fun. What are some of Svelte's attributes that make it less aggravating for the developer? The first thing is that you can write a lot less code. If you're using Svelte, then you can express the same concepts with typically about 40% less code. There's just a lot less ceremony, a lot less boilerplate. We're not constrained by JavaScript. For example, to use state inside a component with React, you have to use hooks, and there's this slightly idiosyncratic way of declaring a local piece of state inside the component. With Svelte, you just declare a variable. And if you assign a new value to that variable, or if it's an object and you mutate that object, then the compiler interprets that as a sign that it needs to update the component.
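The assignment-driven reactivity Harris describes can be sketched conceptually. The TypeScript below is a hypothetical illustration of the compiler idea, not actual Svelte output: the author writes a plain assignment, and the compiler wraps it so that each assignment is followed by a direct, targeted update call, with no virtual DOM diffing.

```typescript
// Hypothetical sketch: what a compiler-based framework conceptually
// generates from declarative source such as
//   <button on:click={() => count += 1}>{count}</button>

type Render = (count: number) => void;

function createCounter(render: Render) {
  let count = 0;
  return {
    // In real Svelte this wrapper is generated by the compiler;
    // the component author writes only `count += 1`.
    increment() {
      count += 1;    // the plain assignment from the source...
      render(count); // ...plus a compiler-inserted update call
    },
    get count() {
      return count;
    },
  };
}

// Stand-in for a DOM update: record each rendered state.
const rendered: number[] = [];
const counter = createCounter((c) => rendered.push(c));
counter.increment();
counter.increment();
// rendered is now [1, 2]; counter.count is 2
```

The point of the sketch is the shape of the output: the update work is decided at build time, so the browser never ships or runs a diffing runtime for this component.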
5/10/2022 • 28 minutes, 11 seconds
Is Java Ready for Cloud Native Computing?
First released in 1995, the Java programming language has been a leading developer platform and a workhorse for hundreds of enterprise applications. With each new technology evolution, Java has successfully adapted to change. But even while a recent Java ecosystem study found that more than 70% of Java applications in production environments are running inside a container, there continue to be hurdles the language must overcome to adapt to the cloud-native world. In this episode of The New Stack Makers podcast, Simon Ritter, deputy CTO of Azul Systems, and Dalia Abo Sheasha, Java developer advocate at JetBrains, discuss some of the challenges the language is working to overcome, and share some insight into the new features that developers are requesting. Darryl Taft, news editor of The New Stack, hosted this podcast. The complexity of modern applications requires developers to master a growing array of skills, technologies and concepts to develop in the cloud. And “what I've seen is that there is a gap in skills, and what it would take to get existing Java applications into the cloud,” said Abo Sheasha. “What developers really want is to focus on the idea of developing the Java code,” said Ritter. “Having the ability to plug in to different cloud providers, but also the ability to integrate with things like your CI/CD tooling so that you've got continuous integration, continuous deployment built in,” he added. Getting Java ready for the cloud is a “distributed responsibility across the people – from cloud providers to tooling providers,” said Ritter. “Everyone recognizes that the more folks we have on it, the more minds we have on it, the better outcome we're going to have for the developer’s language,” Abo Sheasha said. Making developers more efficient and productive is coming into the fold with the introduction of JEPs, or JDK Enhancement Proposals, a lightweight approach to adding new features in the development of the Java platform itself. 
“But there's some bigger projects like Project Amber which is all about small changes to the language syntax of Java with the idea of making it more productive by taking some of the boilerplate code out,” Ritter said. The journey to the next chapter of Java is multi-dimensional. While “most developers are focused on getting the job done, picking up skills for new things is a challenge because it takes time. Many still have the issue of using whichever Java version their company is stuck on,” said Ritter. “It's not because the developers don't want to do it; it’s that they need to convince management that it's worth investing in,” added Abo Sheasha.
5/3/2022 • 35 minutes, 36 seconds
KubeCon + CloudNativeCon 2022 Europe, in Valencia: Bring a Mask
Last week, Spain dropped its mandate for residents and visitors to wear masks to ward off further coronavirus infections. So, for this year's KubeCon + CloudNativeCon Europe conference, to be held May 16-20 in Valencia, Spain, the Cloud Native Computing Foundation dropped its own original mandate that attendees wear masks, a rule that had been in place for its other recent conferences. This turned out to be the wrong decision, the CNCF admitted a week later. A lot of people who had already bought tickets were upset at this relaxing of the rules for the conference, which could put them in greater danger of contracting the disease. So the CNCF put the mandate back in place, and offered refunds for those who felt Spain's own decision would put them in harm's way. The CNCF will even send you a week's worth of N95 masks if you request them. So, long story short: bring a mask to KubeCon. And, as always, it is still a requirement to show proof of vaccination, and temperature checks will be made as well. Tricky business, running a conference in this time, no? In this latest episode of The New Stack Makers podcast, we take a look at what to expect from this year's KubeCon EU 2022. Our guests for this podcast are Priyanka Sharma, the executive director of the CNCF, and Ricardo Rocha, who is a KubeCon co-chair and a computer engineer at CERN. TNS Editor-in-Chief Joab Jackson hosted this podcast. We recorded this podcast prior to the discussion around masks, and at the time, Sharma said that the CNCF based the mask ruling on Spain's own countrywide mandates. "So we are being very cautious with the health requirements for the event," she said. The conference team is also keeping an eye on Russia's aggressive moves in Ukraine, though it is unlikely that the chaos will reach all the way to Spain. Still, "this is why it's essential to always have the hybrid option ... 
[to] have the virtual elements sorted," Sharma said. As the CNCF's flagship conference, KubeCon brings together managers and users of a wide variety of cloud native technologies, including containerd, CoreDNS, Envoy, etcd, Fluentd, Harbor, Helm, Istio, Jaeger, Kubernetes, Linkerd, Open Policy Agent, Prometheus, Rook, Vitess, Argo, CRI-O, Crossplane, Dapr, Dragonfly, Falco, Flagger, Flux, gRPC, KEDA, SPIFFE, SPIRE, Thanos and many, many more. Most have been featured on TNS at one time or another. In this podcast, we also discuss what to expect from the virtual sessions at the conference, what to do in Valencia, and the current state of Kubernetes, and we get some unofficial picks from Sharma and Rocha as to what keynotes not to miss and what sessions to attend. "The virtual option is great," Rocha said. "But I think the in-person conferences have their own value. And there's a lot to be gained from meeting people directly and exchanging ideas and going to these events on the side of the conference as well."
4/26/2022 • 29 minutes, 20 seconds
Microsoft Accelerates the Journey to Low-Code
Low-code and no-code development is becoming increasingly popular in software development, particularly in enterprises that are looking to expand the number of people who can create applications for digital transformation efforts. While in 2020 less than 25% of new apps were developed using low code or no code, Gartner predicts that by 2025, 70% will be. Microsoft is one vendor that has been paving the way in this shift, reducing the burden on line-of-business users and developers in exchange for speed. But what are the potential and best practices for low-code/no-code software development? In this episode of The New Stack Makers podcast, Charles Lamanna, corporate vice president of business apps and platform at Microsoft, discusses what the company is doing in the low-code/no-code space with its Power Platform offering, including bringing low-code and no-code professionals together to deliver applications. Joab Jackson, editor-in-chief of The New Stack, and Darryl Taft, news editor of The New Stack, hosted this podcast.
4/19/2022 • 36 minutes, 11 seconds
Meet Cadence: The Open-Source Orchestration Workflow Engine
Developers are often faced with complexity when building and operating long-running processes that involve multiple service calls and require continuous coordination. To solve this challenge, in 2016 Uber built and introduced Cadence, an open source solution for workflow orchestration that enables developers to directly express complex, long-running business logic as simple code. Since its debut, it has continued to find increased traction with developers operating large-scale, microservices-based architectures. More recently, Instaclustr announced support for a hosted version of Cadence. In this episode of The New Stack Makers podcast, Ben Slater, chief product officer at Instaclustr, and Emrah Seker, staff software engineer at Uber, discuss Cadence and how developers use it to solve various business problems, enabling them to focus on writing code for business logic without worrying about the complexity of distributed systems. Alex Williams, founder and publisher of The New Stack, hosted this podcast, along with co-host Joab Jackson, editor-in-chief of The New Stack.
4/12/2022 • 28 minutes, 5 seconds
Removing the Complexity to Securely Access the Infrastructure
As the tech stack grows, the list of technologies that must be configured in cloud computing environments has grown exponentially, increasing the complexity of the IT infrastructure. While every layer of the stack comes with its own implementation of encrypted connectivity, client authentication, authorization and audit, the challenge for developers and DevOps teams to properly set up secure access to hardware and software throughout the organization will continue to grow, making IT environments increasingly vulnerable. In this episode of The New Stack Makers podcast, Ben Arent, developer relations manager at Teleport, discusses how to address the hardware, software and peopleware complexity that comes with the cloud by using tools like Teleport 9.0 and the company’s first release of Teleport Machine ID.
4/5/2022 • 16 minutes, 10 seconds
Rethinking Trust in Cloud Security
From cloud security providers to open source, trust has become the foundation on which an organization's security is built. But with the rise of cloud-native technologies, new ways of building applications are challenging the traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. So how should DevOps and InfoSec teams across commercial businesses and governments rethink their security approach? In this episode of The New Stack Makers podcast, Tom Bossert, president of Trinity Cyber (and former homeland security advisor to two presidents); Patrick Hylant, client executive at VMware; and Chenxi Wang, managing general partner at Rain Capital, discuss how businesses and the U.S. government can adapt to the evolving threat landscape, including new initiatives and lessons that can be applied in this high-risk environment. Alex Williams, founder and publisher of The New Stack, hosted this podcast. Jim Douglas, CEO of Armory, also joined as co-host of this livestream event.
3/29/2022 • 54 minutes, 40 seconds
The Work-War Balance of Open Source Developers in Ukraine
"Many Ukrainians continue working. A very good opportunity is to continue working with them, to buy Ukrainian software products, to engage with people who are working [via] Upwork. Help Ukrainians by giving them the ability to work, to do some paid work," whether they are still in the country or are refugees abroad. If you take one thing from this conversation, Anastasiia Voitova's words may be the ones that should stick. After all, Ukraine has a renowned IT workforce, with IT outsourcing among its most important exports. Voitova, the head of customer solutions and a security software engineer at Cossack Labs, grabbed just her laptop and some essentials when she suddenly fled to the mountains last month, to "a small village that doesn't even have a name." She doesn't have much with her, but she has more work to do than ever — to meet her clients' increasing demand for cybersecurity defenses and to support the Ukrainian defense effort. Earlier this month, her Ukraine-based team even released a new open source cryptographic framework for data protection, on time, amid the war. Voitova was joined in this episode of The New Stack Makers by Oleksii Holub, open source developer, software consultant and GitHub Star, and Denys Dovhan, front-end engineer at Wix. All three are globally known open source community contributors and maintainers. And all three had to suddenly relocate from Kyiv this February. This conversation is a reflection on the lives of these three open source community leaders during the first three weeks of the Russian invasion. It aims to help answer what the open source community, and the tech community as a whole, can do to support our Ukrainian colleagues and friends. Because open source is a community first and foremost. "Open source for me is a very big part of my life. I don't try to, like, gain anything out of it, I just code things. 
If I had a problem, I solve it, and I think to myself, why not share it with other people," Holub said. He sees open source as an opportunity for influence in this war, but he is also acutely aware that his unpaid labor could be used to support the aggression against his country. That's why he added terms of use to his open source projects stating that use of his code implicitly means you condemn the Russian invasion. This may be controversial in the strict open source licensing world, but the semantics of OSS seem less important to Holub right now. Of course, when talking about open source, the world's largest code repository, GitHub, comes up. Whether GitHub should block Russia is an ongoing OSS debate. On the one hand, many are concerned about further cutting off Russia — which has already restricted access to Facebook, Instagram and Twitter — from external news and facts about the ongoing conflict. On the other hand, with the widespread adoption of OSS in Russia, it's reasonable to assume swaths of open source code are directly supporting the invasion, or at least supporting the Russian government through income, taxes and some of the Kremlin's technical stack. For Dovhan, there's a middle ground. His employer, website builder Wix, has blocked all payments in Russia, but has maintained its freemium offering there. "There is no possibility to pay for your premium website. But you still can make a free one, and that's a possibility for Russians to express themselves, and this is a space for free speech, which is limited in Russia." He proposes that GitHub similarly allow the creation of public repos in Russia, but block payments and private repos there. Dovhan continued: "I believe [the] open source community is deeply connected, and blocking access for Russian developers might cause serious issues in infrastructure. 
A lot of projects are actually made by Russian developers, for example, PostCSS, Nginx and PostHTML." These conversations will continue as this war changes the landscape of the tech world as we know it. One thing is for sure: Voitova, Dovhan and Holub have joined the hundreds of thousands of Ukrainian software developers in making a routine of work-war balance, doing everything they can, every waking hour of the day.
3/23/2022 • 36 minutes, 44 seconds
Securing the Modern Enterprise with Trust: A Look at the Upcoming Code to Cloud Summit
From cloud security providers to open source, trust has become the foundation from which an organization's security is built. But the rise of cloud-native technologies such as containers and infrastructure as code (IaC) has ushered in new ways to build applications, with requirements that are challenging the traditional approaches to security. The changing nature of the cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. But how should teams like InfoSec and DevOps rethink their approach to security? In this episode of The New Stack Makers podcast, Guy Eisenkot, co-founder and vice president of product at Bridgecrew; Barak Schoster Goihman, senior director and chief architect at Palo Alto Networks; and Ashish Rajan, head of security and compliance at PageUp and producer and host of the Cloud Security Podcast, preview what’s to come at Palo Alto Networks’ Code to Cloud Summit on March 23-24, 2022, including the role of security and trust as it relates to DevOps, cloud service providers, the software supply chain, SBOMs (software bills of materials) and IBOMs (infrastructure bills of materials). Alex Williams, founder and publisher of The New Stack, hosted this podcast.
3/15/2022 • 29 minutes, 17 seconds
Optimizing Resource Management Using Machine Learning to Scale Kubernetes
Kubernetes is great for large-scale systems, but its complexity and lack of transparency have led to higher cloud costs, delays in deployment and developer frustration. As Kubernetes has taken off and workloads continue to move to containerized environments, optimizing resources is becoming increasingly important. In fact, the recent 2021 Cloud Native Survey revealed that Kubernetes has already crossed the chasm to mainstream, with 96 percent of organizations using or evaluating the technology. In this episode of The New Stack Makers podcast, Matt Provo, founder and CEO of StormForge, discusses new ways to think about Kubernetes, including resource optimization that can be achieved by empowering developers through automation. He also shares the company’s latest machine learning-powered multidimensional optimization solution, Optimize Live. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
3/8/2022 • 27 minutes, 42 seconds
Java Adapts to Cloud Native Computing
While Java continues to be the most widely used programming language in the enterprise, how is it faring in the emerging cloud native ecosystem? Quite well, observed a panel of Oracle engineers who work on the language. In fact, they estimate that there are more than 50 million Java virtual machines running concurrently in the cloud at present. In this latest edition of The New Stack Makers podcast, we discussed the current state of Java with Georges Saab, Oracle's vice president of software development for the Java Platform Group; Donald Smith, Oracle senior director of product management; and Sharat Chander, Oracle senior director of product management. TNS editors Darryl Taft and Joab Jackson hosted the conversation.
3/1/2022 • 28 minutes, 43 seconds
Mitigating Risks in Cloud Native Applications
Two decades ago, security was an afterthought; it was often ‘bolted on’ to existing applications, leaving businesses with a reactive approach to threat visibility and enforcement. But with the proliferation of cloud native applications and businesses employing a work-from-anywhere model, the traditional approach to security is being reimagined to play an integral role from development through operations. By identifying, assessing, prioritizing and adapting to risk across their applications, organizations are moving to a full view of their risk posture, employing security across the entire lifecycle. In this episode of The New Stack Makers podcast, Ratan Tipirneni, president and CEO of Tigera, discusses how organizations can take an active approach to security by bringing in zero-trust principles to reduce the application’s attack surface, harnessing machine learning to combat runtime security risks, and enabling continuous compliance while mitigating risks from vulnerabilities and attacks through security policy changes. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
2/22/2022 • 28 minutes, 8 seconds
Engineering the Reliability of Chaotic Cloud Native Environments
Cloud-native applications provide an advantage in terms of their scalability and velocity. Yet, despite their resiliency, the complexity of these systems has grown as the number of application components continues to increase. Understanding how these components fit together has stretched beyond what can be easily digested, further challenging organizations' ability to prepare for technical issues that may arise from system complexity. Last month, ChaosNative hosted its second annual engineering event, Chaos Carnival, where we discussed the principles of chaos engineering and using them to optimize cloud applications in today’s complex IT systems. The panelists for this discussion: Karthik Satchitanand, co-founder and open source lead, ChaosNative; Ramya Ramalinga Moorthy, industrialization head for reliability and resilience engineering, LTI (Larsen & Toubro Infotech); Charlotte Mach, engineering manager, Container Solutions; and Nora Jones, founder and CEO, Jeli. In this episode of The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack, served as the moderator, with the help of Joab Jackson, editor-in-chief of The New Stack.
2/15/2022 • 53 minutes, 29 seconds
TypeScript and the Power of a Statically-Typed Language
If there is a secret to the success of TypeScript, it is in the type checking, which ensures that the data flowing through the program is of the correct kind. Type checking cuts down on errors, sets the stage for better tooling, and allows developers to map their programs at a higher level. And TypeScript itself, a statically typed superset of JavaScript, ensures that an army of JavaScript programmers can easily enjoy these advanced programming benefits with a minimal learning curve. In this latest edition of The New Stack Makers podcast, we spoke with a few of TypeScript's designers and maintainers to learn a bit more about the design of the language: Ryan Cavanaugh, a principal software engineering manager at Microsoft; Luke Hoban, chief technology officer at Pulumi and one of the original creators of TypeScript; and Daniel Rosenwasser, senior program manager at Microsoft. TNS editors Darryl Taft and Joab Jackson hosted the discussion.
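The type checking described above can be sketched in a few lines. This is a hedged, minimal example (the `User` and `greet` names are invented for illustration): annotations let the compiler reject wrong-shaped data before the program ever runs, while the emitted code is plain JavaScript.

```typescript
// A minimal sketch of static type checking in TypeScript.
interface User {
  name: string;
  age: number;
}

function greet(user: User): string {
  // `user.age.toUpperCase()` would be rejected at compile time here:
  // the checker knows `age` is a number, not a string.
  return `Hello, ${user.name} (${user.age})`;
}

const alice: User = { name: "Alice", age: 30 };
console.log(greet(alice)); // prints "Hello, Alice (30)"

// Both of these fail the type check before the code ever runs:
// greet({ name: "Bob" });           // error: property `age` is missing
// greet({ name: 42, age: 30 });     // error: `name` must be a string
```

Because the types are erased at compile time, the same checks that catch errors also power editor tooling (autocomplete, refactoring) without any runtime cost.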
2/8/2022 • 30 minutes, 10 seconds
When to Use Kubernetes, and When to Use Cloud Foundry
While Kubernetes brings a great deal of flexibility to application management, the Cloud Foundry platform-as-a-service (PaaS) software offers the best level of standardization, observed Julian Fischer, CEO of cloud native services provider anynines. We chatted with Fischer for this latest episode of The New Stack Makers podcast to learn about the company's experience in managing large-scale deployments of both Kubernetes and Cloud Foundry. "A lot of the conversation today is about Kubernetes. But the Cloud Foundry ecosystem has been very strong," especially for enterprises, noted Fischer.
2/1/2022 • 24 minutes, 17 seconds
Makings of a Web3 Stack: Agoric, IPFS, Cosmos Network
Want an easy way to get started in Web3? Download a desktop copy of IPFS (the InterPlanetary File System) and install it on your computer, advises Dietrich Ayala, IPFS ecosystem growth engineer at Protocol Labs, in our most recent edition of The New Stack Makers podcast. We've been hearing a lot of hype about Web3 and its promise of decentralization — how it will bring the power of the web back to the people through the use of a blockchain. So what's up with that? How do you build a Web3 stack? What can you build with a Web3 stack? How far along is the community with tooling and ease of use? This virtual panel podcast sets out to answer all these questions. In addition to speaking with Ayala, we spoke with Rowland Graus, head of product for Agoric, and Marko Baricevic, software engineer for the Interchain Foundation, which manages Cosmos Network, an open source technology to help blockchains interoperate. Each participant describes the role their respective technologies play in the Web3 ecosystem. These technologies are often used together, so they represent an emerging blockchain stack of sorts. TNS Editor-in-Chief Joab Jackson hosted the discussion.
1/25/2022 • 32 minutes, 42 seconds
Managing Cloud Security Risk Posture Through a Full Stack Approach
Kubernetes, containers and cloud native technologies offer organizations the benefits of portability, flexibility and increased developer productivity, but the security risks associated with adopting them continue to be a top concern for companies. In the recent State of Kubernetes Security report, 94% of respondents experienced at least one security incident in their Kubernetes environment in the last 12 months. In this episode of The New Stack Makers podcast, Avi Shua, CEO and co-founder of Orca Security, talks about how organizations can enhance the security of their cloud environment by acting on critical risks such as vulnerabilities, malware and misconfigurations, taking a snapshot of Kubernetes clusters and analyzing them without the need for an agent.
1/19/2022 • 9 minutes, 28 seconds
Deploying Scalable Machine Learning Models for Long-Term Sustainability
As machine learning models proliferate and become more sophisticated, deploying them to the cloud becomes increasingly expensive. Optimizing a model at scale also requires the flexibility to move it to different hardware, such as graphics processing units (GPUs) or central processing units (CPUs), to gain more advantage. The ability to accelerate the deployment of machine learning models to the cloud or edge at scale is shifting the way organizations build next-generation AI models and applications. And being able to optimize these models quickly, to save costs and sustain them over time, is moving to the forefront for many developers. In this episode of The New Stack Makers podcast, recorded at AWS re:Invent, Luis Ceze, co-founder and CEO of OctoML, talks about how to optimize and deploy machine learning models on any hardware, cloud or edge device. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
1/11/2022 • 15 minutes, 48 seconds
Laying The Groundwork: How to Position an Open-Source Project
The most attractive characteristic of open-source projects is the potential to tap into the total addressable market of collaborators. But attracting users to your project and building a community around it requires standing out from millions of others. So how do you build a plan to monetize it? In this podcast, Emily Omier, a positioning consultant who works with startups to stake out the right position in the cloud native/Kubernetes ecosystem, discusses how to grow your project by finding the right market category for your open-source startup. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
1/4/2022 • 31 minutes, 15 seconds
How to Hire (and Keep) Software Devs for Complex Systems
There’s no doubt that the cognitive load developers face keeps increasing. Microservices and open source have aggravated the situation, making it nearly impossible for one developer to get up to speed with an entire codebase. This makes onboarding extra challenging, and contributes to about two-thirds of tech workers experiencing burnout. CodeSee looks to help developers get up to speed faster by visualizing a codebase in just a few clicks. Shanea Leven, CEO and founder of CodeSee, sat down with TNS writer Jennifer Riggins on this episode of The New Stack Makers podcast to discuss workload complexity, work-life balance, and hiring and retention best practices within the DevOps community.
12/28/2021 • 28 minutes, 34 seconds
Why AI-Controlled Robots Need to Be Smarter for IT
Artificial intelligence (AI) and machine learning (ML) have seen a surge in adoption and advances for IT applications, especially for database management, CI/CD support and other functionalities. Robotics, meanwhile, is largely relegated to factory-floor automation. In this The New Stack Makers podcast, Pieter Abbeel, co-founder, president and chief scientist at covariant.ai, a supplier of “universal AI” for robotics, discusses why and how the potential of robotics can evolve beyond pre-programmed devices thanks to advances in IT. Abbeel also offers his perspective drawn from his background as a professor at the University of California, Berkeley and as host of The Robot Brains Podcast. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
12/21/2021 • 21 minutes, 18 seconds
Why CI/CD Continues to Evolve
Continuous integration and delivery (CI/CD) has seen some radical changes during the past few years, especially on the continuous delivery side. Not so long ago, application development and delivery were built exclusively around monolithic stacks; delivering software for microservices and container environments is a very different animal. In this The New Stack Makers podcast, recorded at KubeCon+CloudNativeCon in October, guest Rob Zuber, chief technology officer at CircleCI, discusses the evolution of CI/CD from the perspective of CircleCI’s more than a decade of experience. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
12/14/2021 • 11 minutes, 10 seconds
A Paradigm Shift in App Delivery
Improving the cadence of application delivery and updates, and maintaining their availability over internet infrastructure, remain quintessential challenges for organizations delivering distributed digital experiences. Especially palpable among DevOps teams are the challenges associated with optimizing application delivery and security infrastructure in an increasingly cloud-centric world. In this The New Stack Makers podcast, Pankaj Gupta, senior director, product marketing, Citrix, discusses why a radical change to application delivery is in order. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
12/9/2021 • 29 minutes, 31 seconds
Most DevOps Plans Fail, but Things Are Getting Better
There is much discussion about boosting application release cadences, but the fact is that most organizations have not figured out how to deploy applications more quickly. According to data from analyst firm Gartner, 90% of DevOps initiatives will fail to fully meet expectations through 2023. In this breakfast episode of The New Stack Makers podcast, streamed live during LaunchDarkly’s annual Trajectory users conference, we discussed today’s DevOps struggles and challenges. Potential solutions were also covered, such as how DevOps teams are turning to self-service developer platforms to meet their cloud-deployment goals. Cody De Arkland, principal technical marketing engineer, LaunchDarkly; Rachel Stephens, senior analyst for analyst firm RedMonk; Steve George, chief operations officer for GitOps solutions provider and Flux creator Weaveworks; and Margaret Francis, president and chief operating officer for Armory, all participated in this discussion. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
11/30/2021 • 46 minutes, 2 seconds
What It Takes to Go from CNCF Sandbox to Incubation
The number of Cloud Native Computing Foundation (CNCF) projects has exploded since Kubernetes came onboard, setting the stage for hundreds of tools and platforms that have achieved the various CNCF project maturity milestones of Sandbox, Incubated or Graduated. The profound influence these projects have had on cloud native notwithstanding, it can be easy to overlook the monumental effort their contributors put into every project. In this The New Stack Makers podcast, we look at two CNCF projects that have gone from sandbox to incubation: Crossplane, a Kubernetes add-on for infrastructure assembly, and OpenTelemetry, which supports a collection of tools, APIs and SDKs for observability. The podcast featured guests involved with the projects: Dan Mangum, senior software engineer for cloud platform provider Upbound (Crossplane); Constance Caramanolis, principal software engineer at data platform provider Splunk and a member of the OpenTelemetry Governance Committee; and Ted Young, director of developer education at observability platform provider Lightstep and an OpenTelemetry co-founder who also serves on the Governance Committee. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
11/23/2021 • 12 minutes, 34 seconds
Why Cloud Native Is About Community
Cloud native is really only as good as the support and input the community provides. It is in this spirit that the Cloud Native Computing Foundation (CNCF) continues to invest heavily in the community to support new and existing projects, including Kubernetes, Prometheus and Envoy, which are among the cornerstones of cloud native today. During this latest episode of The New Stack Makers podcast, held live at KubeCon + CloudNativeCon last month, CNCF Marketing Manager Bill Mulligan and CNCF Developer Advocate Ihor Dvoretskyi spoke about the CNCF's Cloud Native Credits and Kubernetes Community Day programs, as well as why these and other initiatives are vital to building the cloud native tools and infrastructure of today and the future. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
11/16/2021 • 16 minutes, 31 seconds
How Pokemon Go Creator Builds on Kubernetes for Developers
Kubernetes played a key role in maintaining Pokemon Go, Niantic’s wildly popular augmented-reality game. Kubernetes, and the efficiencies it offers DevOps teams, continue to play a role at Niantic as the company opens the game’s architecture to third-party developers. In this latest episode of The New Stack Makers podcast, Ria Bhatia, senior product manager at Niantic, discusses why the Pokemon Go platform remains relevant and why Kubernetes will remain an integral part of the platform as the company hopes to bring in more “developer customers.”
11/9/2021 • 13 minutes, 21 seconds
Google’s Long-Time Open Source Director Speaks of the Future
Google’s open source program certainly has come a long way since 2003. That was when the search engine giant could still arguably be called a startup, Android had not yet been acquired and open source projects Kubernetes, Go and Chromium were years away in the making. It was also then that Google co-founders Larry Page and Sergey Brin asked their favorite recruiter to go and find an “open source person,” recounted Chris DiBona, the company’s director for open source. Already an open source pioneer before joining Google, DiBona continues to oversee the tech giant’s open source program, which continues to have major implications for the IT industry and the open source community. In this New Stack Makers podcast, DiBona discusses Google’s open source policy, as well as the search engine giant’s plans for its open source future. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
11/8/2021 • 42 minutes, 2 seconds
Open Source and the Cloud Native Data Center
The number of open source components inside services and applications continues to increase exponentially, and this adoption is creating a lot of change in how software is created, deployed and managed. In 2016, applications on average had 86 open source software components. Today, the average number of components is 528, according to “The 2021 Open Source Security and Risk Analysis (OSSRA) report.” In this latest edition of The New Stack Makers podcast, we discuss the implications of the explosion of open source adoption and its effect on data center operations. The guests were Mark Hinkle, co-founder and CEO, TriggerMesh; Shaun O’Meara, field CTO, Mirantis; Jeremy Tanner, developer relations, Equinix; and Sophia Vargas, research analyst, open source programs office, Google. TNS Founder and Publisher Alex Williams and TNS Editor Joab Jackson hosted this podcast.
11/4/2021 • 40 minutes, 6 seconds
Siloscape: Windows Container Malware That Breaks Kubernetes
In March, Daniel Prizmant, senior security researcher for Palo Alto Networks, uncovered the malware targeting Windows containers, calling the exploit “Siloscape.” In a blog post, he wrote that the emergence of such an attack was “not surprising given the massive surge in cloud adoption over the past few years.” In this edition of The New Stack Makers podcast, Prizmant, as the guest, described what makes Siloscape a threat to Kubernetes clusters — for both Linux and Windows containers. The New Stack’s publisher and founder, Alex Williams, hosted this episode.
11/3/2021 • 29 minutes, 18 seconds
What the Future of Cloud Native is About to Bring
Since its creation almost six years ago, and 120 projects later, the Cloud Native Computing Foundation (CNCF) has played a key role in the ongoing adoption of Kubernetes and associated tools and platforms for organizations making the shift to cloud native environments. In this The New Stack Makers podcast, Chris Aniszczyk, CTO of the CNCF, discusses with The New Stack’s publisher and founder, Alex Williams, what’s hot in cloud native land and offers a glimpse of what is emerging.
11/2/2021 • 21 minutes, 22 seconds
How Kubernetes Stateful Data Management Can Work
How Kubernetes environments might offer hooks for storage, databases and other sources of persistent data is still a question in the minds of many potential users. To that end, a new consortium called the Data on Kubernetes Community (DoKC) was formed to help organizations find the best ways of working with stateful data on Kubernetes. In this latest episode of The New Stack Makers podcast, two members of the group discuss the challenges associated with running stateful workloads on Kubernetes and how DoKC can help. Participants in this conversation were Melissa Logan, principal of Constantia.io, an open source and enterprise tech marketing firm, and director of DoKC; Patrick McFadin, vice president, developer relations and chief evangelist for the Apache Cassandra NoSQL database platform from DataStax; and Evan Powell, advisor, investor and board member, MayaData, a Kubernetes-environment storage-solution provider. TNS Editor Joab Jackson hosted the podcast.
10/28/2021 • 30 minutes, 48 seconds
Chainguard, a 'Zero Trust' Supply Chain Security Company
Five former Googlers recently started Chainguard, a newly minted supply chain security company focused on Zero Trust principles. Their mission is to help DevOps teams with their monumental struggle of securing application code across the development, deployment and management cycle. “Supply chain security by default is our mission and making it really easy for developers to do the right thing,” Kim Lewandowski, founder and product lead for Chainguard, said during a The New Stack Makers podcast recorded live at KubeCon + CloudNativeCon in October. Alex Williams, founder and publisher of TNS, hosted the podcast.
10/27/2021 • 14 minutes, 30 seconds
How GitOps Benefits from Security-as-Code
Security-as-code is the practice of “building security into DevOps tools and workflows by mapping out how changes to code and infrastructure are made and finding places to add security checks, tests, and gates without introducing unnecessary costs or delays,” according to tech publisher O’Reilly. In this latest “pancakes and podcast” special episode — recorded over a pancake breakfast during KubeCon + CloudNativeCon in October — we discuss how security-as-code can benefit emerging GitOps practices. The guests were Sean O’Dell, director of developer advocacy, Accurics; Sara Joshi, who was an associate software engineer for Accurics when this recording was made; Parminder Singh, chief information security officer (CISO) for hybrid-cloud digital-transformation services provider DigitalOnUs; Brendan O’Leary, staff developer evangelist, GitLab; Cindy Blake, senior security evangelist, GitLab; and Emily Omier, contributor, The New Stack, and owner of marketing consulting provider Emily Omier Consulting. Alex Williams, founder and publisher of TNS, hosted the podcast.
10/26/2021 • 33 minutes, 37 seconds
What It Takes to Become a Senior Engineer
It takes more than just years of experience to become a senior software engineer — among the prerequisites are a good marketing sense, interviewing skills and other personal qualities. In this The New Stack Makers podcast, guests Swizec Teller, author and senior software engineer at Tia, a healthcare company, and Shawn Wang, head of developer experience for microservices orchestration platform provider Temporal.io, describe the mindset and other attributes required to become a senior engineer. Darryl Taft, TNS news editor, hosted the podcast.
10/21/2021 • 34 minutes, 4 seconds
Business Innovation Across Multiclouds
Software deployments increasingly involve highly distributed and decentralized application development processes for deployments across any combination of data centers, public cloud and the edge. All the while, reliability, security and performance cannot be compromised. In this The New Stack Makers podcast, a panel of technology executives discussed the best ways to speed up business innovation in today’s multicloud and multi-infrastructure world. They also discussed how to deliver apps and services faster to improve the customer experience — over a pancake breakfast during VMworld, VMware’s annual users conference. The guests were Dormain Drewitz, senior director of product marketing for VMware Tanzu; Mandy Storbakken, cloud technologist for VMware; Shawn Bass, CTO for VMware’s end-user computing business; and Jo Peterson, vice president of cloud and security services, Clarify360. Alex Williams, founder and publisher of TNS, and Joab Jackson, TNS editor-in-chief, hosted the podcast.
10/20/2021 • 58 minutes, 54 seconds
Mist.io and the Challenge of Multicloud Management
Sometimes, multicloud just happens. One department might run applications on Amazon Web Services, for example, while another comes to rely on Google Cloud or another cloud provider's services. How do you make them work under one unified architecture? The difficulties of multicloud management are the main topic of this latest episode of The New Stack Makers podcast, in which we interview Chris Psaltis, CEO and co-founder of multicloud management platform provider Mist.io. We discussed the inherent difficulties of, and possible solutions for, running operations across multiple cloud services, as well as how Mist.io can help. TNS Editor Joab Jackson was the host.
10/12/2021 • 25 minutes, 11 seconds
Policy and Infrastructure as Code Go Together Like Syrup and Pancakes
Many organizations need better and tighter infrastructure policy for their distributed systems. This need has been underscored by an increasing number of misconfigurations, especially in distributed microservices and Kubernetes environments. How policy as code extends infrastructure as code was discussed in this latest episode of The New Stack Makers podcast, another one of our “pancakes and podcast” special episodes. The guests were Deepak Giridharagopal, chief technology officer of Puppet; Tiffany Jachja, data engineering manager for Vox Media; James Turnbull, vice president of engineering of the internationally known luxury and art auctioneer Sotheby’s; and Shea Stewart, a self-professed DevOps tech nerd. Alex Williams, founder and publisher of TNS, hosted the podcast.
10/7/2021 • 43 minutes, 33 seconds
The Advantages and Challenges of Going ‘Edge Native’
As the internet fills every nook and cranny of our lives, it runs into greater complexity for developers, operations engineers, and the organizations that employ them. How do you reduce latency? How do you comply with the regulations of each region or country where you have a virtual presence? How do you keep data near where it’s actually used? For a growing number of organizations, the answer is to use the edge. In this episode of Makers, The New Stack podcast, Ron Lev, general manager of Cox Edge, and Sheraline Barthelmy, head of product, marketing and customer success for Cox Edge, were joined by Chetan Venkatesh, founder and CEO of Macrometa. The trio discussed the best use cases for edge computing, the advantages it can bring, and the challenges that remain. The podcast was hosted by Heather Joslyn, features editor of The New Stack.
10/6/2021 • 28 minutes, 49 seconds
Databases and Kubernetes: Adopting a Distributed Mindset
Cloud native systems are, by definition, distributed — but to run databases securely and effectively on them, what’s needed is not only purpose-fit technology but a change of mindset, according to this podcast episode’s guests. In this episode of Makers, The New Stack podcast, Jim Walker, principal product evangelist, and Michelle Gienow, senior technical content manager (and a former New Stack reporter), both of Cockroach Labs, discussed how distributed systems create new challenges for databases, the paradigm shift that’s needed to run databases effectively on Kubernetes, and the results of a new survey of Kubernetes users. The podcast was hosted by Heather Joslyn, features editor of The New Stack.
10/4/2021 • 25 minutes, 42 seconds
What to Expect at KubeCon+CloudNativeCon
It’s that time of the year again, when we gather to discuss all matters related to Kubernetes and the other assorted tooling necessary to make cloud native computing happen. KubeCon+CloudNativeCon will be held in Los Angeles next month, October 11-15. A key difference at this year’s event — the first onsite event from the Cloud Native Computing Foundation since the beginning of the pandemic — is that the flagship cloud native conference will offer a much more significant virtual experience for those unable to travel to the venue in L.A. The virtual aspect of this year’s KubeCon+CloudNativeCon “is expected to continue indefinitely,” Priyanka Sharma, general manager of the CNCF, said in this edition of The New Stack Makers podcast. Sharma was joined by conference co-chair Jasmine James, who is the Twitter developer experience lead and manager for engineering effectiveness. They discussed this year’s schedule and agenda, how it will all compare to KubeCon+CloudNativeCon of years past and general cloud native trends. TNS Editor-in-Chief Joab Jackson hosted this episode of The New Stack Makers.
9/27/2021 • 26 minutes, 24 seconds
Fiberplane's Collaborative Notebooks for Incident Management
Database giant Oracle added a container native CI/CD platform to its cloud portfolio when it purchased Wercker in 2017. Since the acquisition, Wercker founder Micha Hernandez van Leuffen has started Fiberplane, where he is CEO. In this latest episode of The New Stack Makers podcast, van Leuffen discusses the development of Wercker and how that work has parlayed into Fiberplane, which offers collaborative notebooks for resolving incidents. Alana Anderson, founder and managing partner of base case capital, offered input from an investment capital firm perspective as well. Alex Williams, founder and publisher, and Joab Jackson, editor-in-chief, both of The New Stack, hosted the podcast.
9/21/2021 • 32 minutes, 24 seconds
Puppet's New Mission: Automating Cloud Native Infrastructure
An organization with any ambition to scale application deployments across cloud native environments is not going to get very far without automation. Between CI/CD support, increasing application deployment speed — often across different environments — and maintaining compliance and security, manually managing these processes is simply not humanly possible for operations teams past a certain point. In this latest episode of The New Stack Makers podcast, Abby Kearns, chief technology officer and head of R&D, and Chip Childers, chief architect, both of Puppet, discussed what automation of infrastructure management for cloud native deployments means for Puppet and for the IT industry. Alex Williams, founder and publisher of TNS, hosted this interview.
9/15/2021 • 32 minutes, 34 seconds
Why Cloud Native Open Source is Critical for Twitter and Spotify
At last count, social media giant Twitter enjoys around 353 million active users, and streaming music service Spotify has 356 million active listeners. In both cases, open source tools and platforms for cloud native environments have served as cornerstones of their tremendous growth. In this latest episode of The New Stack Makers podcast, Spotify Senior Staff Engineer Dave Zolotusky and Twitter Developer Experience Lead and Manager for Engineering Effectiveness Jasmine James discussed the role of open source software in their respective organizations. Katie Gamanj, ecosystem manager of the Cloud Native Computing Foundation, and Alex Williams, founder and publisher of TNS, co-hosted this interview.
9/1/2021 • 31 minutes, 24 seconds
Meet the DevSecOps Skillset Challenge For Cloud Deployments
There is much discussion about technology and tool gaps when organizations make the shift to cloud environments. However, a major — and often less-discussed — challenge is how to ensure that the DevOps team has the necessary skill sets to see the project through. Making sure that the right in-house talent and DevSecOps culture are in place to make the shift without exposing the organization's data and applications to security risks is especially critical. In this The New Stack Makers podcast hosted by Alex Williams, founder and publisher of TNS, guest Ashley Ward, technical director, office of the CTO, Palo Alto Networks, discussed the DevSecOps skill set challenges associated with cloud deployments.
8/31/2021 • 28 minutes, 4 seconds
What User Empathy Means at Google Today
It's said we can all stand to make improvements when it comes to empathy. In software engineering, empathy is required to create something that the end user can easily figure out; it's unacceptable to build something you think is great but expect customers to figure it out on their own, just because you think they should. Search engine giant, cloud services leader and Kubernetes creator, Google, realizes this. In this latest episode of The New Stack Makers podcast, The New Stack Founder and Publisher Alex Williams and TNS News Editor Darryl Taft sit down with Google’s Kim Bannerman, program manager for Empathetic Engineering, and Kelsey Hightower, principal developer advocate, Google Cloud Platform (GCP), to discuss Google's Customer Empathy Program and end-user satisfaction.
8/24/2021 • 36 minutes, 17 seconds
Low-code, No-code Can Work for Cloud Native
The definition of “low-code, no-code” remains a subject of debate. For some, it is the ability of a so-called “citizen developer” — someone who lacks the training and skills to develop software — to rely on a platform to deploy code with the same level of competence as a professional software engineer. Others describe low-code, no-code as a way to rely on a platform that facilitates software development — while automating many of the tasks in a build — to both simplify the process for inexperienced developers and to save time and resources for experienced developers. In either case, in this increasingly crowded space, low-code, no-code makes coding and the software development process simpler and more automated. In the case of low-code, no-code platform provider gopaddle, the idea is to “unleash the power of a no-code platform for modern applications.” How low-code, no-code can be applied to Go-centric applications running in cloud native environments was the main subject of this The New Stack Makers podcast, with Vinothini Raju, founder and CEO of gopaddle, as the guest. The New Stack founder and publisher Alex Williams and TNS news editor Darryl Taft hosted the conversation.
8/19/2021 • 29 minutes, 43 seconds
CloudBees Preps for DevOps World and a New Phase of Growth
As continuous integration and delivery provider CloudBees prepares for its annual DevOps World conference, the company is also gearing up for a new phase of growth, with a greater focus on security, AI and making DevOps easier. DevOps World will run September 28-30. Last year, the event drew around 30,000 virtual attendees. This year the event is again virtual and is also free. With a tagline of “building the future of software delivery together,” the focus of DevOps World will be to reach out to the entire DevOps ecosystem to share knowledge on the tools, techniques and best practices currently in use and those anticipated for the future. In this latest episode of The New Stack Makers podcast, we interview Sacha Labourey, co-founder and chief strategy officer of CloudBees, about both DevOps World and the future of the company. TNS Publisher Alex Williams hosted this episode, with the help of TNS News Editor Darryl K. Taft.
8/17/2021 • 37 minutes, 30 seconds
What It Requires to Secure APIs for Microservices
Both APIs and microservices play a key role in cloud native environments. Microservices serve as the cornerstone of distributed and shared computing resources, while APIs offer a very efficient way to streamline many operations and development tasks for DevOps teams. However, both microservices and APIs carry their own security risks. All it takes is one compromised Kubernetes node for an intruder to gain root access through an API to an organization’s entire container infrastructure across multiple clusters (a worst-case scenario). In this episode of The New Stack Makers podcast, we look at how to secure microservices with APIs and how to rely on APIs to delegate certain security tasks to a trusted third party. Our guest is Viktor Gamov, principal developer advocate for Kong, an API-connectivity company. The episode is hosted by Alex Williams, TNS founder and publisher, and Bharat Bhat, marketing lead, developer relations, Okta.
8/12/2021 • 28 minutes, 18 seconds
Ransomware Is More Real Than You Think
You have a teddy bear you want to love and protect. A big brother or sister takes the teddy bear and threatens to hold it for ransom until you pay up. What do you do? The teddy bear analogy is certainly simplistic, but it also reflects the reality of the ransomware attacks that organizations increasingly face. Attackers block access to critical data in exchange for increasingly outlandish ransoms. According to a Palo Alto Networks’ Unit 42 report, the highest ransom in 2020 was $30 million, up from $15 million in 2019. In this latest episode of The New Stack Makers podcast, we spoke with Jason Williams, product marketing manager for Prisma Cloud at Palo Alto Networks, about what organizations should do to protect themselves from ransomware attacks. Alex Williams, founder and publisher of TNS, hosted this episode.
8/6/2021 • 25 minutes, 54 seconds
Cloud Native Deployments Bring New Complexities to the Developer
Many organizations are finding that shifting to cloud native environments has become easier than it was in the past. However, the complexities and ensuing challenges can still mount once at-scale deployments begin. In this episode of The New Stack Makers podcast, hosted by TNS’ Alex Williams, founder and publisher, and Joab Jackson, TNS managing editor, application-deployment standards are the discussion of the day. The featured guests are Bruno Andrade, founder, Shipa, a provider of frameworks for Kubernetes; and Bassam Tabbara, founder and CEO, Upbound, which offers a universal control plane for multi-cluster management.
7/28/2021 • 25 minutes, 2 seconds
Kelsey Hightower, Mark Shuttleworth: Kubernetes Relies on Linux
Canonical's wildly popular Ubuntu Linux distribution continues to quietly play a role in the continued widespread adoption of Kubernetes. And that quiet support is as it should be, concluded Kelsey Hightower, Google Cloud Platform principal developer advocate, and Mark Shuttleworth, CEO of Canonical, in this latest episode of The New Stack Makers podcast. Alex Williams, founder and publisher of TNS, hosted this episode. Taking a step back, Ubuntu, as well as Linux in general, has become much easier to use, expanding beyond what many once considered to be a server operating system and an esoteric alternative to Windows. “There was this kind of inflection point where Linux has gone from like this command line server-side thing to something that you could actually run on a desktop with a meaningful UI and it felt like we were closing the gap on all the other popular open operating systems,” said Hightower. Learn more: Kubernetes and Cloud Native Operations Report; Canonical's Kubernetes Managed Services.
7/21/2021 • 37 minutes, 14 seconds
Infoblox: How DDI Can Help Solve Network Security and Management Ills
Network connections can be likened to attending an amusement park, where Dynamic Host Configuration Protocol (DHCP) serves as the ticket to enter the park and the domain name system (DNS) is the map around the park. Network management and security provider Infoblox made a name for itself by collapsing those two core pieces into a single platform that lets enterprises control where IP addresses are assigned and how they manage network creation and movement. "They control their own DNS so that they can have better control over their traffic,” explained Anthony James, Infoblox vice president of product marketing, in this latest episode of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack.
7/20/2021 • 29 minutes, 12 seconds
Continuous Delivery and Release Automation (CDRA) Picks Up Where CI/CD Ends
When it comes to at-scale software development, is continuous delivery and release automation (CDRA) the next step in the evolution of continuous integration/continuous delivery (CI/CD)? Forrester Research thinks so. The analyst firm describes CDRA as a way for organizations to deliver better-quality software faster and more securely, by automating digital pipelines and improving end-to-end management and visibility. In this edition of The New Stack Makers podcast, Anders Wallgren, CloudBees vice president of technology strategy, discusses CDRA, supporting tools and the goals and challenges DevOps teams have when delivering software today. CI/CD systems provider CloudBees was named a leading CDRA vendor in the report "The Forrester Wave: Continuous Delivery And Release Automation, Q2 2020." The episode was hosted by Alex Williams, founder and publisher of The New Stack, and co-hosted by Joab Jackson, TNS managing editor.
7/15/2021 • 25 minutes, 58 seconds
When Is Decentralized Storage the Right Choice?
The amount of data created has doubled every year, presenting a host of challenges for organizations: security and privacy issues for starters, but also storage costs. What situations call for moving that data to decentralized cloud storage rather than an on-prem or even a single public cloud storage setup? What are the advantages and challenges of a decentralized cloud storage solution for data, and how can those be navigated? On this episode of Makers, The New Stack podcast, Ben Golub, CEO of Storj, and Krista Spriggs, software engineering manager at the company, were joined by Alex Williams, founder and publisher of The New Stack, along with Heather Joslyn, TNS’ features editor. Golub and Spriggs talked about how decentralized storage makes sense for organizations concerned about cloud costs, security and resiliency.
7/14/2021 • 26 minutes, 9 seconds
CNCF Assesses the Tools for Kubernetes Multicluster Management
Once they have piloted Kubernetes, many organizations want to scale up their K8s deployments and run workloads across many clusters. But managing multiple clusters requires a new set of tools, ones that automate many routine and manual tasks. So, for its fifth Tech Radar report, the Cloud Native Computing Foundation surveyed the tools available for multicluster management, based on input from its end-user community. In this edition of The New Stack Analysts podcast, we talk with two people who helped assemble the report: Federico Hernandez, principal engineer at social media analysis provider Meltwater, and Simone Sciarrati, Meltwater engineering team lead. We chatted about the report's findings and how the multicluster management tool landscape is taking shape. Co-hosting this episode are Alex Williams, founder and publisher of The New Stack, and the Tech Radar's organizer, Cheryl Hung, CNCF vice president of ecosystem.
7/13/2021 • 28 minutes, 30 seconds
Video Game Security Should Be Simple for Developers
Video games continue to explode in popularity, while the number of potential attack vectors increases as well. In this The New Stack Makers podcast, host Alex Williams, publisher and founder of TNS, and co-host Bharat Bhat, marketing lead, developer relations, for Okta, cover why and how video game platforms and connections should be more secure with guest Okta Senior Developer Advocate Nick Gamb. The gaming industry has often served as a showcase for some of the industry’s greatest programming talents. As a case in point, John Carmack’s C++ code underpinning “Doom” is considered one of the historic greats of programming, not just for gaming but for software in general. For Gamb, playing “Quake” and “Doom” while growing up, and then studying the code for these games, served as his entry point into the software industry; he noted how these games helped to “revolutionize gaming with first-person shooters (FPS).”
7/8/2021 • 25 minutes, 12 seconds
Decentralization Returns the Internet to its Roots
The internet's fabled history includes such milestones as the Advanced Research Projects Agency's (ARPA) development of packet switching (ARPANET), paving the way for today's modern infrastructure, and Tim Berners-Lee’s research that culminated in the explosive adoption of the World Wide Web in the 1990s. Today, as microservices, Kubernetes and distributed environments and connections become more prevalent, the use of the internet is becoming more decentralized as well. In this episode of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of TNS, Storj Labs' Ben Golub, chairman and interim CEO, and Katherine Johnson, head of compliance, discuss how the internet today centers around decentralization — and more importantly — how decentralization reflects the roots of the internet.
7/7/2021 • 25 minutes, 12 seconds
Reckoning With the Human Factor in Observability
Observability is widely misunderstood, but in an age of increased security breaches and more business being conducted online, it’s never been more important. How should organizations be thinking about their resources in multicloud environments? What strategies should they adopt to catch gaps in their security before hackers do? And what cultural changes might DevOps teams adopt to strengthen their observability? In this episode of The New Stack Makers podcast, Maya Levine, technical marketing engineer and cloud native and cybersecurity evangelist for Check Point, joined co-hosts Alex Williams, The New Stack’s publisher, and Heather Joslyn, TNS’s features editor, for a discussion of what observability means now.
7/6/2021 • 26 minutes, 43 seconds
Why One Storage Provider Adopted Go as Its Programming Language
Go owes its popularity to a number of factors: Golang advocates often speak of its speed, robustness and versatility, especially compared with C++, Java and JavaScript. In this The New Stack Makers podcast, hosts TNS’ Alex Williams, founder and publisher, and Darryl Taft, news editor, cover the reasons for decentralized storage provider Storj’s shift to Go with featured guests Storj’s JT Olio, CTO, and Natalie Villasana, software engineer. Storj’s need for Go to support its development and operations stems from its unique requirements as the “Airbnb for hard drives,” Olio explained.
6/30/2021 • 25 minutes, 54 seconds
Cloud Native Security Shifts the Focus Back to Securing the Application
Cloud native computing is bringing about such a sea change in how applications are developed, deployed and run that, not surprisingly, it is changing the rules for information security as well. Case in point: serverless computing. In this latest edition of The New Stack Makers podcast, we speak with Check Point's cloud security strategist Hillel Solow, who has been at the cutting edge of these changes. Solow co-founded Protego Labs, a pioneer in serverless security. Security vendor Check Point saw the writing on the security wall early on and gobbled up Protego in 2019. The New Stack Publisher Alex Williams and TNS Editor Joab Jackson hosted this episode.
6/29/2021 • 27 minutes, 33 seconds
How to Secure Microservices in Ways Developers Like
The number of services cloud providers alone offer has exploded over the past couple of years, potentially exposing to vulnerabilities an exponentially larger number of microservices that support these services across multiple cloud and on-premises environments. In this The New Stack Makers podcast, hosted by Jack Wallen, a correspondent for The New Stack, TJ (Tsion) Gonen, head of cloud security at Check Point, puts microservices security in context and describes the critical role security tools play and the support that artificial intelligence (AI) and machine learning (ML) offer.
6/23/2021 • 32 minutes, 26 seconds
Progressive Delivery Past, Present and Future
The definitions of progressive delivery can vary, but many, if not most, would agree it represents an evolution of CI/CD. In this The New Stack Makers podcast, The New Stack’s Alex Williams, publisher and founder, and B. Cameron Gain, correspondent, cover why progressive delivery will play a large role in the future of DevOps. Nick Rendall, senior product marketing manager, CloudBees, is the featured guest. While progressive delivery is universally accepted as important for DevOps and software development, delivery and post-deployment management, how best to implement it remains a challenge for many organizations. “Everyone understands that progressive delivery is a good thing, and now it's like, ‘okay, great, but how do we really do it and let's take this concept and let's really build it out into our big enterprise organizations,’” said Rendall.
6/22/2021 • 24 minutes, 40 seconds
How to Recognize, Recover from, and Prevent Burnout
The tech industry is broken. We deify overworking, and think burnout comes with bragging rights. But how do we break this exhausting cycle? In this episode of The New Stack Makers, we talk with LaunchDarkly's Manager of Developer Marketing Dawn Parzych about how to identify burnout in others and in yourself, how to treat it, and how to build a psychologically safe working environment that allows folks to say no. With a master's in psychology and a DevRel role that certainly straddles people and tech, Parzych's work often sits on the people side of what they're building. "I love the idea of the socio-technical systems that we're building, like tech doesn't exist in a bubble. People are building the technology. They're very interrelated and you can't just focus on the tech; the people are the hardest part of tech. And we spend more time talking about how tech's the hard piece, where it's really the people and the interrelation between the people and the machines," she said.
6/15/2021 • 23 minutes, 45 seconds
Why Cloud Native Data Management Day Is About Stateful Data
No longer considered the ephemeral concern it originally was, data management has become a huge issue and challenge, especially for managing stateful data in Kubernetes environments. Cloud Native Data Management Day at the recently held KubeCon + CloudNativeCon Europe 2021 event in May, and the state of data management more broadly, were the subjects of discussion in this edition of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack. The guests were Michael Cade, senior global technologist, Veeam Software, and Nigel Poulton, owner of nigelpoulton.com, which offers Kubernetes and Docker training and other services. Both Cade and Poulton were also involved in the organization of Cloud Native Data Management Day.
6/10/2021 • 33 minutes, 37 seconds
The New Stack Makers: Staying in "the Zone" with the Right Dev Tools
Today’s developer seems to be working with more tools than ever. Building a Node.js-based JavaScript application could require over a dozen tools at times to get code out into production. It's easy to get sucked down a rabbit hole and not stay focused. Debugging an application once in production can also be a challenge: You want as much context at your fingertips as needed while maintaining a reasonable signal-to-noise ratio. Dan O’Brien, a software engineer for feature management platform provider LaunchDarkly, has a personal interest in how to avoid distraction and stay in the flow when working on a new feature or any piece of code. In this very latest episode of The New Stack Makers podcast, we ask O'Brien about the complexities he sees in today’s developer workflow, as well as some tips he has to stay “in the zone” when writing code. We’ll also discuss the tools that LaunchDarkly has that can help expedite application development. TNS founder and Publisher Alex Williams, along with TNS Managing Editor Joab Jackson, hosted this podcast.
6/8/2021 • 28 minutes, 21 seconds
A Different Perspective on Software Planning and Deployment
No matter how much we prepare, deployments don’t always go as planned. In this edition of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Isabelle Miller, software engineer, LaunchDarkly, describes how DevOps teams can build processes to help remove unwanted surprises during release cycles — and why releases do not need to be stressful. One of the main things Miller said she has discovered since joining LaunchDarkly at the beginning of 2020 is the importance of having procedures in place for when things do go wrong, “because things are going to go wrong,” she said. “You need to be able to manage that problem as quickly as possible, and minimize any harm before things get out of control when that happens,” said Miller. “So, one of the great things about working at LaunchDarkly is that I get to use our products. And one of the wonderful things about LaunchDarkly’s feature flags is that you can just turn things off.”
6/1/2021 • 28 minutes, 28 seconds
How Adidas Manages for Scale
How Adidas manages for scale shows how a sportswear company can, measured simply by how much code it runs, also resemble a software house. In this episode of The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack, speaks with Adidas’ Iñaki Alzorriz, senior director platform engineering, and Rastko Vukasinovic, director solution architecture, on how Adidas scales DevOps and resiliency on Kubernetes. They also discuss how Adidas views managing at scale in three ways: technically, culturally and strategically.
5/26/2021 • 31 minutes, 15 seconds
What Observability Should Do for Your Organization
Debate continues in the industry about what observability is, and more specifically, what it should offer DevOps, especially those working in operations who are often responsible for detecting those “unknown unknowns.” In this The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Bartek Plotka, a principal engineer at Red Hat, a SIG Observability tech lead for Thanos and a Prometheus maintainer, and Richard Hartmann, community director at Grafana, a Prometheus maintainer, OpenMetrics founder and a CNCF SIG Observability chair member, discuss how observability should be easier to use and how it can be cost effective.
5/25/2021 • 37 minutes, 9 seconds
Data Persistence and Storage over Pancakes at KubeCon EU 2021
At this year’s KubeCon EU 2021, some things were the same — it was still virtual, which meant it attracted a huge turnout of a more broadly international audience — and some things were different — like that almost everyone’s bought into Kubernetes and cloud native architecture; it’s now just a question of how they use it. In another KubeCon tradition, The New Stack hosted a live pancake breakfast to reflect on the maturity of Kubernetes, particularly around data persistence and storage. Our Publisher Alex Williams hosted this year’s very early discussion (at least for him) with Itzik Reich, VP of technologists, and Nivas Iyer, senior principal product manager, both at Dell Technologies, along with pancakes regular Cheryl Hung, VP of ecosystem at the Cloud Native Computing Foundation.
5/19/2021 • 26 minutes, 20 seconds
GitOps, WebAssembly, Smarter APIs: The Developer Experience Is Just Getting Started
The adoption of GitOps, improvements to APIs and the increasing reach of virtual machine language WebAssembly (Wasm) are influencing the developer experience, and ultimately, how DevOps teams reach their application-deployment and -management goals. These were among the more talked-about themes at the Cloud Native Computing Foundation's KubeCon + CloudNativeCon EU. Putting it all into context, Alex Williams, founder and publisher, and Joab Jackson, managing editor, of The New Stack, are the hosts of this The New Stack Makers podcast. The featured guests are Bryan Liles, principal engineer, VMware, and Cheryl Hung, vice president of ecosystem, CNCF.
5/18/2021 • 45 minutes, 22 seconds
How to Improve Kubernetes Observability for Developer Velocity
A major part of improving developer velocity is about getting the most out of an observability platform. While that is a commonly held assumption, this best practice is also a far-reaching goal for many DevOps teams. Hosted by Alex Williams, founder and publisher of The New Stack, this The New Stack Makers podcast — recorded during a virtual pancake breakfast — features a discussion on improving observability for developers. The featured guests were Zain Asgar, general manager of Pixie and New Relic open source and CEO and co-founder of Pixie Labs; Roopak Venkatakrishnan, engineering manager, Bolt (an e-commerce retailer tool); Ihor Dvoretskyi, developer advocate, Cloud Native Computing Foundation (CNCF); and Christine Wang, senior solutions engineer, Grafana Labs.
5/17/2021 • 32 minutes, 44 seconds
GitOps Modern Practices for Reaching a Desired State and Decreasing Exposure
As GitOps moves beyond improving how code repositories are managed for continuous integration/continuous delivery (CI/CD), the security component of GitOps has become a more pressing issue as Git, and GitOps, become more widely adopted. The open source community should also play a critical role in improving GitOps. Hosted by Alex Williams, founder and publisher of The New Stack, this recording features Om Moolchandani, co-founder and CISO/CTO, Accurics; Cindy Blake, senior security evangelist, GitLab; Frank Kim, fellow, SANS Institute; Sanjeev Sharma, head of platform engineering, Truist Financial; and Katie Gamanji, ecosystem advocate, Cloud Native Computing Foundation (CNCF).
5/12/2021 • 28 minutes, 23 seconds
Developers Just Want To Know if They Have a Problem
Developers just want to know if they have a vulnerability before putting code into production. But often, the answer back is not what the developer wants to hear. More analysis is needed, the software security group will often reply, said Meera Rao, senior director of product management at Synopsys, in this latest episode of The New Stack Makers, hosted by Alex Williams, founder and publisher of The New Stack. Rao is the creator of a new intelligent orchestration technology that helps developers get their issues resolved without a long wait. That long wait is remedied by relying on Synopsys’ system to let developers know what’s wrong and whether specific security holes require immediate fixing or not.
5/10/2021 • 28 minutes, 43 seconds
What to Build First: Istio or Kubernetes?
What do we know about Kubernetes? It’s a raw, gaping maw. It’s not meant for most of us. What is needed? Access to the grinding, digital gears that make what we know of as distributed architectures.
Istio is an example of a management layer for Kubernetes, said Zack Butcher, part of the founding engineering team at Tetrate, a service mesh company. He joins Varun Talwar, co-founder at Tetrate for a discussion about the service mesh Istio and its role in the management of highly distributed networks, including, of course, Kubernetes in this The New Stack Makers podcast. Alex Williams, founder and publisher of The New Stack, hosted this episode.
4/22/2021 • 30 minutes, 44 seconds
The Insider’s Guide to KubeCon+CloudNativeCon EU 2021
In this episode of The New Stack Makers podcast, hosted by Joab Jackson, managing editor for The New Stack, we speak with two of the fabled conference’s key organizers about what to expect and what the organizers’ goals are: Priyanka Sharma, general manager for CNCF, and Stephen Augustus, engineering director and head of open source at Cisco.
This is no business-as-usual KubeCon conference, of course. Last year’s KubeCon EU was cancelled just a few weeks before the event was scheduled to take place. Then, many question marks remained during the early days of the pandemic about not only the future of conferences but how workers in the IT industry would continue to live and work. As it turns out, this year’s event is virtual, of course, and at the very least, there is no shortage of talks and events.
All told, for KubeCon, experts from organizations including Adobe, Apple, CERN, Nvidia and OVHcloud will deliver more than 100 sessions, keynotes, lightning talks, and breakout sessions. There will also be more than 60 sessions hosted by project maintainers – spanning beginner-level introductions, end-user case studies and technical deep dives.
4/20/2021 • 34 minutes, 22 seconds
How Kasten’s Ongoing Contribution to Open Source Bears Fruit for Stateful Storage
This The New Stack Makers podcast explores the state of open source software today and features a case example of what is possible: Kasten by Veeam has created Kubestr to identify, validate and evaluate storage systems running in cloud native environments. As Michael Cade, a senior global technologist for Veeam, describes Kubestr, the open source tool provides information about what storage solutions are available for particular Kubernetes clusters and how well they are performing. The software project is also intended to offer DevOps teams an “easy button” to automate these processes.
Hosted by Alex Williams, founder and publisher of The New Stack, Cade and fellow guest Sirish Bathina, a software engineer for Kasten, describe Kasten’s long-standing collaboration with the open source community and how Kubestr serves as a case study of both an ambitious open source project and what is possible today for stateful storage in Kubernetes environments.
4/15/2021 • 26 minutes, 52 seconds
How eBay Is Working for Developer Speed
The New Stack Makers’ recent “eBay Baby! How eBay Is Working for Developer Speed” livestream podcast covered a lot of ground about eBay’s five successive re-engineerings of its IT architecture. Recorded on April 1 and hosted by Alex Williams, founder and publisher of The New Stack, eBay’s challenges and achievements were certainly no joke. The eBay guests — Randy Shoup, vice president, engineering and chief architect; Mark Weinberg, vice president, core product engineering; and Lakshimi Duraivenkatesh, vice president, buyer experience engineering — offered their insight and lessons learned over pancakes.
4/12/2021 • 34 minutes, 23 seconds
How Your Network Impacts User Experience in a COVID-19 World
Needless to say, the ongoing COVID-19 pandemic continues to have a profound impact on remote work in a number of ways. Mohit Lad, general manager, co-founder and former CEO of ThousandEyes, has been at the front lines. He and his team at ThousandEyes have helped a number of customers meet the networking- and infrastructure-management challenges associated with tremendous surges in remote data connections during the past year.
In this The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Lad discusses ThousandEyes’ work as a network monitoring provider to meet the challenges of the day. Lad also discussed his background in networking, the evolution of networking and software in general and the parallel growth of the internet as it relates to ThousandEyes.
4/8/2021 • 30 minutes, 54 seconds
K3s Gets its Due and its Own Day at KubeCon EU
In 2018, Kubernetes had become too big to run on a Raspberry Pi. For a while, that meant kubeadm could not run on the micro-device. K3s changed that and represents a new take on Kubernetes: stripped of excess code, K3s is a lightweight version of Kubernetes meant to run on edge devices.
Today, K3s is seeing a rise in popularity as are a host of other new services that focus on the edge for Kubernetes architectures.
It’s now at the point that the Cloud Native Computing Foundation (CNCF) is planning Kubernetes on Edge Day at KubeCon, said Bill Mulligan, marketing manager for CNCF, in a podcast recording with Alex Ellis, founder at OpenFaaS and the author of a new course on K3s that will be available for KubeCon, scheduled for May 4-7.
4/5/2021 • 30 minutes, 53 seconds
Advanced Threats in the Orchestrated Cloud
In this The New Stack Makers Livestream podcast, hosted by Alex Williams, founder and publisher of The New Stack, the security challenges associated with moving to a public cloud are the central theme. The discussion covers the different ways attackers can target an enterprise that is using public cloud infrastructure and how enterprises can defend themselves from such attacks.
The guests are Ankur Shah, vice president of products, Prisma Cloud; Alok Tongaonkar, director, data science, Palo Alto Networks; and Gaspar Modelo-Howard, principal data scientist at Palo Alto Networks.
3/30/2021 • 38 minutes, 52 seconds
OKTA Series - The Road to As-Needed Infrastructure Security
In this episode, co-hosts Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at security services provider Okta, speak with guest Ev Kontsevoy, co-founder and CEO of Teleport, which offers organizations instant access to computing resources.
An organization’s cloud security processes often cover several different cloud providers, while oftentimes hundreds, if not thousands, of developers all have multiple cloud accounts. Since each account typically adheres to different security systems and policies, managing it all represents yet another security challenge DevOps teams face.
Web security is the theme of the latest episode in our new series “Security @ Scale” on The New Stack Makers with Okta. The series explores security in modern environments with stories from the trenches including security horror stories and fantastic failures.
3/24/2021 • 35 minutes, 32 seconds
Okta Series - How a Security-Minded Culture Can Change Bad Habits
In this episode, co-hosts Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at security services provider Okta, discuss the challenges associated with building a security-minded culture and what works and what does not work.
Culture is a cornerstone of sound security policy. However, at many — if not most — organizations, cultural changes are warranted in a number of ways, not least of which for security and policy.
How to build a security-minded culture is the theme of the latest episode in our new series “Security @ Scale” on The New Stack Makers with Okta. The series explores security in modern environments with stories from the trenches including security horror stories and fantastic failures.
3/17/2021 • 32 minutes, 56 seconds
When Application Management Across the Net Requires ‘Google Maps’ Visibility
In this The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Joe Vaccaro, head of products, ThousandEyes, discussed today’s digital supply chain for the modern app experience and managing backend interdependencies.
The days are long gone when users accessed data mainly through local area network (LAN) connections and ran applications stored on centralized servers in the data center. By contrast, in today’s highly distributed network experience, the user’s access to applications comes through a vast contingent of network connections, supported by microservices and multicloud environments. Application performance is also highly dependent on DNS and other network connections for which organizations often lack visibility into the complete digital supply chain. In many cases, for example, it is difficult to determine whether sub-par application performance is due to network connectivity or bad code in the stack.
3/11/2021 • 28 minutes, 46 seconds
Okta Series - Mobile Security Dev, a Database and Authentication POV
This episode of The New Stack Makers series with Okta, on all topics related to development and security at scale, features the development requirements for securing mobile apps. They are explored from two points of view: the database and authentication.
Guest Ian Ward, senior product manager, mobile, for MongoDB, discusses synchronizing mobile data with backend databases and his related work on Realm, a mobile database, while Aaron Parecki, senior security architect for Okta, describes authentication and OAuth, for which he is the spec editor and a member of the OAuth working group. Alex Williams, founder and publisher of The New Stack, hosts with co-host Randall Degges, head of developer advocacy at Okta.
3/10/2021 • 34 minutes, 13 seconds
Kim Crayton: Anti-Racist Economist and Future Nobel Prize for Economics
“Black women are the moral compass of this country,” Kim Crayton said, referring to the United States in this episode of The New Stack Makers. But it’s exhausting work. And repetitive, to continue to offer the same basics to white people of what’s wrong with a country, an economy, and a tech industry that’s systemically built on anti-Blackness.
“Tech always thinks in binaries, which gets on my nerves. People of color, people from marginalized communities, we survive living in the gray. There is no right, wrong, good, bad because it changes situationally. So you have people who want to flip the tables. And then folks act like the only alternative is to prepare marginalized communities to go into spaces and work in places where they’re going to be harmed,” Crayton said.
3/8/2021 • 47 minutes, 41 seconds
HashiCorp Vault Gets Top Honors in Latest CNCF Tech Radar
In this edition of The New Stack Analysts podcast, host Alex Williams, founder and publisher of The New Stack, and co-host Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF), discuss why secrets management is essential for DevOps teams, what the tool landscape is like and why Vault was selected as the top alternative. CNCF Tech Radar contributors and featured guests were Steve Nolen, site reliability engineer, RStudio — which creates open source software for data science, scientific research and technical communication — and Andrea Galbusera, engineer and co-founder, AuthKeys, a SaaS platform provider for managing and auditing server authorizations and logins.
3/4/2021 • 34 minutes
Okta Series - How to Secure Web Applications in a Static and Dynamic World w/ Dustin Rogers
In this episode, co-hosts Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at security services provider Okta, speak with guest Dustin Rogers, staff application security engineer, Netlify, about all things related to static Web security management.
Netlify is a popular static website hosting platform for Jamstack used by over a million web developers. But while Netlify is popular, thanks to its simplicity for uploading code to the platform from GitHub and managing Web applications once uploaded, the security it offers for static environments is of interest as well.
Using Netlify as a case example, static websites’ security layers and related security practices are the themes of the latest episode in our new series “Security @ Scale” on The New Stack Makers with Okta. The series explores security in modern environments with stories from the trenches including security horror stories and fantastic failures.
3/3/2021 • 33 minutes, 35 seconds
Vaibhav Kamra CTO of Kasten on Cloud Native Lessons Learned During these Pandemic Days
In this The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Vaibhav Kamra, chief technology officer, Kasten by Veeam, discussed the changes he has observed, and ultimately, the lessons learned during the past year. During this time, Kasten has provided the necessary platforms for application and data management that organizations rely on to scale across Kubernetes applications.
3/2/2021 • 31 minutes, 7 seconds
Okta Series - APIs’ Evolution, Future and Vulnerabilities
Okta sponsored this podcast.
This episode of The New Stack Makers series with Okta, which covers all topics related to development and security at scale, features guest Anant Jhingran, CEO of StepZen. Jhingran's deep well of experience, including long stints at IBM, Apigee and Google, certainly qualifies him as a leading expert on APIs and their role in today's DevOps environments. Speaking with co-hosts Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at Okta, Jhingran offers his take on how APIs have evolved, their potential for the developer community and how their success accounts, in part, for their exposure to vulnerabilities.
2/24/2021 • 40 minutes, 20 seconds
Varun Badhwar - How to Tighten Security Across Complex and Cloud Native Environments
In this The New Stack Makers podcast, Varun Badhwar, senior vice president, product, Palo Alto Networks, puts today’s multicloud security challenges into perspective. He also describes how Prisma Cloud 2.0 offers a single and comprehensive security alternative for cloud native applications across different cloud platforms.
2/23/2021 • 15 minutes, 10 seconds
Security Horror Stories: Why Hackers are Influencers by Okta
Welcome to our new series ‘Security @ Scale’ on The New Stack Makers with Okta exploring security in modern environments with stories from the trenches including security horror stories and fantastic failures.
In this episode, co-hosts Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at Okta, speak with guest Marc Rogers, vice president of cybersecurity at Okta and co-founder of the CTI League, to discuss the anatomy of what will likely be considered one of the most disruptive hacks in the history of Wall Street. It could also change how institutional and individual investors buy, sell — and short — stocks traded on U.S. exchanges in the future.
2/17/2021 • 36 minutes, 59 seconds
Palo Alto Networks Virtual Event: Customers Share Their War Stories
This The New Stack Makers podcast series features a number of guests who speak during Palo Alto Networks’ Cloud Native Security Virtual Event. In this segment, Alex Williams, founder and publisher of The New Stack, hosts a roundtable with Palo Alto Networks customers who share their experiences and insights about cloud native security and other related topics. The guests are Brian Cababe, director of cyber security, architecture and governance, Cognizant; Tyler Warren, director of IoT security, Prologis and Alex Jones, infosec manager, Cobalt.io.
A key talking point is how legacy on-premises practices and processes cannot be directly transferred to work for cloud native security and management. Jones noted, for example, that when moving to the cloud, the first question for threat modeling is “what are we doing?”
2/16/2021 • 30 minutes, 59 seconds
How Seth Meyers and Guests Learn Cloud Native Security Is No Joke
This edition of The New Stack Makers podcast features a number of guests who speak during Palo Alto Networks' Cloud Native Security Virtual Event. It kicks off with none other than Seth Meyers, an Emmy Award-winning comedian of "Late Night with Seth Meyers" and "Saturday Night Live" (SNL) fame. Meyers' interview with Palo Alto Networks founder and CTO Nir Zuk is followed by a customer roundtable hosted by Alex Williams, founder and publisher of The New Stack, with guests Brian Cababe, director of cyber security, architecture and governance, Cognizant; Tyler Warren, director of IoT security, Prologis; and Alex Jones, infosec manager, Cobalt.io. The event concludes with a talk on Prisma Cloud 2.0, given by Varun Badhwar, senior vice president, product, Palo Alto Networks.
Meyers began the session by declaring that “much like Nir Zuk, I am a cyber security luminary.” He also said he didn’t want to “brag too much” about his accomplishments, but said using your mother’s maiden name to recover passwords was his idea.
Meyers then asked Zuk, while at least feigning to be serious, what cloud native means for organizations, as well as its impact on security management.
2/15/2021 • 25 minutes
Why Security Teams Need a Higher Appetite for Risk
Prisma Cloud from Palo Alto Networks sponsored this podcast.
Security teams need a higher appetite for risk. While accepting, and even embracing, risk is common outside the sphere of IT, risk also often plays a role in DevOps, developer and SRE team culture. Security teams, however, have typically yet to accept and manage risk in this way.
In this edition of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, the guests discuss how and why security teams need to rethink risk, with the aim of improving resiliency and achieving other benefits that have so far remained elusive for many organizations.
The guests were Matt Chiodi, chief security officer of public cloud at Palo Alto Networks; Meera Rao, senior director of product management at Synopsys; and Tal Klein, chief marketing officer at Rezilion.
2/12/2021 • 40 minutes, 24 seconds
John Morello, Palo Alto Networks - API Security Basics are One Thing but What is the Greater Need?
Prisma Cloud by Palo Alto Networks sponsored this podcast.
Palo Alto Networks' John Morello, vice president of product, has long talked about the basics that come with cloud native security. In this edition of The New Stack Makers, hosted by Alex Williams, founder and publisher of The New Stack, Morello discusses how APIs are less the weakest link than simply better known, thanks to their widespread use, especially over the past five years. There are more people developing APIs, more people consuming APIs and more attackers exploiting APIs — and that makes the basics more important than ever, both now and as more applications go online.
2/8/2021 • 32 minutes, 54 seconds
Ravi Lachhman and Frank Moley - How to Fight the Kubernetes Complexity-Fatigue Monster
Harness sponsored this podcast.
The growing pains continue: As organizations push ahead, shifting to Kubernetes and cloud native environments at scale, the complexities of managing Kubernetes clusters increase as well. The associated challenges of adoption, and then managing these highly distributed containerized environments, remain daunting. For many DevOps teams, the advent of “Kubernetes complexity fatigue” has become a concern.
In this episode of The New Stack Makers podcast, hosted by TNS founder and Publisher Alex Williams, Kubernetes complexity fatigue, and more importantly, what can be done about it, are discussed. The guests were Ravi Lachhman, evangelist at Harness, and Frank Moley, senior technical engineering manager at DataStax.
2/3/2021 • 39 minutes, 28 seconds
Ory Segal - A New Approach to the Firewall for Protecting Cloud-Native Services
Prisma Cloud by Palo Alto Networks sponsored this podcast.
This edition of The New Stack Makers podcast featured a news announcement: Palo Alto Networks is providing a new approach to protecting APIs with the release of its WAAS (web application and API security) offering. As botnets become more sophisticated, Palo Alto's WAAS bot-defense platform offers API security, runtime protection and other security features for today's cloud native environments.
Speaking with host Alex Williams, founder and publisher of The New Stack, guest Ory Segal, senior distinguished research engineer at Palo Alto Networks, discussed how the company's WAAS offers apps end-to-end protection for loosely coupled services in declarative environments, along with a range of other capabilities.
2/2/2021 • 40 minutes, 20 seconds
Nanda Vijaydev of HPE - How to Adapt Data-Centric Applications to a Kubernetes Architecture
In this The New Stack Makers podcast, hosted by TNS founder and publisher Alex Williams, guest Nanda Vijaydev, distinguished technologist and lead data scientist at HPE, discusses how the concepts of loosely coupled architectures are now playing a part in data-centric applications on Kubernetes. It's an evolution that has been taking shape, preceded by the use of Kubernetes for microservices development — as opposed to data-centric approaches that have historically been developed on tightly coupled, monolithic architectures.
1/27/2021 • 34 minutes, 41 seconds
Frontend Development Challenges for 2021 w/ David Cramer - Sentry
In my 2021 web development predictions, I identified two key trends heading into this year: serverless expanding into a more full-featured platform (for example, stateful apps becoming a reality on serverless), and the continued growth of JavaScript (and especially React). Jamstack is another growth area, although that's at a much earlier stage. To discuss these and other frontend trends, I spoke to David Cramer, co-founder and CTO of Sentry, an application monitoring platform. You can hear the full discussion on The New Stack Makers podcast, but in this article, I'll review the main talking points.
1/25/2021 • 25 minutes, 54 seconds
A New Relic Tale About Migrating to AWS w/ Wendy Shepperd
New Relic sponsored this podcast.
In this The New Stack Makers podcast, Wendy Shepperd, group vice president of engineering at New Relic, describes the challenges of migrating New Relic's telemetry platform to a cloud native environment on Amazon Web Services (AWS). Hosted by TNS founder and publisher Alex Williams, Shepperd discussed key lessons learned from New Relic's shift to AWS, as well as implications for observability following the move.
1/21/2021 • 27 minutes, 29 seconds
What is Data Management in the Kubernetes Age?
In this episode of The New Stack Analysts podcast, TNS founder and publisher Alex Williams virtually shared pancakes and syrup with guests to discuss how Apache Cassandra, gRPC and other tools and platforms play a role in managing data on Kubernetes.
Mya Pitzeruse, software engineer and OSS contributor from effx; Sam Ramji, chief strategy officer at Datastax; and Tom Offermann, a lead software engineer at New Relic were the guests. They offered deep perspectives about the evolution of data management on Kubernetes and the work that remains to be done.
1/19/2021 • 48 minutes, 22 seconds
Infrastructure as Code is a Movement Ready to Boom
Prisma Cloud from Palo Alto Networks sponsored this podcast.
Infrastructure as code is a movement ready to boom. It’s also emerging as one of the three pillars in cloud security that are bringing DevOps and security together in the evolving DevSecOps market, said Varun Badhwar, senior vice president, Prisma Cloud at Palo Alto Networks, in this episode of The New Stack Makers hosted by TNS Founder and Publisher Alex Williams.
Infrastructure as code is also a major component of the DevOps trend to shift left. "Shift left security now means application security, it means software composition analysis and it means infrastructure as code scanning — and all of that now is available for DevOps teams to do in the pipeline," Badhwar explained.
“And in an ideal situation,” he continued, “you want to tie all of that to the tools that your infosec teams want to use in runtime in production, such that you have one set of policies globally recognized in your enterprise. And you’re working against the same standards — it’s just a matter of fact about where you’re deploying those tools in your lifecycle.”
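The infrastructure-as-code scanning Badhwar describes can be boiled down to a toy example: inspect declarative config in the pipeline and fail the build on risky settings before anything is deployed. This sketch is purely illustrative (the rule, port list and config shape are invented for the example; real scanners are far more complete):

```python
# Toy infrastructure-as-code scan: flag ingress rules that expose
# sensitive ports to the whole internet. Illustrative sketch only.

RISKY_PORTS = {22, 3389}  # SSH, RDP

def scan_security_group(rules):
    """Return findings for rules open to 0.0.0.0/0 on risky ports."""
    findings = []
    for rule in rules:
        if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") in RISKY_PORTS:
            findings.append(f"port {rule['port']} open to the internet")
    return findings

if __name__ == "__main__":
    rules = [
        {"port": 443, "cidr": "0.0.0.0/0"},  # public HTTPS: fine
        {"port": 22, "cidr": "0.0.0.0/0"},   # SSH to the world: flag it
    ]
    print(scan_security_group(rules))  # -> ['port 22 open to the internet']
```

Running a check like this in CI, against the same policies infosec enforces at runtime, is the "one set of policies" idea Badhwar describes.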
1/13/2021 • 29 minutes, 20 seconds
Scaling New Heights EP #8 - Making a Difference at Airbnb, the Story of a Reliability Engineer
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews, conducted by Scalyr CEO Christine Heckart, that cover the challenges engineering managers have faced when scaling architectures to support the demands of the business.
Uber. Recall the company in 2017: the management, the scale, and the post by Susan Fowler, who detailed experiences that speak to the hopes and terrible realities at the company. That's the scenario that faced Donald Sumbry, who now heads reliability engineering at Airbnb. In this interview with Heckart, he says he was not aware of the issues internally at Uber because of the work and all the technical problems that needed resolving.
"In early 2017, we had the Susan Fowler blog post, and one of the things I remember the most was that some of what was what had happened was actually a surprise to me," Sumbry said, "And I realized that I was so knee-deep in the work that I was doing, that there were so many problems to solve. And we attracted the type of people that just jumped into a problem.
Joining Airbnb, Sumbry brought what he learned at Uber about looking at the big picture. He also learned to avoid the savior complex. Every company is different, no matter how much it may seem that the engineer has seen it all and can solve all the problems.
1/5/2021 • 14 minutes, 4 seconds
Is Hindsight Still 2020? Reviewing the Year in Tech
On the last The New Stack Analysts of the year, the gang got together — remotely, obviously — to reflect on this year. And oh what a year! But for a year in tech, 2020 still had a lot of hits — and some misses.
Publisher Alex Williams was joined by Libby Clark, Joab Jackson, Bruce Gain, Steven Vaughan-Nichols, and Jennifer Riggins. We looked back on the year that saw millions die, no one fly, and a lot of jobs in turmoil. It was also a year that, while many things screeched to a halt, much of the tech industry had to keep going more than ever.
12/28/2020 • 47 minutes, 45 seconds
The AWS Viewpoint on Open Source and Kubernetes
KubeCon+CloudNativeCon sponsored this podcast.
Kubernetes is certainly evolving, but it will be some time before organizations deploy and run applications seamlessly in cloud native environments without today’s associated challenges of its adoption and maintenance. Amazon Web Services (AWS), of course, is both an early proponent of Kubernetes and a leading provider of cloud native services and support, and has thus been implicitly involved with its changes over the past few years.
In this The New Stack Makers podcast, AWS’ Bob Wise, general manager of Kubernetes, and Peder Ulander, head of product marketing for enterprise, developer and open source initiatives, described AWS’ role in Kubernetes and how cloud native plays into the company’s open source strategy. They also discussed how Kubernetes is evolving in the market, including in terms of how customer needs are changing, and why open source technologies are critical to fill in gaps in order for cloud native to realize its full potential.
Alex Williams, founder and publisher of The New Stack, hosted this episode.
12/22/2020 • 49 minutes, 36 seconds
New Relic’s OpenTelemetry and Open Source Commitment
New Relic sponsored this podcast.
The Cloud Native Computing Foundation’s (CNCF) OpenTelemetry project was created to help foster the adoption of observability by helping to improve interoperability among the different observability toolsets through a vendor-neutral framework. In this way, OpenTelemetry should help to provide a single set of APIs, libraries, agents and collector services to capture distributed traces, metrics and other information from an application for improved observability.
In this The New Stack Makers podcast, hosted by TNS Founder and Publisher Alex Williams, Ben Evans, principal engineer and JVM technologies architect, New Relic, discussed OpenTelemetry and New Relic’s contributions to OpenTelemetry and other open source projects.
The genesis of OpenTelemetry was not to create a technology for its own sake in anticipation of what observability users might need, but to serve as a common framework to meet palpable challenges organizations already face.
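The core idea behind a vendor-neutral framework like the one described above is that application code emits spans against one common interface, while any backend can subscribe as an exporter. A stripped-down Python sketch of that pattern (purely illustrative; this is not the actual OpenTelemetry API, and all names here are invented):

```python
import time
from contextlib import contextmanager

# Illustrative miniature of vendor-neutral tracing: the application
# only calls span(); backends plug in as exporters without any change
# to application code.

_exporters = []

def register_exporter(fn):
    """A backend is just a callable that receives finished span records."""
    _exporters.append(fn)

@contextmanager
def span(name):
    start = time.monotonic()
    try:
        yield
    finally:
        record = {"name": name, "duration_s": time.monotonic() - start}
        for export in _exporters:
            export(record)

# Usage: swap this list for any vendor's exporter; the app code is unchanged.
collected = []
register_exporter(collected.append)

with span("handle_request"):
    time.sleep(0.01)

print(collected[0]["name"])  # handle_request
```

Decoupling instrumentation from export is what lets a single set of APIs serve many observability backends.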
12/21/2020 • 40 minutes, 27 seconds
Why IAM is a Pain Point in Kubernetes
Prisma Cloud from Palo Alto Networks sponsored this podcast.
Identity and access management (IAM) was previously relatively straightforward. Often delegated as a low-level management task to the local area network (LAN) or wide area network (WAN) admin, the process of setting permissions for tiered data access was definitely not one of the more challenging security-related duties. However, in today’s highly distributed and relatively complex computing environments, network and associated IAM are exponentially more complex. As application creation and deployment become more distributed, often among multicloud containerized environments, the resulting dependencies, as well as vulnerabilities, continue to proliferate as well, thus widening the scope of potential attack surfaces.
How to manage IAM in this context was the main topic of this episode of The New Stack Analysts podcast, as KubeCon + CloudNativeCon attendees joined TNS Founder and Publisher Alex Williams and guests live for the latest “Virtual Pancake & Podcast.” They discussed why IAM has become even more difficult to manage than in the past and offered their perspectives about potential solutions. They also showed how enjoying pancakes — or other variations of breakfast — can make IAM challenges more manageable.
The event featured Lin Sun, senior technical staff member and Master Inventor, Istio/IBM; Joab Jackson, managing editor, The New Stack; and Nathaniel "Q" Quist, senior threat researcher (Public Cloud Security – Unit 42), Palo Alto Networks. Jackson noted how the evolution of IAM has not been conducive to handling the needs of present-day distributed computing. Previously, it was "not exactly a security thing" nor a "developer problem," and wasn't even "a security problem," he said.
“[IAM] really almost was a network problem: if a certain individual or a certain process wants to access another process or a resource online, then you have to have the permissions in place to meet all the policy requirements about who can ask for these particular resources,” Jackson said. “And this is an entirely new problem with distributed computing on a massive and widespread scale…it’s almost a mindset, number one, about who can figure out what to do and then how to go about doing it.”
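The policy check Jackson describes, one process asking for access to another process or resource, reduces to a small decision function, repeated at massive scale across a distributed system. A toy sketch (illustrative only; the principal and resource names are invented, and real IAM systems add roles, conditions and policy hierarchies):

```python
# Toy IAM evaluation: is a principal allowed an action on a resource?
# Deny by default; allow only on an explicit matching policy.

POLICIES = [
    {"principal": "svc-frontend", "action": "read",  "resource": "orders-db"},
    {"principal": "svc-billing",  "action": "write", "resource": "orders-db"},
]

def is_allowed(principal, action, resource):
    return any(
        p["principal"] == principal
        and p["action"] == action
        and p["resource"] == resource
        for p in POLICIES
    )

print(is_allowed("svc-frontend", "read", "orders-db"))   # True
print(is_allowed("svc-frontend", "write", "orders-db"))  # False
```

The check itself is simple; the hard part, as the guests note, is authoring and maintaining these policies for thousands of services and identities.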
12/18/2020 • 43 minutes, 45 seconds
On the Tech Radar: Database Storage
KubeCon+CloudNativeCon sponsored this podcast.
How to manage database storage in cloud native environments continues to be a major challenge for many organizations. Database storage also came to the fore as the issue to explore in the latest Cloud Native Computing Foundation (CNCF) Tech Radar report.
In this edition of The New Stack Analysts podcast, host Alex Williams, founder and publisher of The New Stack and co-hosts Cheryl Hung, vice president of ecosystem at Cloud Native Computing Foundation (CNCF) and Dave Zolotusky, senior staff engineer at Spotify discuss stateless database storage, recent results of the report findings and perspectives from the user community.
The podcast guests — who both contributed to the CNCF Tech Radar report and hail from the database storage user community — were Jackie Fong, engineering leader, Kubernetes and developer experience for Ticketmaster, and Mya Pitzeruse, software engineer, OSS contributor, effx.
12/16/2020 • 51 minutes, 28 seconds
Why K8s Cluster Management Is Not Expected to Become Boring Anytime Soon
In this The New Stack Makers podcast featured during KubeCon + CloudNativeCon North America, Eric Sorenson, technical product manager for Relay at Puppet and Dave Lindquist, general manager and vice president engineering, hybrid cloud management, Red Hat, discuss the state of Kubernetes cluster configuration management, associated DevOps challenges and how problems can be solved in the future. TNS correspondent B. Cameron Gain hosted the episode.
12/15/2020 • 41 minutes, 47 seconds
Scaling New Heights EP #7 - Glassdoor: Performance Matters
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews with engineering managers who talk about the problems they have faced and the resolutions they sought, conducted by guest host Scalyr CEO Christine Heckart.
Bhawna Singh had two mandates at Glassdoor when she started as senior vice president of engineering and CTO: open an office in San Francisco to access the region’s talent pool, and rebuild the search vertical for job results. Glassdoor is a job and recruiting site that offers services that allow people to see information such as company reviews, salary reviews, and benefits that a potential employer offers.
To improve the quality of search, the team had to set metrics that the team trusted. Performance challenges surfaced when the team focused its efforts on the tactical aspects of architecting the platform. The Glassdoor team had tuned the system for quality; building out the deployment infrastructure and adding machine learning models. The work made the system heavier and less performant.
12/14/2020 • 12 minutes, 34 seconds
Teleport, a Unified Access Plane, Built on Google’s Crypto
In this The New Stack Makers podcast, Alex Williams, publisher and founder of The New Stack, spoke with Ev Kontsevoy, co-founder and CEO of Teleport, about what the shift to a widely distributed architecture means for engineers and developers and how Teleport accommodates their needs in this new dynamic.
Teleport was formerly known as Gravitational until just recently when it rebranded itself after the name of its flagship unified access plane platform. Built with Go on Google’s cryptography, Teleport allows engineers, among other things, to bypass layers of legacy architecture in order to securely take advantage of cloud resources from any location worldwide with an Internet connection.
12/9/2020 • 39 minutes, 36 seconds
Why Kubernetes and Kafka are the Combo for DataOps Success
What is DataOps? Why is a real-time data platform essential to the use cases driving it? How can you tame open source complexity while building data pipelines?
In this episode of The New Stack Makers live — yet from our respective sofas — from KubeCon North America, we talk to Andrew Stevenson, chief technical officer and co-founder of Lenses, about how Apache Kafka and Kubernetes can together dramatically increase the agility, efficiency and security of building real-time data applications.
12/8/2020 • 41 minutes, 31 seconds
What Happens to SaltStack Now Under VMware
VMware sponsored this podcast.
SaltStack’s Salt is a leading automation and security platform for configuration management in on-premises and cloud native environments. Created with Python, Salt is in use at Juniper, Cisco, Cloudflare, Nutanix, SUSE and Tieto, among a number of other Fortune 500 technology companies and banks. SaltStack also offers a suite of tools, including SaltStack Enterprise for Salt, Plugin Oriented Programming (POP) and Tiamat.
SaltStack’s portfolio has also been merged with VMware’s suite of offerings, following VMware’s purchase of SaltStack earlier this year.
In this The New Stack Makers podcast, SaltStack’s Thomas Hatch, founder, CTO and Salt’s creator, and Janae Andrus, community manager for Salt, discuss SaltStack’s roots, evolution and integration with VMware’s platforms and technologies. The future of SaltStack’s open source projects was also discussed.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
11/30/2020 • 45 minutes, 40 seconds
Pancakes Are Hot and So is Immutable Security
Accurics sponsored this podcast.
Who doesn’t love hotcakes? And to make them right, you need to wait until the batter starts to bubble up before you flip them. Immutable infrastructure management and related security challenges are also “bubbling up” these days, as many organizations make the shift to cloud native environments, with containerized, serverless and other layers.
In this The New Stack Analysts podcast, TNS founder and publisher Alex Williams served up pancakes with KubeCon attendees who joined him for a “stack” at the “Virtual Pancake Breakfast and Podcast,” where they offered their deep perspectives on what is at stake as immutable infrastructure security and other related concerns take hold.
The guests joining the virtual breakfast were Om Moolchandani, co-founder and CTO of Accurics; Rosemary Wang, developer advocate for HashiCorp; Krishna Bhagavathula, CTO for the NBA (who also brought his own L.A. Lakers-branded spatula); Chenxi Wang, Ph.D., managing general partner of Rain Capital; and Priyanka Sharma, general manager of the Cloud Native Computing Foundation (CNCF).
11/24/2020 • 44 minutes, 7 seconds
Scaling New Heights Ep # 6 - From Rack and Stack to SaaS
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews, conducted by Scalyr CEO Christine Heckart, that cover the challenges engineering managers have faced when scaling architectures to support the demands of the business.
Rack and stack memories often come up when enterprise engineers talk about building a SaaS. There is respect there when people recall the work that others did.
It’s something that is oftentimes lost, circa 2011-12, when startups rushed into the enterprise space. Social tech was hip. Cloud was fascinating. APIs, webhooks, the evolution of RSS into social technologies — it was like this sudden excitement, the intoxicating rush of services — loosely coupled technologies changing the world!
Nicolas Fischbach, CTO of Forcepoint, provides a view that is often heard from enterprise managers at technology companies. The legacy technologies are there to stay but there is an excitement that comes from building a new SaaS.
11/16/2020 • 15 minutes, 57 seconds
How CERN Accelerates with Kubernetes, Helm, Prometheus and CoreDNS
KubeCon+CloudNativeCon sponsored this podcast.
CERN, the European Organization for Nuclear Research, is known for its particle accelerator and its experiments and analysis of the properties of subatomic particles, antimatter and other particle physics research. CERN is also where the World Wide Web (WWW) was created.
Research and experiments conducted at the largest particle physics research center, whose accelerator runs through a 27-km tunnel, generate massive amounts of data to manage and store. All told, CERN now manages over 500 petabytes — over half an exabyte — and in a decade’s time that is expected to total 5,000 petabytes, said Ricardo Rocha, a staff researcher at CERN.
In this episode of The New Stack Analysts, we learn from Rocha how CERN is adapting as a new accelerator goes online in the next few years with the ability to manage 10x the data it manages now.
11/11/2020 • 37 minutes, 13 seconds
Scaling New Heights Ep # 5 - Platform Resilience, a New Driver for a Roadside Assistance Company
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews, conducted by Scalyr CEO Christine Heckart, that cover the challenges engineering managers have faced when scaling architectures to support the demands of the business.
Roadside service. The car breaks down, the driver makes a call, an agent answers and help is on the way.
Turn to the past few years and the agent is no longer central to the experience. The app is the roadside assistant. The change in the market has turned a company like Agero from a B2B company into one that deals directly with the consumer.
Bernie Gracy is chief digital officer for Agero, a white label roadside assistance platform that provides support for 12 million roadside events per year through its digital assistance platform.
Agero built its business on empathy as a foundation for its service. Its empathetic agents were tasked with getting travelers through often stressful experiences. Moving to a digital experience put a load on the platform that could not be sustained.
11/9/2020 • 16 minutes, 59 seconds
The Status of Cloud Native and Kubernetes Today
In this The New Stack Makers livestream podcast recorded ahead of KubeCon + CloudNativeCon, Founder and Publisher Alex Williams and Managing Editor Joab Jackson hosted a roundtable discussion covering the status of cloud native adoption and its near- and long-term outlook. The guests were Rachel Stephens, an analyst for RedMonk, Steven Vaughan-Nichols, a long-time journalist for ZDNet and well-recognized Linux professional and Katie Gamanji, a cloud platform engineer for American Express and member of the CNCF Technical Oversight Committee.
11/6/2020 • 41 minutes, 4 seconds
One Bank's Path for Moving Deep Legacy Infrastructure into Cloud Native Operations
Some legacy infrastructures are certainly more difficult to manage than others when organizations make the shift to cloud native. In the case of the heavily regulated financial services industry and the deep legacy infrastructure involved when banks transition to the cloud, challenges inherent in the sector abound. Regulatory and compliance and data-management challenges are also usually amplified when the bank has an especially large international presence.
In this edition of The New Stack Analysts podcast, as part of The New Stack’s recent coverage of end-user Kubernetes, Michael Lieberman, senior innovation engineer and vice president at Tokyo-based MUFG, discusses his company’s journey to scale out architectures in a microservices and Kubernetes environment in the world of financial services. Alex Williams, founder and publisher of The New Stack, hosted the podcast with co-hosts Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF), and Dave Zolotusky, senior staff engineer at Spotify.
11/4/2020 • 31 minutes, 27 seconds
Scaling New Heights Episode #4 - Maybe Building a DIY Logging Tool is Not the Best Idea
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews, conducted by Scalyr CEO Christine Heckart, who discusses the problems engineering managers have faced when scaling architectures to support the demands of the business.
The idea behind building a logging analysis tool is fairly simple. That is until it’s time to scale across multiple teams and manage it beyond day two. Then things become a bit more complicated. There comes the temptation to build a logging platform because, well, how complicated could it be?
11/2/2020 • 20 minutes, 32 seconds
Scaling New Heights Episode #3: Nextdoor: Test Challenges Two Weeks Before Launch
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews, conducted by Scalyr CEO Christine Heckart, who discusses the problems engineering managers have faced when scaling architectures to support the demands of the business.
Mai Le is Nextdoor’s head of engineering. She and her team knew they had to do something for small businesses suffering in the pandemic: develop ways to connect them with their local customers. This was especially important for Nextdoor, since social networks have served as hubs for small businesses, with Nextdoor standing out more than most.
After her team’s considerable effort, it should have been smooth sailing. But, with only two weeks until launch, Le couldn’t get the new service to work.
10/26/2020 • 12 minutes, 22 seconds
The Future of Data in Serverless Will Be API-Driven
In the serverless paradigm, the idea is to abstract away the backend so that developers don’t need to deal with it. That’s all well and good when it comes to servers and complex infrastructure like Kubernetes. But up till now, database systems haven’t typically been a part of the serverless playbook. The assumption has been that developers will build their serverless app and choose a separate database system to connect to it — be it a traditional relational database, a NoSQL system, or even a Database-as-a-Service (DBaaS) solution.
But the popularity of serverless has prompted further innovation in the data market. In this episode of The New Stack Analysts podcast, we talked about the latest developments in managing data in a serverless system.
My two guests were Evan Weaver, co-founder and chief technology officer of Fauna, and Greg McKeon, a product manager at Cloudflare. Fauna is building a “data API” for serverless apps so that developers don’t even need to touch a database system, while Cloudflare runs a serverless platform called Cloudflare Workers.
10/21/2020 • 25 minutes, 28 seconds
The Hero in Four Acts: We’ve Got This, WTF, Oh Shift, We Did It
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews, conducted by Scalyr CEO Christine Heckart, with engineers who talk about the problems they have faced and the resolutions they sought.
These are the stories about engineering management and how technology decisions are made for scaling architectures to support the demands of the business.
Pooja Brown is vice president of engineering at Stitch Fix and also one of the founding members of ENG, a peer network of VPs and CTOs from leading SaaS companies. Brown spends most of the discussion talking about her work developing a modern infrastructure prior to joining Stitch Fix.
Brown tells her story in four acts, discussing her work on monoliths and the move to a modern architecture that came with hiring. The developers were hired for their Node.js skills, and they were quite different from the .NET devs the team had once known so well.
10/19/2020 • 13 minutes, 21 seconds
Robinhood’s Kubernetes Journey: A Path More Treacherous Than It Appears
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews conducted by our guest host Scalyr CEO Christine Heckart with engineering managers who talk about the problems they have faced and the resolutions they sought.
The challenges engineering leaders face define how technology decisions are made for scaling architectures to support the demands of rapid-growth businesses.
Heckart's first interview is with Adam Wolff, Robinhood’s former vice president of engineering, who recalls how, two years ago, Kubernetes looked so right — and then the difficulties that followed when Wolff mandated the platform's adoption.
10/12/2020 • 18 minutes, 25 seconds
Data Center and Cloud Environments for Next-Generation Data Stacks
Next-generation data stacks and data centers will continue to meet the demands of software developer-led business models. But as new tools and platforms emerge to meet the deployment needs of today and tomorrow to run at scale, the data centers and cloud infrastructures will need to evolve as well.
In this edition of The New Stack Makers podcast, Shrey Parekh, senior manager of product marketing, and Whitney Satin, director of product marketing, of Cisco’s AppDynamics discuss how data center and cloud environments are changing to accommodate next-generation applications and data stacks. Alex Williams, founder and publisher of The New Stack, hosted this episode.
10/8/2020 • 34 minutes, 43 seconds
Supplanting Scripting with Engineering Management and Machine Learning
Scripting behaviors and habits are hard to break, as engineers are often accustomed to their own ways and processes. Meanwhile, the steps team leaders need to take when adopting new CI/CD processes and technologies include engineering management from the bottom up and mapping people's skill sets and processes. One example of a CI/CD platform used to make the shift is Harness’, which uses machine learning and other technologies to help DevOps engineering teams further their goals.
In this edition of The New Stack Makers podcast, Tiffany Jachja, evangelist, Harness, and Rajsi Rana, senior product manager, Oracle Cloud, discuss scripting, its background and how machine learning, CI/CD and other processes can help guide a shift in engineering culture to make the most of time and resources. Alex Williams, founder and publisher of The New Stack, hosted this episode.
10/5/2020 • 30 minutes, 45 seconds
Episode 136: Lightbend’s Cloudstate Builds on Akka to Offer Stateful Serverless
In this episode of The New Stack Context podcast, we speak with Jonas Bonér, Akka creator and founder/chief technology officer of Lightbend, about the challenges of bringing state to serverless, reactive microservices frameworks, and Cloudstate itself. TNS Editorial and Marketing Director Libby Clark hosts this episode, with the help of TNS Managing Editor Joab Jackson.
10/2/2020 • 33 minutes, 8 seconds
A Next-Gen World Does Not Mean Putting a Server in a Container
In this edition of The New Stack Makers podcast, Charlotte Mach and Ian Crosby, CTO for Container Solutions, discuss a number of topics about next-generation computing. They include edge computing, hybrid and multicloud adoption, environmentally sustainable cloud computing and culture. Alex Williams, founder and publisher of The New Stack, hosted this episode.
10/1/2020 • 33 minutes, 8 seconds
DataOps: The Basics and Why It Matters
For this latest episode of The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack hosts guest speakers Dina Graves Portman, developer relations engineer for Google, Emilie Schario, internal strategy consultant, data, for GitLab, and Nicole Schultz, assistant director of engineering for Northwestern Mutual to discuss how DataOps is defined and why its application in the context of DevOps is particularly relevant in today’s highly complex and increasingly distributed environments.
Like DevOps, Schario describes DataOps as workflow-related, but it extends much further to help resolve data-management challenges.
9/30/2020 • 43 minutes, 2 seconds
Kubernetes Has Evolved, So Should Your Security
Prisma Cloud by Palo Alto Networks sponsored this podcast.
In this edition of The New Stack Makers podcast, Robert Haynes, cloud security evangelist, Palo Alto Networks, discusses Kubernetes security above and beyond what Kubernetes has natively and the evolution of the Kubernetes vulnerability landscapes since the first API attacks. Alex Williams, founder and publisher of The New Stack, hosted this episode.
9/28/2020 • 36 minutes, 35 seconds
Episode 135: WebAssembly Could Be The Key For Cloud Native Extensibility
Although WebAssembly was created for bringing advanced programming to the browser, Solo.io founder/CEO Idit Levine has been a vocal proponent of using the portable, fast, open source runtime to extend service meshes — citing Solo.io’s own work in offering tools and services to support commercial service mesh operations. In fact, WASM, as it’s also known, could be used to bring extensibility across a wide variety of cloud native projects, she argues.
For this week’s episode of The New Stack Context podcast, we ask Levine about the excitement around WebAssembly, its use in the Envoy proxy, and Solo.io’s new proposal for packaging WASM modules in the Open Container Initiative format. TNS editorial and marketing director Libby Clark hosts this episode, with the help of TNS senior editor Richard MacManus and TNS managing editor Joab Jackson.
9/25/2020 • 37 minutes, 44 seconds
Kolton Andrus, CEO and co-founder, Gremlin on Chaos Engineering
For Kolton Andrus, CEO and co-founder of Gremlin, describing what chaos engineering is "is one of my favorite topics for debate," one that "makes chaos engineering sound fun and exciting."
In this edition of The New Stack Makers podcast, Andrus, in addition to defining chaos engineering, describes how organizations can make it work for them. Alex Williams, founder and publisher of The New Stack, hosted this episode.
The very idea of chaos — and an IT organization’s embrace of it — can conjure up fear in many. “[Chaos engineering] scares the pants off of some old school folks that aren’t comfortable with that kind of chaos in their environments. And so most people think chaos engineering is randomly breaking things and seeing what happens,” said Andrus. “I think that chaos engineering is thoughtful, planned experiments that teach us about our system and one of the key concepts that goes with that is this idea of the ‘blast radius.’ When we run this experiment, whom might we impact? Because the goal is to prevent outages, not to cause an outage and we never want to inadvertently cause customer pain. We never want to cause an outage because we were being cavalier in our approach.”
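Andrus' framing of thoughtful, planned experiments with a limited blast radius, rather than random breakage, can be sketched in a few lines of Python. This is an illustrative toy, not Gremlin's API; every function and callback name here is hypothetical:

```python
import random

def run_experiment(hosts, blast_radius, inject_fault, check_slo):
    """Inject a fault into a bounded subset of hosts and verify the SLO holds.

    The blast radius caps how many hosts can be affected, and the loop
    stops at the first SLO violation rather than widening customer impact.
    """
    targets = random.sample(hosts, min(blast_radius, len(hosts)))
    results = {}
    for host in targets:
        inject_fault(host)                # e.g., add latency or kill a process
        results[host] = check_slo(host)   # did the system stay healthy?
        if not results[host]:
            break                         # abort before causing an outage
    return results

# Toy usage: the "fault" just flags a host; the SLO check tolerates it.
faulted = set()
outcome = run_experiment(
    hosts=["a", "b", "c", "d"],
    blast_radius=2,
    inject_fault=faulted.add,
    check_slo=lambda h: h in faulted,  # stand-in for a real health probe
)
```

The key design point is that impact is bounded up front and the experiment is abandoned on the first sign of pain, matching the "prevent outages, not cause them" goal.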
9/23/2020 • 42 minutes, 43 seconds
2020 GitLab Commit - The Opportunity of Open Source to Create Opportunities for Others
In this episode of The New Stack Makers, we sit down with Christina Hupy, GitLab’s senior education program manager, and Nuritzi Sanchez, GitLab’s senior open source program manager, in the lead-up to GitLab Commit later this summer. We talk about the ups and downs of inclusion in the open source world, how you can best leverage the career opportunities of open source, and most importantly, how open source communities can open themselves up more to better foster those opportunities. We discuss this all not only within the context of traditional enterprise settings, but at universities and in prisons.
Much of both Hupy and Sanchez’s time is spent with the broader community of GitLab users. And it’s part of their job to bring external feedback inside the company. So they may be more prepared than most to answer the essential question: What does a better open source community look like?
9/21/2020 • 46 minutes, 48 seconds
The Flux Factor: GitOps for Continuous Delivery
In this episode of The New Stack Makers, Alex Williams, founder and publisher of The New Stack, talks to three members of the WeaveWorks team: Alexis Richardson, founder and CEO, Cornelia Davis, chief technology officer, and Stefan Prodan, developer experience engineer and the architect of Flux2 and Flagger. They reflect on the next generation tooling in the cloud native tech community. The quartet discusses how that tooling fits into the GitOps toolkit and, particularly, the next evolution of the Flux continuous delivery for GitOps projects.
9/21/2020 • 44 minutes, 8 seconds
Context 134: The CNCF Technology Radar Evaluates Observability Tools
Application and system observability was the focus of the latest Cloud Native Computing Foundation Technology Radar end user survey, posted last week. So for this week’s TNS Context podcast episode, we invited Cheryl Hung, CNCF vice president of ecosystem, to discuss these latest findings. To get an additional industry perspective on observability, we’ve also invited Buddy Brewer, vice president of full stack observability for New Relic.
Java remains one of the most popular and trusted programming languages, but it is not necessarily well-suited for everything, including cloud native and containerized applications.
While Java’s elegance and versatility is reflected in how it can be written once and run practically anywhere, the language was geared mainly for creating application stacks decades ago when it was first created. Cloud native and Kubernetes, of course, are different animals compared to the stacks of decades past.
In other words, Java is not Golang for Kubernetes. And yet…
Frameworks will likely serve as the solution to Java’s Kubernetes dilemma. In this edition of The New Stack Makers podcast, DataStax’s Alice Lottini, Vanguard architect, and Christopher Splinter, senior product manager, open source, discuss how frameworks can allow Java to still work for creating applications that run better in cloud native environments and how they represent a new identity for the 25-year-old programming language. Alex Williams, founder and publisher of The New Stack, hosted this episode.
9/15/2020 • 26 minutes, 42 seconds
Sid Sijbrandij - GitLab Co-Founder and CEO on Iteration and Open Source
The iterative software development model can help organizations improve agility and the efficiencies of production pipelines as DevOps teams continue to seek ways to create applications and updates at ever-faster cadences. GitLab serves as an example of an enterprise that is successfully taking advantage of iteration and applying lessons it has learned to contributing to and supporting its open source projects, as well as to the open source community.
For this latest episode of The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack, speaks with Sid Sijbrandij, co-founder and CEO at GitLab, about iteration, open source projects — including Meltano and Kubernetes— and how SpaceX’s iterative development processes in the hardware industry can teach the software sector.
9/14/2020 • 40 minutes, 43 seconds
Episode 133: Crossplane - A Kubernetes Control Plane to Roll Your Own PaaS
The ideal state of a cloud native shop is to run a development and deployment pipeline that can seamlessly move applications from the developer’s laptop to the data center (or the edge) without any manual intervention. And while there are many tools available to facilitate such automation — Helm, Operators, CI/CD toolchains, GitOps architectures, Infrastructure-as-Code tools such as Terraform — all too often edge cases and exceptions still require personal attention, bringing DevOps pipelines to a halt.
The missing pieces of the puzzle are a control plane and a unified application model for the control plane to run upon, asserted Phil Prasek, a principal product manager at Upbound, in this latest episode of The New Stack Context podcast. Prasek envisions a time when organizations can build their own customized set of platform services, where developers can draw from a self-serve portal the building blocks they need — be they containerized applications or third-party cloud services — and have the resulting app run uniformly in multiple environments.
“Within an enterprise control plane, you can basically have your own abstractions, and then you can publish them,” Prasek said.
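The control plane pattern Prasek describes boils down to a reconcile loop: compare the abstractions that have been declared against what actually exists, and converge the two. A minimal, library-free sketch, with all names hypothetical:

```python
def reconcile(desired, actual, create, delete):
    """One pass of a control plane loop: drive actual state toward desired."""
    for name in sorted(desired - actual):
        create(name)   # abstraction declared but not yet provisioned
    for name in sorted(actual - desired):
        delete(name)   # resource exists but is no longer declared

# Usage: the platform team publishes {"db", "cache"}; "queue" is stale.
created, deleted = [], []
reconcile({"db", "cache"}, {"cache", "queue"}, created.append, deleted.append)
```

Real control planes such as Crossplane run loops like this continuously against provider APIs; the sketch only shows the shape of one pass.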
TNS Editorial and Marketing Director Libby Clark hosts this episode, with the help of TNS Senior Editor Richard MacManus, and TNS Managing Editor Joab Jackson.
9/11/2020 • 31 minutes, 47 seconds
‘From Zero to Dopamine’: Testing Helm’s Developer Experience
Michelle Noorali, a senior software engineer for Microsoft, wrote a statement on a whiteboard during Helm’s first days, when she worked with Matt Butcher at Deis, which Microsoft acquired in 2017. “From zero to dopamine in five minutes” is still the phrase that Butcher, a principal software development engineer for Microsoft, and his team use to measure how they are building a developer experience for the popular package manager used to get applications up and running on Kubernetes.
In this edition of The New Stack Makers podcast, host Alex Williams, founder and publisher of The New Stack, speaks with Butcher and Matt Farina, a senior staff engineer for Samsung, about how updates to Helm help improve the overall Kubernetes experience and balance usability in such a large community to provide the best developer experience.
9/10/2020 • 49 minutes, 7 seconds
Struggles of the Cloud — Survival Tactics From Two GitLab Experts
GitLab sponsored this podcast.
The struggle is real. The Cloud Native Computing Foundation landscape map has over 1,400 cloud native projects listed on it, across a variety of categories. The total market cap of the cloud native ecosystem is $18.66T, which gives you an idea of the scale of cloud business now. So as companies continue their inevitable migration from legacy IT systems to the new cloud native world, they have a mind-boggling number of choices to make. And it’s not just choices about cloud infrastructure and tools, but also about how they run IT projects in the cloud era, and how operators and developers increasingly work together using the DevOps approach.
In this episode of The New Stack Makers, we discuss these and other struggles of the cloud with two GitLab executives: Brandon Jung, vice president of alliances at GitLab, and Pete Goldberg, director of partnerships at GitLab. Both have extensive experience working in the cloud ecosystem, so they were able to provide insights on both the struggles and the solutions.
Prior to GitLab, Jung worked at Google and Canonical. He’s also currently a Linux Foundation board member, so I asked him about some of the challenges he’s seeing in the cloud native ecosystem, given how popular it’s become.
9/8/2020 • 39 minutes, 53 seconds
Episode 132: Darren Shepherd of Rancher - Who Needs Kubernetes Operators Anyway?
Late last month, Rancher Labs donated its popular K3s Kubernetes distribution to the Cloud Native Computing Foundation. This stripped down version of Kubernetes has been a quiet hit among cloud native users — many who are deploying to edge environs.
So for this week’s episode of The New Stack Context podcast, we invited Rancher Co-Founder Darren Shepherd to discuss what Rancher is seeing in the cloud native ecosystem. Rancher is in the process of being acquired by SUSE and, because the deal is still pending, Darren could not comment but he did chat about K3s, as well as Kubernetes.
The New Stack Editorial and Marketing Director Libby Clark hosted this episode, alongside TNS Senior Editor Richard MacManus, and TNS Managing Editor Joab Jackson.
9/4/2020 • 30 minutes, 22 seconds
Open Source Project Momentum: What it Takes
Many projects are initiated to solve a problem that an organization or a user is experiencing. Thanks to the magic of open source, the community can serve to help solve the problem and, ideally, offer solutions better than the creators had originally hoped for. The maintainers’ main mission is largely about helping to make sure the software platform or tool continues to improve and to ensure the contributions are properly maintained and managed.
In this edition of The New Stack Makers podcast, host Alex Williams, founder and publisher of The New Stack and guests Michael Michael, director of products, VMware, Travis Nielsen, senior principal software engineer, Red Hat, Annette Clewett, principal architect, Red Hat and Rob Szumski, senior manager, product management, OpenShift at Red Hat discuss how an open source project develops, changes and becomes sustainable.
9/3/2020 • 58 minutes, 28 seconds
2020 GitLab Commit - Communication Drives Diversity and Inclusion
Tech is building the future, so it should set the example, right? While it could always do better, the tech industry does do better than some more traditional sectors at diversity and inclusion. And D&I comes down to the words we choose and the people we call out to make it happen.
In this episode of The New Stack Makers, our Founder & Publisher Alex Williams talks to two women who’ve taken non-traditional paths to the tech industry. This episode features Kate Milligan, who went from selling mobile phones to global ISV alliance manager for DevOps at Red Hat, and Sara E. Davila, who journeyed from the oil and gas industry to senior manager of partner marketing at GitLab.
They dive into how they’ve witnessed an embracing — and lack — of inclusive language and actions and how GitLab and Red Hat are proactively contributing to a more inclusive future. After all, the core values of open source collaboration should be core to any future.
“In a COVID world, the key to success or, you know, what’s really driving digital transformation, is communication,” Davila said, “just really open, transparent and honest communication.”
While we are all in kind of a holding pattern of tentative re-emergence, we are also all sharing in virtual fatigue. What started out as a novelty of online sharing and learning back in March and April has become, as summer closes, what Davila calls “an over-saturation of webinars.”
For this week’s episode, we spoke with Mike Yawn, a senior solution architect at Hazelcast, about the potential of in-memory computing to supercharge microservices and cloud native workloads.
Yawn recently contributed a post to TNS explaining how in-memory technologies could make microservices run more smoothly. Hazelcast offers an in-memory data grid, Hazelcast IMDG, along with the stream processing software Hazelcast Jet. We wanted to know more about how in-memory could be used with microservices. While in-memory offers caching just like a key-value database such as Redis, it also offers additional computing capacity, which can help process that data on the fly, Yawn explained.
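Yawn's distinction, that an in-memory grid caches like a key-value store but can also run computations next to the data, can be illustrated with a toy Python stand-in. This is a sketch of the idea, not Hazelcast's actual API:

```python
class InMemoryGrid:
    """Toy stand-in for an in-memory data grid: a read-through cache that
    can also run a computation on an entry where it lives, instead of
    shipping the data to the client and back."""

    def __init__(self, loader):
        self._store = {}
        self._loader = loader  # backing-store lookup used on a cache miss

    def get(self, key):
        # Read-through caching, much like a key-value store such as Redis.
        if key not in self._store:
            self._store[key] = self._loader(key)
        return self._store[key]

    def process(self, key, fn):
        # Compute on the entry in place: the extra capacity Yawn describes.
        self._store[key] = fn(self.get(key))
        return self._store[key]

# Usage: cache a "profile" and update it grid-side without a round trip.
grid = InMemoryGrid(loader=lambda key: {"visits": 0})
grid.get("user:1")
grid.process("user:1", lambda entry: {**entry, "visits": entry["visits"] + 1})
```

In a real grid the `process` step would execute on the node that owns the partition holding the key, which is what saves the network round trip.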
8/28/2020 • 32 minutes, 45 seconds
Why Kubernetes Needs to Be Dumbed Down for DevOps
An organization’s shift to a cloud native environment will invariably involve the adoption of control planes, data planes and a number of other Kubernetes- and microservices-specific platforms and tools to help manage the Kubernetes “parallel universe.” Additionally, many DevOps team members, including developers, will need to adopt new skill sets to make the shift.
In this edition of The New Stack Makers podcast, Alex Williams, TNS founder and publisher, speaks with analyst Janakiram MSV about the context of the Kubernetes parallel universes. They also discuss the developer experience, how GitOps is helping to plug in some of the gaps and the idea of cluster sprawl and how it relates to multicloud environments.
The New Stack recently published its latest edition of “The State of the Kubernetes Ecosystem” ebook, which MSV authored and which can also serve as a guide. The second chapter of the ebook offers a detailed overview of the cloud-ready and cloud native worlds. The chapter “maps” the new ecosystem that is “growing exponentially.”
8/27/2020 • 38 minutes, 53 seconds
The Evolution of Stateful Applications on Kubernetes
Kubernetes and containers are obviously much talked about in the IT world today, but how to manage the stateful applications and data that run on top of cloud native platforms is also — especially for operations — important. The process includes managing the data from legacy stateful applications as organizations make the shift to highly distributed containerized environments.
In this edition of The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack, discusses the concepts of big data, storage and stateful applications on Kubernetes. Guests Tom Phelan, fellow, big data and storage organization, Hewlett Packard Enterprise (HPE); and Joel Baxter, distinguished engineer, HPE, draw from their deep experience managing stateful applications and data in containerized environments. They also discuss KubeDirector, an open source platform for running non-cloud native stateful applications on Kubernetes.
8/26/2020 • 42 minutes, 24 seconds
KCCNC 2020 EU Virtual Pancake Breakfast: Why Your K8s ‘Stack’ Should Be Boring
Kubernetes is becoming boring and that’s a good thing — it’s what’s on top of Kubernetes that counts.
In this The New Stack Analysts podcast, TNS Founder & Publisher Alex Williams asked KubeCon attendees to join him for a short “stack” at our Virtual Pancake & Podcast to discuss “What’s on your stack?” The podcast featured guest speakers Janakiram MSV, principal analyst, Janakiram & Associates, Priyanka Sharma, general manager, CNCF, Patrick McFadin, chief evangelist for Apache Cassandra and vice president, developer relations, DataStax and Bill Zajac, regional director of solution engineering, Dynatrace. The group passed the virtual syrup and talked Kubernetes, which may be stateless, but also means there’s plenty of room for sides.
8/24/2020 • 39 minutes, 36 seconds
Episode 130: KubeCon EU and the Zombie Workloads
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Pratik Wadher, vice president of product development at Intuit, to discuss the company’s experience as a Kubernetes end user, as well as its involvement in the Argo Flux project — a single toolchain for continuous deployment and automated workflows using GitOps. We also share our experiences of attending KubeCon + CloudNativeCon EU 2020, held this week “virtually.”
The New Stack editorial and marketing director Libby Clark hosted this episode, alongside TNS Publisher Alex Williams, TNS senior editor Richard MacManus, and TNS managing editor Joab Jackson.
8/21/2020 • 38 minutes, 6 seconds
How to Sell Your Infrastructure to the Colleagues That Don’t Have to Buy It w/ Simone Sciarrati
A lot of the time, it’s harder to convince your friends and family than a stranger. The first group is usually more decisive and direct with you. The same goes for your work family. When you’re building an internal infrastructure for autonomous teams, it becomes your job to not only provide that technical backbone, but to act as sales and customer support.
Nobody said internal developer advocacy would be easier.
The sixth episode of The New Stack Analysts End User Series again brings together our publisher Alex Williams with co-hosts Cheryl Hung from the Cloud Native Computing Foundation and Ken Owens of Mastercard. In this episode, they talk with Simone Sciarrati, engineering team lead at the media intelligence platform Meltwater, about its autonomous engineering culture, molding the developer experience and those tough technology decisions.
8/19/2020 • 31 minutes, 56 seconds
How the Right Load Balancer Supports a Video SaaS Provider’s Ambitious Plans for Kubernetes
Citrix sponsored this podcast.
It would be an understatement to say that 8×8’s Software-as-a-Service (SaaS) offerings require high bandwidth to deliver its voice, video and other enterprise-class API solutions. While the company has always sought ways to boost its throughput, the coronavirus pandemic has placed huge pressure on its bandwidth needs as it works to both maintain and improve its users’ network experience. Earlier this year, for example, traffic surged 50-fold in less than one month.
Ultimately, 8×8’s DevOps teams relied largely on Kubernetes infrastructure, along with a load balancer and other support from Citrix, an application-delivery solution provider, to help manage the unprecedented traffic.
In this The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack, spoke with Pankaj Gupta, senior director of product marketing, cloud native, DevOps, security, analytics and network, for Citrix and Lance Johnson, director of engineering, cloud R&D, for 8×8. They discussed how Kubernetes and Citrix helped 8×8 achieve and maintain agility while delivering a better customer experience for its collaboration product portfolio during this time of exceptional demand for video and other networking infrastructure capabilities.
8/18/2020 • 44 minutes, 35 seconds
Episode 129: Kubernetes 2020, by the Numbers
The New Stack has just released an updated eBook on Kubernetes, “The State of the Kubernetes Ecosystem,” and so this week on The New Stack Context podcast, we’ve invited TNS analyst Lawrence Hecht to discuss some of the analysis he did for this volume. We covered Kubernetes adoption in the cloud, storage and networking concerns and the changing DevOps culture around cloud native computing. At the end of the podcast, we also discuss what to expect from next week’s KubeCon + CloudNativeCon Europe virtual conference.
The New Stack Senior Editor Richard MacManus hosted this episode, with the help of Joab Jackson, TNS managing editor, and Alex Williams, founder and publisher of The New Stack.
8/14/2020 • 38 minutes, 48 seconds
Why Spotify’s Golden Path to Kubernetes Adoption Has Many Twists and Turns
Spotify is well known worldwide for its music service. Less well known is that its path to Kubernetes has been a road with many twists and turns.
What may also surprise many is that Spotify is a veteran user of Kubernetes, and that it owes much of its product-delivery capability to its agile DevOps practices. Indeed, Spotify increasingly relies on container and microservices infrastructure and cloud native deployments, which allow its DevOps teams to continually improve the overall streaming experience for millions of subscribers.
In this edition of The New Stack Analysts podcast, part of The New Stack’s recent coverage of Kubernetes end users, Jim Haughwout, Spotify’s head of infrastructure and operations, shares the company’s cloud native adoption war stories and discusses its past and present Kubernetes challenges. Alex Williams, founder and publisher of The New Stack; Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF); and Ken Owens, vice president, cloud native engineering, Mastercard, hosted the podcast.
8/13/2020 • 37 minutes, 2 seconds
How a Service Mesh Amplifies Business Value
Aspen Mesh sponsored this podcast.
A key capability that service meshes should increasingly offer is helping DevOps teams gain better observability into the events causing application deployment and management problems. They should also help determine which team can take appropriate action.
In this final episode of The New Stack Makers three-part podcast series featuring Aspen Mesh, Alex Williams, founder and publisher of The New Stack, and correspondent B. Cameron Gain discuss with their guests how service meshes help DevOps teams stave off the pain of managing complex cloud native and legacy environments, and how those gains can translate into cost savings. With featured guests Shawn Wormke, vice president and general manager, Aspen Mesh, and Tracy Miranda, director of open source community, CloudBees, they also cover what service meshes can — and cannot — do to help meet business goals and what to expect in the future.
8/12/2020 • 40 minutes, 14 seconds
The Developer’s Struggle for Control
GitLab sponsored this podcast.
The developer experience today certainly offers software engineers the freedom to create applications at scale across often highly distributed microservices environments. But with this degree of freedom to create and update deployments at scale, developers are under pressure to deliver at faster cadences. They also face security concerns, as well as unknowns about the frontend user experience, even once the security and QA teams have properly vetted the code.
In this The New Stack Makers podcast, correspondent B. Cameron Gain, speaks with Christopher Lefelhocz, vice president of development at GitLab and Ben Sigelman, CEO and co-founder of Lightstep, about how developers can leverage elasticity and other processes and tools to ensure software remains resilient and secure from the time the code is uploaded to GitLab’s repository and throughout the entire deployment and usage cycle.
8/10/2020 • 45 minutes, 4 seconds
From One Server to Kubernetes, A Startup’s Story
KubeCon+CloudNativeCon sponsored this podcast as part of a series of interviews with Kubernetes end users. Listen to the previous stories about the ups and downs of Box’s Kubernetes journey and what Wikipedia’s infrastructure is like behind the firewall.
It started simply enough, but soon the site needed more than a single server to keep things running. Today, EquityZen runs on Kubernetes and is considering its next moves, in particular exploring how containers as a service may serve it.
In this edition of The New Stack Analysts podcast, Andy Snowden, engineering manager, DevOps, for EquityZen, discusses how he helped the company begin its cloud native journey and the challenges associated with the move. Alex Williams, founder and publisher of The New Stack; Cheryl Hung, vice president of ecosystem at Cloud Native Computing Foundation (CNCF) and Ken Owens, vice president, cloud native engineering, Mastercard hosted the podcast.
When Snowden joined EquityZen, he immediately began to apply his background managing Kubernetes environments to help solve a chief concern the company had: the reliability of its infrastructure.
“During our initial conversations, they explained to me that ‘hey, we are having these issues and we are having these big site hits where the site will go down,’ and that is really bad for our customers. They also asked, ‘What have you done in your past that has worked well for you?’” said Snowden. “And knowing Kubernetes as I knew it, I said this sounds like a really good use case for it, and I explained that these are the sort of things I might consider doing.”
Once convinced that a Kubernetes environment would both boost reliability and help the company to better scale its operations, making the shift was, of course, a major undertaking.
8/6/2020 • 28 minutes, 40 seconds
Sebastien Goasguen, TriggerMesh: Event-Driven Architectures and Kubernetes
TriggerMesh sponsored this podcast.
Cloud native environments, and the breadth of tools and platforms developers have at their disposal, have made the developer experience that much richer, especially in the scale and breadth of applications organizations can deploy today. However, today’s cloud native and highly distributed environments typically involve much complexity, while the developer’s role increasingly involves managing application deployments and integrating the applications they create. A number of tools and processes have emerged to improve the developer experience, such as serverless environments that let developers concentrate more on their task of creating applications.
In this The New Stack Makers podcast, Alex Williams, TNS founder and publisher, speaks with Sebastien Goasguen, co-founder, TriggerMesh, about developers’ challenges and the tools and processes of event-driven architectures, including TriggerMesh for AWS EventBridge.
Many developers rely on tools and processes that allow them to spend more time on developing code and less time managing deployments.
“What we’re seeing in a lot of enterprises is that the developers really want to develop, write applications and deploy their apps — they don’t really want to have to deal with the infrastructure and scaling and configuring it,” Goasguen said. “They really want to concentrate on what they’re building, which is the apps.”
TriggerMesh helps to improve the developer experience by bringing to the table what Goasguen describes as an “infrastructure mindset.”
“Developers really want to get their job done and abstract the infrastructure and the difficulties — I think that’s where serverless really arrives,” he explained. “At the heart of serverless, you have events. And that’s where we are.”
Events and event-driven architectures are also increasingly relevant for developers working in cloud native environments. By helping to improve the developer experience with its cloud native integration platform, TriggerMesh supports event-driven architectures for front-end environments. To this end, TriggerMesh is helping DevOps teams bring events from on-premises applications and cloud environments to AWS EventBridge.
TriggerMesh opted to partner with AWS since it is “really evolving the way people are building apps on their cloud using functions.”
“We’ve seen that with Lambda during the last few years, but now they are also tying all those functions and the other services through events,” said Goasguen. “And the entry point for events for AWS is EventBridge.”
8/3/2020 • 26 minutes, 53 seconds
Episode 128: Operators Can Be a Security Hazard
A few years back, Kubernetes was in full development and many of its basic concepts were still evolving, so security was not a huge priority. But as K8s deployments have moved into production, more attention is being focused on securing Kubernetes and its workloads. Gadi Naor has been following Kubernetes security from the start. Alcide, the company Naor founded and now serves as CTO, offers an end-to-end Kubernetes security platform.
For this week’s episode of The New Stack Context podcast, we speak with Naor about a variety of Kubernetes security-related topics. Last week, Naor hosted a Kubernetes security webinar for the Cloud Native Computing Foundation, which, in addition to offering many helpful hints, discussed in detail the spate of recent vulnerabilities found in Kubernetes. And for The New Stack, he wrote about the problem of configuration drift in Kubernetes, and why it can’t be solved simply through continuous integration tools.
TNS Editorial and Marketing director Libby Clark hosted this episode, alongside TNS Senior Editor Richard MacManus, and TNS Managing Editor Joab Jackson.
7/31/2020 • 35 minutes, 44 seconds
OpenJS Keynote: JavaScript, the First 20 Years of the Web Stack
The first 20 years of JavaScript marked the dawn of the Web stack and a new generation of Web developers who had to deal with a community of traditional technologists, as well as the continuous looming threat of Microsoft, explained Allen Wirfs-Brock in a recorded keynote from the OpenJS Foundation’s virtual conference in June.
Wirfs-Brock was also project editor of the ECMAScript specification from 2008-2015 and wrote the book-sized journal article for the Association for Computing Machinery (ACM) entitled “JavaScript: The First 20 Years” for the History of Programming Language conference (HOPL), with co-author Brendan Eich, JavaScript’s creator.
In this The New Stack Makers podcast, hosted by Alex Williams, founder and editor-in-chief of The New Stack, Wirfs-Brock of Wirfs-Brock Associates offers his historical perspective, including JavaScript’s changes during the 25 years after Eich created the language.
“What really happened is what people thought was going to be a minor piece of browser technology — particularly over the last 10 years — has really taken over the world of programming languages,” said Wirfs-Brock. “And so it's quite remarkable.”
7/29/2020 • 20 minutes, 33 seconds
The Ups and Downs of One Cloud Management Provider's Kubernetes Journey w/ Kunal Parmar of Box
KubeCon + CloudNativeCon sponsored this post.
Box was one of the first companies to build on Kubernetes. The company initially built its platform on PHP, and its architecture still uses some parts of that original PHP foundation. Today, Box serves as a case study of a software platform’s cloud native journey that began a few years ago. The company also continues to rely on legacy infrastructure dating back to the days when PHP ran on Box’s bare metal servers in its data centers.
In this edition of The New Stack Analysts podcast, Kunal Parmar, director of engineering, Box, discusses the evolution of the cloud content management provider’s cloud native journey with hosts Alex Williams, founder and publisher of The New Stack, Cheryl Hung, vice president of ecosystem at Cloud Native Computing Foundation (CNCF) and Ken Owens, vice president, cloud native engineering, Mastercard.
Prior to Box’s adoption of Kubernetes, the company sought ways to “create more services outside of the monolith in order to scale efficiently,” Parmar said. One way to do that, he explained, was to shift its legacy monolith applications into microservices.
“For anybody who has [made the shift to Kubernetes], they would know this is a really long and hard journey. And so, in parallel, we have been focusing on adopting Kubernetes for all of the new microservices that we have been building outside of the monolith,” said Parmar. “And today we are at a point where we're actually now looking at also starting to migrate the monolith to run on top of Kubernetes so that we can take advantage of the benefits that Kubernetes brings.”
7/28/2020 • 34 minutes, 47 seconds
Bots, Emojis and Open Source Maintainers Oh My!
Open source maintainers face a different set of challenges today. To name just a few: bots help manage overload, emojis are as prevalent in open source groups as they are in twentysomethings’ social circles, and maintainers are deeply involved in governance issues like never before.
In this The New Stack Makers podcast, Alex Williams, TNS founder and publisher, and VMware guests Dawn Foster, director of open source community strategy; Nikhita Raghunath, senior member of technical staff; and Michael Klishin, senior principal software engineer, discuss what it is like to be an open source maintainer, to build a community and to be a leader.
7/27/2020 • 51 minutes, 45 seconds
Episode 127: Serverless Web Content Delivery with JAMstack
There is a new architecture for front-end web development: JAMstack rethinks the current server-browser architecture, freeing the developer from fiddling with Apache, Linux or other aspects of backend support.
For this week’s episode of The New Stack Context podcast, we speak with Guillermo Rauch, founder and CEO of Vercel, which offers a JAMstack-based service that lets developers simply push their code to Git to update their website or application. Key to this platform is Next.js, an open source user interface framework created by Rauch, based on Facebook’s React but tweaked to make it easier to build user interfaces, not only for the developer but even for the designer.
TNS Editorial and Marketing Director Libby Clark hosted this episode, alongside TNS Senior Editor Richard MacManus, and TNS Managing Editor Joab Jackson.
On the benefit of using a managed JAMstack such as Vercel’s (over a traditional LAMP stack), Rauch noted that:
You can deploy to an essentially serverless infrastructure, right? I always tell people that content delivery networks were the OG serverless — because they never required management. They were perfectly delegated. It’s a globally distributed system with no single point of failure. You’re not going to have to worry about Linux and Apache because you can deploy to any distributed global network that can serve essentially markup, JavaScript, CSS and static files. Then obviously to power the API, server rendering and more advanced functionality, the Vercel network gives you serverless functions. So we try to complete the entire JAMstack equation.
7/24/2020 • 41 minutes, 41 seconds
When You Need (Or Don’t Need) Service Mesh w/ B. Cameron Gain
Aspen Mesh sponsored this post.
The adoption of a service mesh is increasingly seen as an essential building block for any organization that has opted to make the shift to a Kubernetes platform. As a service mesh offers observability, connectivity and security checks for microservices management, the underlying capabilities — and development — of Istio is a critical component in its operation, and eventually, standardization.
In the second of The New Stack Makers three-part podcast series featuring Aspen Mesh, correspondent B. Cameron Gain opens the discussion about what service mesh really does and how it is a technology pattern for use with Kubernetes. Joining in the conversation were Zack Butcher, founding engineer, Tetrate and Andrew Jenkins, co-founder and CTO, Aspen Mesh, who also covered how service mesh, and especially Istio, help teams get more out of containers and Kubernetes across the whole application life cycle.
Service mesh helps organizations migrate to cloud native environments by bridging the management gap between on-premises datacenter deployments and containerized cloud environments. Once implemented, a service mesh should, if functioning properly, reduce much of the enormous complexity of this process. In fact, for many DevOps team members, the switch to a cloud native environment and Kubernetes cannot be done without a service mesh.
7/22/2020 • 48 minutes, 20 seconds
Dynatrace: Andreas Grabner - How AI Observability Cuts Down K8s Complexity
Dynatrace sponsored this podcast.
The Kubernetes era has made scaled-out applications on multiple cloud environments a reality. But it has also introduced a tremendous amount of complexity into IT departments.
My guest on this episode of The New Stack Makers podcast is Andreas Grabner from software intelligence platform Dynatrace, who recently noted that “in the enterprise Kubernetes environments I’ve seen, there are billions of interdependencies to account for.” Yes, billions.
Grabner, who describes himself as a “DevOps Activist,” argues that AI technology can tame this otherwise overwhelming Kubernetes complexity. As he put it in a contributed post, “AI-powered observability provides enterprises with a host of new capabilities to better deploy and manage their Kubernetes environments.”
During the podcast, we dig into how AI – and automation in general – is impacting observability in Kubernetes environments. To kick the show off, I asked Grabner to clarify what he means by “AI observability.”
7/21/2020 • 31 minutes, 3 seconds
Self-Serve Architectures, The K8S Operator for Cassandra on DataStax
DataStax sponsored this podcast.
About 10 years ago, the tech industry rejected the single relational database and demanded distributed systems that could handle data at scale. This movement saw the birth of Riak, Cassandra, MongoDB and Tokyo Cabinet, all built to better manage distributed databases.
“All those databases that grew from: ‘Hey, we have a scaled data problem and this single relational database is not solving it.’ And I think that was the first time we really had to solve scale problems and use distributed technology to make it work,” said Patrick McFadin, chief evangelist for Apache Cassandra and vice president of developer relations at DataStax.
McFadin joined colleague Kathryn Erickson, head of strategy and product at DataStax, for this episode of The New Stack Makers. They sat down with Alex Williams, founder and publisher of The New Stack, to reflect on how the industry has seen a sudden explosion of scale and how that is now guiding the next steps toward fully self-service architectures.
7/20/2020 • 40 minutes, 52 seconds
Episode 126: Denise Gosnell, DataStax - How Many Database Joins Are Too Many?
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Denise Gosnell, chief data officer at DataStax, who is a co-author of the O’Reilly book “A Practitioner’s Guide to Graph Data.” She also graciously wrote a post for us explaining why graph databases are gaining traction in the enterprise.
TNS editorial and marketing director Libby Clark hosted this episode, alongside TNS senior editor Richard MacManus, and TNS managing editor Joab Jackson.
Graph database systems differ from the standard relational (SQL) kind in that they are engineered to more easily capture the relations across different entities. “When you’re looking at your databases, graph databases allow you to model your data more efficiently by using relationships,” Gosnell said.
You could capture that relationship information through a series of database joins of separate tables, but eventually, the complexity of this approach would make it prohibitive. “When you look at the full end-to-end complexity for using it in an application or maintaining your code, or updating edges, graph databases are going to make that a lot easier for the full lifecycle and maintenance of that application,” she said.
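The tradeoff Gosnell describes can be sketched in a few lines of hypothetical Python: answering a "friends of friends" question by repeatedly scanning a flat relational table, versus hopping along edges in a graph. The data and function names here are illustrative, not from any real database API.

```python
# Relational style: relationships live in a flat table of (person, friend) rows,
# and each extra hop requires another join-like scan over the whole table.
friendships = [
    ("alice", "bob"), ("bob", "carol"), ("bob", "dave"), ("carol", "erin"),
]

def friends_of_friends_join(rows, person):
    # First pass: find direct friends; second pass: join the table against them.
    direct = {f for p, f in rows if p == person}
    return {f for p, f in rows if p in direct}

# Graph style: relationships are first-class edges, so a hop is a direct lookup.
graph = {}
for p, f in friendships:
    graph.setdefault(p, set()).add(f)

def friends_of_friends_graph(g, person):
    # Traverse one hop out from each direct friend.
    return {f2 for f in g.get(person, set()) for f2 in g.get(f, set())}

print(friends_of_friends_join(friendships, "alice"))  # {'carol', 'dave'}
print(friends_of_friends_graph(graph, "alice"))       # {'carol', 'dave'}
```

Both return the same answer, but the join version rescans the whole table per hop, which is the end-to-end complexity Gosnell says graph databases are built to avoid.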
7/17/2020 • 33 minutes, 42 seconds
Kelsey Hightower on His Very Personal Kubernetes Journey
KubeCon + CloudNativeCon sponsored this podcast.
The New Stack will shortly launch its latest edition of “The State of the Kubernetes Ecosystem” after the first edition of the ebook was published in 2017. Ahead of its publication, The New Stack was able to speak with Kelsey Hightower, principal developer advocate at Google Cloud, who is likely one of the most recognized voices in the Kubernetes space.
In this edition of The New Stack Makers podcast hosted by Alex Williams, founder and publisher of The New Stack, Hightower spoke about his role in Kubernetes since the beginning, his thoughts on the project’s leadership today and the challenges that lay ahead.
During the early days of Kubernetes, there “were no ebooks available” on the subject, Hightower said. The main goal was to help “raise the profile of the people with the job of trying to manage applications."
“I think the whole point was when I was showing Kubernetes off [as] a contributor [and] building things around the ecosystem, my product work at CoreOS — we were all trying to solve problems that we all had in the past,” Hightower said. “We were trying to uplift the community. We were pretty sure that technology was going to be okay over time.”
7/16/2020 • 41 minutes, 19 seconds
Why a Financial Data Firm Bet Security on Palo Alto Networks
Prisma Cloud from Palo Alto Networks sponsored this podcast.
Both data and governed access to it play an integral part in our lives. With the freedom to access vast amounts of pervasive data comes the responsibility of ensuring protection is in place. For an organization, data protection is required for a range of access points, including apps, the hosts, the containers and serverless architectures.
In this edition of The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack, speaks with Darian Jenik, risk product security lead architect for public cloud migration at Refinitiv.
Refinitiv offers financial-related information, data and analysis to 40,000 institutions worldwide. Jenik discusses how Refinitiv uses Prisma Cloud as a foundation for its custom cloud security reporting app. Among the themes covered, he shares his initial security challenges that drove Refinitiv to consider a third-party solution like Prisma Cloud, as well as what drove the need for the innovative new custom app.
7/14/2020 • 20 minutes, 38 seconds
Chip Zoller, Boskey Savla - How to Find the Less Painful Path For Kubernetes Infrastructure
Dell Technologies sponsored this podcast.
In this The New Stack Makers podcast, VMware’s Boskey Savla and Chip Zoller, senior principal engineer for Dell Technologies, discuss infrastructure challenges associated with cloud native and Kubernetes, and how the right tool choice can make the shift that much less painful.
Kubernetes’ arrangement of container clusters and pods is one of the more remarkable computing structures this writer has observed. Its relative simplicity as a container orchestrator, in many ways, raises the question of why such a straightforward system was so hard to invent in the first place. Regardless, in addition to the resource-saving capabilities Kubernetes offers, the hype about its versatility and scaling capabilities is well-deserved.
But then your organization decides to make the cloud native shift to Kubernetes — suddenly, DevOps sees the very steep learning curve ahead as they face the often immense challenges of managing a Kubernetes infrastructure.
DevOps teams, for example, begin to think about the daunting prospect of ensuring a particular stack is safe and secure on Kubernetes, said Boskey Savla, technical product line marketing manager, modern apps, for VMware. “These are the things a lot of times customers start thinking about [when adopting] cloud native architectures and they tend to think about this as an afterthought,” Savla said. “And they go all in on Kubernetes. But then they realize, ‘okay, we need to take care of all this. How do I even scale a cluster?’”
7/13/2020 • 50 minutes, 16 seconds
Episode 125: Chris DiBona - Google Launches a Trademark Office for Open Source
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Chris DiBona, director of open source at Google, about Google’s launch of the Open Usage Commons, an independent company to help open source projects better manage their trademarks.
In a blog post, DiBona notes that trademarks sit at the juncture of the rule-of-law and the philosophy of open source. So for this episode, we wanted to find out more about how they interact and how Google is attempting to improve the management of trademarks in an open source way. We also wanted to address the rumors that this organization was created to manage Google’s Istio open source service mesh in lieu of the Cloud Native Computing Foundation (DiBona’s answer: no).
TNS editorial and marketing director Libby Clark hosted this episode, alongside TNS senior editor Richard MacManus, and TNS managing editor Joab Jackson.
7/10/2020 • 32 minutes, 12 seconds
Cheryl Hung - How The CNCF’s Radar ‘Shows Reality’
KubeCon + CloudNativeCon sponsored this podcast.
In this edition of The New Stack Makers podcast hosted by Alex Williams, founder and publisher of The New Stack, Cheryl Hung, vice president of ecosystem, CNCF, discusses the CNCF Technology Radar’s role in software development today.
In many ways, the Technology Radar can “show reality,” Hung said. And in doing that, the purpose is to show and distinguish between what is important and what is just hype.
“We are obviously all really interested in new projects, new tools and the new products that are coming out, but the question has always been, is anyone actually using this?” Hung said. “Is it real or is it hype that’s going to fade away in a few months or a few years? So, that was really my motivation behind this new report.”
7/8/2020 • 45 minutes, 21 seconds
Shift as Far Left as You Can to Protect Cloud Native Applications
Prisma Cloud from Palo Alto Networks sponsored this podcast.
In this edition of The New Stack Makers recorded for The State of Cloud Native Security virtual summit held on June 24, thought leaders from Palo Alto Networks discuss why the shift left for security in the software production process is essential for DevOps today. The topics discussed include how the trend to shift left has its roots in DevOps, its integration with continuous delivery (CD), security’s role not only in software development processes but for the enterprise as well and, ultimately, how the shift left helps to ensure software is safe and secure.
Many, if not most, DevOps team leaders and CTOs are well aware of the importance of embedding security processes at the very beginning of the production pipeline.
The guests from Palo Alto Networks are:
Aqsa Taylor, a product manager for Prisma Cloud.
Ashley Ward, solutions architect.
Keith Mokris, head of product marketing, Prisma Cloud.
Vinay Venkataraghavan, Cloud CTO, Prisma Cloud.
7/7/2020 • 41 minutes, 14 seconds
Panel Discussion: The State of Cloud Native Security Report 2020
Prisma Cloud from Palo Alto Networks sponsored this podcast.
Palo Alto Networks, Amazon Web Services, and Accenture, in March 2020, began to survey over 3,000 cloud architecture, InfoSec and DevOps professionals, on a quest to uncover the practices, tools and technologies companies are using to meet and deal with challenges of securing cloud native architectures and methodologies — and to gain the benefits of moving to the cloud.
This edition of The New Stack Makers features the keynote panel discussion with thought leaders from Palo Alto Networks, Amazon Web Services (AWS) and Accenture who shared their own experiences and anecdotes within their organizations as they related to the findings. Moderated by Alex Williams, founder and publisher of The New Stack, the panel discussion was recorded for the The State of Cloud Native Security virtual summit held on June 24.
The panelists were:
John Morello, vice president of product, Prisma Cloud.
Mark Rauchwarter, multicloud security lead, Accenture.
Daniel Swart, partner solutions architect, Amazon Web Services (AWS).
7/7/2020 • 40 minutes, 17 seconds
How Infrastructure as Code Democratizes Scale
Dell Technologies sponsored this podcast.
The tools and technologies DevOps teams rely on for infrastructure as code have certainly changed, especially as infrastructure as code has begun to scale for storage. At the same time, the tools, technologies and platforms that started with hyperscale, cloud native applications and their DevOps environments have actually democratized scale. The concepts of configuration management and state-based declarative paradigms reinforce the idea that the cloud is not a place; rather, it is how you manage operations to achieve objectives like self-service, elastic scale and agile application development.
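The state-based declarative paradigm mentioned above can be sketched in a few lines of hypothetical Python: you declare the desired state, and a reconciler computes the actions needed to converge the actual state toward it. The data shapes and action strings here are purely illustrative, not any real tool's API.

```python
# Hypothetical sketch of declarative, state-based infrastructure management:
# operators declare *what* they want; a reconciler works out *how* to get there.

desired = {"web": 3, "worker": 2}  # replica counts we want
actual = {"web": 1}                # replica counts currently running

def reconcile(desired, actual):
    """Compare desired vs. actual state and emit the actions to converge."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if want > have:
            actions.append(f"scale {name} up to {want}")
        elif want < have:
            actions.append(f"scale {name} down to {want}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")  # prune anything no longer declared
    return actions

print(reconcile(desired, actual))
# ['scale web up to 3', 'scale worker up to 2']
```

Running such a loop continuously is what lets the same declaration drive self-service and elastic scale, regardless of where the infrastructure physically lives.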
During this The New Stack Makers podcast as part of the Dell Technologies Virtual Day of Podcast series, we discuss a number of topics relating to infrastructure as code and how it applies to storage — and what that implies in today’s increasingly cloud native-centric world.
The guests are:
Catherine Paganini, head of marketing for Kublr.
Parasar Kodati, senior consultant, product marketing, Dell Technologies.
Patrick Ohly, senior software engineer, Intel.
7/6/2020 • 44 minutes, 16 seconds
Episode 124: Tanzu, VMware’s Kubernetes Distro for Developers
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Craig McLuckie, who is the VMware chief of Tanzu development, as well as one of the creators of Kubernetes. We asked him about the importance of the developer for modern business, the value that Kubernetes brings to developers and how VMware’s Tanzu portfolio enables that.
TNS Editorial and Marketing Director Libby Clark hosted this episode, alongside TNS Senior Editor Richard MacManus, and TNS Managing Editor Joab Jackson.
7/3/2020 • 32 minutes, 31 seconds
Honeycomb's Charity Majors - Observability Helps You See What Looks Weird
In this conversation for The New Stack Makers, Majors discusses a number of themes relating to observability and monitoring, as well as how she continues to make herself a better developer. The topics include:
Test-driven development and how it has evolved.
Monitoring as a practice — much like test-driven development — was built for on-premises architectures.
Observability is the successor to monitoring by allowing for the discovery of the “unknown unknowns,” which Majors previously wrote is like “following breadcrumbs to find what you don’t even know is there.”
A robust architecture is required for observability.
While observability has been referred to as a “missing link” in DevOps, Majors said, instead, “it’s not such a missing link as it is a necessary first step.”
7/2/2020 • 32 minutes, 33 seconds
How the Financial Sector is a Barometer for Cloud Native
Prisma Cloud from Palo Alto Networks sponsored this podcast.
BankUnited N.A. falls under the mid-sized bank category. Based in Miami Lakes, Florida, it has about $32.9 billion in total assets and serves both the consumer and commercial sectors. Neither one of the largest nor one of the smallest banks in the U.S., BankUnited N.A., a subsidiary of BankUnited, reflects what it is like for an organization in the financial sector, often seen as a barometer for cloud native adoption, to make the switch to cloud native.
In this edition of The New Stack Makers podcast hosted by Alex Williams, founder and publisher of The New Stack, Felipe Medina, vice president, IT security operations, InfoSec engineering, and Michael Lehmbeck, cloud architecture and operations manager, for BankUnited N.A. spoke about their DevOps’ cloud native journey in the financial sector. The podcast was recorded for The State of Cloud Native Security Virtual summit that took place on June 24.
BankUnited N.A. began to make its switch to the cloud about three years ago. The initial idea was to “test the waters” in order to achieve “some tangible returns,” Lehmbeck said. The DevOps team set about testing its disaster recovery capabilities. “We proved our ability to be able to failover between our primary data centers, to Amazon Web Services (AWS) in a disaster recovery-type scenario,” Lehmbeck said. “So, that basically enabled us to get an initial footprint stood up and prove out that our mission critical systems could in fact run in that cloud estate.”
6/30/2020 • 17 minutes, 36 seconds
How the Worlds of Cloud Native and the Coronavirus Pandemic Collide in Sweden
Dell Technologies sponsored this podcast.
Enterprises worldwide are revamping their IT organizations amid the great shift to cloud native infrastructures and the ongoing effects of the coronavirus pandemic. In this New Stack Makers podcast, Jonas Emilsson, concept manager for hybrid platforms at Atea, discusses the impact on his firm’s delivery of IT infrastructure solutions and services for its main customer base in Europe and the Baltics.
One macro trend Emilsson has observed is how the role of the developer has changed as DevOps adopts software-defined datacenters and Kubernetes. “We’ve seen a greater change in the market in terms of how customers and users, in general, are using the technologies,” Emilsson said. “A lot of it lies within the software-defined data center, of course. We thus do a lot of business with self-service portals and automation.”
6/29/2020 • 29 minutes, 57 seconds
Episode 123: What ‘Open Source’ Means for the GitHub Generation
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Matt Asay, principal from the open source office at Amazon Web Services, about his new series of posts on The New Stack that documents the contributors and originators behind many of the most popular open source programs we use every day.
TNS Editorial and Marketing Director Libby Clark hosted this episode, alongside TNS Senior Editor Richard MacManus, and TNS Managing Editor Joab Jackson.
Over the past few weeks, AWS’ Asay has been traveling the open source world — virtually — to write a fascinating series on The New Stack documenting the contributors and originators behind many of the most popular open source programs we use every day.
In this series, we’ve met the developers behind more than a dozen projects, including Wireshark, Matplotlib, Curl and many other widely used tools. The idea with the series is to, in Asay’s words, “shine a spotlight on an array of open source projects (and their founders and/or lead maintainers) that quietly serve behind-the-scenes. In the process, I hope that we’ll gain insight into both why and how these critically important projects have managed to thrive for so long. This, in turn, just might provide useful information on how best to sustain open source projects.”
In this interview, we ask Asay what he has learned speaking with all these creators, about project management and open source itself. We chat about how to join an open source project, and why it is difficult for maintainers to attract more help (and, in some cases, why they may not want contributions at all). Also on the agenda was the importance of open source licensing, how the younger generation of developers think about the idea of “open source,” and the long path it has taken for worldwide acceptance.
“I spent 10 years railing against the Microsoft machine for things with FUD around SUSE and Linux and whatnot. And now I’ve spent just as much time praising Microsoft for the great open source contributions that they make. But people don’t know that history.”
6/26/2020 • 36 minutes, 15 seconds
Aspen Mesh: How Istio is Built to Boost Engineering Efficiency
One of the bright points to emerge in Kubernetes management is how the core capabilities of the Istio service mesh can help make engineering teams more efficient in running multicluster applications. In this edition of The New Stack Makers podcast, we spoke with Dan Berg, distinguished engineer, IBM Cloud Kubernetes Services and Istio, and Neeraj Poddar, co-founder and chief architect, Aspen Mesh, F5 Networks. They discussed Istio’s wide reach for Kubernetes management and what we can look out for in the future. Alex Williams, founder and publisher of The New Stack, hosted this episode.
6/23/2020 • 39 minutes, 45 seconds
Data Protection for Today’s Highly Complex Cloud Native World
In this The New Stack Makers podcast, technology thought leaders from Dell EMC and VMware discuss the dynamics of data protection and other DevOps-related themes for today’s highly complex cloud native environments.
6/22/2020 • 55 minutes, 35 seconds
Episode 122: Splunk on Removing Exclusionary Language from its IT Systems
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Eric Sammer, Splunk distinguished engineer, about the IT system monitoring company’s ongoing effort to rename its “white list / black list” and “master/slave” terminology to remove language that perpetuates systemic racism and unconscious bias in tech. Splunk brought together a working group of people from across the organization to develop additional recommendations, guidelines, and procedures to identify and replace biased language and to prevent other instances from happening in the future. We also chatted with Sammer about what has happened since the company he co-founded, event-driven services monitoring provider Rocana, was acquired by Splunk in 2017.
6/19/2020 • 31 minutes, 22 seconds
Chase Pettet - What Wikipedia's Infrastructure Is Like Behind The Firewall
The Wikimedia Foundation‘s impact on culture and media sharing has had immeasurable benefits on a worldwide scale. As the foundation that manages the fabled Wikipedia, Wikimedia Commons, Wikisource and a number of other outlets, Wikimedia’s mission is “to bring free educational content to the world.”
All told, Wikipedia alone is available in about 300 languages, with more than 50 million articles reaching 1.5 billion unique devices a month at 6,000 views a second — supported by 250,000 engaged editors, Chase Pettet, senior security architect, Wikimedia Foundation, said.
“Editors are sort of the lifeblood of the movement,” he said.
In this The New Stack Analysts podcast, hosted by Alex Williams, founder and editor-in-chief of The New Stack, and Ken Owens, vice president of cloud native engineering for Mastercard, Pettet discussed Wikimedia’s infrastructure-management challenges, both past and present, and what makes one of the world’s foremost providers of free information tick.
6/17/2020 • 1 hour, 16 minutes, 13 seconds
Dell Technologies Virtual Day of Podcasts Sneak Peek: Free Your Apps, Simplify Your Operations
It sounds pretty basic, but getting your infrastructure ready for modern apps, and centrally managing your clouds and clusters, requires a modernized app platform.
And that’s exactly what we explore with Dell Technologies in our series of five recordings scheduled to start tomorrow. Dell Technologies and VMware are making continued investments in the cloud native market, perhaps most apparent in the acquisition of companies such as Heptio, Wavefront, Bitnami and, most recently, Pivotal. That investment has now been transformed into initiatives under the VMware Tanzu solution portfolio, which is built upon the company’s infrastructure products and the technologies that Pivotal, Heptio, Bitnami, Wavefront and other VMware teams bring to this new portfolio of products and services.
6/15/2020 • 9 minutes, 4 seconds
Dormain Drewitz and Bob Ganley: Infrastructure Pillars for Application Success
In this The New Stack Makers podcast, we discuss the infrastructure requirements for organizations making the shift to modern-day application development, deployment and management — and how Dell Technologies’ expertise can help to make that possible. Our guests are Bob Ganley, cloud senior consultant, product marketing for Dell Technologies and Dormain Drewitz, director, product marketing and content strategy, at VMware.
Before tasking DevOps with technology adoption to take advantage of the immense opportunities application development can offer with the right mix of tools, platforms and, usually, cloud environments, it is essential to determine what users really want. Once that is established, it is equally essential to make sure that the infrastructure can support the adoption of the technologies needed to deliver.
6/15/2020 • 1 hour, 1 minute, 4 seconds
Episode 121: D2IQ CTO Ben Hindman - How Mesosphere Helps Kubernetes Grow
This week in TNS, D2IQ co-founder Tobi Knaup wrote about the growing problem of container sprawl, a byproduct of more companies running containers in production that leaves the DevOps teams managing them less efficient. https://thenewstack.io/container-sprawl-is-the-new-vm-sprawl/
In this episode, we speak with Ben Hindman, D2IQ co-founder and CTO, about this issue of container sprawl and how it hampers “Day 2 Operations,” as D2IQ (formerly Mesosphere) calls it.
We also discuss the company’s recent Cloud Native Virtual Summit, its recently released KUDO tool https://thenewstack.io/kudo-automates-kubernetes-operators/, the sixth anniversary of Kubernetes, and the latest on Mesosphere and DC/OS.
TNS editorial and marketing director Libby Clark hosted this episode, alongside TNS Senior Editor Richard MacManus, and TNS Managing Editor Joab Jackson.
6/12/2020 • 38 minutes, 32 seconds
Chenxi Wang, Ph.D. - Why Third-Party Security Adoption Has to Get Better
In this edition of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Wang spoke about these and other third-party security trends. The podcast was recorded in anticipation of The State of Cloud Native Security virtual summit taking place on June 24.
Major exploits such as the Target and Equifax hacks made headlines a few years ago. But these infamous attacks have not necessarily served as a wakeup call for many, if not most, organizations. They lack the security tools, processes and culture required to properly protect their data, Chenxi Wang, Ph.D., managing general partner, Rain Capital, said.
“Everybody read about those headlines but translating that into the work [organizations] do day to day, I think there’s still a gap,” Wang said. “As security industry professionals — myself included — we need to reach out more to the adjacent community and especially with Dev these days. I mean software is eating the world and Dev is the one driving software, so we need to work with dev to make it happen.”
6/10/2020 • 43 minutes, 17 seconds
Episode 120: Priyanka Sharma - The New Boss of the Cloud Native Computing Foundation
For this week’s episode, we spoke with Priyanka Sharma, the new general manager for the Cloud Native Computing Foundation, about her rich work history and her visions and strategies for moving CNCF forward. Also joining the convo is Chris Aniszczyk, CNCF chief technology officer.
This week, the CNCF announced that Sharma will now lead the Cloud Native Computing Foundation, taking over the role filled by former Executive Director Dan Kohn.
6/5/2020 • 43 minutes, 3 seconds
How Kubernetes, Open Source Underpin Condé Nast Operations
In this The New Stack Analysts podcast, hosted by Alex Williams, founder and editor-in-chief of The New Stack, and Ken Owens, vice president of cloud native engineering at Mastercard, Jennifer Strejevitch, site reliability engineer for Condé Nast, speaks about her experiences and observations on the front lines of the publishing company’s infrastructure-related challenges and successes.
Condé Nast is one of the most well recognized media brands in the world, with a range of stand-out titles that include “Wired,” “The New Yorker” and “Vanity Fair.” The publishing giant also represents a case study of how a large multinational company was able to shift its entire international web and data operations to a homogenous Kubernetes infrastructure it built and now manages with open source tools.
Indeed, during the past five years, Condé Nast has been able to build a single underlying platform serving several dozen websites spread out around the world, including in Russia and China in addition to the U.S. and Europe. Its web presence now hosts more than 300 million unique digital users per month and 570 article views every second.
6/3/2020 • 34 minutes, 40 seconds
Observability, Distributed Tracing and Kubernetes Management w/ Raj Dutt of Grafana
In this episode of The New Stack Makers, our publisher Alex Williams sits down with Raj Dutt, CEO and co-founder of Grafana Labs, provider of the open source observability platform Grafana. They’re talking about creating a more seamless transition among observability, tracing, metrics and logs, across different data types and open source projects.
Observability and distributed tracing are intrinsically linked to the reliability of increasingly distributed systems. Observability-driven development uses data and tooling to observe the state and behavior of a system, learning its patterns and probing for weaknesses. Distributed tracing provides the metrics and logs that allow for diving into individual requests to get closer to the problem. In this powerful pairing, observability happens at the event level, which drives your questions, and tracing happens at the request level, which helps answer them.
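The event-level/request-level pairing described here can be sketched in a few lines of plain Python. This is a hypothetical illustration, not Grafana’s or any vendor’s API: structured events carry a trace ID, and the request-level view simply groups and orders them.

```python
import time
import uuid

def emit_event(events, trace_id, name, **fields):
    """Event-level view: record a structured event tagged with the request's trace ID."""
    events.append({"trace_id": trace_id, "name": name, "ts": time.time(), **fields})

def spans_for_request(events, trace_id):
    """Request-level view: all events belonging to one trace, in time order."""
    return sorted((e for e in events if e["trace_id"] == trace_id),
                  key=lambda e: e["ts"])

# Simulate one request flowing through a service.
events = []
tid = str(uuid.uuid4())
emit_event(events, tid, "http.request", path="/checkout")
emit_event(events, tid, "db.query", table="orders")
emit_event(events, tid, "http.response", status=200)

trace = spans_for_request(events, tid)
print([e["name"] for e in trace])  # prints ['http.request', 'db.query', 'http.response']
```

Real systems such as OpenTelemetry add span hierarchies and context propagation on top, but the core idea is the same: events answer “what is happening,” and grouping them by trace ID answers “what happened to this request.”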
6/1/2020 • 41 minutes, 19 seconds
Episode 119: Observability in the Time of Covid
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Christine Yen, CEO of Honeycomb.io, the observability platform vendor, about the company’s pricing changes brought on by COVID-19 and more broadly how observability practices and tools are changing as more companies make the move to the cloud.
TNS editorial and marketing director Libby Clark hosted this episode, alongside TNS senior editor Richard MacManus, and TNS managing editor Joab Jackson.
Honeycomb this week changed its pricing structure to reflect the cost realities for businesses and the long-term effects of COVID-19. The company also recently released the results of a survey showing that half of the developers surveyed aren’t using observability currently, but 75% plan to do so in the next two years. And in April the company released an open source collector for OpenTracing that allows teams to import telemetry data from open source projects into any observability platform, including its own but also its competitors’.
Yen said of the pricing changes:
Our old pricing was, you bought a certain amount of storage, in gigabytes, and paid for a certain amount of data ingest, also in gigabytes, over a period of time. We felt like that was a little bit harder for people to map to their existing workflows, harder for them to predict. So we shifted to an events-per-month ingest model: one axis, one way to scale your usage.
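Yen’s contrast between the two models can be illustrated with some back-of-the-envelope arithmetic. All rates and volumes below are invented for illustration; they are not Honeycomb’s actual prices.

```python
# Two hypothetical pricing axes: gigabytes ingested vs. events per month.
def monthly_cost_by_gb(gb_ingested, price_per_gb=0.50):
    return gb_ingested * price_per_gb

def monthly_cost_by_events(events, price_per_million=1.00):
    return (events / 1_000_000) * price_per_million

# Predicting gigabytes is hard: event size varies with payload, so identical
# traffic can produce very different bills. Events map more directly to a
# team's workflow: one request produces roughly a fixed handful of events.
requests_per_month = 10_000_000
events_per_request = 3
cost = monthly_cost_by_events(requests_per_month * events_per_request)
print(cost)  # prints 30.0
```

The single-axis model makes the bill a simple linear function of traffic the team already measures, which is the predictability Yen describes.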
5/29/2020 • 32 minutes, 16 seconds
What Cloud Native Security Means for You and Your Peers Today by Palo Alto Networks and Prisma Cloud
In this edition of The New Stack Makers podcast, hosted by Alex Williams, founder and editor-in-chief of The New Stack, Keith Mokris, head of product marketing, Prisma Cloud, Palo Alto Networks, and Mark Rauchwarter, cloud and infrastructure security for Accenture Security, discuss the key talking points of the Prisma Cloud Native Security Summit and what the results of the survey mean for the DevOps community.
Join Prisma Cloud by Palo Alto Networks June 24 at 9:00 AM PDT at The State of Cloud Native Security virtual summit for a full discussion of the “The State of Cloud Native Security” report and other topics relevant to your organization’s digital journey. The summit will feature a panel session hosted by The New Stack’s Founder and Editor-in-Chief Alex Williams, with security thought leaders from AWS, Accenture, and Prisma Cloud by Palo Alto Networks.
5/27/2020 • 31 minutes, 15 seconds
Why Bloomberg’s OpenAPI Participation Is Important for the Financial Industry
Bloomberg’s involvement as a financial information leader with the OpenAPI Initiative and the open source community is built on wider aspirations than simply choosing the right tools to grow its business. The stakes are especially high in these turbulent times, as the ravages of COVID-19 continue to take their toll, already wiping out large swaths of the economy.
In this The New Stack Makers podcast, we speak with two open source leaders from Bloomberg:
Richard Norton, head of the data license engineering group.
Kevin Fleming, head of open source community engagement and member of Bloomberg’s CTO office.
They discuss Bloomberg’s collaboration on and backing of OpenAPI, as well as its involvement with open source and what this means for the financial sector.
From the outset, APIs are seen as a core underpinning of what Bloomberg offers with, of course, its famous Bloomberg terminals. The data and analytics Bloomberg provides its financial institution and other customers — which might include data feeds for futures or commodities — allow for key decisions to be made, often affecting world capital markets.
5/25/2020 • 37 minutes, 17 seconds
Episode 118: SQL Databases in a Cloud Native World
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Peter Zaitsev, CEO of the open source database software and services company Percona. This week, Percona held its own 24-hour virtual conference, Percona Live Online, where open source, databases and cloud native computing were all discussed. So we grilled Zaitsev about how traditional SQL databases operate in a cloud native world, as well as about Percona’s newly announced performance and optimization package for MongoDB.
TNS editorial and marketing director Libby Clark hosted this episode, alongside TNS senior editor Richard MacManus, and TNS managing editor Joab Jackson.
5/24/2020 • 37 minutes, 11 seconds
Virtual Pancake Breakfast
Thanks to the COVID-19 global pandemic, many IT systems are facing unprecedented workloads, reaching levels of usage on a daily basis that usually only happen on the busiest days of the year. The good news is that the cloud native approach has been rapidly gaining popularity with businesses large and small to help meet these sudden demands. And proper security precautions must be built into these emerging cloud native systems.
Applying principles of cloud native security to the enterprise was the chief topic of discussion for our panel of experts in this virtual panel. Panelists were:
Cheryl Hung, Director of Ecosystem, Cloud Native Computing Foundation.
Carla Arend, Senior Program Director, Infrastructure Software, IDC.
John Morello, Palo Alto Networks Vice President of Product, Prisma Cloud.
Alex Williams, founder and publisher of The New Stack hosted the discussion.
Certainly, operations have changed for most of us due to the outbreak of the COVID-19 global pandemic. But this can be a good opportunity for organizations to rethink how they approach business continuity and resiliency, Arend noted. Those already on the digital journey are getting through this crisis much better than those just starting. Now is a great time to focus on digital innovation.
Indeed, if anything, innovation is just accelerating in this time, Morello agreed. Without the ability to interact in person, the tools that enable digital transformation — Kubernetes, containers — help people operate more efficiently.
5/20/2020 • 58 minutes, 22 seconds
The Internet is Awesome! w/ Diane Mueller and Paris Pittman
There’s no doubt we are in weird times. There are a lot of stressors, but the majority of the tech community has more opportunities than ever to do what we do working from home. Just last week Google and Twitter announced that employees can work from home for the rest of the year or even indefinitely. For The New Stack Publisher Alex Williams, there’s one resounding reason why — the internet is awesome. And what’s driving much of that awesomeness right now is no doubt Kubernetes and its highly distributed community.
In this episode of The New Stack Makers, Williams sits down, over Zoom, with two grounded frequent fliers — Diane Mueller, director of community development at Red Hat and co-chair of OKD, Red Hat’s distribution of Kubernetes, and Paris Pittman, developer relations program manager at Google and a leader in CNCF’s Kubernetes contributor strategy. They spoke fresh off of running a particularly successful, inclusive — and very big — Red Hat Summit.
“Now, because it’s virtual, there’s no reason for them not to participate. And so we saw like this phenomenal exponential growth of people coming and participating at Red Hat,” Mueller said.
5/18/2020 • 40 minutes, 40 seconds
Episode 117: Is Kubernetes the New App Server?
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Tina Nolte, vice president of product for Kubernetes management service Spectro Cloud, about why we shouldn’t think of containers/Kubernetes as just another form of virtualization.
TNS editorial and marketing director Libby Clark hosted this episode, alongside TNS managing editor Joab Jackson.
Nolte recently wrote a popular post for us on why we shouldn’t think of containers and Kubernetes as just another form of virtualization — that it opens up a whole new way to think about application development and deployment. So we wanted to find out more about this concept.
“Kubernetes is really about that middle area between infrastructure and application. So the applications themselves are enabled to be differently architected because of that operational PaaS layer if you will,” she explained. “It’s not just a lift-and-shift of old apps into new infrastructure.”
Focusing too much on the infrastructure side of Kubernetes ultimately misses its true value, an insight Nolte gleaned, in part, from working for a well-regarded OpenStack-based start-up, Nebula, that ultimately shuttered.
5/15/2020 • 28 minutes, 21 seconds
SaltStack - On How to Fix the Gaps in Kubernetes Infrastructure Management
The hype around Kubernetes has created many repercussions in the IT industry — and not all of the effects have been net positive for organizations and DevOps teams. Infrastructure management is a prime example. Too often, security management tools for Kubernetes deployments and infrastructure management are missing. Ultimately, these tools should have the capacity to compensate for security and IT skills gaps and talent shortages by automating vulnerability detection and fixes, for example.
“Kubernetes is something powerful and impactful, but has too many components and moving pieces,” Moe Abdula, vice president of engineering, SaltStack, said. “How do you ensure that you can build an architecture and a system around something like a Kubernetes that is easy to maintain, easy to support, easy to extend?”
In this The New Stack Makers podcast, we speak with Abdula and Gareth Greenaway, vice president of engineering, SaltStack, about how and why the infrastructure- and security-management aspects of Kubernetes have been neglected, what the risks are and what can be done to fix it.
5/13/2020 • 49 minutes, 23 seconds
Why Error Monitoring Must Be Close To Your Code Path w/ Ben Vinegar from Sentry
Rare is the DevOps team that has the bandwidth to manually parse through and prioritize what needs to be fixed among what can number millions of application-error alerts. This includes distinguishing between minor glitches and the errors that can bring an organization’s capacity to meet its customers’ needs and expectations to a screeching halt.
A viable error-monitoring system should, ideally, automate the communication of error data in a way that indicates what must be done to make a fix.
A system might be able to signal every single error, perhaps totaling millions of alerts.
The error alerts users receive must be “actionable,” Ben Vinegar, vice president of engineering, for Sentry, said. “That’s a really hard problem,” Vinegar said.
5/11/2020 • 31 minutes, 37 seconds
Episode 116: AWS Bottlerocket and the Age of the Linux Cloud Distributions
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Deepak Singh, Amazon Web Services’ vice president for containers and open source, and Peder Ulander, AWS general manager for open source, about the company’s recently released Bottlerocket Linux distribution for the cloud.
TNS editorial and marketing director Libby Clark hosted this episode, alongside founder and TNS publisher Alex Williams, TNS senior editor Richard MacManus, and TNS managing editor Joab Jackson.
5/8/2020 • 37 minutes, 10 seconds
Dries Buytaert - Why Open Source Is Recession-Proof
Twenty years ago, Dries Buytaert founded Drupal right at the dot-com bust. Then in 2008, at the start of the so-called Great Recession, he started Acquia, a digital experience platform for Drupal sites. Some would say those are unlucky times to start businesses. Not Buytaert. He’s convinced well-loved free and open source software or FOSS is recession-proof. And that’s what this episode of The New Stack Makers dives into.
5/4/2020 • 19 minutes, 19 seconds
Episode 115: Serverless Application Flows in the Cloud
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Sebastien Goasguen, co-founder and chief product officer, TriggerMesh, about how to build applications from serverless functions that span multiple clouds, using the company’s software.
TNS editorial and marketing director Libby Clark hosted this episode, alongside founder and TNS publisher Alex Williams, TNS senior editor Richard MacManus, and TNS managing editor Joab Jackson.
We spoke with Goasguen about the role that TriggerMesh plays for GitLab and enterprise customers. Last month, TriggerMesh released the Cloud Native Integration Platform as well as the AWS Event Sources for OpenShift, timing the release with the virtual Red Hat Summit. With the latter offering, TriggerMesh brings Amazon EventBridge-like functionality to the OpenShift ecosystem, allowing developers to trigger functions across clouds and legacy data centers. TriggerMesh users can now link events from anywhere to Red Hat OpenShift workloads.
“Serverless is not just function-as-a-service. It’s not just functions. It’s actually an integration problem. We call TriggerMesh a cloud-native integration platform: We compose cloud services together, glue them together thanks to an event-driven architecture,” Goasguen said.
Then, later in the podcast, we discuss the top podcasts and news stories from the site, including an interview with agile expert Emily Webber on remote work, how serverless can help embed security into the development process, the idea of offering databases as a serverless service, and the importance of standards in serverless adoption.
5/1/2020 • 42 minutes, 32 seconds
Building a Remote-First World the Right Way w/ Lisette Sutherland
What is remote-first? In normal times, it’s an organization that is built in such a way that anyone can go remote if necessary.
“Whether or not you want to allow your employees to go remote, you should have the processes in place to be able to just-in-case because you see transportation problems loom all over the world, weather problems all over the world, sick children at home. There were all kinds of reasons why a business should be putting remote processes into place,” said Lisette Sutherland on this episode of The New Stack Makers.
For Sutherland, founder of Collaboration Superpowers remote team workshops, longtime remote work podcast host, and author of A Handbook on Working Remotely — Successfully — for Individuals, Teams, and Managers, we’ve been technologically ready for a remote-first world for about five years now. And she says there’s always been logic in factoring a remote-first mindset into your business continuity planning. Plus, giving the option of remote work often makes for a much more inclusive workplace that in turn empowers a business to hire the best candidate no matter where they live.
With remote work, “people can hire people who love what they do, rather than people who are just doing their job,” Sutherland said.
4/29/2020 • 37 minutes, 59 seconds
Emily Webber on Inclusion at Remote Scale
How do we promote diversity and inclusion from the comfort of our homes? How do we recreate those important hallway moments within a virtual environment? How do we continue to consider the consequences of what we’re building? We begin to answer these questions and more in this episode of The New Stack Makers, where we interview Emily Webber, independent agile delivery and digital transformation consultant, coach and trainer, and author of the book Building Successful Communities of Practice.
Webber, like everyone The New Stack is interviewing as of late, was calling in via Zoom from her home office. Usually, she’d be working between her clients’ offices in London and in India. Even for someone who has built part of her brand on remote meet-ups, nothing about this is business as usual.
Webber is based in London, where, in normal times, it’s completely common to see people eating lunch at their desks or even while walking back to work from the takeaway shop. But at her client in India, they all take breaks and share meals together in the canteen. There she’s not only experienced a huge leveling up in terms of cuisine, but she’s witnessed the innovation that comes from those chance encounters around the hallway, cafeteria, and water cooler.
Webber has borrowed Etsy’s John Goulah’s term “assisted serendipity” and applied it to our temporarily remote-first world. This can mean using a Slack app like Donut or Shuffl to facilitate random coffee pairings, or hosting remote coffees or happy hours. It may be assisted, but it is an efficient way of breaking down departmental silos, as well as fighting isolation during these trying times.
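The random pairing that apps like Donut and Shuffl automate is simple to sketch. This is a hypothetical illustration, not either app’s actual code: shuffle the roster, pair people off, and fold an odd person out into a trio.

```python
import random

def coffee_pairs(people, seed=None):
    """Randomly pair people for coffee chats; an odd head count yields one trio."""
    roster = list(people)
    random.Random(seed).shuffle(roster)  # seed only to make the demo repeatable
    pairs = [roster[i:i + 2] for i in range(0, len(roster), 2)]
    if len(pairs) > 1 and len(pairs[-1]) == 1:
        pairs[-2].append(pairs.pop()[0])  # fold the odd one out into a trio
    return pairs

groups = coffee_pairs(["Ana", "Ben", "Chi", "Dee", "Eve"], seed=42)
print(groups)  # five people -> one pair and one trio, in shuffled order
```

A production version would also remember past pairings to avoid repeats, which is where the real apps earn their keep.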
4/28/2020 • 28 minutes, 2 seconds
Episode 114: Program the Infrastructure with an Actual Programming Language
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Joe Duffy, Founder and CEO of Pulumi, and Sophia Parafina, Pulumi’s technical marketing manager. In this convo, we delve into the recent Pulumi 2.0 release, which allows teams to reuse code, apply policies and do integration testing of infrastructure the same way they do for application development, a concept known as “architecture as code.”
TNS Editorial and Marketing Director Libby Clark hosted this episode, alongside founder and TNS publisher Alex Williams and TNS managing editor Joab Jackson.
4/24/2020 • 42 minutes, 38 seconds
Pancake Podcast: Cassandra and the Need for a Kubernetes Data Plane
What is the role that the data plane plays in a Kubernetes ecosystem? This was the theme for our latest (virtual) pancake breakfast and panel discussion, sponsored by DataStax, the keeper of the open source Cassandra database.
Last month, DataStax released a Kubernetes operator, so that the NoSQL database can be more easily installed, managed, and updated in Kubernetes container-based infrastructure.
The panelists for this discussion:
Kathryn Erickson, DataStax senior director of partnerships.
Janakiram MSV, principal analyst of Janakiram & Associates.
Aaron Ploetz, Target NoSQL lead engineer.
Sam Ramji, DataStax chief strategy officer.
Alex Williams, publisher for The New Stack, served as moderator for this panel, with the help of TNS Managing Editor Joab Jackson.
In 2015, Ramji worked at Google and oversaw the business development around its then-newly open sourced project, Kubernetes, which was based on its internal container orchestrator, the Borg. The Borg provides Google a single control plane for dynamically managing all its many containerized workloads, and its scale-out database, Spanner, offers the same for the data plane.
“The marriage of those two things made compute and data so universally addressable so easy to access that you could do just about anything that you could imagine,” Ramji explained.
4/22/2020 • 51 minutes, 49 seconds
Anil Dash and James Turnbull - How Glitch Might Remove the Stress of Accessing Full Stack Code
In this The New Stack Makers podcast, we speak with Glitch’s CEO Anil Dash and James Turnbull, vice president of engineering, about how Glitch could help developers remove much of the pain associated with installing and accessing application code and how it serves as an extension of GitHub.
Glitch, which was originally called Gomix and was created under the Fog Creek Software umbrella — along with Stack Overflow and Trello — has served as the platform for over five million apps, according to Dash.
Glitch can potentially take some of the pain out of application development, since developers can begin working directly on abstraction layers while “taking away the kind of boring, repeatable part of being a developer,” said Dash, who estimates about 80% of all code written is identical elsewhere. “Glitch provides people with a platform they can build on top of without having to worry about installing this dependency or worrying about how this thing works,” Dash said. “That’s the way that a lot of the world has been moving and how the abstraction layer is moving further up the stack.”
TensorFlow, Google’s machine learning (ML) and artificial intelligence (AI) framework, serves as a good case study, Dash said. “Frankly, when TensorFlow first came out, I had tried to get it running on my dev environment and gave up after several hours of frustration — which made me feel dumb and was probably not their intended goal,” Dash said. Now, for access to the JavaScript framework for TensorFlow, Google has embedded examples of the code for TensorFlow with Glitch, similar to how YouTube code is embedded for video. “So, where you would embed a YouTube video, we’ve got an app running instead,” Dash said. “And it’s showing you how to build a model around your ML libraries and how to actually get up and running.”
For those seeking just to study how certain code and apps work, Glitch can “make it really easy for folks who are like journalists to go: ‘okay, I don’t really understand how this AWS thing works, but I’ve got an example of someone using this Python app to map all this data together,’” Turnbull said. “I can create a visualization from that. And I think that’s an example of a strong use case framework-wise.”
Ultimately, the creative — for many, the fun — part of development work could potentially become more accessible to developers. Applications are “built on top of the scaffold,” Dash said. “I think what we’re seeing here is that we can provide that abstraction layer and we can take away the kind of boring, repeatable part of being a developer,” Dash said. “We can provide people with a platform that they can take and build on top of without having to worry about things like ‘I need to install this dependency, or I need to worry about how this thing works, or I need to set up this framework or this template.’”
4/21/2020 • 46 minutes, 18 seconds
Episode 113: Stress, Resilience and the Network Effects of COVID-19
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with The New Stack correspondent Jennifer Riggins about all of the excellent reporting she and others on the TNS team have been doing recently on the effects that COVID-19 is having on the tech industry.
TNS editorial and marketing director Libby Clark hosted this episode, alongside founder and TNS publisher Alex Williams and TNS managing editor Joab Jackson.
For this episode, we wanted to discuss not only the changing patterns in network traffic that the global COVID-19 pandemic has wrought, due to factors such as people staying at home and working from home, but also the sudden acceleration of e-learning. As Riggins writes in a recent post:
For a lot of tech and infrastructure teams, they not only are going through the stress of the collective trauma we’re sharing in, but they are struggling to keep up with ever-scaling, extreme strains on their systems. Simply put, no one could have predicted this uptick.
One big theme that kept popping up was “resiliency,” not only from an individual psychological perspective but also from organizational and systems views.
Here are some of the other posts we discussed:
The Network Impact of the Global COVID-19 Pandemic: How has the worldwide pandemic stressed our networks? In multiple ways, according to this report from our London correspondent Mary Branscombe. Internet traffic is generally 25% to 30% higher than usual. You can also see the change in where people are connecting from; usage is up in residential areas but visibly down in downtown San Francisco, downtown San Jose and especially the Cupertino and Mountain View neighborhoods where Apple and Google have their campuses.
U.S. Unemployment Surge Highlights Dire Need for COBOL Skills: One of the surprise stories coming from the global pandemic has been the dire need for COBOL developers. Who would have seen that one coming? New Jersey Governor Phil Murphy is now asking for volunteers with COBOL skills. New Jersey’s 40-year-old mainframe benefits system was besieged by a 1,600% increase in usage, as over 371,000 people have filed claims in the past month.
Keep Your Endpoints Secure During the COVID-19 Pandemic: We are also seeing more reports of security breaches indirectly due to the spreading virus. In this contributed post from CalSoft’s Sagar Nangare, he notes that people are scared and hungry for more information around events like COVID-19. In panic mode, they surf the internet, visit fake pages, and fall prey to phishing scams. Also, endpoints for remote access have increased due to remote working, increasing the surface area for cybercriminals to target.
How Kubernetes Prepared 8×8 for a 50x Spike in Videoconferencing Usage: The New Stack spoke to 8×8, a cloud communications and video collaboration provider, to learn how the company phased in remote-by-default, and how it is creating systems and team resiliency during a 50-fold increase in traffic over less than a month. One answer? Kubernetes.
Chaos, Hugs and Interruptions: Dev Folks Work from Home with Kids: Working at home is nothing new to the cloud native computing community, which has always been about distributing workloads. But adding children, who all of a sudden were home full time as well when the schools closed, adds another stress to already frazzled IT pros. Here are some tips on getting by.
SaltStack’s CTO on Pandemics, the End of Empires and Software’s Future: Here’s an interview with Thomas S. Hatch, founder and Chief Technology Officer of SaltStack where he discusses how software engineers’ lives have changed (or not), the folly of forcing workers to come to the office when they really do not need to and his observations of network infrastructure saturation in the wake of the COVID-19 pandemic.
4/17/2020 • 26 minutes, 51 seconds
Polystream's Cheryl Razzell - How to Work Your Way to the Top of the Tech Heap
Today, we speak with Cheryl Razzell, director of platform and Live Ops at Polystream, who has a particularly interesting narrative to share in what remains, nevertheless, a renaissance era in computing. Indeed, the assumption we can make is that open source tools, platforms and, especially, talent will underpin how data is processed and managed as we win this war against the pandemic.
As the ravages of COVID-19 continue to take their toll in London, where she is based, and worldwide, Razzell speaks of women in tech and her career path from tech support through roles at Apple, Microsoft and HSBC to DevOps and her continuing high-level IT career at Polystream, a 3D content platform provider.
As mentioned above, regardless of whatever happens during the next few weeks and months, open-source development and tools will continue to serve as the foundation for what is yet to come. And during what will be a recovery eventually, organizations that thrive in the future will only do so by continuing to operate as software companies. But, this is just the context — lest we forget — that can often obscure how open source development and tools are only as good as the talents of the people constituting the DevOps teams that make the magic happen.
4/15/2020 • 40 minutes, 5 seconds
Automating Infrastructure That Dates Back 100 Years - w/ Bill Mulligan of Loodse
In this The New Stack Makers podcast, Bill Mulligan, who plays a key role in helping customers automate their IT operations with Kubernetes and operators for Loodse, discusses his background — taking him from the University of Wisconsin-Madison via the University of Oxford to his life in Berlin today — and the key role cloud native technology plays in supporting telcos and other Loodse customers.
4/13/2020 • 18 minutes, 55 seconds
Episode 112: Derek Weeks VP Sonatype - The Secrets of a Successful DevSecOps Shop
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Derek Weeks, vice president at Sonatype, about the results of a new community survey the company just released on DevSecOps that provides some insights on how teams are incorporating automated security tools and how that shift affects company culture and developer happiness.
Sonatype’s Nexus open source governance platform helps more than 1,000 organizations and 10 million software developers simultaneously accelerate innovation and improve application security. This is the seventh year that Sonatype has done this DevSecOps report, and, according to the company, it’s the longest-running community survey on this topic. We discuss with Weeks how the practice of DevSecOps has changed since the company started doing the survey, and the challenges organizations face in embedding security within their DevOps practices. We also ponder the reasons behind the puzzling finding that companies with mature DevSecOps actually have more security breaches.
TNS editorial and marketing director Libby Clark hosted this episode, alongside founder and TNS publisher Alex Williams and TNS managing editor Joab Jackson.
4/10/2020 • 46 minutes, 11 seconds
Git is 15 Years Old: What Now?
Linus Torvalds first released his Git version control software 15 years ago, on April 7, 2005, in an effort to foster a more creative spirit in Linux kernel development. Since then, Git's role in software development has emerged well beyond its roots as a version control system and a software repository. It's become a cornerstone in how software is developed today by distributed teams and open source developers around the world.
In this The New Stack Makers podcast, we spoke with three Git thought leaders about Git’s roots, its present context and its future. We learned that despite its present-day success, Git's future is not certain.
Guests on this episode are:
Jason Warner, CTO, GitHub.
Cornelia Davis, CTO, Weaveworks.
Sid Sijbrandij, Co-founder and CEO, GitLab.
For many, the possibilities that Git offers are exciting, both on an individual level and at a macro level when many parties must collaborate on a project, particularly for CI/CD. You can use Git for personal pet projects, whether you want to share a simple code sample or just documentation for something not necessarily related to software. For a large enterprise, developer teams can collaborate on application development concurrently, whether they are scattered around the world or separated by only cubicle walls.
4/8/2020 • 46 minutes, 30 seconds
Sysdig's Kris Nóva - How We Can Never Be Prepared But Open Source Can Help
In this episode of The New Stack Makers, we talk to Nóva, chief open source advocate at Sysdig, about the progression of the open source world and her perspective examining it through the lens of San Francisco’s COVID-19 lockdown. She calls the book she wrote with Justin Garrison a kind of thesis that looks to predict the infrastructural patterns that could solve a lot of the challenges cloud-native infrastructure teams face.
4/6/2020 • 13 minutes, 59 seconds
Episode 111: A Remedy for Outdated Vulnerability Management
For more episodes listen here: https://thenewstack.io/podcasts/
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with a couple of folks from cloud workload protection platform provider Rezilion: CEO Liran Tancman, and Chief Marketing Officer Tal Klein. We discuss how current best practices in security are actually outdated and how they think companies should be approaching security practices in the age of DevOps.
TNS editorial and marketing director Libby Clark hosted this episode, alongside founder and TNS publisher Alex Williams and TNS managing editor Joab Jackson.
Klein wrote a contributed article for TNS on “Why Vulnerability Management Needs a Patch,” where he argues that current best practices and tools around security patching, such as the CVSS system for rating vulnerabilities, are outdated, particularly for modern DevOps shops.
As Klein says in the interview:
When you’ve got vulnerabilities, it’s very tough to figure out which ones to fix first, and the fact is that more and more vulnerabilities are discovered every year. So there’s, there’s a greater amount of things to patch and if you don’t know which ones to patch first, you’re never going to be able to address the full patching needs of your organization. And that’s been a cat and mouse game for a long time.
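The prioritization problem Tancman and Klein describe can be sketched as a simple ranking. This is a hypothetical illustration, not Rezilion's product logic; the vulnerability records and the ranking criteria (whether the vulnerable code is actually loaded at runtime, whether an exploit is known, then raw CVSS score) are invented for the sketch.

```python
# Hypothetical sketch: ranking vulnerabilities for patching.
# Real tools would pull CVSS scores and runtime/exploit data from a scanner.

def patch_priority(vuln):
    """Rank by whether the vulnerable code is actually loaded at runtime,
    then by whether an exploit is known, then by raw CVSS score."""
    return (vuln["loaded_at_runtime"], vuln["exploit_known"], vuln["cvss"])

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_known": False, "loaded_at_runtime": False},
    {"id": "CVE-B", "cvss": 7.5, "exploit_known": True,  "loaded_at_runtime": True},
    {"id": "CVE-C", "cvss": 8.1, "exploit_known": False, "loaded_at_runtime": True},
]

# Highest-priority first: runtime-loaded, exploitable, high-severity.
queue = sorted(vulns, key=patch_priority, reverse=True)
print([v["id"] for v in queue])  # prints ['CVE-B', 'CVE-C', 'CVE-A']
```

The point of the sketch is Klein's argument in miniature: raw severity alone (CVE-A's 9.8) would order the queue badly; context about what is actually running decides what to patch first.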
Then later in the show, we discuss some of our top podcasts and stories of the week. Our sister podcast, The New Stack Makers, posted an interview with DevRel trailblazer (and Coder-Twitter celeb) Cassidy Williams, on building software communities. COVID-19 continues to tear through the IT community, and so we look at the shifting network traffic patterns that have come about from the pandemic, as well as the additional babysitting duties that many IT professionals have to now mix into their daily work from home routines. Finally, we discuss The Eclipse Foundation’s Theia code editor, which has been billed as “a true open source alternative to Visual Studio Code.”
4/3/2020 • 33 minutes, 5 seconds
Lightstep CTO Daniel Spoonhower - The 3 Pillars of Observability
Listen to more episodes here: https://thenewstack.io/podcasts/
In this episode of The New Stack Makers podcast, Daniel Spoonhower, CTO of Lightstep, discussed and described what the “three pillars” concept means for DevOps, how monitoring is different, Lightstep’s evolution in developing observability solutions and a number of other related themes.
Spoonhower — whose experience in developing observability tools traces back to his work as a software engineer at Google — makes it clear that a “three pillar” observability solution consisting of metrics, logs and distributed tracing in fact comprises three separate capabilities.
“I think the thing that we’ve kind of seen is that thinking of those as three different tools that you can just kind of squish together is not really a great solution. I mean, the way that I think about observability is I like to get away from the what the specific tools are, and just say that observability is the thing that helps you connect the effects that you’re seeing — whether that’s performance or user experience, or whatever, connecting those effects back to the causes,” Spoonhower said. “And the thing that happened with deep systems is that it’s not like there are five or 10 potential causes to those problems, but there are thousands or tens of thousands of those things. And so you need a tool to help you find those.”
4/2/2020 • 31 minutes, 26 seconds
Cassidy Williams - Developer Communities Now and Always
Listen to more episodes here: https://thenewstack.io/podcasts/
This episode of The New Stack Makers focuses more on the community and less on the tech side of the tech community — which we think matters now more than ever. Williams has dedicated her career to teaching, mentoring and helping others find the right roles in tech. In fact, she’s written a step-by-step guide on how to get your first job in this industry. This episode dives into how to build the right network.
Software Engineer and Developer Advocate Cassidy Williams started this decade looking forward to a year of global travel for React training, workshops, and public speaking gigs. She spent January in Boston, DC, and Austria. February saw her speaking in France and Ireland. Then, suddenly, the small consultancy she worked at went from having overbooked their March to an empty schedule. And the full-time staff was let go.
4/1/2020 • 46 minutes, 54 seconds
Volterra's CEO Ankur Singla - What COVID-19 Means for Microservices, Multi-Cloud and Kubernetes
Listen to All New Stack Podcasts here: https://thenewstack.io/tag/podcast
Kubernetes has emerged as the de facto option for managing containers, while microservices serve as the underlying distributed architecture of Kubernetes clusters. The continued rise of multi-cloud infrastructures is also seen as a conduit for the continued adoption of microservices for Kubernetes deployments across such widely distributed infrastructures. The need to create applications and manage such diverse infrastructures keeps growing in this rapidly expanding multi-cloud universe.
But, suddenly, the coronavirus worldwide pandemic has turned the world on its head in ways we have yet to fully realize.
In this The New Stack Makers podcast, Ankur Singla, founder and CEO at software as a service (SaaS) provider Volterra, discusses the profound influences Kubernetes, microservices, multi-cloud environments and open source have had on computing and IT today — and what their impacts may be in a COVID-19 world.
"Covid-19 throws a big wrench" into everything, Singla said. "The first thing we realized with lots of our large enterprise customers is that corporate networks are becoming a big bottleneck," Singla said. "And in order to reduce the load, many of the enterprises are already looking at how can they quickly migrate their apps from private data centers to the cloud, because the network to the cloud is a lot better than a network to private networks."
The migration to the cloud also involves a "move to SaaS services," Singla said. "SaaS services obviously require scale, and more and more of them are going microservices," Singla said.
It is safe to assume that a particular technology or architecture has become mainstream after surviving an initial hype cycle and becoming uniformly accepted and reliable. A technology associated with cost savings and demonstrably improved efficiencies is also another criterion used to determine whether a particular technology has acquired mainstream status. One can thus arguably assume microservices are on their way to achieving mainstream status. "More and more enterprises are migrating to SaaS services, and SaaS is all about scale — and scaling is a lot easier with microservices," Singla said.
Singla began to see the potential of microservices about five years ago, when large enterprises had largely adopted microservices. "But it was becoming very clear to me that that would be an architecture paradigm, and increasingly so with serverless — so, then, the paradigm shift was starting to happen. And we thought that was a great opportunity to start a new company that helped solve many of the problems of going mainstream to multiple cloud providers and being able to build highly distributed edge locations...with the convergence of distributed applications and data," Singla said. "And we said, 'It's a great time to start a company to solve the problem of distributed application data.' So, that's the background on Volterra."
Feature image from Pixabay.
3/30/2020 • 38 minutes, 29 seconds
Episode 110: Kelsey Hightower and Ben Sigelman Debate Microservices vs. Monoliths
Listen to ALL of our shows here: https://thenewstack.io/podcasts/
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Kelsey Hightower, a developer advocate at Google, and Ben Sigelman, CEO and co-founder of observability services provider LightStep, about whether or not teams should favor a monolith over a microservices approach when architecting cloud native applications.
Hightower recently tweeted a prediction that “Monolithic applications will be back in style after people discover the drawbacks of distributed monolithic applications.” It was quite a surprise for those who have been advocating for the operational benefits of microservices. Why go back to a monolith?
As Hightower explains in the podcast: “There are a lot of people who have never left a monolith. So there’s really not anything to go back to. So it’s really about the challenges of adopting a microservices architecture. From a design perspective, like very few companies talk about, here’s how we designed our monolith.”
Sigelman, on the other hand, maintained that microservices are necessary for rapid development, which, in turn, is necessary for sustaining a business. “It’s not so much that you should use microservices, it’s more like, if you don’t innovate faster than your competitors, your company will eventually be erased, like, that’s the actual problem. And in order to do that, you need to build a lot of differentiated technology,” he said. Microservices is the most logical approach for maintaining a large software team while still maintaining a competitive velocity of development.
Later in the show, we discuss some of the top TNS podcasts and news posts of the week, including an interview with IBM’s Lin Sun on the importance of the service mesh, as well as Sysdig’s offer of a distributed, scalable Prometheus, a group of chief technology officers who want to help the U.S. government with the current COVID-19 pandemic, and the hidden vulnerabilities that come with open source security.
TNS editorial and marketing director Libby Clark hosted this episode, alongside founder and TNS publisher Alex Williams and TNS managing editor Joab Jackson.
3/27/2020 • 40 minutes, 11 seconds
IBM's Lin Sun - Master Inventor Compares Service Meshes to ‘Storage Boxes’
Listen to All TNS podcast here: https://thenewstack.io/podcasts
In this episode of The New Stack Makers podcast, we spoke with IBM’s Lin Sun, whose official title is senior technical staff member and “master inventor,” for a comprehensive overview of what service meshes are, aimed at those not completely familiar with the topic as well as those with some familiarity who want to know more about emerging use cases.
Sun’s expertise in service meshes largely draws upon her role as an Istio project maintainer and her membership on the Istio Steering Committee and Technical Oversight Committee.
Sun’s IBM title as “master inventor” may sound unusual or even arguably pretentious for some, or even “really cool,” as Sun describes it. But at IBM, the status as “master inventor” represents specific merits those who hold the title must first attain.
“‘Master inventor’ is a title for someone who demonstrates the mastery of the IBM inventor process and is able to mentor other people to be successful in the invention process, and to be able to be productive yourself,” Sun said. Among the other requirements, a “master inventor” must also first file about a dozen patents and have at least one issued patent, Sun said. You must also have worked with a review board to review incoming patent disclosures on behalf of IBM.
3/26/2020 • 38 minutes, 32 seconds
Microsoft's Asim Hussain - The Making of a Green Developer
Listen to all of our podcasts here: https://thenewstack.io/podcasts/
There’s a common misconception that the individual consumer’s actions will dramatically affect climate change. That we should recycle more and avoid plastic straws and bottles. These are nice-to-haves, but they don’t make an impact on the systemic contamination of industries like agriculture, travel, and, yes, tech. On the other hand, when scandal strikes tech, we point blame at the top, and we don’t drill down into the individual responsibility. What we found with the Volkswagen emissions scandal is that even the person who writes the code can be culpable.
If we each bear some individual responsibility in the code we release, is there power in the green developer? In this episode of The New Stack Makers, we sit down with Microsoft Green Cloud Advocacy Lead Asim Hussain to talk about what a green developer is. And we try to uncover what that actually looks like for a web developer, a machine learning engineer, a DevOps person, or a department with a huge fleet of Internet of Things devices.
Hussain says that, to start, it’s not about a lack of motivation.
“They care, they want to do something. And one of the questions I get asked a lot from developers, all kinds of developers working on all different aspects of applications, is: ‘What can I do now?'”
Hussain said that they then end up focusing only on their own role when they should be looking at things end to end.
He continued that “I used to think full stack meant like a website to a database. And now I understand full stack means like, from user behavior to how electricity is bought and sold on a grid.”
In fact, Hussain predicts a whole new role will emerge: sustainable software engineer. Or, even better, a multi-department team that looks to piece together the full software development lifecycle, from sourcing hardware materials to powering data centers to the deprecation of the tools and devices.
This can start with just ardent, cross-functional, green-conscious volunteers who make themselves known in an organization and who try to piece together this lifecycle. That’s how it started at Microsoft, growing into a 2,000-person green team.
Where do you get started? Hussain says to start by examining the carbon efficiency and the carbon intensity of your application. Hussain points to little moves that have a big impact like choosing when to run your workloads, which “depending upon the renewable mix and the energy grid, you can, just by changing when you run a workload, you can reduce the carbon emissions by 48 percent per application.”
And don’t just assume this is for the most modern microservices, he says this can even be more impactful when you are running certain jobs on legacy applications.
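The carbon-aware scheduling Hussain describes, shifting deferrable workloads to hours when the grid is cleanest, can be sketched roughly as follows. This is a hypothetical illustration; the hourly carbon-intensity forecast values are invented, and a real system would pull them from a grid-data provider.

```python
# Hypothetical sketch of carbon-aware batch scheduling: given a forecast of
# grid carbon intensity (gCO2/kWh) per hour, pick the cleanest contiguous
# window in which to run a deferrable job.

def cleanest_window(forecast, job_hours):
    """Return the start hour whose job_hours-long window has the lowest
    total carbon intensity."""
    return min(
        range(len(forecast) - job_hours + 1),
        key=lambda h: sum(forecast[h:h + job_hours]),
    )

# 24 invented hourly intensity readings; the midday dip models solar output.
forecast = [430, 420, 410, 400, 390, 380, 350, 300,
            250, 200, 180, 170, 175, 190, 240, 300,
            360, 420, 450, 460, 455, 450, 445, 440]

start = cleanest_window(forecast, job_hours=3)
print(start)  # prints 10: the window covering hours 10-12 is cleanest
```

The mechanism is deliberately simple: nothing about the workload changes, only when it runs, which is why Hussain can cite large per-application emission reductions from scheduling alone.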
Hussain continues to talk about the creation of a green public agreement. He also offers Microsoft’s sustainability calculator which allows you to start to measure because, as we’ve learned with the agile movement, you can’t improve what you can’t measure.
3/23/2020 • 29 minutes, 21 seconds
Episode 109: DevOps - Who Should Own Security?
Listen to more from The New Stack here: https://thenewstack.io/podcasts
Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Liran Tal, a developer advocate at container security platform provider Snyk and a member of the Node.js security working group, about who should own security in the DevOps process — the security team or the development?
TNS editorial and marketing director Libby Clark hosted this episode, alongside founder and TNS publisher Alex Williams and TNS managing editor Joab Jackson.
Tal wrote an article for us recently, “‘DevSecOps Insights 2020’: Who Really Owns Security in DevOps,” which summarized the results of a survey the company carried out covering security, development and operations. The post included a couple of surprising survey results, namely that only 14% of respondents reported that they test for known vulnerabilities in container images, and 38% of respondents don’t integrate automated security scanning into their DevOps pipeline.
As Tal writes in the post:
When that many respondents agree security is a major concern when trying to deliver software quickly, it means we need to scale up security to enable fast delivery of security fixes. The key to doing that is developers, as they ultimately fix security issues in an application’s source code.
We also get Tal’s views on incorporating security into the Continuous Integration/Continuous Delivery (CI/CD) pipeline, the need for development speed, as well as his thoughts on the recent purchase of npm by GitHub.
Then, later in the show, we discuss some of the top podcasts and news stories from the site. An episode of The New Stack Analysts podcast provides fodder for discussing service mesh adoption. Also on the agenda: Frustrations mount over Python 3 migrations; Project Calico offers a faster data plane with the help of eBPF; and an excellent side-by-side comparison offered by StackRox’s Karen Bruner of the managed Kubernetes offerings from Amazon Web Services, Microsoft Azure and Google Cloud.
3/20/2020 • 37 minutes, 2 seconds
SupportOps Drive NinjaRMM's Customer Success Rate
Last week we wrote about how a true DevOps transformation doesn’t just focus on developers and operations but looks to unclog cross-organizational bottlenecks. One of those areas often overlooked — the one with so much of that coveted rapid feedback — is support. In this episode of The New Stack Makers, we talk to Michael Shelton, VP of global customer support at NinjaRMM, about closing the cultural distance to reach support teams to drive the post-customer experience.
When Shelton joined NinjaRMM five years ago, it was still a tiny team working on the then-new remote monitoring and management platform. They didn’t have a support team yet — everyone was support. He admitted that back in the day they had a lot of bugs, but they used that to build stronger customer relationships.
Shelton said they built an ethos that continues today, talking to customers like partners and, sometimes even therapists:
“You’re not wrong. Sounds like you’re having a really tough time. And sounds like we’re part of the cause of that. Let’s work together to figure out what the solution is.”
The NinjaRMM team realized they could use the close relationship between support and the customer to drive the product. What do the customers love? What are they super frustrated about? What are their use cases?
3/18/2020 • 23 minutes, 17 seconds
Well-Oiled DevOps Rides on Immutable Infrastructure
To hear more podcasts listen here: https://thenewstack.io/podcasts/
Prisma, from Palo Alto Networks, sponsored this podcast, following its Cloud Native Security Live, 2020 Virtual Summit held Feb. 11, 2020.
The adoption of “immutable infrastructure” has emerged as a viable way to improve DevOps processes and culture. By introducing more of a standardization in application deployment and management, immutable infrastructure helps, among other things, to foster a better collaborative environment among developers, operations, security team and other stakeholders.
“Immutable infrastructure gives you the ability to have a consistent environment, across your entire fleet of systems, which gives you a simpler and more predictable deployment,” Mike Liedike, manager, Deloitte Consulting’s Innovations and Platforms team, said. “It allows you to do the testing more consistently and promote your environments from development to test to prod.”
In other words, the adoption of immutable infrastructure is often a hallmark of a highly functional DevOps practice.
In this edition of The New Stack Makers podcast recorded live at Palo Alto Networks’ studio in Santa Clara, CA, Liedike offers further insight and analysis of what the adoption of an immutable infrastructure can mean for your organization.
A good starting point for describing how immutable infrastructure works is to first detail how it does not work — or, more specifically, what “mutable” infrastructure is and how it differs from immutable infrastructure. Using the example of Apache servers, Liedike noted how admins might upgrade the servers by installing the latest version of the web server software with configuration-management tools. The problem, Liedike said, is that “across 1,000 instances, you have a lot of room for error and inconsistency.”
“With immutable infrastructure, instead of doing those changes in place, you would actually build a new server, with all the upgrades already in place, and then deploy your systems and decommission the old ones,” Liedike said.
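Liedike's mutable-versus-immutable contrast can be sketched in a few lines of code. This is a hypothetical illustration, not anyone's actual tooling; the Server model and version strings are invented, and in practice the "bake a new image" step would be a machine-image or container-image build.

```python
# Hypothetical sketch contrasting in-place (mutable) upgrades with
# replace-the-fleet (immutable) upgrades.

from dataclasses import dataclass, field

@dataclass
class Server:
    image: str                                    # the baked image a server runs
    patches: list = field(default_factory=list)   # in-place changes (mutable only)

def mutable_upgrade(fleet, patch):
    """Patch each running server in place; servers can drift apart if a
    patch fails or is applied inconsistently across 1,000 instances."""
    for server in fleet:
        server.patches.append(patch)

def immutable_upgrade(fleet, new_image):
    """Bake one new image, replace every server with a fresh instance of it,
    and decommission the old ones; every server is provably identical."""
    return [Server(image=new_image) for _ in fleet]

fleet = [Server(image="apache-2.4.41") for _ in range(3)]
fleet = immutable_upgrade(fleet, "apache-2.4.46")
print({s.image for s in fleet})  # prints {'apache-2.4.46'}
```

The design point is that the immutable path leaves no accumulated per-server state: every machine is a fresh instance of one known image, so drift between instances cannot build up between upgrades.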