THINKING LIKE AN ARCHITECT: ESSENTIAL LESSONS FROM GREGOR HOHPE
See the Whole Picture
Gregor Hohpe urges us to step back from the minutiae and view the entire system. By focusing on the interactions and evolution of components, we can make design choices that serve long-term business goals rather than just immediate fixes.
Embrace Key Architectural Principles
Modularity: Divide complex systems into smaller, independent parts for easier development and maintenance.
Abstraction: Simplify complexity by hiding the details that aren't crucial to the current discussion.
Separation of Concerns: Keep different responsibilities distinct to reduce unwanted dependencies and improve clarity.
Balance Trade-Offs and Make Informed Decisions
Every design choice involves trade-offs between performance, cost, complexity, and flexibility. Hohpe reminds us that there's rarely a perfect solution, only the best balance for the situation at hand. Thoughtful evaluation prevents technical debt and supports future growth.
Communicate Clearly and Document Thoughtfully
Great architecture emerges from collaboration. Transparent documentation of decisions, assumptions, and rationales keeps technical teams and business stakeholders aligned, paving the way for smooth implementation and ongoing improvement.
Learn from Real-World Examples
Through practical case studies, Hohpe illustrates how sound architectural thinking addresses real challenges. These examples demonstrate that adaptability and creative problem-solving are crucial when systems evolve or requirements change unexpectedly.
Lead with Vision and Foster Continuous Improvement
An effective architect does more than design systems; they act as a bridge between technology and business. By encouraging a culture of continuous learning and collaboration, architects inspire teams to innovate and adapt in a rapidly changing environment.
Final Thoughts
"Thinking Like an Architect" is a call to adopt a strategic, big-picture approach. Whether youâre designing systems or part of a technical team, the key is to:
Look beyond immediate challenges and consider future impacts.
Communicate openly to ensure all stakeholders are on the same page.
Continuously adapt and refine your approach to stay ahead of evolving requirements.
These insights empower you to build systems that not only meet today's demands but also thrive in the future.
Core Tenet:
C++ is engineered to deliver maximum performance. Every language feature is designed so that, when not used, it incurs zero overhead; when used, it should incur no more cost than a hand-crafted implementation.
Trade-off with Safety:
Safety features, like automatic checks or initializations, are often omitted or left to the programmer. For instance, leaving variables uninitialized (instead of zeroing them by default) saves time when the variable is immediately overwritten at runtime.
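To make that trade-off concrete, here is a minimal sketch (not from the talk): a local int is deliberately left uninitialized, which costs nothing because it is immediately overwritten; reading it before assignment would be undefined behavior. The read_sensor function is a made-up stand-in for some external input.

#include <cstdio>

// Stand-in for some external input; in real code this might be I/O.
int read_sensor() { return 42; }

int main() {
    int value;              // deliberately left uninitialized: zero overhead, because it is
    value = read_sensor();  // immediately overwritten, the common case the language optimizes for
    std::printf("value = %d\n", value);

    // int later;
    // std::printf("%d\n", later);  // reading an uninitialized int would be undefined behavior
    return 0;
}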
2. Move Semantics & Moved-from Objects
Move Semantics Explained:
Move semantics were introduced in C++11 to avoid the cost of unnecessary copying. Instead of copying data, resources are transferred (or "moved") from one object to another.
What Are Moved-from Objects?
After a move operation, the source object is left in a "moved-from" state. Kalb stresses that:
Valid but Minimal: A moved-from object remains valid only enough to be destroyed or assigned a new value.
No Other Guarantees:
Its internal state is undefined for any use other than assignment or destruction.
"If you need to know its state after moving, you're misusing move semantics."
Practical Examples:
Vectors and Unique Pointers:
The talk details how vector move operations typically zero out the internal pointer and size, ensuring no overhead is added for range checking in common operations.
Move Constructors & Assignment:
Kalb explains that the move constructor should transfer resource ownership efficiently, without extra checks that might degrade performance.
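As a rough illustration of the pattern described here (this is not Kalb's code), a minimal buffer type whose move operations steal the pointer and size, null out the source, and add no validity checks:

#include <cstddef>
#include <utility>

// Minimal dynamic buffer, sketched only to show the move pattern described above.
class buffer {
    int*        _data = nullptr;
    std::size_t _size = 0;
public:
    explicit buffer(std::size_t n) : _data(new int[n]{}), _size(n) {}
    ~buffer() { delete[] _data; }

    buffer(const buffer&) = delete;             // keep the example focused on moves
    buffer& operator=(const buffer&) = delete;

    // Move constructor: transfer ownership, then null out the source so its
    // destructor is a no-op. No state checks, no extra branches.
    buffer(buffer&& other) noexcept
        : _data(std::exchange(other._data, nullptr)),
          _size(std::exchange(other._size, 0)) {}

    // Move assignment: same idea. Precondition (in the spirit of the talk):
    // this != &other; no defensive self-move check is added.
    buffer& operator=(buffer&& other) noexcept {
        delete[] _data;
        _data = std::exchange(other._data, nullptr);
        _size = std::exchange(other._size, 0);
        return *this;
    }

    std::size_t size() const { return _size; }
};

int main() {
    buffer a(100);
    buffer b(std::move(a));  // a is now moved-from: destroy it or assign to it, nothing else
    return b.size() == 100 ? 0 : 1;
}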
3. Embracing Undefined Behavior for Performance
Performance by Omission:
C++ intentionally leaves certain behaviors undefined (for example, reading from an uninitialized variable or accessing a moved-from object) to avoid extra runtime checks. This "undefined behavior" is a deliberate design choice that:
Maximizes Speed: No extra conditional tests mean faster code in the common case.
Shifts Responsibility: The onus is on the programmer to ensure that only valid operations are performed on objects.
The Zero Overhead Principle:
The language design guarantees that features "when not used" have no overhead. Kalb emphasizes that any additional safety check (like range-checking or state validation in moved-from objects) would hinder performance.
4. Debate Over Standards and Moved-from Object State
Standards Committee's Note:
There is an ongoing debate regarding how much "life" a moved-from object should retain:
Fully Formed vs. Partially Formed:
The committee's stance, documented in non-normative notes and echoed by Herb Sutter, suggests that moved-from objects should remain "fully formed" (i.e., callable for any operation without precondition checks).
Kalb's Perspective:
He argues that this decision encourages logic errors. Instead, a moved-from object should be treated as "suspended": only eligible for assignment or destruction.
"If you need to query the state of an object that's been moved from, you're creating a logic error."
Real-world Impact:
Implementations (such as those for vector and list) illustrate that ensuring full functionality of moved-from objects can force extra runtime checks, which undermines the zero overhead promise.
Understanding distributed systems means accepting that time is not absolute. Events do not always occur in a clear sequence, and different machines may see different orders of events. Leslie Lamport's work showed that "happened before" is a partial ordering, not a total one. Logical clocks, like Lamport timestamps, help establish order without relying on unreliable system clocks. This is crucial because networks introduce delays, failures, and inconsistencies that force us to rethink how we model time and causality in software.
Networks are not reliable, fast, or secure. The classic "fallacies of distributed computing" highlight common false assumptions, such as expecting zero latency, infinite bandwidth, and trustworthy communication. A system might show different data to different users or lose information due to network partitions. The CAP theorem states that in a distributed system, we can only guarantee two of three properties: consistency, availability, and partition tolerance. If the network fails, we must choose between showing potentially outdated data (availability) or refusing to show anything (consistency).
Software slows down hardware. The speed of light is fast, but software introduces inefficiencies, adding delays beyond physical limits. A distributed system has a "refractive index" like glass slowing down light: it distorts time, making responses slower than ideal. Developers should recognize that their programming environments create a misleading illusion of synchrony and locality. Thinking in terms of partial ordering, network partitions, and failure tolerance leads to better system design. Time is an illusion; software makes it worse.
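As a concrete illustration of the logical-clock idea (a minimal sketch, not from the original article), a Lamport clock increments on every local event and send, and on receive takes the maximum of the local and received timestamps plus one:

#include <algorithm>
#include <cstdint>
#include <cstdio>

// Minimal Lamport clock: timestamps give a partial "happened before" order,
// not wall-clock time.
struct LamportClock {
    std::uint64_t time = 0;

    std::uint64_t local_event() { return ++time; }    // internal event
    std::uint64_t send() { return ++time; }           // stamp attached to an outgoing message
    std::uint64_t receive(std::uint64_t msg_time) {   // merge on message arrival
        time = std::max(time, msg_time) + 1;
        return time;
    }
};

int main() {
    LamportClock a, b;                 // two "machines"
    a.local_event();                   // a: 1
    std::uint64_t stamp = a.send();    // a: 2, message carries 2
    b.local_event();                   // b: 1 (concurrent with a's events)
    b.receive(stamp);                  // b: max(1, 2) + 1 = 3
    std::printf("a=%llu b=%llu\n",
                static_cast<unsigned long long>(a.time),
                static_cast<unsigned long long>(b.time));
    return 0;
}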
I'm a firm believer that exceptions aren't the enemy; they're powerful signals that something's gone wrong in our code. Over the years, I've learned that effective error handling is all about knowing how and where to use exceptions. Below is a detailed digest of the talk along with practical C#-like code examples that directly correspond to the transcript and are supported by the slides.
Understanding Exception Categories
Fatal Exceptions are errors you can't recover from (like out-of-memory or stack overflow). Instead of trying to catch these, you should design your code to avoid them. For example, if recursion might lead to a stack overflow, check your recursion depth first:
// Avoid catching fatal exceptions like StackOverflowException.
try
{
    RecursiveMethod();
}
catch (StackOverflowException)
{
    // You can't reliably recover from a fatal error; let the app crash.
    Environment.FailFast("Stack overflow occurred.");
}
Boneheaded Exceptions indicate a bug (such as a null pointer or index out-of-range error). Validate inputs to prevent these errors instead of masking them:
// Validate input to avoid a boneheaded exception.
if (index < 0 || index >= myList.Count)
    throw new ArgumentOutOfRangeException(nameof(index), "Index is out of range.");
var value = myList[index];
Vexing Exceptions are thrown by poorly designed APIs (like FormatException when parsing). Use safe parsing patterns instead:
// Use TryParse to avoid a vexing FormatException.
if (!int.TryParse(userInput, out int result))
    Console.WriteLine("Input is not a valid number.");
else
    Console.WriteLine("Parsed value: " + result);
Exogenous Exceptions arise from the external environment (for example, missing files or network errors). Catch these at a higher level to log the error or notify the user:
try
{
    string content = File.ReadAllText("data.txt");
    Console.WriteLine(content);
}
catch (FileNotFoundException ex)
{
    Console.WriteLine("File not found: " + ex.Message);
    // Log the error or provide an alternative action.
}
Best Practices in Exception Handling
Don't Hide Errors by avoiding catch blocks that simply return default values; instead, log the error and rethrow it to preserve the context:
try
{
    ProcessOrder(order);
}
catch (Exception ex)
{
    Console.WriteLine("Error processing order: " + ex.Message);
    throw; // Rethrow to preserve the original context.
}
Provide Clear, Context-Rich Messages by including detailed error messages that help diagnose issues:
if (user == null)
    throw new ArgumentNullException(nameof(user), "User object cannot be null when processing an order.");
Assert Your Assumptions using assertions during development to enforce conditions that should always be true:
Debug.Assert(order != null, "Order must not be null at this point in the process.");
Don't Overuse Catch Blocks; let exceptions bubble up when you don't have enough context to handle them. This keeps your code cleaner:
public void ProcessData()
{
    ValidateData(data);
    SaveData(data);
    // Let exceptions bubble up to a higher-level handler.
}
Be Specific with Your Catches by catching only the exceptions you expect. This prevents masking other issues:
try
{
    string data = File.ReadAllText("config.json");
}
catch (FileNotFoundException ex)
{
    Console.WriteLine("Configuration file not found: " + ex.Message);
}
Retain the Original Stack Trace when rethrowing exceptions by using a simple throw statement, which preserves all the valuable context:
Clean Up Resources by using the "using" statement or a finally block to ensure that resources are disposed of correctly, even if an exception occurs:
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // Execute database operations.
}

// Or, if not using "using":
SqlConnection connection = new SqlConnection(connectionString);
try
{
    connection.Open();
    // Execute operations.
}
finally
{
    connection.Dispose();
}
The Philosophy Behind "Throw More, Catch Less"
I advocate for writing fewer catch blocks and allowing exceptions to propagate to a centralized handler. This keeps error handling centralized and improves observability. For example, a method can validate and throw errors without catching them:
public void ProcessOrder(Order order)
{
    if (order == null)
        throw new ArgumentNullException(nameof(order), "Order cannot be null.");
    // Process the order...
}
In an ASP.NET application, you might use a global exception handler to manage errors consistently:
app.UseExceptionHandler("/Error");
This approach ensures that errors are visible and managed in one place, making systems more robust and easier to debug.
These principles and code examples are directly derived from the transcript and are supported by the slides, ensuring that they reflect the original content without deviation.
Problem: Platform teams rely on heavyweight tools (e.g., Kubernetes, Kafka, Istio), creating high maintenance costs and unplanned work for delivery teams.
Solution: Replace complex tools with lightweight alternatives like Fargate or Kinesis to reduce tech burden.
Technology Anarchy
Problem: Teams have too much autonomy without alignment, leading to inconsistent tech stacks, inefficient processes, and slow collaboration.
Solution: Establish paved roads with clear guidelines, expectations, and business consequences to balance autonomy with alignment.
Ticketing Hell
Problem: Platform teams act as a service desk, requiring tickets for routine tasks, causing bottlenecks, slow progress, and developer frustration.
Solution: Implement self-service workflows to automate common tasks, freeing both platform and delivery teams from excessive manual work.
Platform as a Product Mindset
Problem: Teams treat platform engineering as a project rather than a product, leading to inefficiencies and lack of user focus.
Solution: Apply product management principles, measure internal customer value, and focus on reducing unplanned work to drive adoption and success.
I've learned from experience that if you're going to build a product that truly solves real user needs, you must start by building a working prototype instead of spending months on a design document. In my time at big tech, and even more so in medium and small companies, I've seen how design docs can lead teams astray, locking in bad assumptions before you even know what you're building. I call it "painting a house you haven't seen yet", because when you plan without having built the thing, you're just imagining complexities that don't exist in practice.
When I worked on projects like the Twitch dashboard, our elaborate design for a binary tree layout failed to account for real-world issues like varying aspect ratios across devices. Had we built a proof-of-concept first, we would have discovered these issues early on, saving us months of wasted effort. Instead, the focus on a rigid spec led us to persist with bad decisions and ultimately delay a product that could have been released sooner.
For me, the only sensible approach is to prototype, test, and iterate. Building something tangible exposes hidden complexities and actual user behaviors in ways that a design doc never can. Once you've built it and seen how it works in reality, then you can document and refine the design. If you haven't built it first, don't plan it; this is the only way to avoid locking in mistakes and wasting valuable engineering time.
With the rise of AI, companies need to rethink their interview processes. Simply asking candidates to complete coding tests isn't enough anymore. Some options include stopping remote interviews, requiring spyware during interviews, or allowing AI use and focusing on how well candidates can prompt and refactor AI outputs. Ultimately, companies may need to adopt a hybrid approach, combining remote and in-person interviews to evaluate both coding skills and AI proficiency. Regardless, the nature of tech interviews is set to change drastically in the coming years.
Johann Hari says in Stolen Focus that rats and pigeons can be manipulated as we want. Just give them food whenever they do what you want them to. And shortly after, they will repeat that over and over again.
This made me think. In times when Instagram and other apps give us likes, hearts, and views on things we post, how much does big tech influence our behavior?
Aren't they the same as the researcher, feeding us with dopamine to tell us to do what they want? Are they doing the same as the researcher who feeds the rats or pigeons?
This question, and recent improvements and tinkering with my flow as I started working for myself, made me ask how we can control the addiction and the influence and find a better way to slow living.
To understand why key metrics change, Pinterest uses three approaches. The "Slice and Dice" method breaks down metrics into segments based on different dimensions like country and device type, allowing us to identify significant segments. This method helps diagnose issues like video metric regressions by organizing segments into a tree structure.
The "General Similarity" approach looks for other metrics that move similarly, either in the same or opposite direction, using factors like Pearson correlation and Spearmanâs rank correlation. This method has helped us discover relationships between performance metrics and content distribution, indicating potential causes for latency.
Lastly, the "Experiment Effects" approach leverages A/B testing to see which experiments impact key metrics. By analyzing the treatment effects and filtering out noisy results, we dynamically identify top experiments affecting metrics. These approaches together help us narrow down root causes for metric movements and guide further investigations.
Quotes:
How we are analyzing the metric segments takes inspiration from the algorithm in LinkedIn's ThirdEye. We organize the different metric segments into a tree structure, ordered by the dimensions we are using to segmentize the metric.
Pearson correlation: measures the strength of the linear relationship between two time-series
Spearman's rank correlation: measures the strength of the monotonic relationship (not just linear) between two time-series; in some cases, this is more robust than Pearson's correlation
Euclidean similarity: outputs a similarity measure based on inversing the Euclidean distance between the two (standardized) time-series at each time point
Dynamic time warping: while the above three factors measure similarities between two time-series in time windows of the same length (usually the same time window), this supports comparing metrics from time windows of different lengths based on the distance along the path that the two time-series best align
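As a minimal illustration of the first similarity factor quoted above (a sketch for clarity, not Pinterest's implementation), Pearson correlation between two equally long time series is the covariance divided by the product of the standard deviations:

#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Pearson correlation between two equally long time series.
double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    assert(x.size() == y.size() && !x.empty());
    const double n = static_cast<double>(x.size());
    double mx = 0, my = 0;
    for (std::size_t i = 0; i < x.size(); ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;

    double cov = 0, vx = 0, vy = 0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        cov += (x[i] - mx) * (y[i] - my);
        vx  += (x[i] - mx) * (x[i] - mx);
        vy  += (y[i] - my) * (y[i] - my);
    }
    return cov / std::sqrt(vx * vy);
}

int main() {
    // Made-up example data: latency and video content share moving together.
    std::vector<double> latency     = {1.0, 1.2, 1.5, 2.0, 2.6};
    std::vector<double> video_share = {0.10, 0.12, 0.16, 0.21, 0.27};
    std::printf("pearson = %.3f\n", pearson(latency, video_share));  // close to 1.0
    return 0;
}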
Creating a calculator app with precise results is challenging because floating point numbers can't represent all numbers accurately. Hans-J. Boehm experimented with various methods like bignums and algebraic equations but had to find a balance between precision and practicality.
Ultimately, Boehm's team realized they only needed to work with numbers expressible through the calculator's operations. They combined rational arithmetic and constructive real numbers, representing numbers as a rational times a real. Using symbolic representations for irrational numbers like π, they achieved accurate results while maintaining usability.
This approach allowed them to develop a calculator app that delivers accurate answers without compromising user experience. Their work highlights the complexity and creativity required to solve seemingly simple problems in software development.
I started my career at Hughes Aircraft in 1972 while working on my Ph.D. at the University of California, Los Angeles (UCLA). After designing airborne computers for four years, I graduated and then taught and did systems research at UC Berkeley for the next 40. Since 2016, I've helped Google with hardware that accelerates artificial intelligence (AI).
At the end of my technical talks, I often share my life story and what I've learned from my half-century in computing. I recently was encouraged to share my reflections with a wider audience, so I've captured them here as 16 people-focused and career-focused life lessons.
People-Focused and Career-Focused Life Lessons from David Patterson.
People-Focused
1. Family first! Don't sacrifice your family's happiness on the altar of success.
2. Choose happiness.
3. It's the people, not the projects, that you value in the long run.
4. The cost of praise is small. The value to others is inestimable.
5. Seek out honest feedback; it might be right.
6. "For better or for worse, benchmarks shape a field." [16]
7. "I learned that courage was not the absence of fear, but the triumph over it." [11]
8. Beware of those who believe they are the smartest people in the room.
Career-Focused
9. "Most of us spend too much time on what is urgent and not enough time on what is important." [4]
10. "Nothing great in the world has ever been accomplished without passion." [5]
11. "There are no losers on a winning team, and no winners on a losing team."
12. Lead by example.
13. "Audentes Fortuna iuvat." (Fortune favors the bold.)
14. Culture matters.
15. It's not how many projects you start; it's how many you finish.
People who stress over code style, linting rules, or other minutia remain insane weirdos to me. Focus on more important things.
Code coverage has absolutely nothing to do with code quality (in many cases, it's inversely proportional)
Monoliths remain pretty good
It's very hard to beat decades of RDBMS research and improvements
Micro-services require justification (they've increasingly just become assumed)
Most projects (even inside of AWS!) don't need to "scale" and are damaged by pretending so
93%, maybe 95.2%, of project managers could disappear tomorrow to either no effect or a net gain in efficiency (this estimate is up from 4 years ago).
The most common question I got after that post was how do you do that?
I interviewed Evan last week so you can hear the answer in his own words soon (aiming to share that next week). Before that, I wanted to share some thoughts since it's one of the most important parts of growing your impact. Here's everything I know about shipping more in less time.
= = =
(Semi-GPT summary) To get more done in less time, you've got to nail your core responsibilities so you can free up time for growth. Start by resolving ambiguity. Be fluent in code search to navigate big codebases quickly, know who to ask for help when you're stuck, and learn to query data yourself instead of waiting on others. These basics will save you hours.
When writing code, smaller, focused diffs are the way to go. They're faster to write, test, and review, with fewer bugs slipping through. To move even quicker, batch your tests, use feature flags to safely land work in progress, and copy existing test plans to save time in unfamiliar code paths. Make reviews faster by spoon-feeding context to reviewers and preempting feedback before submission. Being a great collaborator builds trust and gets your code reviewed faster.
Flow state is key. Eliminate distractions with noise-canceling headphones, cluster meetings to maximize focus time, and work during hours when interruptions are minimal. Turn off noisy notifications and audit your workflow for inefficiencies. While waiting on builds or tests, knock out quick tasks to keep momentum. Freeing up that extra time lets you tackle growth opportunities and build a career trajectory like Evan's.
At work the junior engineer sends you some code to review. The code was clearly written in a first draft, and then just iteratively patched until the tests passed, then immediately sent to you to review without any further improvement. They do not care.
The guy on the hiking trail is playing his shitty EDM on his bluetooth speaker, ruining nature for everyone else. He does not care.
The doctor misdiagnoses your illness whose symptoms are in the first paragraph of the trivially googleable wikipedia article. He does not care.
People don't pick up after their dogs. The guy at the gym doesn't re-rack the weights. The lady at the grocery store leaves the cart in the middle of the parking lot. They. Do. Not. Care.
First, we need to understand why senior-to-staff promotions happen in tech companies. They happen for two reasons:
Company leadership (i.e. your skip-level or above) think a particular engineer is valuable and may leave if not promoted to staff.
Company leadership want a particular engineer to lead specific cross-org projects that will run smoother if they've got the staff role.
Promotions do not happen because a particular engineer is really technically strong and ready for the next level. They do not happen because an engineer has been mentoring a lot and is ticking the boxes on the engineering ladder description. They are a tool for retaining or for empowering valued engineers. So to be promoted to staff, you must be known and valued by your organization.
When someone suggests, "let's just," it often leads to complications far beyond what anyone anticipates. For example, making something "pluggable" assumes seamless adaptability through future modules or implementations, but true pluggability requires creating at least two working versions upfront, which is rarely feasible. Similarly, adding APIs to transform a product into a "platform" may sound strategic, but APIs demand constant maintenance, compatibility, and user interest, things that are much harder to deliver than imagined.
Other ideas that rarely work include adding layers of abstraction, synchronizing data, or enabling cross-platform functionality. While abstractions can solve problems, premature ones create unused complexity. Synchronization, especially across unstructured data, is fraught with inconsistencies and bugs. Cross-platform efforts often devolve into building something akin to an operating system, which is far more complex than initially envisioned. Even "escape to native" options, allowing frameworks to access underlying platforms, tend to fail because they create conflicts between the framework's state and the native platform's.
Most of these ideas fail not because they are inherently bad but because they oversimplify the problem and ignore the complexities of implementation, maintenance, and real-world usage. Successful engineering requires solving problems with clear first principles, not leaning on patterns that are more prone to failure than success.
The website Neal.fun is a creative space where Neal shares his projects and games. It's a fun and interactive site with various activities and experiments.
A system to organise your life.
Johnny.Decimal is designed to help you find things quickly, with more confidence, and less stress.
You assign a unique ID to everything in your life.
A diagram showing the structure of a Johnny.Decimal number. The number is 15.52 and it explains how the '1' is an area, which groups related categories in sets of 10. The '15' is the category, in this case 'travel'. And '52' is just an ID; they start at 01. The title of this, our 52nd travel thing, is 'Trip to NYC'.
These IDs help you stay organised. They impose constraints that make it harder to get lost. And you create your own index to link everything in your life together.
The system is free to use and the concepts are the same at home, work, or that club you manage.
SSH access to the bed!
What Can They Do with This Access?
Letâs start with the basics:
They can know when you sleep
They can detect when there are 2 people sleeping in the bed instead of 1
They can know when itâs night, and no people are in the bed
Imagine your ex works for Eight Sleep. Or imagine they want to know when you're not home.
(Of course, they can also change the bed's temperature, turn on the vibrating feature, turn off your alarm clock, and any of the other normal controls they have power over.)
At first glance, it might not be obvious that this script provides value. Maybe it looks like all we've done is make the instructions harder to read. But the value of a do-nothing script is immense:
It's now much less likely that you'll lose your place and skip a step. This makes it easier to maintain focus and power through the slog.
Each step of the procedure is now encapsulated in a function, which makes it possible to replace the text in any given step with code that performs the action automatically.
Over time, you'll develop a library of useful steps, which will make future automation tasks more efficient.
A do-nothing script doesn't save your team any manual effort. It lowers the activation energy for automating tasks, which allows the team to eliminate toil over time.
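The do-nothing pattern is usually shown as a small script; here is the same idea sketched in C++ purely for illustration, with made-up step names (not taken from the original article). Each step only prints the manual instruction and waits for confirmation, and any individual step can later be replaced with real automation:

#include <iostream>
#include <string>

// Pause until the operator confirms the manual step has been performed.
void wait_for_enter() {
    std::cout << "Press Enter when done.\n";
    std::string line;
    std::getline(std::cin, line);
}

// Each step is its own function. Today it only prints instructions; later any
// step can be swapped for code that actually performs the action.
void create_ssh_key(const std::string& username) {
    std::cout << "Run: ssh-keygen -t ed25519 -f ~/.ssh/" << username << "\n";
    wait_for_enter();
}

void add_user_to_inventory(const std::string& username) {
    std::cout << "Add " << username << " to the users list in the inventory file.\n";
    wait_for_enter();
}

void notify_user(const std::string& username) {
    std::cout << "Email " << username << " to tell them their account is ready.\n";
    wait_for_enter();
}

int main(int argc, char** argv) {
    const std::string username = argc > 1 ? argv[1] : "newuser";
    create_ssh_key(username);         // step 1: still manual
    add_user_to_inventory(username);  // step 2: still manual
    notify_user(username);            // step 3: still manual
    return 0;
}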
Intuitive spreadsheet-like interface that lets users of all technical skill levels view, edit, query, and collaborate on Postgres data directly: self-hosted, with native Postgres access control.
Universal Access: AI assistants should be accessible from anywhere on your computer, not just in browsers or specific apps.
Provider Freedom: Users should have the choice between models and model providers (Anthropic, OpenAI, xAI, etc.) and not be locked into a single provider.
Local First: AI is much more useful with access to your data. But that doesn't count for much if you have to upload personal files to an untrusted server first. Onit will always provide options for local processing. No personal data will leave your computer without explicit approval.
Customizability: Onit is your assistant. You should be able to configure it to your liking.
Extensibility: Onit should allow the community to build and share extensions, making it more useful for everyone.
I kindly request that you allow me to finish my thoughts. This will enable a smoother flow of conversation and ensure that all points are thoroughly addressed. Thank you for your understanding.
Why are you so useless?
(This one's obviously sarcastic, but if you do use it, tag us on social media @corporatereplies; we're on Instagram and TikTok and we'd love to see the reaction.)
I am observing that your current performance is not meeting the expected standards. Could you please elaborate on the challenges you are facing to assist in finding potential solutions?
Allows for simple usage of ffmpeg via an LLM. The system prompt:
You write ffmpeg commands based on the description from the user. You should only respond with a command line command for ffmpeg, never any additional text. All responses should be a single line without any line breaks.
Rant time: You've heard it before, git is powerful, but what good is that power when everything is so damn hard to do? Interactive rebasing requires you to edit a goddamn TODO file in your editor? Are you kidding me? To stage part of a file you need to use a command line program to step through each hunk and if a hunk can't be split down any further but contains code you don't want to stage, you have to edit an arcane patch file by hand? Are you KIDDING me?! Sometimes you get asked to stash your changes when switching branches only to realise that after you switch and unstash that there weren't even any conflicts and it would have been fine to just checkout the branch directly? YOU HAVE GOT TO BE KIDDING ME!
If you're a mere mortal like me and you're tired of hearing how powerful git is when in your daily life it's a powerful pain in your ass, lazygit might be for you.
The goal of font pairing is to select fonts that share an overarching theme yet have a pleasing contrast. Which fonts work together is largely a matter of intuition, but we approach this problem with a neural net. See GitHub for more technical details.
Together with some friends, I decided earlier this year to participate in the Carbage run 2025 Winter edition. This is a 6-day journey in winter all the way through Sweden to the polar circle, and back down to Helsinki in a group of roughly 400 cars.
One small catch (you might have guessed it from the name): your car has to be "carbage". In practice, this means it needs to be at least 20 years old, and with a day value of less than €1000.
API Parrot is the tool specifically designed to reverse engineer the HTTP APIs of any website. Making life easier for developers looking to automate, integrate or scrape websites without public APIs.
The next generation of programmers will grow up expecting AI to do the hard parts for them. They won't know why an algorithm is slow, they won't be able to debug cryptic race conditions (provided they are familiar with the concept), and they certainly won't know how to build resilient systems that survive real-world chaos.
The result? We'll have a whole wave of programmers who are more like AI operators than real engineers. And when companies realize AI isn't magic, being just a bunch of tokenized words in line (prove me wrong on that), they'll scramble to find actual programmers who know what they're doing. Too bad they spent years not hiring them.
In 2024, the field of Large Language Models (LLMs) saw significant advancements. The GPT-4 barrier was broken, with 18 organizations now having models that outperform the original GPT-4. Notably, Google's Gemini 1.5 Pro introduced new capabilities like a 2 million token input context length and video input. Additionally, LLM prices dropped dramatically due to increased competition and efficiency, making these models more accessible.
Multimodal vision became common, with models now handling images, audio, and video. Voice and live camera modes also emerged, allowing real-time interaction with LLMs. Despite these advancements, the concept of "agents" (LLMs acting autonomously) has not yet fully materialized. The environmental impact of LLMs improved due to efficiency gains, but the large-scale infrastructure buildout by tech giants remains a concern.
Overall, 2024 was marked by rapid progress in LLM capabilities, reduced costs, and the rise of multimodal and real-time interaction features. However, challenges like the environmental impact and the practical implementation of autonomous agents still need to be addressed.
This article investigates whether large language models (LLMs) like Claude 3.5 Sonnet or GPT can iteratively improve code when asked to "make it better." Using a Python coding problem as a test, the author demonstrates that LLMs can optimize performance over iterations, improving speed and efficiency. However, vague prompts often lead to overengineering, such as unnecessary "enterprise-level features," without clear benefits. Despite these issues, iterative prompting can yield faster, more efficient code.
With explicit instructions, the LLM delivered substantial improvements, including using tools like Numba for just-in-time compilation and NumPy for vectorization. These optimizations resulted in code that was up to 100x faster than the initial implementation. The LLM also introduced creative techniques like precomputing digit sums and leveraging parallel processing. However, issues like hallucinated logic, subtle bugs, and redundant complexity showed that human oversight is essential to validate and refine the results.
The findings highlight the potential and limitations of using LLMs for coding. While they can provide significant speed-ups and novel ideas, they often miss important optimizations, such as deduplication or statistical shortcuts. The article emphasizes that LLMs are valuable tools for accelerating development when used thoughtfully but require careful guidance and validation from skilled developers to produce effective, reliable results.
Since the beginning of Val Town, our users have been clamouring for the state-of-the-art LLM code generation experience. When we launched our code hosting service in 2022, the state-of-the-art was GitHub Copilot. But soon it was ChatGPT, then Claude Artifacts, and now Bolt, Cursor, and Windsurf. We've been trying our best to keep up. Looking back over 2024, our efforts have mostly been a series of fast-follows, copying the innovation of others. Some have been successful, and others false-starts. This article is a historical account of our efforts, giving credit where it is due.
Research: 🤖 LLM vs 🧠 Brain: Cognitive Impact
Virtually all experienced scholars know that writing, as historian Lynn Hunt has argued, is "not the transcription of thoughts already consciously present in [the writer's] mind." Rather, writing is a process closely tied to thinking. In graduate school, I spent months trying to fit pieces of my dissertation together in my mind and eventually found I could solve the puzzle only through writing. Writing is hard work. It is sometimes frightening. With the easy temptation of AI, many, possibly most, of my students were no longer willing to push through discomfort.
Generative AI is, in some ways, a democratizing tool. Many of my students were non-native speakers of English. Their writing frequently contained grammatical errors. Generative AI is effective at correcting grammar. However, the technology often changes vocabulary and alters meaning even when the only prompt is "fix the grammar." My students lacked the skills to identify and correct subtle shifts in meaning. I could not convince them of the need for stylistic consistency or the need to develop voices as research writers.
I pointed to weaknesses such as stylistic quirks that I knew to be common to ChatGPT (I noticed a sudden surge of phrases such as "delves into"). That is, I found myself spending more time giving feedback to AI than to my students.
How was this tested? Participants completed decision-making tasks before and after AI use.
Results: AI-assisted users had a higher accuracy rate in complex decision-making.
Key takeaway: AI is acting as a cognitive amplifier, allowing us to process and retain information more efficiently.
But… it's not all good news.
The Dark Side: AI is Increasing Stress and Anxiety
While AI improves cognitive performance, the study also found a moderate positive correlation (r = 0.468, p < 0.01) between frequent AI use and increased mental stress.
Evidence from self-reported stress levels:
Participants completed anxiety and stress assessments before and after AI interactions.
Results: Frequent AI users reported higher stress scores after prolonged AI use.
Cognitive overload is real:
AI bombards users with continuous information, leading to mental fatigue.
Example from interviews: Users described feeling "overwhelmed" and "mentally exhausted" after extensive AI interaction.
🤯 Key takeaway: AI might make tasks easier, but it also increases cognitive pressure and emotional strain.
The Hidden Risk: Psychological Dependence on AI
One of the most alarming findings was the development of AI dependency.
Psychological reliance on AI:
Regression analysis confirmed that frequent AI users are more likely to doubt their own judgment.
Standardized Coefficient (Beta) = 0.421, p = 0.000, a strong predictor of AI-induced dependency.
Real-world impact:
Users who rely on AI for problem-solving report difficulty making decisions without AI assistance.
Example from interviews: Some participants said they "don't trust their answers without double-checking with AI."
Key takeaway: AI is replacing independent thinking, reducing cognitive autonomy, and making users second-guess themselves.
5. The Solution: How We Can Use AI Without Losing Ourselves
So, what's the way forward? The study suggests three key strategies to balance AI's cognitive benefits with emotional well-being:
✔️ AI should assist, not replace thinking: we must stay in control of our decisions.
✔️ Regulate AI's role in decision-making: avoid relying on AI for every task.
✔️ AI should be designed for cognitive balance: not just efficiency, but emotional well-being too.
🎯 Final Thought: AI has the potential to elevate our intelligence or erode our independence.
The question is: Will we use AI as a tool to enhance our minds, or let it become the master of our decisions?
The future is in our hands.
For the record (gpt-4o):
I asked GPT to summarize the article the way a researcher friend would. The result was bad.
Then I asked it to write in a conversation style between two friends... the result was: "Me: Bro, this paper is wild. It's basically saying that using AI all the time makes us smarter… but also kinda messes with our emotions."
"Make the TED-style presentation, where the people share their opinions" -- led to good initial structure with emojis, which I like!
"Add evidence for each point" -- finalized the result, not bad, I wonder if the numbers and conclusions are correct.
A new study published by researchers at Microsoft and Carnegie Mellon University found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do, making it more difficult to call upon the skills when they are needed.
The researchers tapped 319 knowledge workers (people whose jobs involve handling data or information) and asked them to self-report details of how they use generative AI tools in the workplace. The participants were asked to report tasks that they were asked to do, how they used AI tools to complete them, how confident they were in the AI's ability to do the task, their ability to evaluate that output, and how confident they were in their own ability to complete the same task without any AI assistance.
GPT Summary:
I developed T3 Chat, an AI chat app that emphasizes speed, usability, and efficient local-first architecture, completing the project in just five days. Motivated by frustrations with existing AI tools, I aimed to create a responsive and seamless experience. Leveraging the Deep Seek V3 model for its speed and affordability, I found existing starter kits unsuitable for my needs, prompting me to build a custom solution. The app uses React, React Router, and Dexie.js for its database layer, enabling offline functionality and efficient synchronization between local and cloud data. Switching from server-driven routing to a client-side approach greatly improved navigation and responsiveness.
The development process included significant hurdles. I experimented with tools like Jazz for syncing but found its collaborative-first structure overly restrictive. Instead, I built a custom sync layer, tailoring the data flow to the app's requirements. Performance optimization was critical, with tools like React Scan helping to eliminate inefficiencies. Markdown chunking and memoized rendering were implemented to minimize unnecessary re-renders, ensuring a smooth user experience. Payments were integrated with Stripe, alongside an onboarding flow that uses inline messages to explain app features. Despite challenges, these deliberate engineering choices resulted in an app that is faster, more responsive, and better tailored to user needs than its competitors.
The tools used include Deep Seek V3, React, Dexie.js, React Router, React Scan, Tailwind CSS, Vercel AI SDK, and Stripe. Each played a vital role, though some required customization, like Jazz and OpenAuth. The result is an AI chat app that outperforms existing alternatives by leveraging local-first architecture, advanced optimizations, and thoughtful design principles. This project demonstrates how targeted engineering and innovative thinking can create high-performing, user-focused applications.
The speaker, Sean Parent, a senior principal scientist at Adobe, shared insights into improving software engineering practices with a focus on local reasoning: breaking down complex systems into manageable, verifiable components. He also delved into challenges around reasoning in C++, design principles, and strategies for building reliable systems.
Core Ideas from the Talk
The Root Cause of Software Failures
The talk began with an analysis of why large software systems fail. Despite many failures being attributed to management issues, the real challenges often stem from exceeding our ability to reason about systems. The software engineering crisis, a problem identified as early as 1968, persists because large systems become too complex to understand or verify.
"The greatest limitation in writing software is our ability to understand the systems we're creating."
Key failures include lack of tools, poor practices, and an over-reliance on free relationships (unmanaged dependencies between components).
Local Reasoning
Local reasoning is the ability to understand and verify a function or class independently of its broader context. This is enabled through clear APIs, preconditions, and postconditions, which define the contract between the client (caller) and the implementer. The talk focused on achieving local reasoning through careful structuring of functions, arguments, and classes.
API Contracts and Preconditions
Preconditions and postconditions define the expectations and guarantees of a function:
Preconditions: Specify conditions the caller must meet before invoking the function.
Postconditions: Describe the state after the function executes successfully.
Invariant Conditions: Properties that must always hold true within the scope of a function or class.
Preconditions allow implementers to shift responsibility for ensuring valid input to the caller, simplifying function logic (a short sketch follows below).
"Do not underestimate the power of a precondition. It lets the implementer focus on the valid cases."
Managing Function Arguments
Function arguments should follow a clear and consistent contract (a short sketch follows the rules below):
Let Arguments: Immutable references (e.g., const T&) that are not modified by the function.
In-Out Arguments: Mutable references (e.g., T&) that may be modified by the function.
Sink Arguments: R-value references (e.g., T&&) that are consumed by the function, leaving the caller responsible for ensuring proper ownership transfer.
General rules for arguments:
Non-const references must not be accessed by other threads during the function's execution.
Const references must not be written to during the function's execution.
These rules enforce memory safety and help prevent concurrency issues.
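A minimal sketch of the three argument kinds above (illustrative only, not the speaker's code):

#include <string>
#include <utility>
#include <vector>

// "Let" argument: read-only view of the caller's value (const T&).
// "In-out" argument: the caller's value, which may be modified (T&).
// "Sink" argument: a value consumed by the function (T&&); the caller hands
// over ownership, typically with std::move.
void append_line(std::vector<std::string>& log,   // in-out: modified in place
                 const std::string& prefix,       // let: only read
                 std::string&& message) {         // sink: consumed
    log.push_back(prefix + std::move(message));
}

int main() {
    std::vector<std::string> log;
    std::string msg = "disk almost full";
    append_line(log, "[warn] ", std::move(msg));  // msg is moved-from afterwards
    return log.size() == 1 ? 0 : 1;
}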
Avoiding Aliasing and the Law of Exclusivity
Aliasing, when multiple references point to the same memory, is a major challenge in reasoning about code. C++ lacks built-in safeguards like Swift's exclusive access or Rust's borrow checker, but developers must enforce similar rules manually:
Ensure no overlapping projections (references to object parts) are passed to a function.
Projections are invalidated if the object they point to is modified.
"In C++, we have exactly the same rule. We just don't have the language facilities to enforce it."
Projections and Value Semantics
Projections (e.g., references to parts of an object) enable value semantics while maintaining efficiency. Rules for projections include:
Avoid overlapping mutable projections.
Projections are invalidated when the parent object is modified or destroyed.
Multiple non-overlapping projections may coexist safely.
Mutation and Independence
To simplify reasoning, objects must be independent under mutation:
Disallow mutation (functional programming).
Disallow sharing of mutable objects.
Allow mutation only when there is no sharing (copy-on-write).
Encapsulation and whole-part relationships (e.g., an object fully owning its parts) are critical to maintaining independence.
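As a small illustration of "allow mutation only when there is no sharing" (a single-threaded sketch, not the speaker's code), a copy-on-write holder copies its buffer before mutating only when the buffer is actually shared:

#include <memory>
#include <string>

// Copy-on-write string holder: many instances may share one buffer, and a
// writer makes a private copy only if the buffer is shared. use_count-based
// checks like this are only safe single-threaded.
class cow_string {
    std::shared_ptr<std::string> _s;
public:
    explicit cow_string(std::string s)
        : _s(std::make_shared<std::string>(std::move(s))) {}

    const std::string& read() const { return *_s; }

    void append(const std::string& tail) {
        if (_s.use_count() > 1)                        // shared: copy before mutating
            _s = std::make_shared<std::string>(*_s);
        *_s += tail;                                   // now uniquely owned, safe to mutate
    }
};

int main() {
    cow_string a("hello");
    cow_string b = a;          // cheap copy: both share one buffer
    b.append(", world");       // b copies first, so a is unaffected
    return (a.read() == "hello" && b.read() == "hello, world") ? 0 : 1;
}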
Encapsulation of Relationships
Extrinsic relationships, connections between objects not captured by a whole-part hierarchy, are a primary source of complexity. These relationships should:
Be encapsulated within a managing class to enforce invariants (e.g., ensuring a pointer remains valid).
Be carefully tracked and invalidated when one side of the relationship changes.
"Containers are examples of classes that manage extrinsic relationships between their parts."
Complexity and Chaotic Systems
Complex systems often become chaotic: unpredictable and impossible to reason about. Examples like the three-body problem illustrate how simple rules and relationships can create unpredictable behaviors. Developers must avoid creating chaotic systems by:
Structuring code into hierarchies (e.g., whole-part relationships).
Encapsulating relationships within manageable, well-defined classes.
Simplifying or abstracting relationships to reduce interconnectedness.
Free Relationships
Free relationships, unmanaged dependencies, are inherently dangerous. The speaker recommends avoiding them entirely, except in cases where the relationships are monotonic:
Monotonic systems only move forward and never return to a previous state (e.g., immutable variables or conflict-free replicated data types).
Designing Reliable Code
The speaker provided concrete recommendations for designing reliable, predictable systems:
Use small, single-purpose functions: Each function should have a clear role with well-defined inputs and outputs.
Avoid modifying shared state: Treat shared state as immutable or use copy-on-write semantics.
Minimize sharing: Avoid passing shared pointers or references to mutable state in public interfaces.
Write testable, invariant-based classes: Ensure each class encapsulates its relationships and invariants.
Manage complexity with hierarchies: Use containment relationships to enforce structure and reasoning.
Guidelines for APIs
Functions should clearly specify their scope and effects.
Use projections to allow manipulation of object parts while preserving value semantics.
Do not pass overlapping projections or invalid references.
Favor references over shared pointers in function interfaces.
Addressing Systemic Complexity
When individual relationships between parts become too complex to reason about, step back and define the system as a whole. For example, instead of managing individual moves in chess, define the overall algorithm for playing the game.
Summary of Best Practices
Write clear APIs with defined preconditions, postconditions, and invariants.
Avoid shared state unless absolutely necessary, and treat shared state as immutable.
Design objects for value semantics, ensuring independence and disjointedness.
Encapsulate relationships into classes to simplify reasoning.
Use hierarchies and DAGs to structure complex systems.
Minimize complexity by managing extrinsic relationships and avoiding chaotic loops.
Build monotonic systems where possible to allow for distributed, predictable behavior.
"At some point, individual relationships become too complex. You have to step back and solve the system as a whole."
These principles form a cohesive strategy for creating systems that are easier to reason about, maintain, and scale. By adhering to local reasoning, managing complexity, and encapsulating relationships, developers can build reliable, efficient software.
whole / part snippet:
/**
 * @class whole
 * @brief An example "whole" class that holds a "part" subobject.
 *
 * This class demonstrates a pattern where we:
 *  - Disallow default construction.
 *  - Provide an explicit constructor taking a required parameter.
 *  - Use compiler-generated (default) copy/move constructors and assignment operators.
 *  - Provide a default comparison operator.
 *
 * The goal is to ensure that any "whole" object is always in a valid and meaningful state,
 * and that all defaulted functions have consistent semantics with their subobjects.
 */
class whole {
    // The subobject/part that this "whole" manages. Storing the part as a member
    // ensures the "whole" is always composed of a valid "part"; we rely on the
    // "part" type to provide its own correctness and invariants.
    part _part;

public:
    // Delete the default constructor: a "whole" may not exist without an explicitly
    // provided, meaningful state for its subobject. This prevents accidental creation
    // of a "whole" in an uninitialized or incomplete state.
    whole() = delete;

    // Construct a "whole" from a required state, so every new "whole" has a valid
    // "part" from the beginning. Marked explicit to prevent implicit conversions
    // from state -> whole, forcing a clear constructor call.
    explicit whole(state s) : _part{s} {}

    // Copy constructor (defaulted): the compiler-generated memberwise copy does
    // exactly what we want. Marking it explicit is an optional choice here that
    // prevents some unintentional conversions.
    explicit whole(const whole&) = default;

    // Move constructor (defaulted, noexcept): moving is efficient and safe; noexcept
    // helps optimizations (e.g., containers move elements instead of copying when
    // they know it won't throw).
    whole(whole&&) noexcept = default;

    // Copy assignment operator (defaulted): a simple memberwise copy from the other
    // "whole" is typically correct and easiest to maintain.
    whole& operator=(const whole&) = default;

    // Move assignment operator (defaulted, noexcept): same rationale as the move
    // constructor.
    whole& operator=(whole&&) noexcept = default;

    // Equality comparison (defaulted): memberwise comparison of "_part" (assuming
    // "part" itself provides operator==), so two "whole" objects can be compared
    // without manual checks.
    bool operator==(const whole&) const = default;
};
Introduction to TPMs: Trusted Platform Modules (TPMs) are passive hardware security chips widely available in modern laptops and servers. They are primarily used for cryptographic key management, platform integrity, and remote attestation, providing hardware-backed security for sensitive operations.
TPMs in Application Development: Despite their ubiquity, TPMs are rarely used directly by applications. Developers face challenges such as complex interfaces, limited documentation, and the absence of seamless support in common libraries and tools.
Complexity of TPM Interaction: Using TPMs involves navigating multiple layers:
Resource Managers: Necessary to serialize access to the TPM, which cannot handle multiple concurrent requests. Linux provides an in-kernel resource manager (/dev/tpmrm0) to simplify this.
TPM Libraries: Competing implementations (Intel TSS and IBM TSS) have incompatible APIs, forcing developers to make early, limiting choices.
Linux Kernel Key Retention Service: A subsystem of the Linux kernel that securely stores cryptographic keys in kernel memory, ensuring their isolation from user-space processes. It supports multiple key types (e.g., user, logon, and trusted keys) and organizes keys into hierarchical key rings with fine-grained permissions.
Trusted Keys with TPM Integration: Trusted keys leverage TPMs to encrypt key material into "wrapped blobs," ensuring plaintext keys are never exposed to user space. The kernel automatically decrypts these blobs when needed, making it a lightweight software HSM.
Key Management Challenges: Current trusted key implementation requires applications to manage wrapped blobs manually, which complicates key recovery, persistence, and scaling, especially for stateless systems or devices with limited storage.
Key Derivation from TPMs: A proposed approach uses TPM seed values and application-specific metadata to deterministically derive cryptographic keys. This method eliminates the need for persistent key storage and enables scalable, reproducible key management.
Linux Crypto API and Kernel Key Store Integration: The Linux Crypto API allows applications to offload cryptographic operations to the kernel using cryptographic sockets. A recent patch integrates this API with the key store, enabling cryptographic operations using kernel-managed keys without exposing them to user space.
Request Key System Call Enhancements: The request_key syscall is extended to allow dynamic retrieval of application-specific keys. A plugin-based architecture lets the kernel call user-space helpers (e.g., TPM-aware plugins) to derive or retrieve keys as needed.
Stateless Key Derivation with TPMs: The stateless key derivation method uses TPMs to create keys tied to application metadata (e.g., executable paths, user IDs, or code hashes). These keys are reproducible and isolated by design, making them suitable for ephemeral or IoT systems.
Kernel-Based Key Derivation: A proposed kernel patch would eliminate user-space exposure of key material entirely by performing key derivation directly in the kernel, ensuring plaintext keys remain within secure kernel memory.
Limitations of Current TPM Integration: Existing systems primarily support symmetric key operations. Asymmetric key functionality, such as signing or private key decryption, remains under development and is expected in future kernel releases.
Improving Accessibility for Developers: By exposing TPM functionality through the Linux key retention service, developers can leverage hardware-backed security without needing to understand TPM internals, providing a more accessible pathway for application adoption.
Call for Community Feedback: The speaker sought input on the practicality of proposed solutions for IoT and stateless systems, emphasizing the importance of balancing security, scalability, and developer usability.
GPT Summary:
Background and Context: The talk originates from a memory-safe languages panel led by government and industry stakeholders. There is a push, particularly from governments like the U.S. and other Five Eyes members, to migrate critical systems from C and C++ to "memory-safe languages" like Rust. The speaker, while defending C and C++, acknowledges biases and stresses the importance of fair evaluations between languages.
Challenges of Defining "Memory-Safe Languages": The panel has inconsistently defined key terms. A "low-level memory-safe language" (essentially Rust) is distinguished from garbage-collected ones like Java or Python. The main critique of C/C++ centers not on the inherent inability to ensure memory safety but on the lack of compiler-enforced memory safety, leaving discipline and external tools to fill the gap.
Types of Safety: The talk breaks down safety concerns into type safety, memory safety, and thread safety, each foundational to broader software security and functional safety. Functional safety ensures systems like brakes or airplane controls continue operating safely, even under partial failure.
Arguments for C/C++ in Safety-Critical Systems: Safety-critical systems in aerospace, automotive, and other domains rely on C/C++ due to decades of tooling, standards (e.g., ISO 26262), and expertise. The deterministic nature of these languages aligns with strict timing and behavior guarantees, which are harder to achieve with garbage collection or immature ecosystems.
Rust's Growing Role and Barriers: Rust, though promising, faces ecosystem maturity challenges. The availability of Rust-trained engineers, tooling gaps (e.g., in platforms like MathWorks), and reliance on interoperability with C APIs present barriers. Rust's adoption in safety-critical domains remains limited due to these hurdles and the immense cost of rewriting existing, battle-tested codebases.
Security Concerns Beyond Memory Safety: Eliminating memory safety issues does not address broader vulnerabilities like input validation, SQL injection, or business logic errors. For example, tools in C/C++ like AddressSanitizer (ASan) can address memory safety issues but are unsuitable for production. Security is a multi-faceted problem that Rust alone cannot solve.
Progress in C and C++: Modern updates in C (e.g., C23's checked integer operations) and C++ aim to close gaps in safety. Tools like UBSan, ASan, and static analysis have matured, enabling effective error detection and mitigation in development. The C/C++ ecosystem has advanced to rival or even surpass memory-safe languages in certain safety-critical applications.
Cost and Practicality of Transition: Rewriting massive C/C++ systems into Rust or any other language without adding new features is seen as economically unviable. Transition timelines are long, involving curriculum changes, workforce training, and standards development. Safety-critical systems tend to evolve incrementally rather than through wholesale rewrites.
Critique of the Panel's Conclusions: The speaker criticizes the panel's narrow focus on memory safety as overly simplistic. Broader issues like ecosystem maturity, tooling availability, and compatibility with safety standards make the wholesale dismissal of C and C++ impractical and misguided.
Closing Thoughts: The speaker emphasizes a balanced, pragmatic approach to safety and security. Transitioning to Rust or any new language must account for all engineering realities, including ecosystem readiness, regulatory compliance, and the multifaceted nature of software vulnerabilities.
Simplifying Software Development: A Rant on Doing Less, Better
This is a talk about how we're overcomplicating software for no good reason. It's about keeping things simple and just building systems that work instead of getting lost in trends, tools, and buzzwords.
The speaker kicks off with a nostalgic dive into the early days of coding on Tandon 286 machines, soldering RS232 cables by hand, and building monoliths that simply did the job. "We wrote software that people used, they got their jobs done, and went home happy. No internet, no GitHub repos -- just code that worked."
Fast forward to today, and things are a mess. Microservices? Great if you've got a thousand developers and a global scale problem. If not, you're probably just smashing your monolith into tiny, unmanageable pieces. "If you've got fewer than 100 programmers and you're doing microservices, I will find you and kick your shins."
The same critique extends to APIs. SOAP was overkill; REST simplified things (or did it?), but now we're stuffing APIs with metadata, inventing gRPC, or obsessing over "hypermedia" that no one asked for. "Just send some JSON over HTTP and call it a day. We don't need another doctoral thesis to justify URLs."
And don't even get started on frontends. React, Angular, Vue -- they're all bloated monstrosities. "140MB of node modules to load a blank page? What are we even doing?" The solution? Go back to server-side rendering or use lightweight tools like HTMX. "We solved these problems years ago, but no -- let's reinvent them with more JavaScript."
On infrastructure, the speaker points to Kubernetes as a classic example of overengineering. "Most of us don't need it, but we're running it anyway because it sounds cool. Just use containers properly and let the cloud handle the rest."
The takeaway is simple: stop making things harder than they need to be. "If you're adding complexity just to look good on your CV, you're doing it wrong. Just build stuff that works, keep it simple, and fix it when it breaks. Complexity isn't clever; it's stupid."
The talk wraps with humor but drives the point home: "Stop overcomplicating things. Build what you need. Then go get a beer."
To become a senior engineer, understanding what blocks promotions and taking deliberate actions to address those barriers is crucial. Based on the advice provided, here are the key takeaways and actionable insights:
One common misconception is that excellent technical skills alone will secure a promotion. Many engineers hit a plateau despite being technically competent because they overlook critical non-technical factors. According to the speaker, three specific roadblocks hinder promotions, and addressing these can change your trajectory.
The first roadblock is ineffective delegation. Promotions often require demonstrating leadership, and delegation is a cornerstone of this. However, not all delegation styles are equally effective:
The Load Balancer: Merely distributing tasks among the team doesn't showcase leadership or improve team capabilities. "Tasks come in, and you spread them out to others on your team." This approach doesn't reduce overall workload or scale your impact.
The Decomposer: Breaking down ambiguous problems into smaller, executable tasks is better, but it's still an expected responsibility at most levels. It doesn't elevate you as a leader.
The Capability Multiplier: This is the ideal approach. By assigning challenging problems to team members and coaching them through the process, you scale your impact by developing the team. "You coach them up, tell them how you would handle the situation, and let them handle the problem on their own." The critical elements of this approach are:
Knowing your team's capabilities to assign tasks slightly outside their comfort zone.
Investing time upfront to coach them while stepping away to give them ownership.
Accepting the possibility of failure as part of their growth.
Effective delegation demonstrates leadership by "creating copies of yourself," a trait highly valued in promotion decisions.
The second roadblock is a weak relationship with your manager. Promotions often hinge on managerial support. Managers are hesitant to risk promoting someone who might fail at the next level, as this reflects poorly on them and disrupts team dynamics. The speaker emphasized: "Your manager is the biggest roadblock to getting promoted to senior... They only do that for people that they trust."
To strengthen this relationship:
Clearly communicate your desire for promotion and ask for specific feedback.
Build trust by consistently delivering results and taking ownership of problems.
Repair any strained relationships or consider moving to a different team if necessary.
The third roadblock is failing to demonstrate leadership by owning problems. To advance to senior engineer, you must show initiative in solving team-level issues. The story of David, an engineer striving for promotion, illustrates this. Despite his technical excellence, his promotion was blocked because he raised problems without presenting solutions. Leadership involves not just identifying issues but also proactively addressing them.
For example:
If user adoption is low, suggest prioritizing features to improve engagement.
If defect rates are high, identify patterns or implement training for improvement.
If operational load is causing attrition, propose forming a task force to resolve it.
"High-level ICs are leaders that don't have direct reports. Leaders take ownership of problems and do something about them."
In summary, focus on scaling through delegation, building trust with your manager, and demonstrating leadership through problem ownership. These strategies will position you as a strong candidate for promotion to senior engineer.
Storing Passwords: Early systems stored plaintext passwords like "alice: apple," which is insecure.
Hashing Passwords: Converts passwords into fixed-length strings using a hash function.
Simple vs. Proper Hash Functions: Early functions were simplistic (A=1, B=2) but proper hashing creates cryptic outputs.
Salting Passwords: Adds a random value (salt) to passwords before hashing to produce unique hashes. Salt is stored alongside the hash.
Password Authentication: Input password is hashed and compared to the stored hash.
Hashing Vulnerabilities: Brute force attacks, rainbow tables, and shared password issues are mitigated with salting.
Best Practices: Use industry standards like NIST guidelines, avoid creating custom hashing functions, and rely on proven libraries.
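One way to follow the "rely on proven libraries" advice above, sketched here with libsodium (an assumption on my part; the talk does not name a specific library). Its Argon2-based crypto_pwhash_str API generates a random salt and stores salt, parameters, and hash in a single string, so only that string needs to be saved:

```cpp
// Sketch only: salted password hashing with libsodium's Argon2 wrapper.
// The library picks a random salt and encodes salt + parameters + hash
// into one string, so only that string needs to be stored.
#include <sodium.h>
#include <iostream>
#include <string>

int main() {
    if (sodium_init() < 0) return 1;

    const std::string password = "correct horse battery staple";

    char stored[crypto_pwhash_STRBYTES];
    if (crypto_pwhash_str(stored, password.c_str(), password.size(),
                          crypto_pwhash_OPSLIMIT_INTERACTIVE,
                          crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0) {
        return 1;  // out of memory
    }

    // Later, at login: hash the candidate password the same way and compare.
    bool ok = crypto_pwhash_str_verify(stored, password.c_str(), password.size()) == 0;
    std::cout << (ok ? "password accepted\n" : "password rejected\n");
}
```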
2. Encryption and Cryptography
Cryptography: Secures data in transit and at rest using encryption.
Types: Codes (word substitution) and ciphers (character manipulation).
Encryption Keys: Symmetric encryption uses the same key for encryption/decryption; asymmetric encryption uses a public-private key pair.
Encryption Algorithms: AES and Triple DES (symmetric); RSA, Diffie-Hellman (asymmetric).
Asymmetric Key Cryptography: Involves public and private keys; RSA encrypts with a public key, decrypts with a private key.
Key Exchange Problem: Diffie-Hellman allows two parties to establish a shared secret key without prior communication.
Encryption vs. Hashing: Hashing is one-way and irreversible; encryption is two-way and reversible.
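The Diffie-Hellman exchange mentioned above is easy to see with a toy example. The sketch below uses deliberately tiny, insecure numbers just to show how both sides end up with the same shared secret without ever transmitting it:

```cpp
// Toy Diffie-Hellman key exchange with tiny, insecure numbers.
// Real implementations use large primes (or elliptic curves) and vetted libraries.
#include <cstdint>
#include <iostream>

// Compute (base^exp) mod m by repeated squaring.
std::uint64_t mod_pow(std::uint64_t base, std::uint64_t exp, std::uint64_t m) {
    std::uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main() {
    const std::uint64_t p = 23, g = 5;   // public parameters (toy-sized)
    const std::uint64_t a = 6, b = 15;   // private values, never exchanged

    std::uint64_t A = mod_pow(g, a, p);  // Alice sends A
    std::uint64_t B = mod_pow(g, b, p);  // Bob sends B

    std::uint64_t alice_secret = mod_pow(B, a, p);
    std::uint64_t bob_secret   = mod_pow(A, b, p);

    std::cout << "shared secret: " << alice_secret
              << " == " << bob_secret << '\n';  // both compute the same value
}
```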
3. Digital Signatures and Verification
Digital Signatures: Authenticate and verify the origin of a message or document.
How They Work: Message is hashed, then encrypted with a private key to create the signature, which is verified using the public key.
Purpose: Ensures integrity, authenticity, and non-repudiation of messages or contracts.
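The sign-then-verify flow described above is easiest to see through a library call. This sketch assumes libsodium and its Ed25519 detached-signature functions; it is a generic illustration, not tied to any particular talk or product:

```cpp
// Sketch: detached digital signature with libsodium (Ed25519).
// The private key signs; anyone holding the public key can verify.
#include <sodium.h>
#include <cstring>
#include <iostream>

int main() {
    if (sodium_init() < 0) return 1;

    unsigned char pk[crypto_sign_PUBLICKEYBYTES];
    unsigned char sk[crypto_sign_SECRETKEYBYTES];
    crypto_sign_keypair(pk, sk);

    const char* msg = "pay Bob 10 euros";
    const auto* m = reinterpret_cast<const unsigned char*>(msg);
    const unsigned long long mlen = std::strlen(msg);

    unsigned char sig[crypto_sign_BYTES];

    // Sign with the private key.
    crypto_sign_detached(sig, nullptr, m, mlen, sk);

    // Verify with the public key; any change to the message breaks this.
    bool valid = crypto_sign_verify_detached(sig, m, mlen, pk) == 0;
    std::cout << (valid ? "signature valid\n" : "signature invalid\n");
}
```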
4. Public Key Infrastructure (PKI)
Concept: Relies on a trusted system to verify public keys belong to specific entities.
Key Roles: Public keys are shared, private keys are secret. Certificate Authorities (CAs) issue digital certificates to confirm key authenticity.
5. Passkeys and Passwordless Authentication
Passkeys: Replace passwords with biometrics (fingerprint, face scan) or PINs.
How Passkeys Work: Device generates a public-private key pair per website. Public key is shared, private key stays on the device.
Authentication Process: Websites send a challenge; user signs it with the private key. Website verifies using the stored public key.
Benefits: No need for passwords, increased security, supported by Apple, Google, and Microsoft.
6. Encryption in Transit vs. Encryption at Rest
Encryption in Transit: Protects data as it moves from point A to point B. Used in protocols like HTTPS. Prevents "man-in-the-middle" attacks but may allow the middle server (like Gmail) to see the data.
End-to-End Encryption (E2EE): Encrypts data so only the sender and recipient can see it. Used by WhatsApp and iMessage. Intermediaries can't decrypt it.
Encryption at Rest: Encrypts stored data on devices (like hard drives) to protect against theft or loss.
7. File Deletion and Secure Deletion
File Deletion: Deleting a file just removes references to it; data is still present until overwritten.
Secure Deletion: Overwrites 0s, 1s, or random bits to ensure no file remnants remain. Full-disk encryption makes secure deletion automatic.
Device Disposal: Use full-disk encryption to ensure data is unreadable when selling or giving away devices.
8. Ransomware Attacks
What is Ransomware?: Malware encrypts files and demands payment (often in Bitcoin) for the decryption key.
How it Works: Hackers encrypt system files and request payment to decrypt them.
Prevention: Use full-disk encryption and regular backups to prevent data loss.
9. Quantum Computing and Its Impact on Cybersecurity
Quantum Computing: Uses qubits that can be in multiple states simultaneously, increasing computational power exponentially.
Threat to Security: Could break current encryption algorithms like RSA due to greater computing power.
Quantum-Safe Cryptography: Research is underway for "post-quantum cryptography" to withstand quantum attacks.
GPT Summary: Functional Programming for Domain Modeling: Functional programming simplifies modeling by separating data and behavior. It uses composable types to reflect domain concepts clearly, allowing you to model workflows and real-world scenarios with precision.
Code Reflects the Domain: Code should represent the domain's shared mental model. Concepts like "suit" or "rank" in a card game are directly encoded into the structure, ensuring that the vocabulary in code matches the language of domain experts.
Static Typing as a Domain Modeling Tool: Types are not just for error-checking but are integral to domain modeling. They enforce rules at compile-time, reducing the need for defensive programming or runtime validation. This provides compile-time unit testing for domain correctness.
Composable Type Systems: Composable type systems build new types from smaller ones using "and" (records/tuples) and "or" (choices). These allow for flexible, modular designs that adapt to changing domain requirements.
Eliminating Null Values: Null values are error-prone and should be replaced with optional types (e.g., Option<T>), which explicitly represent the presence or absence of a value. This makes code safer and self-documenting.
Replacing Primitive Types: Avoid using primitive types like string or int for domain-specific data. Instead, use wrappers (e.g., EmailAddress or CustomerID) to enforce constraints and ensure clarity.
Replacing Boolean Flags with Choices: Boolean flags are ambiguous and prone to misuse. Replace them with choice types (e.g., VerifiedEmail vs. UnverifiedEmail) to enforce business rules explicitly in the type system.
Immutability in Functional Programming: Immutability ensures that once data is validated and encapsulated, it cannot change. This eliminates repetitive validation and simplifies reasoning about state changes in the domain.
Rapid Feedback and Iteration: Collaborating with domain experts while modeling in code provides immediate feedback. This approach allows adjustments to domain understanding and code simultaneously, shortening feedback loops from weeks to minutes.
Modeling Constraints Explicitly: Use types to encode constraints, such as a string with a maximum length (String50) or a positive integer for quantities (OrderQuantity). This prevents invalid states and enforces constraints at the type level.
Making Illegal States Unrepresentable: Design your system so that invalid states (e.g., an unverified email being treated as verified) cannot be represented in the code. This reduces the need for runtime validation and minimizes bugs.
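The talk's examples come from a functional language, but the same ideas carry over; as a rough C++ sketch (type and field names invented for illustration), optional types replace nulls, thin wrappers replace raw primitives, and a choice type keeps an unverified email from ever being used where a verified one is required:

```cpp
// Sketch: making illegal states unrepresentable, C++ flavour.
// Names are invented for illustration; the talk uses a functional language.
#include <iostream>
#include <optional>
#include <stdexcept>
#include <string>
#include <variant>

// Wrapper instead of a raw string: construction is the only place validation happens.
class EmailAddress {
public:
    explicit EmailAddress(std::string value) : value_(std::move(value)) {
        if (value_.find('@') == std::string::npos)
            throw std::invalid_argument("not an email address");
    }
    const std::string& value() const { return value_; }
private:
    std::string value_;
};

// "Or" type: an email is either unverified or verified, never an ambiguous bool flag.
struct UnverifiedEmail { EmailAddress address; };
struct VerifiedEmail   { EmailAddress address; };
using CustomerEmail = std::variant<UnverifiedEmail, VerifiedEmail>;

// The rule "password resets require a verified email" is now a signature,
// not a runtime check scattered through the code.
void send_password_reset(const VerifiedEmail& email) {
    std::cout << "reset link sent to " << email.address.value() << '\n';
}

// Optional instead of null: absence is explicit and must be handled.
std::optional<VerifiedEmail> try_get_verified(const CustomerEmail& e) {
    if (const auto* v = std::get_if<VerifiedEmail>(&e)) return *v;
    return std::nullopt;
}

int main() {
    CustomerEmail e = UnverifiedEmail{EmailAddress{"ada@example.com"}};
    if (auto verified = try_get_verified(e))
        send_password_reset(*verified);  // only reachable with a VerifiedEmail in hand
}
```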
Separating Domain Logic from Implementation Details: Keep domain logic independent of technical concerns like database schemas or persistence. This is often referred to as persistence ignorance.
Refactoring Towards Deeper Insight: As you learn more about the domain, refactor code to introduce new concepts (e.g., ShuffledDeck or VerifiedEmail). This process evolves the domain model to better reflect reality.
Explicitly Modeling Relationships and Constraints: For example, if an entity must have an email or a postal address, model this as a choice type (e.g., EmailOnly, AddressOnly, or Both). This avoids ambiguous states and ensures correctness.
Process Over Product: The modeling process itself -- collaborating with stakeholders, defining concepts, and refining understanding -- is as important as the resulting code. The shared mental model is the foundation of success.
Code as a Living Document: Code is the ultimate source of truth in functional modeling. Unlike UML diagrams or external documentation, code evolves with the domain and remains in sync with business logic.
Enforcing Business Rules in the Type System: Business rules like "password resets require a verified email" can be encoded directly in the type system. This eliminates the need for external checks and makes rules unbreakable.
Modeling Actions with Functions: Actions in the domain (e.g., dealing a card or verifying an email) are modeled as functions with explicit inputs and outputs, reflecting the transformation of domain state.
Avoiding Programmer Jargon in Domain Models: Terms like "base class," "factory," or "proxy" should not appear in the domain model. Use only terms that stakeholders understand.
Facilitating Non-Programmer Feedback: Modeling in code allows non-developers to participate in reviewing and refining the domain model, ensuring alignment between technical and business perspectives.
Domain-Driven Design and Functional Programming as Allies: Functional programming and domain-driven design complement each other, providing tools for creating robust, accurate, and easily understood models of complex domains.
Use of Algebraic Data Types (ADTs): ADTs like records and discriminated unions are powerful tools for expressing complex domain concepts naturally, allowing for greater expressiveness and error prevention.
Encapsulation of Validation: Validation is done at the boundaries of the system (e.g., API inputs) and not repeatedly in the domain logic. Once data is validated, it is immutable and safe to use.
Encouraging Collaboration with Shared Language: The modeling process ensures that all stakeholders -- developers, domain experts, and product owners -- share a common understanding of the system through a ubiquitous language.
Flexibility and Extensibility: The compositional approach makes it easier to adapt the domain model to new requirements without introducing significant complexity.
GPT Summary: Concurrency Concepts and Lock-Free Programming: Concurrency issues arise when multiple threads access shared resources simultaneously, potentially causing errors. Lock-based programming avoids these problems but can degrade performance due to contention. Lock-free programming ensures system-wide progress but does not guarantee individual thread progress. Key tools include atomic operations like compare-and-swap (CAS), fetch-add, and fetch-sub.
Wait-Free Algorithms: Wait-free algorithms improve on lock-free by guaranteeing progress for all threads within bounded steps. This is achieved through collaboration among threads instead of competition. The helping mechanism, where threads assist ongoing operations rather than blocking or overriding them, is central to wait-free design.
Sticky Counter as a Case Study: A wait-free counter that supports increment, decrement, and read operations was used to demonstrate wait-free algorithm design. Challenges like linearizability, handling "zero" states, and edge cases like thread descheduling were addressed using flag bits and the helping principle, ensuring correctness and bounded progress.
Design Challenges and Subtleties: Wait-free algorithms require significant redesign, as they must enable threads to detect and assist in-progress operations. Concepts like linearizability ensure that operations appear to happen in a sequential order, even if they overlap in execution. Testing and formal verification are critical for validating correctness, as subtle bugs can arise in complex concurrent systems.
Performance Implications: Wait-free algorithms perform better in high-contention scenarios, especially when operations like reads are frequent. However, performance depends on the workload. Benchmarks showed that while wait-free algorithms often outperform lock-free ones in certain workloads, lock-free approaches can be faster when contention is low or writes dominate.
Progress Guarantees and Practical Constraints: The talk clarified terms like blocking (no progress guarantee), lock-free (system-wide progress), and wait-free (thread-level progress). It emphasized that real-world constraints, such as thread scheduling and hardware architecture, must be considered when implementing concurrent algorithms.
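The talk's sticky counter is too involved to reproduce from these notes, but the basic building blocks it is built from are easy to show. The sketch below is just a plain lock-free counter using fetch_add and a compare-and-swap loop; it has none of the wait-free helping machinery described above:

```cpp
// Sketch: lock-free building blocks (not the talk's wait-free sticky counter).
#include <atomic>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<std::uint64_t> counter{0};

// fetch_add is a single atomic read-modify-write: it always succeeds, no retry loop.
void bump_many(int n) {
    for (int i = 0; i < n; ++i)
        counter.fetch_add(1, std::memory_order_relaxed);
}

// A CAS loop: lock-free (some thread always makes progress) but an individual
// thread can retry indefinitely under contention, which is why it is not wait-free.
void double_counter() {
    std::uint64_t old_value = counter.load(std::memory_order_relaxed);
    while (!counter.compare_exchange_weak(old_value, old_value * 2,
                                          std::memory_order_relaxed)) {
        // old_value was reloaded with the current value; try again.
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t) threads.emplace_back(bump_many, 100000);
    for (auto& th : threads) th.join();
    double_counter();
    std::cout << counter.load() << '\n';  // 800000
}
```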
Many developers with ADHD feel their job is a perfect fit for how they think and approach problems. "Coding can give ADHD brains exactly the kind of stimulation they crave," explains full-stack developer Abbey Perini. "Not only is coding a creative endeavor that involves constantly learning new things, but also once one problem is solved, there's always a brand new one to try."
In addition to a revolving door of fresh challenges that can keep people with ADHD engaged, coding can reward and encourage a state of hyperfocus: a frequently cited symptom of ADHD that developer Neil Peterson calls "a state of laser-like concentration in which distractions and even a sense of passing time seem to fade away."
The hidden cost of "AI Speed": When you watch a senior engineer work with AI tools like Cursor or Copilot, it looks like magic. They can scaffold entire features in minutes, complete with tests and documentation. But watch carefully, and you'll notice something crucial: They're not just accepting what the AI suggests. They're constantly:
Refactoring the generated code into smaller, focused modules
Adding edge case handling the AI missed
Strengthening type definitions and interfaces
Questioning architectural decisions
Adding comprehensive error handling
In other words, they're applying years of hard-won engineering wisdom to shape and constrain the AI's output. The AI is accelerating their implementation, but their expertise is what keeps the code maintainable.
The knowledge paradox: Here's the most counterintuitive thing I've discovered: AI tools help experienced developers more than beginners. This seems backward -- shouldn't AI democratize coding?
The reality is that AI is like having a very eager junior developer on your team. They can write code quickly, but they need constant supervision and correction. The more you know, the better you can guide them.
This creates what I call the "knowledge paradox":
Seniors use AI to accelerate what they already know how to do
Juniors try to use AI to learn what to do
The results differ dramatically
I've watched senior engineers use AI to:
Rapidly prototype ideas they already understand
Generate basic implementations they can then refine
Explore alternative approaches to known problems
Automate routine coding tasks
Meanwhile, juniors often:
Accept incorrect or outdated solutions
Miss critical security and performance considerations
Take Ownership of Your Career
No one is responsible for your career growth but you. While managers may offer guidance, the responsibility to seek opportunities, take initiative, and drive your own development is yours alone. Waiting for someone else to guide your progression will leave you stagnant.
"No one is responsible for your career path but you."
Seek Mentorship -- It's a Shortcut to Mastery
A mentor can accelerate your growth by giving you insights, sharing their decision-making process, and exposing you to higher-level thinking. Actively seek out senior engineers, build relationships, and ask questions. This can be one of the most effective ways to "level up" faster than self-learning alone.
"Having a mentor is a force multiplier! Itâs literally a means to learn faster, itâs a shortcut!"
Initiative is Always Rewarded
No employer will think less of you for taking initiative. If something is blocked, find a way around it. Push for better solutions, offer new ideas, and take on challenges without being asked. This attitude of "full ownership" sets you apart. Engineers who "unblock themselves" -- even by learning disciplines outside their core expertise -- become the most valuable contributors.
"Nothing is more important than making your colleagues feel comfortable and safe working with you."
The Dunning-Kruger Effect is Real -- Be Humble and Self-Aware
At some point, you will overestimate your own skills. Recognizing this gap is essential for growth. Take feedback seriously, reflect on your mistakes, and focus on learning through deliberate practice. Switch from "just get it done" to "learn all you need to do it right." This mindset shift will elevate your skill set.
Master the "Glue Work" That Holds Teams Together
It's not enough to just write code. The ability to coordinate, track, and organize work is a rare and valuable skill. Acting as the "glue" between people, projects, and teams will make you indispensable. Track tickets, follow up on blockers, and ensure no one is left behind. Great engineers don't just "code" -- they also lead, unblock, and delegate.
Technical Excellence is Necessary, But Not Sufficient
You can be a great coder, but without skills like communication, empathy, and coordination, you won't become a senior engineer. Learn to bridge the gap between engineers, product managers, and customers. Senior engineers know how to translate customer requirements into engineering solutions and help their teams grow.
Learn to See the Business, Not Just the Code
As you grow in your career, it's not just about building "good" software -- it's about building software that drives business outcomes. Learn to ask, "How will this impact our KPIs?" and prioritize cost-efficient, high-impact solutions. This business-first mindset can distinguish you as a senior engineer and lead to better decision-making.
"At one point, we scratched the 'optimal solution' for a good enough, 10x cheaper solution. Engineering is all about tradeoffs."
Resilience and Observability Are Non-Negotiable Skills
Handling production incidents teaches you to value system reliability, observability, and DevOps. As you progress, mastering monitoring, alerting, and on-call response will become essential. Developers who "speak infrastructure" become highly valuable, as they can ensure stability and avoid system failure.
"It became clear that being a developer that 'speaks' and understands infrastructure is a superpower, and a differentiating factor."
Continuous Learning is Not Optional
The craft of software engineering evolves rapidly. Relying on daily work alone will not keep you at the top. You need to invest in side learning -- read books like Clean Code and Designing Data-Intensive Applications, attend meetups, seek mentorship, and watch technical talks. Growth requires time and passion outside daily tasks.
"Learning through daily tasks is not enough for becoming a top-tier engineer. The craft and technology are just too complex and require a lot of passion and time."
Be a Decent Human Being -- It Matters More Than You Think
Nothing beats being a kind, respectful, and empathetic teammate. People remember how you make them feel. Psychological safety and trust are essential for high-performing teams. As you grow into senior roles, prioritize creating safe, welcoming environments where people can speak up, share ideas, and fail without fear of judgment.
"Nothing â and I mean it â Nothing! is more important than being a decent human being, a pleasant colleague, and a pragmatic engineer."
Another important point is on using PRs for documentation. They are one of the best forms of documentation for devs. They're discoverable -- one of the first places you look when trying to understand why code is implemented a certain way. PRs don't profess to reflect the current state of the world, but a state at a point in time. A historical artifact. On the other hand, most design docs lie to you. They're undead documentation. Unless you're fastidious about keeping them up to date (most of us aren't), they reflect an outdated view of reality.
The blog post by Salvatore Sanfilippo (antirez) reflects on his journey with Redis, his departure, and his decision to return. He also shares insights into Redis's past, his thoughts on software licensing, and new technical concepts he's working on, such as vector sets for Redis. Below is a detailed digest of the key points from the article.
After leaving Redis about 4.4 years ago, Salvatore detached himself from the project's code, commits, and technical management. This detachment was not born out of resentment but rather a desire to explore other areas like writing and embedded projects, while also spending more time with family. He describes this period as a time to "hack randomly" and explore areas like neural networks and Telegram bots. However, this "random hacking" eventually left him feeling a lack of purpose, which reignited his desire to return to the tech world.
"Hacking randomly was cool but, in the long run, my feeling was that I was lacking a real purpose, and every day I started to feel a bigger urgency to be part of the tech world again."
Salvatore's return to Redis began during a trip to New York City with his 12-year-old daughter. Reflecting on life changes and purpose, he decided to re-engage with Redis. This led to a conversation with the new Redis CEO, Rowan Trollope, where they discussed Salvatore's possible role. He proposed becoming a bridge between Redis Labs and the Redis community, creating educational materials like demos, tutorials, and new design concepts. An agreement was quickly reached, allowing him to rejoin Redis in a part-time role.
"I wrote him an email saying: do you think I could be back in some kind of capacity? Rowan showed interest in my proposal, and quickly we found some agreement."
In an earlier article, I tore through some terrible arguments used to advocate for TDD that I see all too often (even by experienced engineers). I said in that piece that I would eventually go through what I think are better arguments for TDD, so that's what I'm gonna do now.
Brownfield work also lends itself well to TDD, but less so. It depends on the complexity of the new feature, and the extendibility of the codebase. You have to use your best judgment. If it seems like a feature requires significant changes to existing modules, I'd lean on traditional development. However, if you see a gentle path to implementing this new feature, you might reap more benefits with TDD.
Greenfield development is a big no-no for TDD (at first). I don't care how confident you are in what your interfaces will be. You're not that good. Everything you think you know will change in the exploratory phase of a new project as you code, and you'll strain your sanity by rewriting tests over and over again. Don't do this, no matter how much your TDD idol pontificates its benefits.
"BuT iF yOuR'e ReWrItInG yOuR tEsTs So MuCh, YoU'rE nOt PrAcTiSiNg TdD pRoPeRlY."
Yes, this is what everybody I know is experiencing right now.
Caveat lector: This is simply a retelling of my personal experience, YMMV. This is not advice.
What has consistently worked for me: I stopped applying for jobs, and redirected all that effort into creating and publishing open source projects that demonstrate competence in the areas of work I want. And, just as importantly, I contribute to big established open source projects in those areas too.
I did not apply for my current job (started 6 months ago): they solicited me, based on my open source work. All the best jobs I've had have been like that, this is the 3rd time it worked.
When I'm unemployed, I only apply for jobs I actually want, typically spending an hour each on 0-2 extremely targeted applications per week. But I treat churning out new open source stuff as my full time job until somebody notices. In addition to successfully landing me three great jobs over the past decade, this approach has made me a much much better programmer.
Also, I strongly believe spending hours a day writing new code will enhance your ability to pass technical interviews much more than gamified garbage like leetcode.
A huge part of making this work is not living a typical valley lifestyle: I plan my life around the median national salary for a software engineer, and when I'm making more than that it all goes straight into my savings. In the bay, that requires living frugally (by bay standards...), but I can't even begin to put into words how grateful I am to past-decade-me for living like that and giving today-me the freedom to turn down the bad jobs and wait for the good ones. Obviously, I don't have children.
I do a lot more open source than a typical programmer in the valley, but I don't think I'm "exceptional" in any sense: you just have to put in the work. I do feel like I was very lucky to start my career in an extremely open-source-centric role, and in fairness that gives me a leg up here which I am probably inclined to underestimate.
In my career, I've worked with some extraordinary people while also encountering the barriers of exclusionary cliques and gatekeeping. These experiences prompted me to examine how professional relationships develop, leading to the creation of the TJS (The Journey to Synergy) Collaboration Model. This framework identifies seven stages that relationships can pass through, from competitive isolation to productive collaboration.
For those striving to build stronger, more impactful connections -- whether in business, creative endeavors, or personal growth -- this model offers a clear lens to understand where you stand and how to move forward.
The 7 Stages of the TJS Collaboration Model: A Quick Digest
Everything is a Competition
Relationships are marked by exclusion and a zero-sum mindset. Gatekeeping and discrimination dominate, with little to no collaboration or shared goals.
Coexist
Acknowledgment of each other's existence without meaningful interaction. There's mutual respect but little effort to engage, often due to differing goals, values, or personalities.
Communicate
Basic exchange of information occurs, but interactions remain shallow. Conversations may begin, but follow-through and deeper engagement are often lacking.
Cooperate
Parties work together on neutral, low-stakes tasks with transactional motives. Cooperation may lead to future opportunities but doesn't yet involve deep trust or shared investment.
Coordinate
One party adopts the other's goal and takes deliberate steps to align efforts. Trust begins to form as actions are coordinated for mutual benefit, laying the groundwork for deeper collaboration.
Collaborate
A shared project is created together, with both parties contributing equally and meaningfully. Trust, understanding, and synergy define this stage, as both sides grow from the partnership.
We Are the Same
A toxic state where boundaries dissolve, leading to unhealthy co-dependence. Individuality is lost, and relationships suffer from over-enmeshment and burnout.
The right amount of engagement that you should have in your team's projects is also a tricky subject. Lean in too much, and you're micromanaging; lean out too much, and you appear disengaged.
To find the right balance, consider the concept of Guided Autonomy. This means setting clear goals and expectations, then stepping back and letting your team figure out how to achieve them.
As an individual contributor (IC), your work spoke for itself; people could easily see it. Plain and simple. As a manager, it's less black and white, and surprisingly, for many new managers, part of your job now involves managing how others see you.
# Outputs markdown link, with clipboard contents as the URL
- trigger: ":md-link"
  replace: "[$|$]({{clipboard}})"
  vars:
    - name: "clipboard"
      type: "clipboard"

# Creates a HTML anchor element, with clipboard contents as href
- trigger: ":html-link"
  replace: "<a href=\"{{clipboard}}\" />$|$</a>"
  vars:
    - name: "clipboard"
      type: "clipboard"

# Outputs BB Code link, with clipboard contents as the URL
- trigger: ":bb-link"
  replace: "[url={{clipboard}}]$|$[/url]"
  vars:
    - name: "clipboard"
      type: "clipboard"
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain *by hand* (e.g. for config files). It is not intended to be used for machine-to-machine communication. (Keep using JSON or other file formats for that.)
{
  // comments
  unquoted: 'and you can quote me on that',
  singleQuotes: 'I can use "double quotes" here',
  lineBreaks: "Look, Mom! \
No \\n's!",
  hexadecimal: 0xdecaf,
  leadingDecimalPoint: .8675309, andTrailing: 8675309.,
  positiveSign: +1,
  trailingComma: 'in objects', andIn: ['arrays',],
  "backwardsCompatible": "with JSON",
}
It's this soul-crushing cycle of copying and pasting the same information over and over again, tweaking your resume for the 100th time, and writing cover letters that make you sound desperate without actually sounding desperate.
But here's the thing: repetitive tasks + structured process = perfect automation candidate.
So I did what any sane developer would do - I built a system to automate the whole damn thing. By the end, I had sent out 250 job applications in 20 minutes. (The irony? I got a job offer before I even finished building it. More on that later.)
SingleFile, a tool for web archiving, commonly stores web page resources as data URIs. However, this approach can be inefficient for large resources. A more elegant solution emerges through combining the ZIP format's flexible structure with HTML. We'll then take it a step further by encapsulating this entire structure within a PNG file.
A fast, memory efficient streaming query engine, written in C++ and compiled for WebAssembly, Python and Rust, with read/write/streaming for Apache Arrow, and a high-performance columnar expression language based on ExprTK.
A framework-agnostic User Interface packaged as a Custom Element, powered either in-browser via WebAssembly or virtually via WebSocket server (Python/Node).
A JupyterLab widget and Python client library, for interactive data analysis in a notebook, as well as scalable production Voila applications.
Termo is a simple terminal emulator that can be used to create a terminal-like interface on your website. It is inspired by the terminal emulator on stripe.dev. It is a wrapper on top of xterm.js.
In this tutorial, we will guide you through the process of installing Docker on your Android phone, specifically using a OnePlus 6T with postmarketOS. I also wrote another blog post explaining how you can run this phone without a battery, allowing it to run forever as long as it remains connected to a power source. If you're interested, feel free to check it out! This guide can be adapted only for phones on the postmarketOS device list. Please note that this process will erase all data on your phone, so it's important to use a device you don't need. Let's get started!
In other words, you can write to your SQLite database while offline. I can write to mine while offline. We can then both come online and merge our databases together, without conflict.
In technical terms: cr-sqlite adds multi-master replication and partition tolerance to SQLite via conflict free replicated data types (CRDTs) and/or causally ordered event logs.
When modeling a Postgres database, you probably don't give much thought to the order of columns in your tables. After all, it seems like the kind of thing that wouldn't affect storage or performance. But what if I told you that simply reordering your columns could reduce the size of your tables and indexes by 20%? This isn't some obscure database trick -- it's a direct result of how Postgres aligns data on disk.
In this post, I'll explore how column alignment works in Postgres, why it matters, and how you can optimize your tables for better efficiency. Through a few real-world examples, you'll see how even small changes in column order can lead to measurable improvements.
I've been working professionally for the better part of a decade on web apps and, in that time, I've had to learn how to use a lot of different systems and tools. During that education, I found that the official documentation typically proved to be the most helpful.
Except... Postgres. It's not because the official docs aren't stellar (they are!) -- they're just massive. For the current version (17 at the time of writing), if printed as a standard PDF on US letter-sized paper, it's 3,200 pages long. It's not something any junior engineer can just sit down and read start to finish.
So I want to try to catalog the bits that I wish someone had just told me before working with a Postgres database. Hopefully, this makes things easier for the next person going on a journey similar to mine.
Note that many of these things may also apply to other SQL database management systems (DBMSs) or other databases more generally, but I'm not as familiar with others so I'm not sure what does and does not apply.
I recently stumbled across this pattern on a Hacker News post. It's a neat toy, but I had a hard time finding a good explanation (most of the information I found jumped straight into examples before really motivating what was going on). In this post, I'll try to derive the pattern from first principles instead.
You know those moments when your code feels sluggish, and you wonder if there's a better way? Sometimes, there is. Daniel Lemire recently shared a cool story about swapping a Python script for a custom C++ utility and saving their company a ton of cash. The gist? Their Python script, used to process a JSON file every few seconds, was hogging a full CPU core. They reworked it into a C++ program using some smart libraries like simdjson, and the difference was night and day: over ten times faster, turning a snail into a lightning bolt.
Python is great for getting things up and running quickly, but when performance really matters -- like shaving off milliseconds in a process that runs all day -- C++ can be a game changer. It takes more effort to write, sure, but the payoff in speed and efficiency can be huge. Of course, it's not all rainbows; setting up dependencies and dealing with compilation takes extra time. But tools like CMake and CPM are making that part a lot less painful these days.
Python's convenience makes it perfect for many tasks, but when you're pushing the limits of performance, don't be afraid to roll up your sleeves and dive into C++. It's a little extra work upfront, but when the results are this good, it's worth it. Plus, you might even impress your team with how much you can squeeze out of your hardware. Sometimes, the old-school tools are still the best ones for the job.
Python comes with a lot of bundled functionality, whereas C++ requires you to give more thought to dependencies. Thankfully, CMake together with CPM makes recovering the dependencies fairly painless.
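These notes don't include the article's actual code, so purely as an illustration of what the C++ side can look like, here is a minimal sketch built on simdjson's documented on-demand API (the file name and field path are placeholders, not taken from the article):

```cpp
// Sketch: reading one numeric field from a JSON file with simdjson.
// File name and field names are placeholders, not taken from the article.
#include <cstdint>
#include <iostream>
#include "simdjson.h"

int main() {
    simdjson::ondemand::parser parser;
    simdjson::padded_string json = simdjson::padded_string::load("metrics.json");
    simdjson::ondemand::document doc = parser.iterate(json);

    // On-demand parsing only materializes the fields you actually ask for.
    std::cout << std::uint64_t(doc["search_metadata"]["count"]) << " results\n";
}
```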
Over the past two years, the United States Government has been issuing warnings about memory-unsafe programming languages with increasing urgency. Much of the country's critical infrastructure relies on software written in C and C++, languages which are very memory unsafe, leaving these systems more vulnerable to exploits by adversaries.
CMake: This is a bundle of the Lua Programming Language v5.4.4 that provides a modern CMake script for easy inclusion into projects and installation. For usage instructions, see the next section.
This article explores and explains modern graph neural networks. We divide this work into four parts. First, we look at what kind of data is most naturally phrased as a graph, and some common examples. Second, we explore what makes graphs different from other types of data, and some of the specialized choices we have to make when using graphs. Third, we build a modern GNN, walking through each of the parts of the model, starting with historic modeling innovations in the field. We move gradually from a bare-bones implementation to a state-of-the-art GNN model. Fourth and finally, we provide a GNN playground where you can play around with a real-world task and dataset to build a stronger intuition of how each component of a GNN model contributes to the predictions it makes.
OpenAI's new o3 system - trained on the ARC-AGI-1 Public Training set - has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.
ARC-AGI serves as a critical benchmark for detecting such breakthroughs, highlighting generalization power in a way that saturated or less demanding benchmarks cannot. However, it is important to note that ARC-AGI is not an acid test for AGI -- as we've repeated dozens of times this year. It's a research tool designed to focus attention on the most challenging unsolved problems in AI, a role it has fulfilled well over the past five years.
Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don't think o3 is AGI yet. o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.
When building applications with LLMs, we recommend finding the simplest solution possible, and only increasing complexity when needed. This might mean not building agentic systems at all. Agentic systems often trade latency and cost for better task performance, and you should consider when this tradeoff makes sense.
The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.
GenAI can't handle High Complexity
If you've tried tools like Cursor or Aider for professional coding, you know that their performance is highly sensitive to the complexity of the code you're working on. They provide a dramatic speedup when applying pre-existing patterns, and when making use of existing interfaces or module relationships. However, in "high-debt" environments with subtle control flow, long-range dependencies, and unexpected patterns, they struggle to generate a useful response.
TutoriaLLM is a self-hosted programming learning platform for K-12 Education that can be used on the web. It is designed for those who create educational content and those who learn from it.
The most common cause of AI project failure? It's not the technology -- it's the people at the top. Business leaders often misunderstand or miscommunicate what problems need to be solved using AI. As one interviewee put it, "They think they have great data because they get weekly sales reports, but they don't realize the data they have currently may not meet its new purpose."
Many executives have inflated expectations of what AI can achieve, fueled by salespeople's pitches and impressive demonstrations. They underestimate the time and resources required for successful AI implementation. One interviewee noted, "Often, models are delivered as 50 percent of what they could have been" due to shifting priorities and unrealistic timelines.
Data quality emerged as the second most significant hurdle. "80 percent of AI is the dirty work of data engineering," an interviewee stated. "You need good people doing the dirty work -- otherwise their mistakes poison the algorithms."
WonderWorld allows real-time rendering and fast scene generation. This lets a user navigate existing scenes and specify where new scenes are generated and what they contain. Here are examples where a user specifies scene contents (via text) and locations (via camera movement) to create a virtual world. Videos here are accelerated.
I'm increasingly building entire functional prototypes from start to finish using Claude 3.5 Sonnet. It's an amazing productivity boost. Here are a few recent examples:
Image Resize Quality Tool: This is a tool for dropping in an image and instantly seeing resized versions of that image at different JPEG qualities, each of which can be downloaded. I used to use the (much better) Squoosh for this, but my cut-down version is optimized for my workflow (picking the smallest JPEG version that remains legible). Notes and prompts on how I built it are available here.
django-http-debug: This is an actual open-source Python package I released that was mostly written for me by Claude. It's a webhooks debugger where you can set up a URL, and it will log all incoming requests to a database table for you. Notes on how I built it are available here.
datasette-checkbox: This is a Datasette plugin that adds toggle checkboxes to any table with is_ or has_ columns. An animated demo and prompts showing how I built the initial prototype can be found here.
Gemini BBox Tool: This is a tool for trying out Gemini 1.5 Pro's ability to return bounding boxes for items it identifies. You'll need a Gemini API key for this one, or you can check out the demo and notes here.
Gemini Chat Tool: This is a similar tool for trying out different Gemini models (Google released three more yesterday) with a streaming chat interface. Notes on how I built it are available here.
I still see some people arguing that LLM-assisted development like this is a waste of time, and they spend more effort correcting mistakes in the code than if they had written it from scratch themselves.
I couldn't disagree more. My development process has always started with prototypes, and the speed at which I can get a proof-of-concept prototype up and running with these tools is quite frankly absurd.
Long talk! Only a list of the topics covered follows. I personally want to focus on "Inheritance and Virtual Functions" and "Template-Based Dependency Injection" with concepts. Concepts look really cool; a short sketch of that approach follows the topic list.
Methods of Dependency Injection
Link-Time Dependency Injection
Overview and explanation
Issues with link-time DI (fragility, undefined behavior, ODR violations)
Reasons to avoid link-time DI in modern systems
Inheritance and Virtual Functions
Base class and derived classes for DI
Interface-based DI (abstract interfaces)
Drawbacks (interface bloat, large interface sizes, tight coupling)
Template-Based Dependency Injection
Using templates to achieve DI
Benefits of compile-time DI
Concepts (C++20) for template constraints
Pros and cons of using templates for DI
Type Erasure (std::function)
Using std::function for DI
Flexibility and run-time benefits
Overhead and runtime costs of std::function
Null Object Pattern
Creating "null" objects for dependency injection
Use cases and benefits
How to use null objects for testing
Setter Injection
Description of setter-based DI
Problems with setter injection (state mutation, initialization order issues)
Why setter injection is generally avoided
Method Injection
Description of method-level DI
Pros (clearer interfaces) and cons (interface bloat)
Constructor Injection
Constructor-level DI for immutability
Best practices for constructor injection
Drawbacks (API changes, large constructor argument lists)
Dependency Suppliers (Factory Functions)
Using supplier functions to control dependency injection
How dependency suppliers differ from service locators
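Since the notes single out the concepts-based approach, here is a minimal sketch of what template-based DI constrained by a C++20 concept can look like; the names are invented and this is not taken from the talk's slides:

```cpp
// Sketch: compile-time dependency injection constrained by a C++20 concept.
#include <concepts>
#include <iostream>
#include <string_view>

// The concept describes exactly what the component needs from its dependency.
template <typename T>
concept Logger = requires(T& t, std::string_view msg) {
    { t.log(msg) } -> std::same_as<void>;
};

// The dependency is injected as a template parameter and checked at compile time,
// so there is no virtual dispatch and no interface bloat.
template <Logger L>
class OrderService {
public:
    explicit OrderService(L& logger) : logger_(logger) {}
    void place_order() { logger_.log("order placed"); }
private:
    L& logger_;
};

struct ConsoleLogger {
    void log(std::string_view msg) { std::cout << msg << '\n'; }
};

int main() {
    ConsoleLogger logger;
    OrderService<ConsoleLogger> service{logger};  // swap in a test double in unit tests
    service.place_order();
}
```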
Nomad Push is a 38-year-old Japanese man who's homeless and travels all over Japan. On his YouTube channel, he shares his daily life in a really honest and down-to-earth way. You'll see him doing things like:
Sleeping in train stations
Exploring abandoned houses
Cooking simple meals in parks
Even though he's dealing with tough times, his videos feel positive and show a side of life most people don't get to see. A lot of people watching his channel say it's inspiring, and he's built a big community of fans who support him. When he hit 100,000 subscribers, another YouTuber, Oriental Pearl, even threw a celebration for him, which shows how much people believe in him.
If you're learning Japanese, this channel is a goldmine. His videos are full of real Japanese conversations, and he adds subtitles to help viewers follow along. It's great practice for understanding how people actually talk in Japan.
Nomad Push's channel is like a window into his life and a journey across Japan at the same time. It's simple, real, and worth checking out if you're curious about a different way of seeing the world.
Inside the Life of Nito: A Hikikomori Turned Game Developer
Nito, a hikikomori living in Kobe, Japan, has spent the past decade in near-total isolation. Far from idle, he has dedicated the last five years to developing Pull Stay, an old-school beat-em-up game reflecting his experiences as a recluse. The protagonist, a hikikomori himself, battles societal judgment -- a theme close to Nito's heart. Using Unreal Engine, he has taught himself coding, 3D design, and storytelling to bring his vision to life.
A Creative Path Born from Setbacks
After graduating from the University of Tokyo, Nito struggled to find his footing in traditional creative fields like writing and doujinshi (independent manga). He shifted to game development when tools like Unreal Engine became accessible. Despite the steep learning curve and his limited English skills, Nito found purpose in creating something meaningful on his own terms.
Breaking Stereotypes and Defying Odds
Nito's life defies the typical hikikomori stereotype of idleness and dependence. His determination and self-taught skills showcase resilience, proving isolation doesn't equate to a lack of ambition. Through Pull Stay, he turns personal struggles into a story that others can relate to and enjoy.
What's Next?
With Pull Stay nearing release on Steam, Nito hopes its success will enable him to collaborate with other creators and travel the world. If it doesn't take off, he plans to use the game as a portfolio to break into the industry. For now, his story serves as an inspiring reminder of the power of creativity and persistence.
Support Nito by checking out Pull Stay on Steam or sharing his journey with others.
What Are Ghost Engineers?
Ghost engineers are unproductive employees contributing less than 10% of a median engineer's output. They account for up to 10% of the workforce and cost companies $90 billion annually. These individuals often perform minimal tasks, such as making fewer than three commits a month or trivial changes, while collecting full salaries.
Key Insights:
Economic Impact: Eliminating ghost engineers could save companies billions and add $465 billion to market caps without reducing performance.
Remote Work Paradox: While top engineers excel remotely, the worst also thrive in remote settings. 14% of remote engineers are ghost engineers compared to 6% in-office.
Cultural Cost: Ghost engineers demoralize motivated teammates and occupy roles that could go to skilled newcomers.
Startupsâ Advantage: Startups avoid this issue by demanding accountability from every team member, contributing to their ability to outperform larger organizations.
Why It Matters:
Ghost engineers don't just waste money -- they stall innovation, hinder team dynamics, and damage the credibility of remote work. Companies have a unique chance during layoffs to address this inefficiency, open doors to fresh talent, and foster a culture of accountability.
The Way Forward:
Fire unproductive workers, improve performance metrics, and rebuild trust in remote work by ensuring accountability. The tech industry's future depends on tackling this hidden crisis.
Hello friends! My name is Eric Wastl, and Advent of Code is a project I created to help programmers improve their skills through small, self-contained challenges. The puzzles start easy and get progressively harder, helping you learn new techniques and develop problem-solving skills. I believe the best way to learn is by solving specific problems, and this project reflects that. We even have C++ in Advent of Code, and I'll touch on where and how during the talk. Drawing from my experience designing systems for ISPs, auction infrastructure, and marketplaces, Advent of Code is all about celebrating learning, curiosity, and the joy of programming for everyone, no matter their level.
This talk provides valuable insights into handling integers in C++. Integers are fundamental in any program, but improper handling can lead to subtle bugs, undefined behavior, and poor performance. This content explores the complexities of signed and unsigned integers, common mistakes, and how to optimize performance. By understanding these nuances, you'll avoid common pitfalls, write more efficient code, and improve the overall robustness of your applications.
The Basics of Signed and Unsigned Integers
Representation in Memory
Unsigned Integers: Values follow plain modulo 2^N arithmetic. Overflow behavior is well-defined, which means operations that exceed the maximum value wrap around predictably.
Signed Integers: Historically, C++ supported various representations like one's complement and two's complement. Since C++20, two's complement is the standard representation, but overflow remains undefined, and operations involving signed integers require careful handling to avoid unexpected behavior.
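A minimal sketch (not from the talk itself) contrasting the two behaviors; the unsigned wrap-around is guaranteed by the standard, while the commented-out signed increment would be undefined behavior:

#include <cstdint>
#include <iostream>

int main() {
    // Unsigned arithmetic is modular: exceeding the maximum wraps around.
    std::uint32_t u = 4294967295u;  // largest 32-bit unsigned value
    u += 1;                         // well-defined: wraps to 0
    std::cout << u << '\n';         // prints 0

    // Signed overflow is undefined behavior, even though the representation
    // is two's complement since C++20. The compiler may assume it never
    // happens and optimize accordingly.
    std::int32_t s = 2147483647;    // INT32_MAX
    // s += 1;                      // undefined behavior: do not rely on wrapping
    (void)s;
    return 0;
}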
Performance Considerations
Signed integers often involve additional steps in assembly code, such as preserving the sign bit during division or right shifts. This makes operations on signed integers slower compared to their unsigned counterparts, especially in performance-critical code.
For example, unsigned division by two can be replaced by a simple bit shift. Signed division, on the other hand, requires arithmetic shifts that preserve the sign bit, adding extra overhead.
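A hedged sketch of what this looks like in source code; the exact instruction sequences depend on the compiler and target, but the signed version generally needs extra sign-correction work:

#include <cstdint>

// Unsigned division by two: typically a single logical right shift.
std::uint64_t half_unsigned(std::uint64_t x) {
    return x / 2;
}

// Signed division by two must round toward zero for negative values,
// so compilers usually emit an arithmetic shift plus a small adjustment
// rather than a bare shift.
std::int64_t half_signed(std::int64_t x) {
    return x / 2;
}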
Best Practices for Handling Integers
Use Fixed-Width Integer Types
Explicitly use types like int32_t, uint64_t, and size_t when appropriate. These make your code portable and clear about the expected range of values.
Prefer Signed Types Unless Necessary
Unsigned integers should only be used when their wrapping behavior is explicitly desired. For most use cases, signed integers are safer and less prone to subtle bugs.
Leverage C++20 and C++23 Features
Modern C++ provides tools like std::ssize and type traits that simplify working with integers. Use these features to avoid common pitfalls and ensure correctness.
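For example, std::ssize (C++20) returns a signed size, which avoids the classic unsigned-underflow trap in reverse loops. A minimal sketch:

#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};

    // With v.size() (unsigned), "i >= 0" stays true once i wraps around,
    // so the classic reverse loop misbehaves. std::ssize returns a signed
    // value, and the condition behaves as intended.
    for (auto i = std::ssize(v) - 1; i >= 0; --i) {
        std::cout << v[i] << ' ';
    }
    std::cout << '\n';
    return 0;
}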
Treat Warnings as Errors
Enable strict compiler warnings (-Wall, -Wextra, and -Werror) and sanitizers to catch potential issues early. Compiler tools can often detect problems like signed-unsigned mismatches before they cause runtime errors.
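As a small illustration of the kind of mistake these flags catch (a sketch, compiled with something like g++ -std=c++20 -Wall -Wextra -Werror):

#include <cstddef>
#include <iostream>

int main() {
    int wanted = -1;
    std::size_t count = 3;

    // -Wsign-compare (part of -Wall/-Wextra) flags this comparison: 'wanted'
    // is converted to an unsigned type, -1 becomes a huge value, and the
    // condition is false. With -Werror the code refuses to build until fixed.
    if (wanted < count) {
        std::cout << "wanted is below count\n";
    } else {
        std::cout << "surprisingly not\n";
    }
    return 0;
}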
Avoid Overusing auto
While auto simplifies code, it can obscure type information, leading to unexpected behavior. Be explicit with integer types, especially in loops and arithmetic operations.
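A small sketch of how auto can hide an unsigned type and produce a surprising result:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{10, 20, 30};

    // auto deduces std::size_t (unsigned), so the subtraction wraps around
    // instead of going negative.
    auto n = v.size();
    std::cout << n - 5 << '\n';  // a huge value, not -2

    // Being explicit keeps the arithmetic intuitive.
    std::ptrdiff_t m = static_cast<std::ptrdiff_t>(v.size());
    std::cout << m - 5 << '\n';  // -2
    return 0;
}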
People call some code legacy when they are not happy with it. Usually it simply means they did not write it, so they don't understand it and don't feel safe changing it. Sometimes it also means the code has low quality or uses obsolete technologies. Interestingly, in most cases the legacy label is about the people who assign it, not the code it labels. That is, if the original authors were still around, the code would not be considered legacy at all.
This model allows us to deduce the factors that encourage or prevent some code from becoming legacy:
The longer programmers' tenures are, the less code will become legacy, since authors will be around to appreciate and maintain it.
The more code is well architected, clear, and documented, the less of it will become legacy, since there is a higher chance the author can transfer it to a new owner successfully.
The more the company uses pair programming, code reviews, and other knowledge transfer techniques, the less code will become legacy, as people other than the author will have knowledge about it.
The more the company grows junior engineers the less code will become legacy, since the best way to grow juniors is to hand them ownership of components.
The more a company uses simple standard technologies, the less likely code will become legacy, since knowledge about them will be widespread in the organization. Ironically, if you define innovation as adopting new technologies, the more a team innovates the more legacy it will have. Every time it adopts a new technology, either it won't work, and the attempt will become legacy, or it will succeed, and the old systems will.
The reason legacy code is so prevalent is that most teams are not good enough at all of the above to avoid it, but maybe you can be.
Beyond the economic and productivity concerns, ghost engineers pose significant security risks. Their lack of meaningful engagement can lead to a few critical issues: unreviewed or improperly tested code changes, unnoticed vulnerabilities, and outdated systems left unpatched. A disengaged engineer might also miss, or deliberately ignore, critical security protocols, creating potential entry points for malicious actors.
When these engineers aren't actively involved in maintaining secure practices, they can create blind spots in a company's defense strategy, increasing the risk of breaches or compliance failures. Threat actors can exploit disengaged engineers through phishing, social engineering, or leveraging neglected updates and poorly reviewed code to infiltrate systems and compromise security. Addressing these gaps requires better oversight and collaborative practices.
Before you start side-eyeing your coworkers, it's worth noting that measuring productivity in software engineering is notoriously tricky. Commit counts or hours logged are often poor indicators of true impact. Some high-performing engineers, the mythical "10x engineers", produce significant results with fewer, well-thought-out contributions.
However, the "ghost engineer" trend exposes systemic inefficiencies in talent management and performance evaluation. Remote work policies, once heralded as a game-changer, are now under the microscope. They've enabled flexibility for many but have also given rise to the ghost engineering phenomenon. The tug-of-war over remote versus in-office work is likely to intensify as companies grapple with these kinds of leadership and accountability issues.
I'm Baldur Bjarnason, a web developer and writer. In my latest essay, I wrote about the decline of Google and its impact on independent publishers.
Here's a quick summary:
Independent Publishers Struggling: Many independent sites are shutting down due to a lack of traffic from Google and Facebook.
Google's Machine Learning Issues: Google's attempt to improve search results with machine learning has backfired, letting spam through and delisting quality content.
Economic Impact: Even frugally run sites can't survive on the remaining traffic, leading to significant financial struggles for creators.
Algorithm Black Box: Google's algorithm has become so complex that even their engineers can't fully understand or fix it.
Monopoly Power: Google's monopoly allows it to capture value without improving product utility, leaving users with fewer alternatives.
Hey there, I'm Lukas Petr, an indie iOS app developer from Prague. Over the past 15 years, I've learned a lot about the ups and downs of indie app development. Here are some key takeaways:
Enjoy the Process: Loving what you do is crucial. If you don't enjoy the journey, it will be tough to stick with it.
Understand Your Motivation: Know why you're doing this. For me, it's about creating something meaningful and useful.
Risk and Reward: The risk is high, but the reward of fulfilling work and ownership is worth it.
Find Your Niche: Focus on what you believe in and what scratches your own itch.
Provide Additional Value: Aim for sustainable value over time, not just quick gains.
Wear Many Hats: Be prepared to handle everything from development to marketing.
Reflect Regularly: Regular introspection helps you stay on track and improve.
Learn and Apply Lessons: Keep evolving and improving based on your experiences.
Find Support: Surround yourself with people who can help propel you forward.
Luck: Sometimes, success involves a bit of luck, but you have to put yourself out there.
I hope you find these insights helpful. If you're pursuing any creative endeavor, I'm rooting for you! Feel free to reach out if you have any questions or comments.
A career-ending mistake isn't always a catastrophic error like shutting down a nuclear power station or deleting a production database; it's often subtler, like failing to plan for the end of your career. The article explores how many of us rush through our professional lives without a clear destination, highlighting that "career" itself can mean "to rush about wildly." It asks the critical questions: "Where do you want to end up? And is that where you're currently heading?" Instead of drifting, the piece advises us to define what we truly want, as "The indispensable first step to getting what you want is this: decide what you want." Whether you're content in your current role or seeking something more fulfilling, understanding your end goal and working intentionally toward it is key to avoiding a career that feels out of control.
Fun quote:
Engineering managers need a solid foundation of technical competence, to be sure, but the work itself is primarily about leading, supervising, hiring, and developing the skills of other technical people. It turns out those are all skills, too, and relatively rare ones.
Managing people is hard; much harder than programming. Computers just do what you tell them, whether that's right or wrong (usually wrong). Anyone can get good at programming, if they're willing to put in enough time and effort. I'm not sure anyone can get good at managing, and most don't. Most managers are terrible.
That's quite a sweeping statement, I know. (Prove me wrong, managers, prove me wrong.) But, really, would a car mechanic last long in the job if they couldn't fit a tyre, or change a spark plug? Would a doctor succeed if they regularly amputated the wrong leg? We would hope not. But many managers are just as incompetent, in their own field, and yet they seem to get away with it.
Growing up, I had a positive view of tech, believing it would bring comfort, less work, and personalized assistance. However, the reality has been different, with tech companies failing to deliver on their promises and instead contributing to issues like disinformation, economic inequality, and environmental harm. While there have been some benefits, such as increased political knowledge and social connections, the negatives now overshadow the positives. The tech utopia fantasy is truly dead to me.
Keep Commits Small: Keep each commit focused on a single change to make it easier to track and revert issues. Code that compiles should be committable.
Refactor Continuously: Follow Kent Beck's advice: make changes easy, then make the easy changes. Frequent, small refactorings prevent complex reworks.
Deploy Regularly: Treat deployed code as the only true measure of progress. Frequent deployments ensure code reliability.
Trust the Framework: Don't test features already covered by the framework; focus on testing your unique functionality, especially with small components.
Organize Independently: If a function doesn't fit anywhere, create a new module. It's better to separate logically independent code.
Write Tests First (Sometimes): If unsure about an API's design, start with tests to clarify requirements. TDD doesn't have to be strict; write code in workable chunks.
Avoid Duplication After the First Copy-Paste: If code is duplicated, it's time for an abstraction. Consolidating multiple versions is harder than parameterizing one.
Accept Design Change: Designs inevitably get outdated. Good software development is about adapting to change, not achieving a "perfect" design.
Classify Technical Debt: Recognize three types of technical debt: immediate blockers, future blockers, and potential blockers. Minimize the first, address the second, and deprioritize the third.
Prioritize Testability in Design: Hard-to-test code hints at design issues. Improve testability through smaller functions or test utilities to avoid skipping tests.
Selling my first business was a journey filled with excitement, stress, and invaluable lessons. I want to share my experiences to help other entrepreneurs who might be considering a similar path. This post is especially relevant for small business owners and startup founders looking to navigate the complexities of a business exit.
Quote:
Used dedicated accounts for the business
Part of what made TinyPilot's ownership handoff smooth was that its accounts and infrastructure were totally separate from my other business and personal accounts:
I always sent emails related to the business from my @tinypilotkvm.com email address.
I always used @tinypilotkvm.com email addresses whenever signing up for services on behalf of TinyPilot.
I kept TinyPilotâs email in a dedicated Fastmail account.
This wasnât true at the beginning. TinyPilot originally shared a Fastmail account with my other businesses, but I eventually migrated it to its own standalone Fastmail account.
I never associated my personal phone number with TinyPilot. Instead, I always used a dedicated Twilio number that forwarded to my real number.
All account credentials were in Bitwarden.
After closing, handing over control was extremely straightforward. I just added the new owner to Bitwarden, and they took over from there. There were a few hiccups around 2FA codes I'd forgotten to put in Bitwarden, but we worked those out quickly.
For example, TinyPilot uses the H.264 video encoding algorithm. It's patented, so we had to get a license from the patent holder before we shipped that feature. During due diligence, we discovered that the patent license forbade me from transferring the license in an asset sale.
I immediately started imagining the worst possible outcome. What if the patent holder realizes they can block the sale, and they demand I pay them $100k? What if the patent holder just can't be bothered to deal with a tiny business like mine, and they block the sale out of sheer indifference?
Like a favourite pair of jeans that's well-worn, comfy, and slightly saggy round the arse, I have a go-to structure for writing. Come to think of it, I use it for lots of conference talks too. It looks like this:
Tell them what you're going to tell them
Tell them
Tell them what you told them
What this looks like in practice is something along these lines:
An intro
What is this thing, and why should the reader care, er, be interested?
This could be a brief explanation of why I am interested in it, or why you would want to read my take on it. The key thing is you're relating to your audience here. Not everyone wants to read everything you write, and that's ok.
Let people self-select out (or in, hopefully) at this stage, but make it nice and easy. For example, if you're writing about data engineering, make it clear to the appdev crowd that they should move on as there's nothing to see here (or stick around and learn something new, but as a visitor, not the target audience).
The article itself
A recap
Make sure you don't just finish your article with a figurative mic drop; tie it up nicely with a bow.
This is where marketing would like to introduce you to the acronym CTA (Call To Action). As an author you can decide how or if to weave that into your narrative.
Either way, you're going to summarise what you just did and give people something to do with it next. Are there code samples they can go and run or inspect? A new service to sign up for? A video to watch? Or just a general life reflection upon which to ponder.
We switched to a monorepo nine months ago, and it's been working well for us. Before, we had multiple repositories, which made things like managing pull requests or syncing changes a hassle. With everything in one place now, the workflow feels smoother and simpler. It wasn't a decision we overanalyzed; it just felt like the right time to try it, and we've been happy with the results.
The main pros? First, there's less repetitive work. Instead of opening multiple pull requests across repos for a single change, now it's just one. Submodules, which were always a pain to manage, are mostly gone. Everything that needs to work together stays in sync naturally. Refactoring has also become easier because we can see the whole picture in one place, which encourages code improvements over time. Plus, being in the same repo has made us feel more connected as a team. Even small things, like seeing everyone's changes when pulling updates, help us stay in the loop without extra effort.
As for cons, we honestly haven't found many. A common concern is that monorepos can get messy or slow as they grow, but for our small team, it hasn't been an issue. We kept it simple (no strict rules, just "don't touch the root folder") and it's been fine. It might not work the same for larger teams or projects with different dynamics, but for us, it's been a clear win.
I spent three years using Rust for game development, and after shipping a few games and writing over 100,000 lines of code, I'm stepping away from it. Rust has some great qualities: its performance is top-notch, and it often lets you refactor confidently. But for fast, iterative development, which is crucial for indie games, it just doesn't align well. The borrow checker and Rust's strictness often force unnecessary refactoring, slowing down the process of prototyping and testing new ideas. Tools like hot reloading, essential for quick feedback loops, are either clunky or nonexistent in Rust. And while the language excels in many technical areas, its game development ecosystem is still young, with fragmented solutions and limited support for things like GUI and dynamic workflows.
For small teams like ours, the priority is delivering fun, polished games quickly. With Rust, I found myself spending more time fighting the language and its ecosystem than focusing on gameplay. Moving forward, we're transitioning to tools that better support rapid iteration and creativity, even if they're less "perfect" on paper.
Onboarding is Key: Users should get started quickly and see results fast.
Fix: Simplify setup. Remove steps and make the tool easy to use immediately. For example, ensure API tokens are ready without extra configuration. The faster users see success, the more likely they'll stick around.
Show Examples First: Abstract explanations confuse users.
Fix: Use examples instead of long concepts. Show how the tool works with real use cases. When I write docs, I always start with practical examples users can copy and tweak.
Errors Need Solutions: Errors frustrate users.
Fix: Make error messages helpful. Suggest fixes and show code snippets. A clear path back to success turns frustration into trust.
Avoid Too Many Ideas: Too much upfront information overwhelms users.
Fix: Keep it simple. Focus on a few core ideas to start. When I design a tool, I aim for 3-5 basic concepts that cover most use cases. Fewer concepts, fewer headaches.
Use Familiar Terms: New words confuse people.
Fix: Use common terms like "function" instead of inventing new ones. I think about how people already think about code and try to fit my tool into their existing mental model.
Flexibility Matters: Rigid tools frustrate creative users.
Fix: Let users program their own solutions with APIs or scripts. Make everything programmable so users can adapt the tool to their needs.
Don't Overdo Magic: Hidden behaviors often fail in edge cases.
Fix: Keep defaults clear and reliable. Avoid adding unnecessary complexity. Unless I'm 99% sure a "magic" behavior will always work, I avoid it. Instead, I focus on being predictable.
Clarity Over Brevity: Short, clever code is hard to read.
Fix: Write clear, readable code. Make it easy to follow. I remind myself: people read code far more than they write it.
When you optimize too much, you can make things worse instead of better. This is the essence of the strong version of Goodhart's Law: when a measure becomes the target, over-optimization can degrade what you originally cared about. This principle, often studied as "overfitting" in machine learning, also applies broadly to systems like education, economics, and governance.
The Problem: When proxies (measurements or secondary goals) are optimized too well, the actual outcomes worsen. For instance, standardized testing shifts focus from genuine learning to test preparation, undermining education. Similarly, rewarding scientists for publications incentivizes trivial or false findings over meaningful progress. Overfitting to proxies creates harmful side effects, from filter bubbles in social media to inequality in capitalism.
How to Fix It: Lessons from Machine Learning
Better Alignment: Make proxies closer to real goals. In machine learning, this involves better data collection. In broader systems, it means crafting laws, incentives, and norms that encourage genuine outcomes, like prioritizing long-term learning over test scores.
Regularization: Introduce penalties or costs for extreme behaviors. Just as machine learning uses mathematical constraints, systems can add friction:
Tax extreme wealth disparities or excessive lawsuits.
Impose costs for high-volume actions, like bulk emails or algorithmic trading.
Penalize complexity to discourage harmful optimization.
Inject Noise: Add randomness to disrupt harmful optimization. Examples include:
Randomized selection in competitive admissions to reduce over-preparation.
Random trade processing delays to stabilize financial markets.
Unpredictable testing schedules to encourage holistic studying.
Early Stopping: Halt optimization before it spirals out of control. In systems, this could mean:
Capping time spent on decision-making relative to its stakes.
Freezing certain information flows, like press blackouts before elections.
Splitting monopolies to prevent market over-consolidation.
Restrict or Expand Capabilities:
Restrict: Limit system capacities to prevent runaway effects, like capping campaign finances or AI training resources.
Expand: In some cases, more capacity reduces trade-offs, such as developing clean energy or transparent information systems.
BibTeX entry for post:
@misc{sohldickstein20221106,
  author = {Sohl-Dickstein, Jascha},
  title = {{Too much efficiency makes everything worse: overfitting and the strong version of Goodhart's law}},
  howpublished = "\url{https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart.html}",
  date = {2022-11-06}
}
Google used to measure how well developer tools worked by evaluating how they supported certain tasks, like "debugging" or "writing code." However, this approach often lacked specificity that would be useful for tooling teams. For instance, "searching for documentation" is a common task, but the reason behind it, whether it's to "explore technical solutions" or to "understand the context to complete a work item", can meaningfully change a developer's experience and how well tools support them in achieving their goal.
To provide better insights, Google researchers identified the key goals developers are trying to achieve in their work and developed measurements for each goal. In this paper, they explain their process and share an example of how this new approach has benefited their teams.
As companies scale, they often shift from the agile, conviction-driven "Founder mode" to "Bureaucrat Mode," where decision-making slows, and processes dominate. While startups thrive on speed and direct action, large organizations tend to create committees, expand scopes, and reward consensus over outcomes. These tendencies, while rooted in good intentions like collaboration and stability, can cripple innovation and efficiency when scaled excessively.
The Problem: Bureaucrat Mode emerges as companies grow, driven by processes meant to manage complexity. However, these processes often become self-perpetuating, encouraging behaviors that prioritize internal metrics, visibility, and team expansion over meaningful results. Bureaucrats, focused on navigating processes rather than solving problems, replicate themselves by hiring others who thrive in such environments. This cycle of self-replication entrenches inefficiency and resistance to change.
When dealing with Product teams about your architecture proposal, picture yourself as a plumber who's trying to sell different service packages. This analogy highlights how you should present your technical proposals to Product in a way that aligns with their focus on business value. They're not interested in technical jargon; they want to know how your architecture decision translates into a return on investment.
Remember that Product people are looking for results. Instead of overwhelming them with details about OLTP systems or ETL processes, you need to frame your explanation as a negotiation, highlighting the costs and benefits of each option, just like the plumber did with his service packages.
"Product doesn't give a shit about how your data is stored. Product cares about products."
The essence here is to avoid diving into the weeds of indexes or table joins until they understand the impact on their budget and timeline. When they ask, "Why is this so expensive?" that's your cue to explain, in clear terms, the complexity involved in implementing things like OLAP systems or setting up ETL processes.
Approach your conversation by outlining different "packages", starting with the platinum package that covers all technical needs but at a higher cost. This sets the stage for a value discussion, where Product sees the full picture and starts to understand the trade-offs involved.
"Now you can (gently) talk to them about the difference between online transaction processing systems (OLTP) and online analysis processing systems (OLAP)."
The trick is to guide Product through a step-by-step decision-making process, laying out each feature as a line item on an invoice. This approach helps them grasp which elements of your proposal can be trimmed down or delayed to fit within their budget constraints. For example, if they can't afford a new OLAP system, offer scaled-down options, and negotiate on scope and time rather than quality.
One of the most crucial points is not to compromise on quality. In software development, you should avoid falling into the trap of lowering standards just to meet short-term goals. Sacrificing quality often leads to delivering subpar products that can damage customer satisfaction in the long run. As the article states, "What's worse, delivering something a customer actually hates, or delivering nothing at all?" Maintaining a baseline of quality ensures that even with limited resources, you're delivering something worthwhile.
If the Product team suggests cutting corners to fit the project into a two-week sprint, resist the temptation. The iron triangle of software development (time, scope, and budget) should always consider quality as a non-negotiable factor.
Ultimately, you're helping Product to ruthlessly prioritize tasks to deliver the best possible outcomes within the given constraints. In these negotiations, scope will often be the main variable that can be adjusted to balance the budget and timeline. And when the tables turn, and it's your idea that needs their buy-in, present it in terms of ROI to make a compelling case.
Think like a plumber: when you know the value of what you're selling, it's easier to convince others to invest in the right solution instead of a quick fix. Always push for a solution that maintains a minimum level of quality, even if it means delivering less within the same time frame.
This blog narrates an engineer's daily struggle with an overly complex and inefficient data warehouse system. Despite working within an ostensibly supportive team, the engineer describes their workplace as a "Pain Zone," rife with convoluted processes, unchecked errors, and cultural dissonance. Here's a detailed breakdown of the main points:
The story begins with a ritual of starting the day with a senior engineering partner. Together, they embark on a shared mission to navigate the "Pain Zone," their term for the warehouse system plagued by unnecessary complexity. The data warehouse in question involves copying text files from different systems, and ideally, this process should require only ten steps. However, the engineer discovers over 104 discrete operations in the architecture diagram, a staggering example of the platform's inefficiency.
"Retrieve file. Validate file. Save file. Log what you did. Those could all be one point on the diagram...That's ten. Why are there a hundred and four?"
The engineer describes the necessity of "Pain Zone navigation," a practice where engineers rely on pair programming for moral support to withstand the psychological toll of working in such an environment. The issue isn't only technical; it's deeply cultural. A culture that demands velocity while disregarding craftsmanship fosters an atmosphere where complexity and inefficiency go unchallenged. This attitude, the author suggests, results in the degradation of code quality, with engineers penalized for trying to refactor code.
To illustrate the dysfunction further, the author recounts a routine task: checking if data from sources like Google Analytics is flowing correctly. What they find instead is garbled JSON strings dumped in the logs without logical structure, with 57,000 distinct entries where there should be fifty. This revelation shows that for over a year, the team has been collecting "total nonsense" in the logs.
"We only have two jobs. Get the data and log that we got the data. But the logs are nonsense, so we aren't doing the second thing, and because the logs are nonsense I don't know if we've been doing the first thing."
Rather than address this critical error, management insists on working with the erroneous logs to maintain "velocity," a term often implying efficiency but, in this case, prioritizing speed over accuracy. The author describes the frustration of being told to parse nonsensical data instead of fixing the core issuesâa situation summarized by the team motto: "Stop asking questions, you're only going to hurt yourself."
The cultural disconnect deepens as the author tries to work with data from Twitter, only to find that log events lack an event ID. A supposed expert suggests using a column with ambiguous file path strings, each lacking logical identifiers, requiring complex regular expressions to infer events.
"I am expected to use regular expressions to construct a key in my query."
In yet another disheartening revelation, the author learns that the Validated: True log entries are merely hardcoded placeholders, not actual validation statuses. The logs fail to capture real system states, effectively undermining auditability.
By the end, the author reaches a breaking point, realizing their values diverge sharply from those of the organization. This disconnect prompts them to resign, choosing to invest their time in personal projects and consulting instead. In a closing reflection, they criticize the industry for investing in trendy tools like Snowflake and Databricks, without hiring engineers who understand how to design simple, effective systems.
"I could build something superior to this with an ancient laptop, an internet connection, and spreadsheets. It would take me a month tops."
This piece is a critique of both overly complex architectures and a corporate culture that prioritizes speed over quality. It highlights the importance of valuing craftsmanship and straightforward design in building sustainable and efficient data systems.
With half of the jobs eliminated by robots, what happens to all the people who are out of work? The book Manna explores the possibilities and shows two contrasting outcomes, one filled with great hope and the other filled with misery.
Join Marshall Brain, founder of HowStuffWorks.com, for a skillful step-by-step walk through of the robotic transition, the collapse of the human job market that results, and a surprising look at humanity's future in a post-robotic world.
Then consider our options. Which vision of the future will society choose to follow?
The building we exited was another one of the terrafoam projects. Terrafoam was a super-low-cost building material, and all of the welfare dorms were made out of it. (Chapter 4)
In conflict situations, individuals often exhibit different behavioral strategies based on their approach to managing disagreements. Avoiding is one strategy, and here are four others, alongside avoiding, commonly identified within conflict management models like the Thomas-Kilmann Conflict Mode Instrument (TKI):
Avoiding
Behavior: The individual sidesteps or withdraws from the conflict, neither pursuing their own concerns nor those of the other party.
When it's useful: When the conflict is trivial, emotions are too high for constructive dialogue, or more time is needed to gather information.
Risk: Prolonging the issue may lead to unresolved tensions or escalation.
Competing
Behavior: The individual seeks to win the conflict by asserting their own position, often at the expense of the other party.
When it's useful: When quick, decisive action is needed (e.g., in emergencies) or in matters of principle.
Risk: Can damage relationships and lead to resentment if overused or applied inappropriately.
Accommodating
Behavior: The individual prioritizes the concerns of the other party over their own, often sacrificing their own needs to maintain harmony.
When it's useful: To preserve relationships, resolve minor issues quickly, or demonstrate goodwill.
Risk: May lead to feelings of frustration or being undervalued if used excessively.
Compromising
Behavior: Both parties make concessions to reach a mutually acceptable solution, often splitting the difference.
When it's useful: When a quick resolution is needed and both parties are willing to make sacrifices.
Risk: May result in a suboptimal solution where neither party is fully satisfied.
Collaborating
Behavior: The individual works with the other party to find a win-win solution that fully satisfies the needs of both.
When it's useful: When the issue is important to both parties and requires creative problem-solving to achieve the best outcome.
Risk: Requires time and effort, which may not always be feasible in time-sensitive situations.
Each of these strategies has its strengths and limitations, and the choice of approach often depends on the context of the conflict, the relationship between the parties, and the desired outcomes.
In this deeply personal blog post, the author reflects on the mental health struggles that many people face, sharing candid experiences with burnout and severe depression. They emphasize that everyone will have times when they are "Not Okay," and it's important to acknowledge this without shame. Through their own journey of overcoming hardshipâranging from academic pressures to toxic workplacesâthey highlight the significance of seeking help, making lifestyle changes, and understanding that recovery is possible. The author encourages readers to care for themselves and others, reminding us that empathy and support can make a profound difference in navigating life's challenges.
DuckStation is a simulator/emulator of the Sony PlayStation(TM) console, focusing on playability, speed, and long-term maintainability. The goal is to be as accurate as possible while maintaining performance suitable for low-end devices.
WebVM is a virtual Linux environment running in the browser via WebAssembly. It is powered by the CheerpX virtualization engine, which enables safe, sandboxed, fully client-side execution of x86 binaries. CheerpX includes an x86-to-WebAssembly JIT compiler, a virtual block-based file system, and a Linux syscall emulator. [News] WebVM 2.0: A complete Linux Desktop Environment in the browser: https://labs.leaningtech.com/blog/webvm-20. Try out the new Alpine / Xorg / i3 WebVM: https://webvm.io/alpine.html
McMaster-Carr's website, www.mcmaster.com, is renowned for its speed, achieved through minimalist design, server-side rendering, and strategic use of technology like ASP.NET and JavaScript libraries. Prefetching techniques preload pages as users hover, ensuring near-instant navigation, while CDNs cache content globally to reduce latency. This streamlined, user-focused approach lets customers quickly access and order from McMaster-Carr's extensive catalog, making it a leader in industrial supply and a favorite for its seamless, efficient experience.
I got pretty good in Mandarin within 12 months of rigorous part-time study. I'm not even close to perfectly fluent, but I got far into intermediate fluency. Read my personal story of learning Mandarin here: isaak.net/mandarin
This post on my Methods of Mandarin (MoM) is for fellow language learners and autodidacts. This isn't a thorough how-to guide. I won't be holding your hand. It's more like a personal notebook of what worked for me. I'm sharing my personal Anki deck and then I'll describe all my methods and tips. People's styles and methods differ.
Maintain vision health by getting regular eye check-ups, using appropriate glasses, and addressing night driving challenges with clean windshields and adaptive lighting.
Build physical strength and stamina by incorporating strength training (e.g., push-ups, squats) and aerobic activities like walking or biking into daily life.
Reduce pain and joint issues with anti-inflammatories like naproxen as needed and by focusing on flexibility and range-of-motion exercises.
Protect hearing through regular hearing tests starting at age 50, using hearing aids if necessary, and avoiding loud environments or overly high headphone volumes.
Improve nutrition by prioritizing fruits, vegetables, and whole foods while limiting ultra-processed items. Eat meals made with care and hydrate appropriately.
Enhance sleep quality by focusing on creating a comfortable sleep environment ("sleep joy") and getting the amount of rest your body needs without guilt.
Safeguard brain health using organizational strategies, pursuing lifelong learning, and embracing new tools and technologies to stay sharp.
Foster emotional resilience by prioritizing gratitude and optimism, avoiding unnecessary negativity, and working toward a calm and joyful outlook.
Adapt to changes in ability by recognizing limitations as they arise and addressing them proactively with tools, technology, and support systems.
Combat workplace biases against older programmers by emphasizing your experience, exploring consulting or freelancing, and pushing back against assumptions about learning capacity.
Plan for retirement by calculating your financial "number," balancing saving with enjoying the present, and planning meaningful activities to avoid boredom and isolation.
Improve work-life balance through flexible work arrangements, prioritizing health, and focusing on work that aligns with your values and passions.
Build relationships by maintaining friendships across generations and engaging with new communities through hobbies, volunteering, or neighborhood activities.
Prevent loneliness by cultivating social engagement in retirement through structured activities, regular interactions, or volunteering.
Develop healthy habits by avoiding smoking, using sunscreen, and embracing preventive measures like vaccinations.
Incorporate joy and play into daily life through hobbies, nature, and small pleasures, focusing on activities that spark happiness and relaxation.
Create a lasting legacy by organizing and preserving personal and professional projects, ensuring they are meaningful and accessible for others.
Handle loss and change by accepting the inevitability of loss while actively seeking new experiences and connections to balance those losses.
Address unexpected challenges by consulting professionals for new or worsening health issues, as not all problems stem from aging.
Reflect on life purpose and make choices that align with long-term happiness and fulfillment.
Exercise regularly to support both physical and mental well-being.
Save for the future while enjoying life in the present.
Stay socially engaged through hobbies, work, or volunteering.
Eat a balanced diet and focus on whole foods for overall health.
Adapt to limitations by embracing tools and strategies that maintain independence.
Build friendships across generations for mutual support and enrichment.
Cultivate a sense of purpose through meaningful work or activities.
Kate Gregory's message emphasizes that aging well, whether as a programmer or in any field, requires proactive effort, adaptability, and a focus on joy and purpose.
In every workplace, you'll encounter a corporate jerk: the kind of person who thrives on creating chaos, manipulating others, and throwing people under the bus. These individuals are frustrating, but they don't have to define your career. Let me share a condensed version of my experience dealing with one and the key strategies I used to handle it.
I took on a senior recruiter role with an RPO organization, filling high-level positions nationwide. Before my official role started, I was asked to temporarily support a chaotic plant with high turnover. From the start, the HR manager at the plant undermined my work, deviated from processes, and made false accusations to my boss about my performance. Despite the challenges, I stayed professional and focused on achieving results.
Later, when assigned to the same plant for senior-level roles, the HR manager again tried to sabotage me. This time, I was ready. Armed with detailed documentation of every interaction, I exposed her dishonesty, which damaged her credibility. Though the plant's issues persisted, I didn't let her behavior derail me. Shortly after, I moved on to a better opportunity, taking invaluable lessons with me.
Lessons Learned
Document Everything: Keep detailed records of all interactions and deliverables. These become your safety net against false accusations.
Maintain Professionalism: Stay composed and formal in your interactions. Don't stoop to their level.
Set Boundaries: Be clear about your role and responsibilities. Don't let others exploit your flexibility.
Don't Internalize Their Behavior: Their actions are a reflection of their own issues, not your worth or abilities.
Corporate jerks are an unavoidable reality in most workplaces, but they don't have to define your career. Use strategy, stay professional, and remember: you're in control of your trajectory, not them. When necessary, don't hesitate to move on to an environment where you can thrive.
JavaScript is everywhere, from browsers to unexpected platforms like game consoles and operating systems. Despite its quirks and criticisms, its versatility has made it indispensable. This post is for developers and tech enthusiasts curious about how JavaScript extends beyond typical web applications, influencing industries like gaming, desktop environments, and more.
JavaScript Beyond Browsers
JavaScript is not just a browser language anymore. From GNOME's desktop environment in Linux, which is almost 50% JavaScript, to Windows 11's React Native-powered start menu and recommended sections, it's embedded in operating systems. Even the PlayStation 5 relies heavily on React Native for its interface.
JavaScript in Gaming Consoles
Microsoft's Xbox and Sony's PlayStation both integrate React Native into their systems. Historically, web technologies like HTML were also used (e.g., Nintendo Wii's settings menu), showing a longstanding trend of leveraging web tech for ease of development in consoles.
Gaming and UI Layers
Even major game titles like Battlefield 1 use JavaScript and React for their UI layers, thanks to tools like MobX for state management. Developers appreciate its flexibility in managing complex UI interactions over building bespoke solutions.
Game Development: JavaScript vs. C++
Vampire Survivors showcases a fascinating dual approach: its browser-based JavaScript version serves as the prototype, while a team ports it to C++ for consoles. This method ensures performance optimization without sacrificing the rapid development benefits of JS.
React's Evolution and Adaptation
React Lua, originally a Roblox project, brings React's paradigms to Lua-based environments. This shows how React's influence transcends JavaScript, becoming a staple for creating UIs even in non-JS ecosystems.
Why JavaScript?
JavaScript enables faster iteration, broader developer accessibility, and reduced specialization needs. Whether it's GNOME choosing it for extensibility or game studios adopting React for UI efficiency, its ubiquity stems from practical needs.
This talk is fun, though more theoretical and philosophical than practical.
Property-Based Testing for Joining an Array to a String with Delimiter in C++
Definition
Property-based testing involves specifying general properties a function should satisfy for a wide range of inputs. In this example, we will test a function that joins an array of strings with a delimiter into a single string. The properties we want to validate are:
The delimiter should only appear between elements, not at the start or end.
If the array has one element, the result should be the element itself without the delimiter.
An empty array should produce an empty string.
C++ Code Example using rapidcheck
Here's a property-based test using the rapidcheck library in C++ to test a join function that joins a vector of strings with a specified delimiter:
#include <rapidcheck.h>
#include <string>
#include <vector>
#include <sstream>
#include <iostream>

// Function to join array with a delimiter
std::string join(const std::vector<std::string>& elements, const std::string& delimiter) {
    std::ostringstream os;
    for (size_t i = 0; i < elements.size(); ++i) {
        os << elements[i];
        if (i != elements.size() - 1) { // Avoid trailing delimiter
            os << delimiter;
        }
    }
    return os.str();
}

int main() {
    rc::check("Joining should produce a correctly delimited string",
              [](const std::vector<std::string>& elements, const std::string& delimiter) {
        std::string result = join(elements, delimiter);

        // Property 1: The delimiter should appear only between elements.
        // Note: this split-based check assumes the generated elements do not
        // themselves contain the delimiter; such inputs make the split ambiguous.
        if (elements.size() > 1 && !delimiter.empty()) {
            // Split result by delimiter and check the components match the input
            std::vector<std::string> parts;
            std::string::size_type start = 0, end;
            while ((end = result.find(delimiter, start)) != std::string::npos) {
                parts.push_back(result.substr(start, end - start));
                start = end + delimiter.length();
            }
            parts.push_back(result.substr(start));
            // Assert parts match elements
            RC_ASSERT(parts == elements);
        }

        // Property 2: If there's only one element, the result should match that element directly
        if (elements.size() == 1) {
            RC_ASSERT(result == elements[0]);
        }

        // Property 3: If the array is empty, the result should be an empty string
        if (elements.empty()) {
            RC_ASSERT(result.empty());
        }
    });
    return 0;
}
Notice that rc::check will run the test 100 times with different randomly generated input parameters. When a case fails, it reports the failing configuration and the random seed in its output so the failure can be reproduced and debugged.
Explanation of Example
Property 1: Ensures that if multiple elements are joined with a delimiter, the delimiter only appears between elements, not at the start or end.
Property 2: Checks that if the array has only one element, the function returns the element itself without any delimiter.
Property 3: Confirms that if the input array is empty, the output string is empty.
This approach guarantees that the join function works as expected across diverse inputs, making it more robust against edge cases such as empty arrays, single-element arrays, and unusual delimiter values.
For technical leaders, the balance between leading effectively and empowering their team can be challenging. Whether you're a software engineer managing junior developers or a product owner guiding associates, the traditional approach of "just give the answer" can lead to dependency and frustration for both you and your team. This post explores the value of coaching-driven leadership: a method that empowers your team to become self-sufficient, creative problem-solvers. If you're in any technical or managerial role, understanding how to guide without micromanaging is essential. Learn how adopting a coaching approach can transform your team's efficiency, autonomy, and collaboration.
The Shift from Solving Problems to Empowering People
A coaching-based leadership style redefines how leaders approach problem-solving with their teams. Instead of quickly providing answers to move tasks along, this approach encourages team members to develop the skills to tackle issues independently, ultimately creating a more resilient and capable workforce. Below are some key insights and advice on how to lead through empowerment:
Encouraging Self-Reliance Instead of Dependency
Why It Matters: When leaders constantly solve problems for others, it builds dependency. Empowering team members to find their own solutions helps reduce your stress and increases their confidence.
How to Do It: Encourage team members to exhaust all possible resources and approaches before coming to you. Ask questions like, "How would you solve this if I weren't available?" This encourages them to think independently.
Asking Powerful, Resourceful Questions
Why It Matters: A quick solution often leads to repeated questions. When leaders ask resourceful questions, they prompt team members to analyze and solve problems on their own.
How to Do It: Instead of offering solutions, ask questions that challenge their thought processes. Examples include:
"What other approaches have you considered?"
"Can this problem be broken down into smaller tasks?"
This approach builds critical thinking and problem-solving skills.
Fostering a Growth-Oriented Mindset
Why It Matters: Viewing team members as capable individuals with potential is essential. By recognizing and nurturing their strengths, leaders can help people grow into their roles more effectively.
How to Do It: Reframe your thinking to see team members as resourceful and capable. Focus on their potential and ask questions that encourage them to broaden their perspectives, such as, "What new solutions might you try if you had more resources?"
Prioritizing Long-Term Gains Over Short-Term Fixes
Why It Matters: Quick answers may solve today's problem, but they build future dependency. Investing in a coaching style fosters autonomy, saving time and stress in the long run.
How to Do It: Resist the urge to provide immediate solutions. Instead, encourage team members to analyze challenges thoroughly, which leads to more sustainable growth and resilience.
Practical Applications of Coaching in Technical Leadership
For leaders looking to implement these coaching principles, here are specific areas where a coaching mindset can be applied effectively:
Code Reviews: Instead of dictating how code should look, ask questions about their logic and problem-solving approach. This not only ensures quality but also deepens their understanding.
Design and Project Reviews: Use design critiques as opportunities to help team members articulate their design choices, fostering a culture of open dialogue and improvement.
Debugging and Troubleshooting: When assisting with debugging, ask team members to consider alternative solutions or explain their thought process rather than simply fixing the problem.
Project Planning: Encourage team members to independently explore solutions to potential obstacles by asking them to consider all options and resources available.
Regular expressions (regexes) are a foundational tool in programming, celebrated for their ability to match patterns efficiently and elegantly. However, their widespread use has exposed critical flaws in how they are implemented in most programming environments. What begins as a theoretical marvel often translates into real-world inefficiencies and vulnerabilities, leading to catastrophic outcomes like server crashes from regex denial of service (ReDoS) attacks.
This post unpacks the evolution of regex algorithms, contrasts their efficiency, and explores how poor implementation choices have led to systemic issues. Whether you're a systems programmer, web developer, or curious about computational theory, understanding regex's hidden complexities will change how you approach pattern matching.
1. The Two Faces of Regex Algorithms
Regex engines typically rely on two main algorithms: the lockstep algorithm (also known as Thompson's algorithm) and backtracking. Here's how they stack up:
Lockstep Algorithm: This algorithm operates with predictable performance; its cost grows with the product of pattern size and input size, so it is at worst quadratic overall and scales linearly when only the input grows. It treats all possible paths through a regex simultaneously, avoiding exponential blowups.
Backtracking Algorithm: While intuitive and flexible (especially for complex features like backreferences and capturing groups), backtracking scales exponentially in the worst case. This flaw enables catastrophic backtracking, where a regex takes impractically long to resolve, even on short inputs.
2. Exponential Backtracking in Practice
Using backtracking means every possible path through a regex is explored individually. When paths multiply exponentially, such as in nested structures or poorly constructed patterns, the execution time balloons. For instance:
A regex engine using backtracking may take 24 ticks to match a complex string, compared to only 18 ticks with the lockstep algorithm.
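To make this concrete, here is a hedged sketch using std::regex, whose common implementations are backtracking-based; the nested quantifier in "(a+)+$" combined with a non-matching tail makes the match time explode as the input grows:

#include <chrono>
#include <iostream>
#include <regex>
#include <string>

int main() {
    // Nested quantifiers plus a forced mismatch at the end: a classic
    // catastrophic-backtracking shape.
    const std::regex pathological{"(a+)+$"};

    std::string input(28, 'a');  // try 24, 26, 28... and watch the time grow
    input += '!';

    const auto start = std::chrono::steady_clock::now();
    const bool matched = std::regex_search(input, pathological);
    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();

    std::cout << "matched: " << std::boolalpha << matched
              << ", took " << ms << " ms\n";
    return 0;
}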
3. Historical Decisions with Long-Lasting Impacts
The dominance of backtracking stems from historical choices made during the development of early Unix utilities:
Ken Thompson, the creator of regexes, implemented a lockstep-based engine in the 1960s. However, later tools like ed and grep shifted to backtracking, prioritizing simplicity and flexibility over performance.
This decision, compounded by the introduction of features like backreferences and greedy quantifiers, locked most regex engines into backtracking implementations. Over time, these became embedded in standard libraries across programming languages, making lockstep a rarity.
4. Regex Denial of Service (ReDoS)
The vulnerability of backtracking manifests starkly in ReDoS attacks:
A specially crafted regex input can force an engine to explore every possible path, consuming excessive CPU cycles and halting services.
Examples include outages at Stack Exchange (2016) and Cloudflare (2019) due to poorly constructed regexes handling unexpected inputs.
5. Features That Complicate Performance
While features like capturing groups, backreferences, and non-greedy modifiers add functionality, they exacerbate backtracking's inefficiencies. For instance:
Capturing groups in backtracking engines are straightforward but introduce state-tracking complexities in lockstep implementations.
Backreferences break the theoretical constraints of regular languages, making efficient lockstep implementations infeasible.
6. Modern Solutions
Some modern regex engines, like Google's RE2, abandon backtracking altogether, focusing on performance and predictability. RE2 enforces strict adherence to regular language constraints, ensuring linear or quadratic time complexity.
While sacrificing backreferences and some advanced features, engines like RE2 are critical for applications requiring robust and reliable performance, such as large-scale web services.
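As a sketch of the alternative, the same pathological pattern handled by RE2 (assuming the library is installed and linked) runs in time proportional to the input:

#include <iostream>
#include <string>

#include <re2/re2.h>  // Google's RE2: automaton-based, no backtracking

int main() {
    // The pattern that blows up a backtracking engine is harmless here,
    // because RE2 compiles it to an automaton and scans the input linearly.
    std::string input(100000, 'a');
    input += '!';

    RE2 pattern("(a+)+$");
    const bool matched = RE2::PartialMatch(input, pattern);

    std::cout << "matched: " << std::boolalpha << matched << '\n';
    return 0;
}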
In this blog post, I'll be sharing a collection of videos with concise content digests. These summaries extract the key points, focusing on the problem discussed, its root cause, and the solution or advice offered. I find this approach helpful because it allows me to retain the core information long after watching the video. This section will serve as a dedicated space for these "good watches," presenting only the most valuable videos and their takeaways in one place.
We attended a talk by Björn Fahller at ACCU 2024, focusing on how learning, teaching, and sharing are interdependent and critical to team success and personal growth. Below are key steps and ideas that were covered, with some outcomes noted and a few clarifications where needed.
1. Emphasizing Open Sharing for Safety and Improvement (13:52-14:36):
Fahller shared an anecdote from 1968 about Swedish military aviation, highlighting the importance of allowing team members to communicate openly, especially about mistakes or difficulties, without fear of punishment. This approach encourages honesty and helps prevent repeated mistakes.
"Military aviation is dangerous... let them openly, and without risk for punishment, share the problems they face while flying."
Outcome: Building a safe environment for sharing leads to a culture where team members can discuss failures without fear, helping the team learn from each experience and improve.
GPT: Fahller's translation suggests he views open communication as essential to growth and trust in teams, especially in high-stakes fields.
2. Encouraging Question-Asking and Knowledge Sharing (20:00):
In discussing "Sharing is Caring," Fahller emphasized the need for team members to bring up issues or observations that might seem trivial to ensure continuous improvement. He gave examples from aviation, such as pointing out gusts of wind affecting landing, to show how small insights can contribute to collective knowledge.
Outcome: Actively sharing observations improves understanding and may reveal underlying problems that would otherwise go unnoticed. Open communication is key to refining processes.
🤖 GPT: Fahller's examples reinforce the idea that even seemingly minor details should be voiced -- they may be crucial in the big picture.
3. Addressing Information Overload in Teams (37:52):
New team members often feel overwhelmed by the volume of information shared by experienced team members. Fahller suggested that newcomers should ask experienced members to slow down, provide context, and "paint the scene" so they can understand the background of the tasks.
"Ask them to paint the scene. What are they trying to achieve? What is it that is not working?"
Outcome: When we take the time to explain context to newcomers, it helps bridge knowledge gaps and allows everyone to contribute effectively.
🤖 GPT: This approach builds not only understanding but also patience and humility in experienced team members by reminding them to make knowledge accessible.
4. Creating a Positive Review Culture (33:47):
In discussing code reviews, Fahller contrasted two styles: dismissive comments (e.g., "I don't understand. Rewrite!") vs. constructive feedback (e.g., "Can you explain why you chose to do it this way?"). He emphasized that reviews should be treated as educational opportunities rather than judgment sessions.
Outcome: Constructive reviews foster a growth-oriented environment and allow both the reviewer and reviewee to learn. Constructive feedback motivates improvement, while dismissive comments discourage engagement.
🤖 GPT: A consistent, constructive review culture also promotes long-term trust and makes code quality a shared team responsibility.
5. Handling Toxicity in the Workplace (55:45):
In this segment, Björn Fahller tackled the issue of toxicity within teams and its corrosive effects on collaboration, morale, and individual well-being. He addressed specific toxic behaviors that often crop up in workplaces, describing them not as isolated incidents but as patterns that can erode trust and productivity if left unchecked. Fahller's examples of toxic behavior included:
"The weekly dunce hat" -- Singling out someone each week as a scapegoat or object of ridicule, effectively creating an atmosphere of shame and fear.
Blame-seeking -- Looking for someone to hold responsible for problems, rather than investigating issues constructively or as a team.
Threats, pressure, fear, and bullying -- Using intimidation tactics to push individuals into compliance, often stifling creativity, openness, and morale.
Ghosting -- Ignoring someone's contributions or input entirely, which Fahller noted can make people feel alienated and undervalued.
Stealing credit -- Taking recognition for someone else's work, which not only demoralizes the actual contributor but also creates a culture of mistrust.
Fahller stressed that these behaviors are not only demoralizing but actively prevent individuals from sharing ideas and asking questions openly. Such an environment can force people into silence and self-protection, hindering the team's ability to learn from mistakes and innovate. He emphasized that the first step in combating toxicity is recognition: understanding and identifying toxic patterns when they appear.
"If you're not respected at work," Fahller advised, the first course of action is to try to find an ally. An ally can provide a supportive voice and help validate one's experiences, which can be especially important if toxic behavior is widespread or normalized within the team. An ally may be able to speak up on your behalf, lend credibility to your concerns, and offer support when you're confronting challenging dynamics. This shared voice can help to bring attention to the toxicity and, ideally, drive change.
However, Fahller acknowledged that finding an ally may not always be enough. If a toxic environment persists despite attempts to address it, he advised a more decisive response: leaving. He argued that individuals should not allow themselves to be "ignored, threatened or made fun of," as staying in such an environment can be mentally and emotionally draining, ultimately leading to burnout and disengagement.
"If all else fails, go elsewhere. Donât allow yourself to be ignored, threatened or made fun of."
This recommendation underscores Fahller's stance that no one should feel compelled to remain in an unchangeable toxic environment. He suggested that people value their self-respect and mental health over job stability if the work culture is irredeemably harmful.
Fahllerâs advice reflected a pragmatic approach to toxicity: address it internally if possible, but recognize when to prioritize personal well-being over enduring a dysfunctional work environment. While leaving a job is often a difficult decision, Fahller's message was clear -- donât compromise on respect and support. A healthy team environment where people feel safe and valued is essential not just for individual satisfaction but also for collective success.
In his talk, Nikhil Suresh, the director of Hermit Tech, explores the challenges that software engineers face in the corporate world. He begins with an old animal fable about a scorpion and a frog to illustrate the dynamics between programmers and businesses.
"The scorpion wants to ship a web application but cannot program, so it finds a frog because frogs are incredible programmers."
The scorpion assures the frog that it won't interfere with his work. However, after some time, the scorpion hires an agile consultant and imposes new restrictions, disrupting the frog's workflow. This story mirrors how businesses often unknowingly hinder their own developers.
Nikhil emphasizes that most companies don't know much about software, making it difficult for programmers to clearly demonstrate their value. He refers to Sturgeon's Law, which states that "90% of everything is bad," highlighting the prevalence of low standards in the industry.
He shares personal experiences where previous engineers lacked basic competence, such as not setting primary keys in databases or causing exorbitant costs due to misconfigured systems. These anecdotes illustrate that businesses cannot tell the difference between good and bad programmers, leading to competent developers being undervalued.
Introducing the concepts of profit centers and cost centers, Nikhil explains that IT departments are often seen as cost centers, affecting how programmers are treated within organizations. He points out that being better at programming isn't always highly valued by companies because they may not see a direct link between technical skill and profit.
To navigate these challenges, Nikhil advises developers to never call themselves programmers. He argues that the term doesn't convey meaningful information and can lead to misconceptions.
"If you tell someone who doesn't program that you're a programmer, their first thought is like, 'Ah, one of those expensive nerds.'"
Nikhil encourages developers to write about their experiences and share them online. He believes your unique ideas are what differentiate you from others in the field, and that sharing them publicly helps build a personal brand.
He also suggests that programmers should read outside of IT and delve into the humanities. This broadens their perspectives and provides valuable analogies for complex ideas. Nikhil shares how his involvement in improvised theater and reading "Impro: Improvisation and the Theatre" by Keith Johnstone helped him understand status dynamics in professional interactions.
Understanding these dynamics allows developers to navigate job interviews and workplace relationships more effectively. Nikhil emphasizes the importance of taking control of your career and making decisions that enhance your value to both yourself and society.
In conclusion, Nikhil urges developers to recognize that technical skill isn't the main barrier to having a better career. Factors like communication, strategic thinking, and understanding corporate dynamics play crucial roles. By focusing on these areas, developers can transform their passion into something that has greater value for both themselves and the broader community.
Sustainable Software Development Careers: Aging, Quality, and Longevity in Tech
Introduction
In the fast-evolving world of software development, many professionals feel the pressure to stay young, move fast, and keep up with new trends. But does speed really equal success in this field? This post is for experienced developers, tech managers, and anyone considering a long-term career in software. We'll explore why sustainability in development (focusing on quality, experience, and career longevity) matters and how you can embrace aging as an asset, not a setback.
Why You Should Care
The tech industry often promotes rapid career progression and cutting-edge skills over stability and endurance. However, valuing experience, avoiding burnout, and emphasizing quality over speed are essential for creating durable, impactful software and ensuring personal career satisfaction.
Embracing Aging as a Developer
Many developers worry about becoming irrelevant as they age, yet experience can be a strength. Research shows the average age of developers is among the lowest across professional fields, meaning many leave the field early. However, experience contributes to problem-solving, architectural insights, and higher quality standards. Older developers often provide unique perspectives that younger professionals may lack, particularly in maintaining and improving code quality.
Slowing Down for Quality
Too many developers face intense pressure to deliver quickly, often sacrificing quality. This results in technical debt and rushed code that becomes difficult to maintain. The speaker argues that development is a marathon, not a sprint. Slowing down and building sustainable software creates long-term benefits, even if it appears slower at first. By prioritizing thoughtful coding and taking the time to address technical debt, developers can create resilient, maintainable systems.
Challenges with Traditional Career Progression
Many companies push experienced developers into management roles, which can leave skilled coders dissatisfied and underutilized. This pattern, an instance of the Peter Principle, often results in skilled developers becoming ineffective managers. For those passionate about coding, staying in development roles rather than climbing the corporate ladder can offer fulfillment, especially if companies recognize and reward this choice.
Common Reasons Developers Leave the Field
Major reasons include burnout, shifting to roles with higher prestige, and losing the spark for coding. Additionally, aging can lead to insecurities about keeping up. To combat these trends, developers should prioritize work-life balance, take time to learn, and avoid the mindset that career progression has to mean management.
Practical Ways to Build a Sustainable Career
Commit to Continuous Learning: Attend conferences, read, and experiment with code to stay current.
Focus on Quality over Speed: Embrace practices like regular code reviews, refactoring, and retrospectives to build robust systems.
Build Team Trust and Psychological Safety: A supportive environment enhances productivity, allowing team members to grow together.
Incorporate Slack Time: Give yourself unstructured time to think, learn, and work creatively, helping avoid burnout and stagnation.
Let Experience Be Your Advantage
Staying relevant as a developer means focusing on the quality of your contributions, leveraging your experience to guide teams, and advocating for sustainable practices that benefit the entire organization. By valuing experience, resisting the rush, and maintaining passion, you can contribute meaningfully to tech at any age.
Quotes
"Getting old in software development is not a liabilityâit's an asset. Make those gray hairs your biggest advantage and let your experience shine through in quality code."
"Software development is not a sprint; it's a marathon. We need to slow down, find a sustainable pace, and stop rushing to deliver at the expense of quality."
"Don't let your career be dictated by the Peter Principleâjust because you're a great developer doesnât mean youâll enjoy management. Stay with your passion if itâs coding."
"Poor quality code isnât just a short-term fix; itâs a long-term burden. Building things right the first time is the fastest way to long-term success."
"Thereâs no need to be Usain Bolt in development; be more like a marathon runner. Set a steady, sustainable pace, focus on quality, and enjoy the journey."
Introduction
Writing isn't just about sharing information; it's about making an impact. In this insightful lecture, a distinguished writing instructor from the University of Chicago's Writing Program emphasizes that effective writing requires understanding your audience, establishing relevance, and creating a compelling narrative. This article captures the speaker's key advice on improving writing by focusing on purpose, value, and the reader's needs.
Focus on Value, Not Originality
Advice: The speaker challenges the idea that writing must always present something "new" or "original." Instead, writers should prioritize creating valuable content that resonates with their audience.
Application: Rather than striving for originality alone, focus on producing content that addresses the reader's concerns or questions. A piece of writing is valuable if it enriches the reader's understanding or helps solve a problem they care about.
Define the Problem Clearly
Advice: To make a piece of writing compelling, start by establishing a problem that is relevant to your audience. A well-defined problem creates a sense of instability or inconsistency, which engages readers and positions the writer as a problem-solver.
Application: Use contrasting language to highlight instability: words like "but," "however," and "although" signal unresolved issues. This approach shifts the reader's focus to the problem at hand, making them more receptive to the writer's proposed solution.
Understand and Address Your Readerâs Needs
Advice: A writer's task is to understand the specific needs and concerns of their reading community. This involves identifying problems that resonate with them and framing your thesis or solution in a way that is relevant to their lives or work.
Application: In academic and professional settings, locate problems in real-world contexts. Rather than presenting background information, articulate a challenge or inconsistency that is specific to the reader's field or interests, making your argument compelling and directly relevant.
Use the Language of Costs and Benefits
Advice: Writers should make it clear how the identified problem affects the reader directly. Frame issues in terms of "costs" and "benefits" to emphasize why addressing the problem is essential.
Application: Highlight the impact of ignoring the problem versus the benefits of solving it. This approach reinforces the relevance of your writing by aligning it with the reader's motivations and concerns.
Beware of the "Gap" Approach
Advice: Avoid using the concept of a "knowledge gap" as the sole justification for writing on a topic. While identifying gaps in research can work, it often lacks the urgency or impact required to engage readers fully.
Application: Rather than just pointing out missing information, emphasize the practical implications of filling that gap. Explain how the lack of certain knowledge creates instability or inconsistency in the field, making the need for your insights more compelling.
Adopt a Community-Centric Perspective
Advice: Tailor your writing to the specific communities who will read it. Different communities (e.g., narrative historians vs. sociologists) have distinct approaches to problems and value different types of arguments.
Application: Define and understand the community of readers your work is meant to serve. Address their concerns directly and frame your argument in terms that align with their unique perspectives and values.
Learn from Published Articles
Advice: Published work often contains subtle rhetorical cues about what resonates with readers in a specific field. Study these articles to understand the language, structure, and approach that successful writers use.
Application: Identify patterns in the language of published work within your target field. For instance, if a journal commonly uses cost-benefit language, incorporate it into your writing to align with reader expectations.
Emphasize Function Over Form
Advice: Writing should serve a clear function beyond just following formal rules. Effective writing achieves its purpose by clearly communicating the problem and its significance to readers.
Application: Instead of focusing solely on rules or formalities, think about what your writing needs to accomplish for your audience. Make sure that every section and statement reinforces your overall argument and purpose.
In today's fast-evolving tech landscape, "Developer Joy" is emerging as a crucial focus for engineering teams striving to deliver high-quality, innovative software. For those in software engineering or tech management, this concept brings a fresh perspective, shifting away from traditional productivity metrics and emphasizing a developer's experience, satisfaction, and creativity. By focusing on Developer Joy, teams can foster an environment where developers not only perform optimally but also find deep satisfaction in their craft. This shift is more than just a trend; it's a rethinking of how we define and sustain productivity in a complex, creative field like software development.
The Problem with Traditional Productivity Metrics
Traditional productivity measures, like lines of code or tasks completed, often fail to capture a developer's real impact. Software development, unlike factory work, requires creativity, problem-solving, and adaptability, traits that are poorly reflected in industrial-era metrics. Instead of simply measuring output, focusing on Developer Joy acknowledges the unique, non-linear nature of coding and innovation.
Developer Joy: A New Approach to Productivity
Developer Joy isn't about doing more in less time; it's about creating an environment where developers thrive. When developers are joyful, they produce better code, collaborate more effectively, and sustain their motivation over time. Atlassian's approach to Developer Joy incorporates several elements to support this environment:
High-quality Code: Developers enjoy working with well-structured, maintainable code.
Progressive Workflows: Fast, friction-free pipelines allow developers to take an idea from concept to deployment quickly.
Customer Impact: When developers know they're making a meaningful difference for users, they feel a greater sense of pride and accomplishment.
Tools and Processes to Foster Developer Joy
To enable Developer Joy, teams at Atlassian have implemented practical solutions:
Constructive Code Reviews: By establishing a code review culture where feedback is respectful and constructive, teams can maintain high standards without discouraging or frustrating developers. Guidelines like assuming competence, offering clear reasoning, and avoiding dismissive comments make reviews both productive and uplifting.
Flaky Test Detection: The Confluence team developed an internal tool that identifies "flaky tests" (tests that fail intermittently) to save developers from unnecessary debugging. This tool boosts productivity by automating the detection and removal of unreliable tests.
The Punit Bot for Review Notifications: Timely code reviews are essential for maintaining team flow. The Punit Bot automatically notifies team members when their input is needed on pull requests, cutting down waiting times and keeping development on track.
Cross-Functional, Autonomous Teams
Teams need the freedom to work independently while staying aligned on goals. By embedding key functions within each team (like design, QA, and operations), Atlassian ensures that teams can progress without external dependencies. This "stack interchange" model allows each team to flow without bottlenecks.
Quality Assistance over Quality Assurance
Developers at Atlassian don't rely solely on QA engineers to validate code. Instead, they partner with QA in the planning stage, gaining insights on testing best practices and writing their own test cases. This approach, called "Quality Assistance," keeps quality embedded throughout the process and gives developers more control over the software they release.
Collaborating with Product Teams
Effective collaboration with product teams is crucial. Atlassian integrates developers into the full product lifecycle, from understanding the problem to assessing impact after release. This holistic involvement reduces miscommunication, enables rapid adjustments based on early feedback, and fosters a sense of ownership and pride in the end product.
The Developer Joy Survey: Measuring What Matters
To ensure Developer Joy remains high, Atlassian conducts regular "Developer Joy Surveys," asking developers about their satisfaction in areas such as tool access, wait times, autonomy, and overall work satisfaction. By measuring both satisfaction and importance, teams identify and address specific challenges to ensure joy remains a central part of their development culture.
Notable Quotes and Jokes
"Developer Joy is about creating an environment where developers thrive, not just survive."
"If you can't measure Developer Joy, you're probably measuring the wrong thing."
"Code reviews should be about learning, not earning jerk points."
"Productivity isn't about lines of code; it's about finding joy in the code you write."
Introduction
Purpose and Relevance
This talk explores the nuances of managing software engineering teams. It's particularly relevant for new or seasoned managers, especially those transitioning from technical roles to leadership. The speaker, Kevin Pilch, leverages his extensive experience managing engineering teams at Microsoft to provide insights into effective management strategies, challenges, and actionable advice.
Target Audience
Ideal for current and aspiring managers of software engineering teams, as well as individual contributors considering a management path.
Main Content
Coaching vs. Teaching
The emphasis here is on coaching engineers rather than simply teaching them. Coaching means asking questions that encourage team members to find solutions independently, fostering growth and engagement. By using the "ask solution" quadrant approach, managers can guide engineers toward problem-solving rather than directly offering answers, which enhances ownership and accountability.
Focus on Top Performers
Spend more time supporting top performers instead of focusing solely on underperformers. The impact of losing a high performer is significant: they are often highly sought after and can easily find other opportunities. Retaining skilled contributors by offering continuous support and new challenges is essential.
Importance of Self-Evaluation
The self-evaluation process is a valuable opportunity for engineers to reflect on their career paths, skill gaps, and accomplishments. By encouraging engineers to take ownership of self-assessments, managers promote introspection and personal growth, while also creating useful documentation for future managers and potential promotions.
Providing Clear Feedback
When giving performance feedback, it's essential to avoid "weasel words" and sugarcoating, which soften the message and create misunderstandings. Use specific language that correlates to performance expectations, such as "lower than expected impact," to ensure feedback is clear, actionable, and direct.
Encouraging Constructive Failure
Allow team members to experience failure on controlled projects to enhance learning and resilience. This approach lets engineers learn from mistakes without jeopardizing critical objectives. By creating "safe-to-fail" environments, managers can frame certain projects as experiments and define success metrics upfront, avoiding sunk cost fallacies and confirmation biases.
Task Assignment Using the ABC Framework
Assign tasks based on complexity relative to each team member's skill level. Above-level tasks serve as stretch assignments to promote growth, current-level tasks reinforce skills, and below-level tasks include routine but necessary responsibilities that everyone shares. Balancing these types keeps team members challenged and engaged while ensuring essential work is completed.
Motivating Different Personality Types
The SCARF model (Status, Certainty, Autonomy, Relatedness, Fairness) can help recognize diverse motivators across the team. Managers should tailor interactions to each team member's unique motivators, fostering a supportive environment that avoids triggering negative responses.
Defining Success on My Own Terms: Lessons from My Journey in Tech
For over 25 years, I've navigated the ever-changing landscape of the tech industry. This journey has been filled with successes, failures, and invaluable lessons that have shaped not only my career but also my understanding of what success truly means. If you're a developer, entrepreneur, or someone contemplating your own path in tech, perhaps my experiences can offer some insights.
The Evolution of Success
My definition of success has shifted throughout my career. It began with a desire for prestige, evolved into a quest for independence, and later transformed into valuing time above all else. I've come to realize that success isn't a fixed destination but a moving target that changes as we grow.
"The definition of success for me has shifted throughout my career. It used to just mean prestige. Then it meant independence, and then it meant time, and it's probably going to change again."
Building Request Metrics
I founded Request Metrics with the goal of addressing a critical problem: web performance. Initially, we focused on client-side observability, aiming to help developers monitor their websites and applications. However, we soon discovered that web performance is a complex issue, laden with constantly changing metrics and definitions.
The Challenge of Web Performance
Developers often struggle with understanding and improving web performance. The industry's metrics seem to continually shift, making it hard to pin down what "fast" truly means. This confusion was costing businesses real money, especially as user expectations for speed grew.
"It turns out developers don't know how to make things fast, and it's a problem that got a lot more important recently because of a thing Google did called the Core Web Vitals."
Google's Core Web Vitals
The game changed when Google introduced Core Web Vitals, a set of metrics that directly impact search rankings. Suddenly, web performance wasn't just a technical concern but a business-critical issue. Companies that relied on SEO for visibility faced tangible consequences if their websites didn't meet these new standards.
"Google said, 'This is how fast you need to be,' and if you don't, you're going to lose page rank. So now this suddenly got way more... now there is a cost to do this. If you are an e-commerce store or you are a content publisher... you care a whole lot about the Core Web Vitals; you care about performance."
Pivoting to Solve Real Problems
Recognizing this shift, we pivoted Request Metrics to focus on helping businesses understand and improve their Core Web Vitals. We developed tools that provide clear, actionable insights into performance issues. By doing so, we addressed a real pain point, offering solutions that companies were willing to invest in to protect their search rankings and user experience.
"We started building a new thing that was all about the Core Web Vitals. It was like, 'This is the problem that we need to solve.' Businesses that depend on their SEO... it's not clear when they're about to lose their SEO ranking because of performance issues. So let's focus on that."
Lessons Learned
Throughout this journey, I've learned several key lessons.
Time is precious. Life is unpredictable, and opportunities can be fleeting. It's crucial to focus on what truly matters and act promptly.
"First, you don't have as much time as you think. This story can end for any one of us tomorrow... It might all be over tomorrow, so do what you think is important."
Embracing uncertainty is essential. Feeling unprepared is natural. Many successful endeavors begin without a clear roadmap. Confidence often comes from taking action and learning along the way.
"Don't worry if you don't know, if you don't feel confident in what you're doing. None of us know what we're doing when we start... They just started and figured it out as they went. You can do that too."
Building relationships is vital. Success isn't achieved in isolation. Cultivating strong relationships and working collaboratively can open doors you never knew existed.
"Remember, no matter what you do or what you want out of life, you need to build relationships with people around you. Don't isolate yourself and think you can solve it all by yourself. Those relationships... are going to pay huge dividends that you could never imagine."
Solving real problems should be a priority. Focus on creating solutions that address genuine needs. If your product solves a real problem, people are more likely to value and pay for it.
"Be sure to build products that actually solve real problems that cost people money. Otherwise, you might find yourself building something really cool that nobody is ever going to pay you for."
Adapting and evolving are necessary. Be prepared to change course. Flexibility is key to staying relevant and achieving long-term fulfillment.
"We found through this we found a problem that was costing money to real people, and this is the path that we're on right now... because now we're solving a problem for people that... it's cheaper to pay us to solve the problem than to deal with the risks."
Taking risks and shipping early can lead to growth. Don't wait for perfection. Launching early allows you to gather feedback and iterate, which is more valuable than holding back out of fear.
"If you're going to build something successful and durable... you're going to need people to help. And be sure to build products that actually solve real problems... But you won't hit them unless you ship something, and if you're not embarrassed of it, you're waiting too long. Just throw something together and get it out there and see if anybody cares."
Moving Forward
As I continue on this path, I understand that my definition of success will keep evolving. What's important is to remain true to oneself, prioritize meaningful work, and leverage relationships to create lasting impact.
For programmers and tech enthusiasts, "Hello World" is a rite of passage, a first step in coding. But behind the simplicity of printing "Hello World" on the screen, there lies a deeply intricate process within the Windows operating system. This article uncovers the fascinating journey that a simple printf command in C takes, from the initial code execution to the text's appearance on the screen, traversing multiple layers of software and hardware. If you're curious about what happens behind the scenes of an OS or want a glimpse into the hidden magic of programming, this guide is for you.
Starting Point: Writing Hello World in C
The classic C code printf("Hello, World!"); initiates the journey. In this line, the printf function doesn't directly display text. Instead, it prepares data for output, setting off a series of calls to the OS to manage the display of the text.
Processing printf: User Mode to Kernel Mode
The runtime library processes printf, identifying format specifiers and preparing raw text to be sent to the output. This initiates a function call, like WriteFile or WriteConsole, which interacts with Windows' Win32 API, a vast interface linking programs to system resources.
Kernel32.dll: Despite its name, Kernel32.dll operates in user mode, providing system access without directly tapping into the kernel. Named for historical reasons, it acts as the bridge for functions that need OS kernel resources while keeping the security boundary intact.
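As a rough sketch of this layer (my own minimal example, not code from the video), a program can skip the C runtime entirely and call the kernel32 console API directly; this is approximately the point printf eventually reaches before the kernel and the console subsystem take over:

```cpp
#include <windows.h>

int main() {
    // What printf ultimately boils down to for console output: a call into
    // kernel32.dll, which packages the request into a system call.
    const char message[] = "Hello, World!\n";
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);  // console output handle
    DWORD written = 0;
    // If stdout were redirected to a file or pipe, WriteFile would be the
    // appropriate call instead of WriteConsoleA.
    WriteConsoleA(out, message, sizeof(message) - 1, &written, nullptr);
    return 0;
}
```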
Transitioning with System Calls
System calls serve as gates from user mode (where applications operate) to kernel mode (where core OS processes run). Here, Windows uses the System Service Descriptor Table and system calls like int 2E to cross into kernel mode securely, ensuring only validated programs access system resources.
Windows Kernel Processing with ntoskrnl.exe
After the system call, ntoskrnl.exe checks permissions and validates parameters to ensure secure execution. This step guarantees the program isnât making unauthorized access attempts, which fortifies Windows against possible exploits.
Console Management through csrss.exe
The Client Server Runtime Subsystem (csrss.exe) manages console windows in user mode. csrss updates the display buffer, which holds the text data ready for rendering. It keeps a two-dimensional array of characters, handling all aspects like color, intensity, and style to maintain the console window's appearance.
Rendering Text with Graphics Device Interface (GDI)
GDI takes over for text rendering within the console, providing essential drawing properties like font and color. The console then relies on the Windows Display Driver Model (WDDM), which bridges communication between software and the graphics hardware.
The GPU and Frame Buffer
The GPU receives the data, rendering the text by processing pixel-by-pixel instructions into the frame buffer. This buffer, a region of memory storing display data, holds the image of "Hello World" that will appear on screen. The GPU then sends this image to the display via HDMI or another interface.
From Monitor to Visual Cortex
The display presents the text through LED pixels, and from there, light travels to the viewer's eyes. Visual processing occurs in the brain's visual cortex, ultimately registering "Hello World" in the viewer's consciousness, a culmination of hardware, software, and human biology.
Notable Quotes and Jokes from Dave Plummer:
"Imagine the simplest Windows program you could write...but do you know how the magic happens?"
"Our journey begins in userland within the heart of your C runtime library."
"Calling printf is like sending a messenger on a long cross-country journey from high-level code to low-level bits and back again."
"When 'Hello World' pops up on the screen, youâre witnessing the endpoint of a complex, coordinated process..."
For those diving into AI applications, especially prompt engineering with generative AI, understanding trust-building and prompt precision is key to leveraging AI effectively. If you're an AI practitioner, developer, or someone interested in optimizing how language models generate outputs, this guide explores techniques to achieve trustworthy and accurate AI responses. By improving prompt engineering skills, you'll better navigate the complexities of AI interactions and make your AI applications more reliable, relevant, and valuable.
Core Techniques and Strategies in Prompt Engineering
When working with generative AI, the goal is to create prompts that elicit useful, accurate, and relevant responses. This requires understanding both the technical aspects of prompt engineering and the psychological aspects of trust. Here are key techniques for mastering this process:
The Importance of Trust in AI Outputs
Trust plays a central role in whether users accept or reject AI-generated outputs. As the speaker noted, "Trust is the bridge between the known and the unknown." For AI to be effective, especially in high-stakes fields like medicine or government applications, users must feel confident in the system's reliability and fairness. Factors that foster this trust include:
Accuracy: Ensuring the output is based on factual information and up-to-date sources.
Reliability: Confirming that outputs remain consistent across different scenarios.
Personalization: Tailoring responses to individual needs and contexts.
Ethics: Adhering to ethical guidelines, avoiding bias, and maintaining cultural sensitivity.
Precision in Prompt Engineering: Essential Techniques
To build trust, prompts need to be structured in a way that maximizes clarity and minimizes ambiguity. Key methods include:
Role Prompting: Assigning specific roles, such as "act as a coding assistant," guides the model in responding within a particular expertise framework. As the speaker shared, "Role prompting is really good in terms of getting it to go find all those billions of web pages it was trained on."
Chain of Thought Prompting: By instructing the model to provide step-by-step reasoning, this method helps in breaking down complex queries and reducing errors. For example, prompting the model to explain each step in a calculation avoids "error piling," where initial mistakes skew subsequent responses.
System Messages: Used primarily by developers, system messages define overarching rules or tones for the AI. These instructions are hidden from the end-user but ensure the model stays consistent, ethical, and aligned with specific guidelines.
Handling AI's Limitations: Mitigating Hallucinations and Bias
"Hallucination" refers to instances where AI generates plausible-sounding but incorrect information. The speaker explained, "We all think that hallucination is a bug; it's actually not a bug -- it's a feature, depending on what you're trying to do." For applications where accuracy is crucial, employing techniques like Retrieval-Augmented Generation (RAG) helps ground AI responses by referencing reliable external sources.
Optimizing Prompt Parameters for Desired Outputs
Adjusting parameters such as temperature, frequency penalties, and presence penalties can enhance the creativity or precision of AI responses. For example, higher temperatures lead to more creative, varied outputs, while lower settings make responses more predictable and factual. As the speaker noted, "Every word in a prompt matters," so these settings allow for fine-tuning responses to suit specific needs.
Recap & Call to Action
Effective prompt engineering isn't just about crafting prompts; it's about understanding trust and precision. Key strategies include role prompting, step-by-step guidance, and adjusting AI parameters to manage reliability and relevance. Remember, the goal is to enhance user trust by ensuring outputs are clear, relevant, and ethically sound. Try implementing these techniques in your next AI project to see how they impact the quality and trustworthiness of your results.
Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive.
In order to protect Gwern's anonymity, I proposed interviewing him in person, and having my friend Chris Painter voice over his words after. This amused him enough that he agreed.
Introduction
In today's streaming-centric world, the demand for smooth, high-quality, and secure content playback has never been higher. Whether it's movies, music, or live broadcasts, users expect seamless experiences across multiple devices and network conditions. For developers and media engineers, understanding adaptive streaming and secure content delivery on the web is critical to meet these demands. This guide dives into adaptive streaming, DRM encryption, and decryption processes, providing the essential tools and concepts to ensure secure, efficient media delivery.
Who This Guide Is For
This guide is intended for software engineers, streaming platform developers, and media engineers focused on optimizing web streaming quality and security. Those interested in learning about adaptive bitrate streaming, DRM protocols, and encryption processes will find valuable insights and practical applications.
In modern software development, unit testing has become a foundational practice, ensuring that individual components of code, specifically functions, perform as expected. For C++ developers, unit testing offers a rigorous approach to quality control, catching bugs early and enhancing code reliability. This article covers the essentials of unit testing in C++, focusing on why and how to apply it effectively in your projects. Whether you're an experienced developer or a newcomer in C++, this guide will clarify best practices and introduce powerful frameworks to streamline your testing efforts.
Core Concepts and Challenges in Unit Testing
Understanding Unit Testing in C++
Unit testing verifies the smallest unit of code, usually a function, to confirm it works as intended. Over the past decade, it has become essential for software development projects, preventing critical bugs from reaching production and reducing the risk of project failures. While the concept is straightforward, implementing effective unit tests in C++ brings unique challenges, such as determining what to test and choosing the right framework to manage tests efficiently.
Addressing Key Challenges
Framework Selection: C++ offers various testing frameworks like Catch2, which simplifies setting up unit tests and provides structured error reporting.
Consistent Definitions: Defining what qualifies as a unit test varies across the industry. This inconsistency can complicate efforts to standardize testing practices.
Testing Complexity: Many projects require extensive, comprehensive testing to cover complex logic, edge cases, and integration points without compromising performance.
Implementing Unit Tests Effectively
Using a Framework
Frameworks like Catch2 streamline test organization, allowing developers to structure tests in isolated, repeatable units. They provide clear output, automated reporting, and enable testing of all components, highlighting each failure without halting the entire test process. The framework choice is critical in ensuring that tests are not only functional but also maintainable and understandable.
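As a small illustration of what such a framework provides, here is a minimal Catch2 sketch (assuming Catch2 v3 and linking against its bundled main; the add function is a made-up example, not from the talk). Each TEST_CASE is discovered and run automatically, failing assertions are reported with their expanded values, and the remaining tests still execute:

```cpp
#include <catch2/catch_test_macros.hpp>

// Hypothetical function under test; in a real project it would live in its
// own header and translation unit.
int add(int a, int b) { return a + b; }

TEST_CASE("add combines two integers", "[math]") {
    REQUIRE(add(2, 2) == 4);

    SECTION("negative values are handled") {
        REQUIRE(add(-2, 2) == 0);
    }
    // A failing REQUIRE here would be reported with the expanded values,
    // and the framework would still run every other test case.
}
```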
Structure and Placement of Tests
The closer tests are to the code they evaluate, the easier they are to maintain. Best practices recommend keeping test files within the same project structure, allowing for easy updates and reducing the chance of disconnects between tests and the code they assess.
Scientific Principles in Unit Testing
Effective unit testing is analogous to scientific experimentation. Each test is an "experiment" designed to verify code behavior by testing specific inputs and expected outcomes. Emphasizing falsifiability ensures that tests are objective and replicable, providing clear indications of any issues. Core scientific principles in testing include:
Repeatability and Replicability: Tests should yield consistent results on repeated runs.
Precision and Accuracy: Tests should be specific and unambiguous, with clear indications of success or failure.
Thorough Coverage: Effective tests cover all code paths and edge cases, ensuring all possible scenarios are addressed.
Valid and Invalid Tests: Ensuring Accuracy
Accurate tests provide clear insights into code functionality. Avoid using the code's output as its own test standard (known as circular logic), because it cannot reliably reveal bugs. Instead, source test expectations from reliable, external standards or reference calculations to ensure validity and rigor.
White Box vs. Black Box Testing Approaches
Two approaches define C++ unit testing:
White Box Testing: Tests directly access private code areas using workarounds like friend classes, allowing tests to examine internal states. However, this method ties tests closely to code structure, making future refactoring more challenging.
Black Box Testing: Tests only interact with public interfaces, testing expected behaviors from an end-user perspective. Black Box Testing is recommended for maintainability, as it allows refactoring without breaking tests by focusing on behavior rather than code internals.
Behavior-Driven Development (BDD) and Documentation
BDD guides developers to create tests focused on expected behaviors, providing intuitive documentation. Each test names and validates a specific behavior, such as "a new cup is empty," which makes understanding the code straightforward for future developers.
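To show what that looks like in practice, here is a hedged Catch2 sketch of the "a new cup is empty" behaviour mentioned above (the Cup class is hypothetical, and Catch2's BDD-style macros are assumed to be available via its umbrella header):

```cpp
#include <catch2/catch_all.hpp>  // umbrella header; link against Catch2's provided main

// Hypothetical class matching the behaviour described in the text.
class Cup {
public:
    bool empty() const { return ml_ == 0; }
    void fill(int ml) { ml_ += ml; }
private:
    int ml_ = 0;
};

SCENARIO("a new cup is empty", "[cup]") {
    GIVEN("a freshly constructed cup") {
        Cup cup;
        THEN("it contains nothing") {
            REQUIRE(cup.empty());
        }
        WHEN("water is poured in") {
            cup.fill(100);
            THEN("it is no longer empty") {
                REQUIRE_FALSE(cup.empty());
            }
        }
    }
}
```

The scenario and step names double as documentation: a future reader can learn the intended behaviour of Cup from the test output alone.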
Designing Readable and Maintainable Tests
Readable and maintainable tests are simple and free of unnecessary complexity. Every unit test should focus on a single behavior, making tests easy to interpret and troubleshoot. This clarity is essential for enabling reviewers to understand test intentions without knowing the code intimately.
Test-Driven Development (TDD) and Its Role in Design
TDD reinforces software design by encouraging developers to write tests before code. Known as the Red-Green-Refactor cycle, TDD begins with writing a failing test (Red), creating code to make the test pass (Green), and refining the code (Refactor). This practice minimizes bugs from the outset, refines design, and builds a stable foundation of tests to verify code during refactoring.
Problem: Office life has adopted tactics of sabotage (00:01:13) similar to a WWII manual, where inefficiency is encouraged through endless meetings, paperless offices, and waiting for decisions in larger meetings.
Root Cause: Bureaucratic processes have unintentionally adopted methods once used deliberately to disrupt efficiency.
Solution: Recognize the signs of sabotage in office routines and seek to streamline decision-making and reduce unnecessary meetings.
1.2 Administrative Bloat
Problem: Administrative jobs (00:03:28) have increased from 25% to 75% of the workforce. These include unnecessary supervisory, managerial, and clerical jobs.
Root Cause: Expansion of administrative roles rather than reducing workload with technology.
Solution: A shift towards more meaningful roles and reducing bureaucratic excess would help in streamlining operations.
Problem: Many managers are promoted based on tenure or individual performance (00:24:26), rather than leadership skills, leading to poor team management.
Root Cause: Promotions based on irrelevant criteria, such as tenure, rather than leadership capability.
Solution: Companies need to create pathways for individual contributors to be rewarded without forcing them into management roles.
3.2 Disconnect Between Managers and Employees
Problem: Managers often do not engage with employees on a personal level (00:26:32), leading to isolation and poor job satisfaction.
Root Cause: Lack of training for managers to build relationships with their teams.
Solution: Managers should be trained in emotional intelligence and encouraged to have personal conversations with employees.
Problem: Reorganizations, layoffs, and restructuring cause ongoing stress for employees (00:34:28). People live in fear of losing their jobs despite hard work.
Root Cause: Frequent corporate restructuring often lacks a clear purpose beyond satisfying financial analysts or stockholders.
Solution: Limit reorganizations to only when necessary and focus on transparent communication to reduce employee anxiety.
4.2 Cynicism Due to Unfair Treatment
Problem: When workplaces are seen as unfair (00:46:43), cynicism grows, leading to a toxic environment.
Root Cause: Lack of transparency and fairness in company policies and actions, leading to distrust.
Solution: Implement fair policies and involve employees in decision-making to reduce feelings of exploitation.
Workplace Dysfunction: Bureaucratic inefficiency, administrative bloat, and unnecessary meetings create a sense of sabotage in modern offices. Solution: Streamline decision-making and reduce bureaucratic roles.
Employee Burnout: Burnout is widespread due to overwork, isolation, and emotional stress. Solution: Acknowledge the signs of burnout, reduce workload, and foster open communication.
Managerial Failures: Many managers lack the skills to lead effectively, causing disengagement and poor team dynamics. Solution: Train managers in leadership and emotional intelligence.
Corporate Culture: Frequent reorganizations and unfair treatment create cynicism and stress among employees. Solution: Ensure fair policies and minimize unnecessary restructurings.
Lack of Meaningful Work: Employees feel disconnected from the social value of their work, seeing it as pointless. Solution: Align work tasks with human values and meaningful contributions.
The most critical issues are employee burnout and the disconnect between management and workers, both of which contribute to widespread dissatisfaction and inefficiency in workplaces. Addressing these through better leadership training, reducing unnecessary work, and improving workplace communication can lead to healthier, more engaged employees.
Here's a streamlined travel plan for visiting some of Japan's most iconic destinations, focusing on the essential experiences in each place. Follow this itinerary for a mix of history, nature, and food.
1. Shirakawago
Start your journey in Shirakawago, a mountain village known for its traditional Gassho-zukuri farmhouses and heavy winter snowfall. The buildings are arranged facing north to south to minimize wind resistance. Stay overnight in one of the farmhouses to fully experience the town.
Don't miss: The House of Pudding, serving Japan's best custard pudding (2023 winner).
2. Takayama
Head to Takayama, a town in the Central Japan Alps, filled with traditional architecture and a retro vibe. Walk through the Old Town, and visit the Takayama Showa Museum, which perfectly captures Japan in the 1950s and 60s.
Must-try food: Hida Wagyu beef is a local specialty, available in street food stalls or restaurants. You can enjoy a stick of wagyu for around 600 yen.
3. Kyoto
Next, visit the cultural capital, Kyoto, and stay in a Machiya townhouse in the Higashiyama district for an authentic experience. Kyoto offers endless shrines and temples to explore.
Fushimi Inari Shrine: Famous for its 10,000 red Torii gates leading up Mount Inari. The gates are donated by businesses for good fortune.
Kinkakuji (Golden Pavilion): One of Kyoto's most iconic landmarks, glistening in the sunlight.
Tenryuji Temple: A 14th-century Zen temple with a garden and pond, virtually unchanged for 700 years.
4. Nara
Travel to Nara, a smaller city where you can explore the famous Nara Park, home to 1,200 friendly deer. You can bow to the deer, and they'll bow back if they see you have crackers.
Todaiji Temple: Visit the 49-foot-tall Buddha and try squeezing through the pillar's hole (said to grant enlightenment).
Yomogi Mochi: Don't miss this chewy rice cake treat filled with red bean paste, but eat it carefully!
5. Osaka
End your trip in Osaka, known as the nation's kitchen. Stay near Dotonbori to experience the neon lights and vibrant nightlife.
Takoyaki: Grab some fried octopus balls, Osaka's most famous street food, but be careful -- they're hot!
Osaka Castle: Explore this iconic castle, though the interior is a modern museum.
This travel plan covers historical landmarks, must-try local foods, and unique cultural experiences, offering a comprehensive taste of Japan.
"Code is a cost. Code is not an asset. We should have less of it, not more of it."
Other thoughts on this topic:
Martin Fowler (Agile advocate and software development thought leader) has expressed similar thoughts in his writings. In his blog post "Code as a Liability," he explains that every line of code comes with maintenance costs, and the more code you have, the more resources are needed to manage it over time:
"The more code you have, the more bugs you have. The more code you have, the harder it is to make changes."
John Ousterhout, a professor and computer scientist, has echoed this in his book "A Philosophy of Software Design." He talks about code complexity and how more code often means more complexity, which in turn leads to more problems in the future:
"The most important thing is to keep your code base as simple as possible."
Importance of removing dead code: Dead code clutters the codebase, adds complexity, and increases maintenance costs. Action: Actively look for dead functions or features that are no longer in use. For example, if a feature has been deprecated but not fully removed, ensure its code is deleted.
Techniques for identifying dead code: Use tools like static analysis, manual code review, or testing. Action: Rename the suspected dead function, rebuild, and let the compiler flag errors where the function is still being used.
Using static analysis and compilers: These tools help identify unreachable or unused code. Action: Regularly run tools like CPPCheck or Clang Analyze in your CI pipeline to detect dead code.
Renaming functions to detect dead code: A simple way to identify unused code. Action: Rename a function (e.g., myFunction to myFunction_old), and see if it causes errors during the build process. If not, the function is likely dead and can be safely removed.
Deleting dead features and their subtle dependencies: Features often have dependencies that may be missed. Action: When removing a dead feature, check for subtle references, such as menu items, command-line flags, or other parts of the system that may still rely on it.
Caution with Large Codebase Changes
Taking small, careful steps: Removing too much at once can lead to major issues. Action: Remove a small function or part of the code, test, and repeat. For example, instead of removing an entire module, start with one function.
Avoiding aggressive feature removal: Over-removal can cause unexpected failures. Action: Approach code deletion incrementally. Donât aim to delete an entire feature at once; instead, tease out its components slowly to avoid breaking dependencies.
Moving code to reduce scope: If code is not needed at the global scope, move it to a more local context. Action: Move public functions from header files to .cpp files and see if any errors occur. This can help isolate the function's scope and make it easier to remove later (see the sketch after this list).
Risk of breaking builds: Avoid breaking the build with massive deletions. Action: Ensure you take incremental steps, test continuously, and use atomic commits to revert small changes if needed.
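As an illustration of the scope-reduction step mentioned in the list above, here is a small sketch (all file and function names are hypothetical). The idea is to pull a helper out of the public header into the single .cpp file that still uses it; if the build stays green, its scope has shrunk, and if nothing in that file calls it either, the compiler can now tell you so:

```cpp
// Before (widget_utils.h): the helper is visible to every includer, so the
// compiler cannot tell whether anyone still calls it.
//     int legacy_checksum(const std::vector<int>& values);

// After (widget_utils.cpp): declaration removed from the header, definition
// moved into an unnamed namespace so it has internal linkage.
#include <numeric>
#include <vector>

namespace {

int legacy_checksum(const std::vector<int>& values) {
    return std::accumulate(values.begin(), values.end(), 0);
}

}  // namespace

// If no code in this file uses legacy_checksum, -Wunused-function (enabled by
// -Wall on GCC and Clang) will now flag it, and it can be deleted outright.
```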
Refactoring Approaches
Iterative refactoring and deletion: Refactor code in small steps to ensure stability. Action: When removing a dead function, check what other code depends on it. If a function calling it becomes unused, continue refactoring iteratively.
Refactoring legacy code: Legacy code can often hide dead functions. Action: Slowly reduce the scope of legacy functions by moving them to lower levels (like .cpp files) to see if their usage drops. If not used anymore, delete them.
Using unit tests for refactoring: Ensure that code works after refactoring. Action: Wrap legacy string classes or custom utility functions in unit tests, then replace the core logic with modern STL alternatives. If the tests pass, the old code can be removed safely (see the sketch after this list).
Replacing custom features with third-party libraries: Many custom solutions from the past can now be replaced by modern libraries. Action: If you have a custom logger class, consider replacing it with a more standardized and robust library like spdlog.
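Here is a sketch of that test-guarded replacement (the legacy helper and its expected values are hypothetical; Catch2 is assumed as the framework, consistent with the unit-testing section earlier). The tests pin down the current behaviour first, so the body can later be swapped for a standard-library equivalent and the custom code deleted with confidence:

```cpp
#include <cctype>
#include <string>

#include <catch2/catch_test_macros.hpp>

// Hypothetical legacy helper slated for replacement.
std::string legacy_to_upper(const std::string& in) {
    std::string out = in;
    for (char& c : out) {
        c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    }
    return out;
}

// Characterization tests: record what the code does today.
TEST_CASE("legacy_to_upper behaviour is pinned down", "[legacy]") {
    REQUIRE(legacy_to_upper("abc") == "ABC");
    REQUIRE(legacy_to_upper("") == "");
    REQUIRE(legacy_to_upper("a1!") == "A1!");
}
// Next step: reimplement the body with std::transform (or drop the helper
// entirely at call sites); if these tests still pass, delete the legacy code.
```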
Working with Tools
Using plugins or IDEs: Most modern IDEs can help identify dead code. Action: Use Visual Studio or IntelliJ plugins that flag unreachable code or highlight unused functions.
Leveraging Compiler Explorer: Use online tools to isolate and test specific snippets of code. Action: If you canât refactor in the main codebase, copy the function into Compiler Explorer (godbolt.org) and experiment with it there before making changes.
Setting compiler flags: Enable warnings for unreachable or unused code. Action: Use -Wall or -Wextra in GCC or Clang to flag potentially dead code. For example, enable -Wextra in your build system to catch unused variables and unreachable code (see the sketch after this list).
Running static analysis tools: Integrate tools like CPPCheck into your CI pipeline. Action: Add CPPCheck to Jenkins and enable its unusedFunction check to detect dead functions across multiple translation units.
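To make the compiler-flag advice above concrete, here is a tiny sketch (file and function names are made up). An unused function with internal linkage is exactly the kind of thing -Wall and -Wextra will surface:

```cpp
// dead_code_demo.cpp
// Compile with warnings enabled, e.g.:  g++ -Wall -Wextra -c dead_code_demo.cpp
// Both GCC and Clang report a warning along the lines of
// "'stale_crc' defined but not used", because the function has internal
// linkage and no caller in this translation unit.

static unsigned stale_crc(const char* data) {  // candidate dead code
    unsigned crc = 0;
    while (*data) {
        crc = crc * 31 + static_cast<unsigned char>(*data++);
    }
    return crc;
}

int main() {
    // stale_crc is never called, so the warning fires; delete it and rebuild.
    return 0;
}
```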
Source Control Best Practices
Atomic commits: Always break down deletions into small, reversible changes. Action: Commit changes one at a time and with meaningful messages, such as "Deleted unused function myFunction()." This allows you to easily revert just one commit if needed.
Small steps and green builds: Ensure the build passes after each commit. Action: Commit your changes, wait for the CI pipeline to return a green build, and only proceed if everything passes.
Keeping history in the main branch: Deleting code in a branch risks losing history. Action: Perform deletions in the main branch with proper commit messages. In Git, avoid squashing commits when merging deletions, as this may obscure your work history.
Communication and Collaboration
Educating teams about dead code: Not everyone understands the importance of cleaning up dead code. Action: When you find dead code, educate the team by documenting what you've removed and why.
Communicating when deleting shared code: Deleting code that others may rely on needs consensus. Action: Start a conversation with the team and document the code you intend to delete. Make sure the removal won't disrupt anyone's work.
Seasonal refactoring: Pick quieter periods like holidays for large-scale refactoring. Action: Plan code cleanups during slower times (e.g., Christmas or summer) when fewer developers are working. For example, take the three days between Christmas and New Year to remove unused code while avoiding merge conflicts.
Handling Legacy Features
Addressing dead features tied to legacy systems: These can be tricky to remove without causing issues. Action: Mark features as deprecated first, communicate with stakeholders, and plan their removal after a safe period.
Managing end-of-life features carefully: Inform customers and stakeholders before removing any external-facing features. Action: Announce the feature's end-of-life, allow time for feedback, and only remove the feature after this period (e.g., six months).
Miscellaneous Code Cleanup
Removing unnecessary includes: Many includes are added but never removed. Action: Comment out all include statements at the top of a file, then add them back one by one to see which ones are actually needed.
Deleting repeated or needless code: Repeated code should be factored into functions or libraries. Action: If you find duplicated code, refactor it into a helper function or a shared library to reduce repetition.
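A small before/after sketch of that last point, with invented names: two call sites that used to carry copy-pasted validation checks now share one helper, leaving a single place to maintain, test, or eventually delete.

```cpp
// Hypothetical before/after: the same validation logic, previously copy-pasted
// into two functions, factored into one shared helper.
#include <string>

// Shared helper extracted from the duplicated blocks.
bool is_valid_id(const std::string& id) {
    return !id.empty() && id.size() <= 16 &&
           id.find(' ') == std::string::npos;
}

bool create_user(const std::string& id) {
    if (!is_valid_id(id)) return false;   // previously an inline copy of the checks
    // ... create the user ...
    return true;
}

bool rename_user(const std::string& id) {
    if (!is_valid_id(id)) return false;   // previously a second, slightly drifted copy
    // ... rename the user ...
    return true;
}

int main() { return create_user("alice") && rename_user("bob") ? 0 : 1; }
```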
Comments in Code
Avoiding inane comments: Comments that explain obvious code operations are distracting. Action: Delete comments like "// increment i by 1" that explain simple logic you can deduce from reading the code.
Recognizing risks in outdated comments: Old comments can hide the fact that code has changed. Action: When refactoring, ensure comments are either updated or removed to avoid misleading information about the code's purpose.
Focusing on clean code: Let the code speak for itself. Action: Favor well-written, self-explanatory code that requires minimal commenting. For instance, use descriptive function names like calculateTotal() instead of adding comments like "// This function calculates the total."
When to Delete Code
Timing deletions carefully: Avoid risky deletions right before a release. Action: Plan large code cleanups in advance, and avoid removing any code near a major product release when stability is crucial.
Refactoring during quiet periods: Use downtimes, such as post-release, for cleanup. Action: After a major release or during holidays, revisit old tasks marked for deletion.
Tracking deletions in the backlog: Use a backlog to schedule code deletions that can't be done immediately. Action: Create a "technical debt" section in your backlog and record all dead code identified for future cleanup.
Final Thoughts on Refactoring
Challenging bad habits: Sometimes teams resist deleting old code. Action: Slowly introduce refactoring practices, starting small to show the benefits.
Measuring and recording progress: Keep track of all dead code and document changes. Action: Use tools like Jira to track deletions and improvements in code health.
Deleting responsibly: Don't delete code just for the sake of it. Action: Ensure that deleted code is truly unused and won't cause issues down the line. For example, test thoroughly before removing any core functionality.
High-Level Categories of Problems and Solutions
1. Onboarding and Adjustment in New Senior Roles (00)
Problem: Senior engineers often struggle when transitioning to new companies, particularly in adjusting to different company cultures and technical structures.
Context: Moving between large tech companies like Amazon and Meta presents challenges due to different coding practices (e.g., service-oriented architecture vs. monorepo) and operational structures.
Root Cause: A mismatch between previous experiences and new company environments.
Solution: Avoid trying to change the new environment immediately. Instead, focus on learning and adapting to the culture. Build trust with the team over six to seven months before attempting major changes.
Timestamp: 00:03:30
Quote:
"If you go join another company, you've got a lot to learn, you've got a lot of relationships to build, and you ultimately need to figure out how to generalize your skill set."
2. Building Trust and Relationships in Senior Roles (00)
Problem: Senior engineers often fail to invest time in building relationships and trust with new teams.
Context: New senior engineers may rush into projects without first establishing rapport with their colleagues.
Root Cause: Lack of emphasis on trust-building leads to resistance from teams.
Solution: Dedicate the first few months to relationship-building and understanding the team's dynamics. Don't attempt large projects right away.
Timestamp: 00:05:00
Quote:
"If you rush that process, you're going to be in for a hell of a lot of resistance."
3. Poor Ramp-up Periods for New Engineers (00)
Problem: New hires are often not given enough time to ramp up before being evaluated in performance reviews.
Context: Lack of structured ramp-up time for new senior hires can lead to poor performance evaluations early on.
Root Cause: Managers failing to allocate sufficient time for new employees to learn and adapt.
Solution: Managers should provide clear onboarding timelines (6-7 months) for engineers to integrate into teams, with gradual increases in responsibility.
Timestamp: 00:09:00
Quote:
"The main thing that we did is just basically give them a budget of some time... to build up their skill set and trust with the team."
4. Mistakes in Adapting to New Cultures (00)
Problem: Senior engineers often try to change new environments too quickly, leading to friction.
Context: Engineers accustomed to one type of tech stack or organizational process may attempt to enforce old methods in a new setting.
Root Cause: Engineers feel uncomfortable in the new culture and attempt to recreate their old environment.
Solution: Focus on understanding the reasons behind the new company's practices before suggesting any changes.
Timestamp: 00:07:00
Quote:
"Failure mode... is to try to change everything... and that's almost always the wrong approach."
5. Misunderstanding the Performance Review Process (00)
Problem: Engineers sometimes misunderstand how they are evaluated in performance reviews, especially during their first year.
Context: There's often confusion about how contributions during the onboarding period are assessed.
Root Cause: Lack of transparency or communication from managers regarding performance criteria.
Solution: Managers must clarify performance expectations and calibration processes, while engineers should ask for regular feedback to stay on track.
Timestamp: 00:10:00
Quote:
"Some managers just don't do a good job of actually setting the stage for new hires."
6. Lack of Visibility in Performance Reviews (00)
Problem: Senior engineers often fail to showcase their work to the broader team, limiting their visibility in performance reviews.
Context: In larger organizations, a single manager is not solely responsible for performance evaluations. Feedback from other team members and leadership is critical.
Root Cause: Not socializing work with peers or senior leadership.
Solution: Regularly communicate your contributions to multiple stakeholders, not just your direct manager.
Timestamp: 00:14:00
Quote:
"Socialize the work that you're doing with those other people... it's even better if you've had a chance to actually talk with them."
7. Taking on Projects Too Early (00)
Problem: Engineers may overestimate their readiness and take on large projects too soon after joining a new company.
Context: Jumping into big projects without adequate preparation can lead to mistakes and strained relationships.
Root Cause: Lack of patience and eagerness to prove oneself.
Solution: Focus on smaller tasks and gradually scale up responsibility after establishing trust and familiarity with the environment.
Timestamp: 00:06:30
Quote:
"Picking up a massive project as soon as you join a company is probably not the best idea."
Onboarding and Adjustment: Senior engineers often face challenges adapting to new company cultures. Solution: Focus on learning the environment, and avoid trying to change it too quickly.
Trust and Relationships: Lack of relationship-building leads to resistance. Solution: Take time to build rapport and trust with the team before diving into big projects.
Performance Reviews: New hires may not understand performance expectations. Solution: Ensure transparency in review processes and socialize your contributions with key stakeholders.
Interviews: Engineers may struggle in behavioral and design interviews. Solution: Take ownership of your contributions and avoid relying on rehearsed answers.
These are the most critical problems discussed in the transcript, with clear, actionable advice for each.
Precompiled Headers: One of the most effective methods is using precompiled headers (PCH). This technique involves compiling the header files into an intermediate form that can be reused across different compilation units. By doing so, you significantly reduce the need to repeatedly process these files, cutting down the overall compilation time. Tools like CMake can automate this by managing dependencies and ensuring headers are correctly precompiled and reused across builds.
Parallel Compilation: Another approach is parallel compilation. Tools like Make, Ninja, and distcc allow you to compile multiple files simultaneously, taking advantage of multi-core processors. For instance, using the -j flag in make or ninja enables you to specify the number of jobs (i.e., compilation tasks) to run in parallel, which can dramatically reduce the time it takes to compile large projects.
Unity Builds: Unity builds are another technique where multiple source files are compiled together as a single compilation unit. This reduces the overhead caused by multiple compiler invocations and can be particularly useful for large codebases. However, unity builds can introduce some challenges, such as longer error messages and potential name collisions, so they should be used selectively.
Code Optimization: Structuring your code to minimize dependencies can also be highly effective. Techniques include forward declarations, splitting projects into smaller modules with fewer interdependencies, and replacing heavyweight standard library headers with lighter alternatives when possible. By reducing the number of dependencies that need to be recompiled when a change is made, you can significantly decrease compile times.
Caching Compilation Results: Tools like ccache store previous compilation results, which can be reused if the source files haven't changed. This approach is particularly useful in development environments where small, incremental changes are frequent.
Here is the detailed digest from Andrew Pearcy's talk on "Reducing Compilation Times Through Good Design", along with the relevant project homepages and tools referenced throughout the discussion.
Andrew Pearcy, an engineering team lead at Bloomberg, outlines strategies for significantly reducing C++ compilation times. The talk draws from his experience of cutting build times from one hour to just six minutes, emphasizing practical techniques applicable in various C++ projects.
Pearcy starts by explaining the critical need to reduce compilation times. Long build times lead to context switching, reduced productivity, and delays in CI pipelines, affecting both local development experience and time to market. Additionally, longer compilation times make adopting static analysis tools like Clang-Tidy impractical due to the additional overhead. Reducing compilation time also optimizes resource utilization, especially in large companies where multiple machines are involved.
He recaps the C++ compilation model, breaking it down into phases: pre-processing, compilation, and linking. The focus is primarily on the first two stages. Pearcy notes that large header files and unnecessary includes can significantly inflate the amount of code the compiler must process, which in turn increases build time.
Quick Wins: Build System, Linkers, and Compiler Caching
1. Build System:
Ninja: Pearcy recommends using Ninja instead of Make for better dependency tracking and faster incremental builds. Ninja was designed for Google's Chromium project and can often be an order of magnitude faster than Make. It utilizes all available cores by default, improving build efficiency.
LLD and Mold: He suggests switching to LLD, a faster alternative to the default linker, LD. Mold, a modern linker written by Rui Ueyama (who also worked on LLD), is even faster but consumes more memory and is available for Unix platforms as open-source while being a paid service for Mac and Windows.
Ccache: Pearcy strongly recommends Ccache for caching compilation results to speed up rebuilds by avoiding recompilation of unchanged files. This tool can be integrated into CI pipelines to share cache across users, which can drastically reduce build times.
Pearcy emphasizes the use of forward declarations in headers to reduce unnecessary includes, which can prevent large headers from being included transitively across multiple translation units. This reduces the amount of code the compiler needs to process.
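A single-file sketch of that point, with placeholder class names. In a real project the first block would be the header: it forward-declares Engine instead of including engine.h, so every file that includes widget.h skips parsing Engine's own (possibly heavyweight) header, and only the one source file that actually calls into Engine needs the full definition.

```cpp
// Minimal single-file sketch; section markers show how it would be split up.

// --- widget.h ---------------------------------------------------------------
class Engine;  // forward declaration is enough for pointers, references,
               // and function parameter/return types.

class Widget {
public:
    explicit Widget(Engine& engine) : engine_(&engine) {}
    void render();            // defined where Engine is complete
private:
    Engine* engine_;          // pointers work fine with an incomplete type
};

// --- engine.h ----------------------------------------------------------------
#include <iostream>

class Engine {
public:
    void draw() { std::cout << "drawing\n"; }
};

// --- widget.cpp ---------------------------------------------------------------
void Widget::render() { engine_->draw(); }  // the full definition is needed only here

int main() {
    Engine e;
    Widget w(e);
    w.render();
}
```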
2. Removing Unused Includes:
He discusses the challenge of identifying and removing unused includes, mentioning tools like Include What You Use and Graphviz to visualize dependencies and find unnecessary includes.
To reduce dependency on large headers, he suggests the Pimpl (Pointer to Implementation) Idiom or creating interfaces that hide the implementation details. This technique helps in isolating the implementation in a single place, reducing the amount of code the compiler needs to process in other translation units.
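A compressed, single-file sketch of the Pimpl idiom under the same reasoning (names are invented; header and source are shown together for brevity): clients see only a pointer to an incomplete Impl, and the implementation-only includes stay out of the header, so changes to the internals no longer ripple through every including translation unit.

```cpp
#include <iostream>
#include <memory>
#include <string>

// --- parser.h ----------------------------------------------------------------
class Parser {
public:
    Parser();
    ~Parser();                        // defined where Impl is complete
    int count_words(const std::string& text) const;
private:
    struct Impl;                      // incomplete type in the header
    std::unique_ptr<Impl> impl_;
};

// --- parser.cpp ----------------------------------------------------------------
#include <sstream>  // implementation-only includes live here, not in the header

struct Parser::Impl {
    int count(const std::string& text) const {
        std::istringstream in(text);
        std::string word;
        int n = 0;
        while (in >> word) ++n;
        return n;
    }
};

Parser::Parser() : impl_(std::make_unique<Impl>()) {}
Parser::~Parser() = default;          // unique_ptr<Impl> is destroyed where Impl is complete
int Parser::count_words(const std::string& text) const { return impl_->count(text); }

int main() {
    Parser p;
    std::cout << p.count_words("reduce compile times with pimpl") << "\n";
}
```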
4. Precompiled Headers (PCH):
Using precompiled headers for frequently included but rarely changed files, such as standard library headers, can significantly reduce build times. However, he warns against overusing PCHs as they can lead to diminishing returns if too many headers are precompiled.
CMake added support for PCH in version 3.16, allowing easy integration into the build process.
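As a rough example of what such a precompiled header might contain, here is a hypothetical pch.h (file and target names are placeholders); the CMake command referenced in the comment is the 3.16+ support mentioned above.

```cpp
// pch.h -- a sketch of a precompiled header: stable, widely used, rarely edited
// headers only. Anything that changes often does NOT belong here, or every edit
// invalidates the PCH for the whole project.
#pragma once

#include <algorithm>
#include <map>
#include <memory>
#include <string>
#include <vector>

// With CMake >= 3.16 this can be attached to a target with:
//   target_precompile_headers(my_target PRIVATE pch.h)
// (my_target is a placeholder for your own library or executable target.)
```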
Pearcy introduces Unity builds, where multiple translation units are combined into a single one, reducing redundant processing of headers and improving build times. This technique is particularly effective in reducing overall build times but can introduce issues like naming collisions in anonymous namespaces.
CMake provides built-in support for Unity builds, with options to batch files to balance parallelization and memory usage.
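The sketch below mimics what a unity build effectively produces: the contents of two imaginary source files pasted into one translation unit. It also shows the collision caveat in miniature; two helpers that could both have been called scale() in separate files must get distinct names once they share a single anonymous namespace.

```cpp
// What a unity build effectively does: concatenate several .cpp files into one
// translation unit, so shared headers are parsed once instead of once per file.

// --- contents of customer.cpp (imaginary file) ---
namespace {
int scale(int x) { return x * 2; }        // fine on its own
}
int customer_total(int n) { return scale(n); }

// --- contents of billing.cpp (imaginary file) ---
namespace {
// In separate translation units a second `scale` would also be fine; in a unity
// build it becomes a redefinition in the same anonymous namespace, so one of
// the two has to be renamed (here: billing_scale).
int billing_scale(int x) { return x * 3; }
}
int billing_total(int n) { return billing_scale(n); }

int main() { return customer_total(2) + billing_total(3) == 13 ? 0 : 1; }
```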
Turbocharging Your .NET Code with High-Performance APIs
Steve, a Microsoft MVP and engineer at Elastic, discusses various high-performance APIs in .NET that can optimize application performance. The session covers measuring and improving performance, focusing on execution time, throughput, and memory allocations.
Performance in Application Code Performance is measured by how quickly code executes, the throughput (how many tasks an application can handle in a given timeframe), and memory allocations. High memory allocations can lead to frequent garbage collections, impacting performance. Steve emphasizes that performance optimization is contextual, meaning not every application requires the same level of optimization.
Optimization Cycle The optimization cycle involves measuring current performance, making small changes, and re-measuring to ensure improvements. Tools like Visual Studio profiling, PerfView, and JetBrains products are useful for profiling and measuring performance. BenchmarkDotNet is highlighted for micro-benchmarking, providing precise measurements by running benchmarks multiple times to get accurate data.
High-Performance Code Techniques
Span<T>: A type that provides a read/write view over contiguous memory, allowing for efficient slicing and memory operations. It is highly efficient with constant-time operations for slicing.
Array Pool: A pool for reusing arrays to avoid frequent allocations and deallocations. Using the ArrayPool<T>.Shared pool allows for efficient memory reuse, reducing short-lived allocations.
System.IO.Pipelines: Optimizes reading and writing streams by managing buffers and minimizing overhead. It is particularly useful in scenarios like high-performance web servers.
System.Text.Json: A high-performance JSON API introduced in .NET Core 3. It includes low-level Utf8JsonReader and Utf8JsonWriter for zero-allocation JSON parsing, as well as higher-level APIs for serialization and deserialization.
Examples and Benchmarks Steve presents examples of using these APIs in real-world scenarios, demonstrating significant performance gains. For instance, using Span<T> and ArrayPool in a method that processes arrays and messages led to reduced execution time and memory allocations. Switching to System.IO.Pipelines and System.Text.Json resulted in similar improvements.
"Slicing is really just changing the view over an existing block of memory... it's a constant time, constant cost operation."
"Measure your code, donât assume, donât make assumptions with benchmarks, itâs dangerous."
Conclusion Optimizing .NET code with high-performance APIs requires careful measurement and iterative improvements. While not all applications need such optimizations, those that do can benefit from significant performance gains. Steve concludes by recommending the book "Pro .NET Memory Management" for a deeper understanding of memory management in .NET.
Too Many Interviews (00):
Problem: Candidates face multiple rounds of interviews (up to seven), causing frustration and inefficiency. Many find it counterproductive to go through so many technical interviews.
Root Cause: Overly complex hiring processes that assume more interviews lead to better candidates.
Advice: Implement a streamlined process with just one technical interview and one non-technical interview, each lasting no more than one hour. Long interview processes are unnecessary and may filter out good candidates.
Interview Redundancy (00):
Problem: The same type of technical questions are asked repeatedly across different interviews, leading to duplication.
Root Cause: Lack of coordination among interviewers and reliance on similar types of technical questions.
Advice: Ensure each interviewer asks unique, relevant questions and does not rely on others to gather the same information. Interviewers should bear ultimate responsibility for gathering critical data.
Bias in Hiring (00):
Problem: Interview processes are biased because hiring managers may already have preferred candidates (referrals, strong portfolios) before the process begins.
Root Cause: Pre-existing relationships with candidates or prior work experience influence decisions.
Advice: Avoid dragging out the process to mask biases; shorter, efficient interviews can make the bias more visible but manageable. Long processes don't necessarily filter out bias.
Long Interview Processes Favor Privilege (00):
Problem: Prolonged interview panels select for candidates who can afford to take time off work, favoring those from more privileged backgrounds.
Root Cause: Candidates from less privileged backgrounds cannot afford to engage in drawn-out interviews.
Advice: Shorten the interview length and focus on relevant qualifications. Ensure accessibility for all candidates by keeping the process simple.
Interview Process Structure
Diffusion of Responsibility (00):
Problem: In group interview settings, responsibility for hiring decisions is diffused, leading to poor or delayed decision-making.
Root Cause: No single person feels accountable for making the final decision.
Advice: Assign ownership of decisions by giving specific interviewers responsibility for crucial aspects of the process. This reduces the likelihood of indecision and delayed outcomes.
Hiring Based on Team Fit vs. Technical Ability (00):
Problem: Emphasis on technical abilities often overshadows the importance of team compatibility.
Root Cause: Focus on technical skills without considering cultural and interpersonal dynamics within the team.
Advice: Ensure that interviews assess not only technical competence but also how well candidates fit into the team dynamic. Incorporate group discussions or casual settings (e.g., lunch meetings) to gauge team vibe.
Ambiguity in Interviewer Opinions (00):
Problem: Some interviewers avoid committing to clear opinions about candidates, preferring neutral stances.
Root Cause: Lack of confidence or fear of being overruled by the majority.
Advice: Use a rating system (e.g., a 1-4 scale) that forces interviewers to commit to a strong opinion, either for or against a candidate.
Candidate Experience and Behavior
Negative Behavior in Interviews (00):
Problem: Candidates who perform well technically but exhibit unprofessional behavior (e.g., showing up late or hungover) can still pass through the hiring process.
Root Cause: Strong technical performance may overshadow concerns about professionalism and reliability.
Advice: Balance technical performance with non-technical evaluations. Weigh behaviors such as punctuality and professional demeanor just as heavily as coding skills.
Take-Home Tests and Challenges (00):
Problem: Some candidates view take-home challenges as extra, unnecessary work, while others see them as a chance to showcase skills.
Root Cause: Different candidates have different preferences and responses to technical assessments.
Advice: Offer take-home tests as an option, but don't make them mandatory. Adjust the evaluation method based on candidate preferences to ensure both parties feel comfortable.
Systemic Issues in the Hiring Process
Healthcare Tied to Jobs (00):
Problem: In the U.S., job-based healthcare forces candidates to accept positions they might not want or complicates transitions between jobs.
Root Cause: The healthcare system is tied to employment, making job transitions risky.
Advice: There's no direct solution provided here, but highlighting the need for systemic changes in healthcare could make the hiring process more equitable.
Lack of Feedback to Candidates (00):
Problem: Many companies avoid giving feedback to candidates after interviews, leaving them unsure of their performance.
Root Cause: Fear of legal liability or workload concerns.
Advice: Provide constructive feedback to candidates, even if they aren't selected. It helps build long-term relationships and contributes to positive company reputation. Some of the best connections come from transparent feedback post-interview.
Hiring for Senior Positions
Senior Candidates Have Low Tolerance for Long Processes (00):
Problem: Highly qualified senior candidates are more likely to decline long and drawn-out interview processes.
Root Cause: Senior candidates, due to their experience and expertise, are less willing to tolerate inefficient processes.
Advice: Streamline the process for senior roles. Keep interviews short, efficient, and focused on relevant discussions. High-level candidates prefer concise assessments over lengthy ones.
Hiring on Trust vs. Formal Interviews
Hiring Based on Relationships (00):
Problem: Engineers with pre-existing relationships or referrals are more likely to be hired than those without, bypassing formal interviews.
Root Cause: Prior work relationships build trust, which can overshadow the need for formal vetting.
Advice: Trust-based hiring should be encouraged when there is prior working experience with the candidate. However, make efforts to balance trust with fairness by including formal evaluations where necessary.
Key Problems Summary
The length and complexity of the hiring process discourages many strong candidates, particularly senior-level applicants. Simplifying the process to two interviews (one technical and one non-technical) is recommended.
Bias in the hiring process, particularly when managers have pre-existing relationships with candidates, leads to unfair outcomes.
Long interview processes favor privileged candidates who can afford to take time off, disadvantaging those from less privileged backgrounds.
Providing feedback to candidates is crucial for building long-term relationships and ensuring a positive hiring experience, yet it's often avoided due to legal concerns.
Team fit is just as important as technical skills, and companies should incorporate group interactions to assess interpersonal dynamics.
Most Critical Issues and Solutions
Problem: Too many technical interviews create frustration and inefficiency.
Solution: Use just one technical and one non-technical interview, and assign responsibility for gathering all relevant information during these sessions.
Problem: Bias due to pre-existing relationships.
Solution: Shorten the process to expose bias more clearly and rely on trust-based hiring only when balanced with formal interviews.
Problem: Lack of feedback to candidates.
Solution: Provide constructive feedback to help candidates improve and establish long-term professional relationships.
This past year, four key lessons transformed my approach to software engineering.
First, I learned that execution is as important as the idea itself. Inspired by Steve Jobs, who highlighted the gap between a great idea and a great product, I focused on rapid prototyping to test feasibility and internal presentations to gather feedback. I kept my manager informed to ensure we were aligned and honest about challenges.
Second, I realized that trust and credibility are fragile but crucial. As a senior engineer, I'm expected to lead by solving complex issues and guiding projects. I saw firsthand how failing to execute or pushing unrealistic timelines could quickly erode trust within my team.
The third lesson was about the importance of visibility. I understood that hard work could go unnoticed if I didn't make it visible. I began taking ownership of impactful projects and increased my public presence through presentations and updates. I also honed my critical thinking to offer valuable feedback and identify improvement opportunities.
Finally, I learned to focus on changing myself rather than others. I used to try to change my team or company, but now I realize it's more effective to work on my growth and influence others through my actions. Understanding the company's culture and my colleagues' aspirations helped me align my efforts with my career goals.
These lessons have reshaped my career and how I approach my role as an engineer.
I've tried Notion, Obsidian, Things, Apple Reminders, Apple Notes, Jotter and endless other tools to keep me organised and sure, Notion has stuck around the most because we use it for client stuff, but for todo lists, all of the above are way too complicated.
I've given up this week and gone back to paper and a pencil and I feel unbelievably organised and flexible, day-to-day. It's because it's simple. There's nothing fancy. No fancy pen or anything like that either. Just a notebook and a pencil.
I'm in an ultra busy period right now so for future me when you inevitably get back to this situation: just. use. fucking. paper.
I've been thinking a lot about the state of Free and Open Source Software (FOSS) lately. My concern is that FOSS thrives on surplus, both from the software industry and the labor of developers. This surplus has been fueled by high margins in the tech industry, easy access to investment, and developers who have the time and financial freedom to contribute to FOSS projects. However, I'm worried that these resources are drying up.
High interest rates are making investments scarcer, particularly for non-AI software, which doesn't really support open-source principles. The post-COVID economic correction is leading to layoffs and higher coder unemployment, which means fewer people have the time or incentive to contribute to FOSS. OSS burnout is another issue, with fewer fresh developers stepping in to replace those who are exhausted by maintaining projects that often lack supportive communities.
Companies are also cutting costs and questioning the value of FOSS. Why invest in open-source projects when the return on investment is uncertain? The rise of LLM-generated code is further disconnecting potential contributors from FOSS projects, weakening the communities that sustain them.
My fear is that FOSS is entering a period of decline. As the industry and labor surpluses shrink, FOSS projects might suffer from neglect, security issues, or even collapse. While some of this decline might be a necessary correction, it's hard not to worry about the future of the FOSS ecosystem, especially when we don't know which parts are sustainable and which are not.
Take two consecutive Fibonacci numbers, for example 5 and 8.
And you're done converting. No kidding: there are 8 kilometers in 5 miles. To convert back, just read the result from the other end; there are 5 miles in 8 km!
Another example.
Let's take two consecutive Fibonacci numbers 21 and 34. What this tells us is that there are approximately 34 km in 21 miles and vice versa. (The exact answer is 33.79 km.)
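The trick works because the ratio of consecutive Fibonacci numbers converges on the golden ratio (about 1.618), which happens to sit close to the 1.609 kilometers in a mile. A small sketch to check the approximation against the exact conversion:

```cpp
// Consecutive Fibonacci numbers as a miles-to-kilometers rule of thumb:
// F(n) miles is roughly F(n+1) kilometers.
#include <cstdio>

int main() {
    const int fib[] = {1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89};
    const double KM_PER_MILE = 1.609344;
    for (int i = 2; i + 1 < 11; ++i) {
        int miles = fib[i];
        int approx_km = fib[i + 1];             // the Fibonacci trick
        double exact_km = miles * KM_PER_MILE;  // the real conversion
        std::printf("%2d miles ~ %2d km (exact: %.2f km)\n", miles, approx_km, exact_km);
    }
    return 0;
}
```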
The article explores the challenge of unfinished projects and the cycle of starting with enthusiasm but failing to complete them. The author describes this as the Hydra Effect: each task completed leads to new challenges. Unfinished projects feel full of potential, but fear of imperfection or even success prevents many developers from finishing.
"An unfinished project is full of intoxicating potential. It could be the next big thing... your magnum opus."
However, leaving projects incomplete creates mental clutter, making it hard to focus and learn key lessons like optimization and refactoring. Finishing is crucial for growth, both technically and professionally.
"By not finishing, you miss out on these valuable learning experiences."
To break this cycle, the author offers strategies: define "done" early, focus on MVP (Minimum Viable Product), time-box projects, and separate ideation from implementation. Practicing small completions and using accountability are also recommended to build the habit of finishing.
The article emphasizes that overcoming the Hydra Effect requires discipline but leads to personal and professional growth.
In this article, I introduce the essentials of application availability and how to approach high availability. High availability is measured by uptime percentage. Achieving 99.999% availability (five nines) means accepting roughly five minutes of downtime per year, which requires automation to detect and fix issues fast.
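As a quick sanity check on that figure, here is a small sketch of the arithmetic, assuming a plain 365-day year: each additional nine cuts the downtime budget by a factor of ten, and five nines leaves only about 5.3 minutes per year.

```cpp
// Downtime budget per year = (1 - availability) * minutes in a year.
#include <cstdio>

int main() {
    const double minutes_per_year = 365.0 * 24.0 * 60.0;  // 525,600
    const double availabilities[] = {0.99, 0.999, 0.9999, 0.99999};
    for (double a : availabilities) {
        std::printf("%.3f%% -> %.2f minutes of downtime per year\n",
                    a * 100.0, (1.0 - a) * minutes_per_year);
    }
    return 0;
}
```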
I discuss redundancy as a key strategy to improve availability by using backups for connectivity, compute resources, and persistence. If one component fails, the system switches to a secondary option. However, redundancy adds both cost and complexity. More components require advanced tools, like load balancers, to manage failures, but these solutions introduce their own reliability concerns.
Not every part of an application needs the same availability target. In an e-commerce system, for instance, I categorize components into tiers:
T1 (website and payments) must stay available at all times.
T2 (order management) allows some downtime.
T3 (fulfillment) can tolerate longer outages.
T4 (ERP) has the least strict requirements.
"Your goal is to perform an impact analysis and classify each component in tiers according to its criticality and customer impact."
By setting different availability targets for each tier, you can reduce costs while focusing on the most important parts of your system.
"All strategies to improve availability come with trade-offs, usually involving higher costs and complexity."
This sets the stage for future discussions on graceful degradation, asynchronous processing, and disaster recovery strategies.
If the team is constantly tripping over a recurring issue, it's crucial to fix the root cause, rather than repeatedly patching symptoms. The author mentions, "I decided to fix it, and it took ten minutes to update our subscription layer to call subscribers on the main thread instead," thereby removing the cause of crashes, streamlining the codebase, and reducing mental overhead.
Pace versus quality must be balanced based on context. In low-risk environments, it's okay to ship faster and rely on guardrails; in high-risk environments (like handling sensitive data), quality takes precedence. "You don't need 100% test coverage or an extensive QA process, which will slow down the pace of development," when bugs can be fixed easily.
Sharpening your tools is always worth it. Being efficient with your IDE, shortcuts, and dev tools will pay off over time. Fast typing, proficiency in the shell, and knowing browser tools matter. Although people warn against over-optimizing configurations, "I don't think I've ever seen someone actually overdo this."
When something is hard to explain, it's likely incidental complexity. Often, complexity isn't inherent but arises from the way things are structured. If you can't explain why something is difficult, it's worth simplifying. The author reflects that "most of the complexity I was explaining was incidental... I could actually address that first."
Solve bugs at a deeper level, not just by patching the immediate issue. If a React component crashes due to null user data, you could add a conditional return, but it's better to prevent the state from becoming null in the first place. This creates more robust systems and a clearer understanding of how things work.
Investigating bugs should include reviewing code history. The author discovered a memory leak after reviewing commits, realizing the issue stemmed from recent code changes. Git history can be essential for debugging complex problems that aren't obvious through logs alone.
Write bad code when needed to get feedback. Perfect code takes too long and may not be necessary in every context. It's better to ship something that works, gather feedback, and refine it. "If you err on the side of writing perfect code, you don't get any feedback."
Make debugging easier by building systems that streamline the process. Small conveniences like logging state diffs after every update or restricting staging environment parallelism to 1 can save huge amounts of time. The author stresses, "If it's over 50%, you should figure out how to make it easier."
Working on a team means asking questions when needed. Especially in the first few months, it's faster to ask a coworker for a solution than spending hours figuring it out solo. Asking isn't seen as a burden, so long as it's not something trivial that could be self-solved in minutes.
Maintaining a fast shipping cadence is critical in startups and time-sensitive projects. Speed compounds over time, and improving systems, reusable patterns, and processes that support fast shipping is essential. "Shipping slowly should merit a post-mortem as much as breaking production does."
"Stop hiring for the things you don't want to do. Hire for the things you love to do so you're forced to deal with the things you don't want to do.
This is some of the best advice I've been giving lately. Early on, I screwed up by hiring an editor because I didn't like editing. Since I didn't love editing, I couldn't be a great workplace for an editor; I couldn't relate to them, and they felt alone. My bar for a good edit was low because I just wanted the work off my plate.
But when I started editing my own stuff, I got pretty good and actually started to like it. Now, I genuinely think I'll stop recording videos before I stop editing them. By doing those things myself, I ended up falling in love with them.
Apply this to startups: If you're a founder who loves coding, hire someone to do it so you can't focus all your time on it. Focus on the other crucial parts of your business that need your attention.
Don't make the mistake of hiring to avoid work. Embrace what you love, and let it force you to grow in areas you might be neglecting."
Original post: 2024-09-14 Founder Mode { paulgraham.com }
Theo
Breaking Through Organizational Barriers: Connect with the Doers, Not Just the Boxes
In large organizations, it's common to encounter roadblocks where teams are treated as "black boxes" on the org chart. You might hear things like, "We can't proceed because the XYZ team isn't available," or "They need more headcount before tackling this."
Here's a strategy that has made a significant difference for me:
Start looking beyond the org chart and reach out directly to the individuals who are making things happen.
How to find them?
Dive into GitHub or project repositories: See who's contributing the most code or making significant updates.
Identify the most driven team members: Every team usually has someone who's more passionate and proactive.
Reach out and build a connection: They might appreciate a collaborative partner who shares their drive.
Why do this?
Accelerate Progress: Bypass bureaucratic delays and get projects moving.
Build Valuable Relationships: These connections can lead to future opportunities, referrals, or even partnerships.
Expand Your Influence: Demonstrating initiative can set you apart and open doors within the organization.
Yes, there are risks. Your manager might question why you're reaching out independently, or you might face resistance. But consider the potential rewards:
Best Case: You successfully collaborate to solve problems, driving innovation and making a real impact.
Worst Case: Even if you face pushback, you've connected with someone valuable. If either of you moves on, that relationship could lead to exciting opportunities down the line.
Sprints never stop. Sprints in Scrum are constant, unlike the traditional Waterfall model where high-pressure periods are followed by low-pressure times. Sprints create ongoing, medium-level stress, which is more damaging long-term than short-term, intense stress. Long-term stress harms both mental and physical health.
Advice: Build in deliberate breaks between sprints. Allow teams time to recover, reflect, and recalibrate before the next sprint. Introduce buffer periods for less intense work or creative activities.
Sprints are involuntary. Sprints in a Scrum environment are often imposed on developers, leaving them no control over the process or duration. Lack of autonomy leads to higher stress, similar to studies where forced activity triggers stress responses in animals. Control over work processes can reduce stress and improve job satisfaction.
Advice: Involve the team in the sprint planning process and give them a say in determining task durations, sprint length, and workload. Increase autonomy to reduce stress by tailoring the Scrum process to fit the team's needs rather than rigidly following preset rules.
Sprints neglect key supporting activities. Scrum focuses on completing tasks within sprint cycles but doesn't allocate enough time for essential preparatory activities like brainstorming and research. The lack of preparation time creates stress and leads to suboptimal work because thinking and doing cannot be entirely separated.
Advice: Allocate time within sprints for essential preparation, brainstorming, and research. Set aside dedicated periods for planning, learning, or technical exploration, rather than expecting full-time execution during the sprint.
Most Scrum implementations devolve into "Scrumfall." Scrum is often mixed with Waterfall-like big-deadline pressures, which cancel out the benefits of sprints and increase stress. When major deadlines approach, Scrum practices are suspended, leading to a high-stress environment combining the worst aspects of both methodologies.
Advice: Resist combining Waterfall-style big deadlines with Scrum. Manage stakeholder expectations upfront and break larger goals into smaller deliverables aligned with sprint cycles. Stick to Agile principles and avoid falling back into the big-bang, all-at-once delivery mode.
The MrBeast definition of A, B and C-team players is one I haven't heard before:
A-Players are obsessive, learn from mistakes, coachable, intelligent, don't make excuses, believe in Youtube, see the value of this company, and are the best in the goddamn world at their job. B-Players are new people that need to be trained into A-Players, and C-Players are just average employees. […] They aren't obsessive and learning. C-Players are poisonous and should be transitioned to a different company IMMEDIATELY. (It's okay, we give everyone severance, they'll be fine).
I'm always interested in finding management advice from unexpected sources. For example, I love The Eleven Laws of Showrunning as a case study in managing and successfully delegating for a large, creative project.