
⌚ Nice watch!

In this blog post, I'll be sharing a collection of videos with concise content digests. These summaries extract the key points, focusing on the problem discussed, its root cause, and the solution or advice offered. I find this approach helpful because it allows me to retain the core information long after watching the video. This section will serve as a dedicated space for these "good watches," presenting only the most valuable videos and their takeaways in one place.

2024-10-16 Can Chinese Speakers Read Japanese? - YouTube { www.youtube.com }


2024-11-03 Keynote: Learning Is Teaching Is Sharing: Building a Great Software Development Team - Björn Fahller - YouTube { www.youtube.com }


We attended a talk by Björn Fahller at ACCU 2024, focusing on how learning, teaching, and sharing are interdependent and critical to team success and personal growth. Below are key steps and ideas that were covered, with some outcomes noted and a few clarifications where needed.

1. Emphasizing Open Sharing for Safety and Improvement (13:52-14:36): Fahller shared an anecdote from 1968 about Swedish military aviation, highlighting the importance of allowing team members to communicate openly, especially about mistakes or difficulties, without fear of punishment. This approach encourages honesty and helps prevent repeated mistakes.

"Military aviation is dangerous... let them openly, and without risk for punishment, share the problems they face while flying."

Outcome: Building a safe environment for sharing leads to a culture where team members can discuss failures without fear, helping the team learn from each experience and improve.

🤖 GPT: Fahller’s translation suggests he views open communication as essential to growth and trust in teams, especially in high-stakes fields.

2. Encouraging Question-Asking and Knowledge Sharing (20:00): In discussing "Sharing is Caring," Fahller emphasized the need for team members to bring up issues or observations that might seem trivial to ensure continuous improvement. He gave examples from aviation, such as pointing out gusts of wind affecting landing, to show how small insights can contribute to collective knowledge.

Outcome: Actively sharing observations improves understanding and may reveal underlying problems that would otherwise go unnoticed. Open communication is key to refining processes.

🤖 GPT: Fahller’s examples reinforce the idea that even seemingly minor details should be voiced -- they may be crucial in the big picture.

3. Addressing Information Overload in Teams (37:52): New team members often feel overwhelmed by the volume of information shared by experienced team members. Fahller suggested that newcomers should ask experienced members to slow down, provide context, and "paint the scene" so they can understand the background of the tasks.

"Ask them to paint the scene. What are they trying to achieve? What is it that is not working?"

Outcome: When we take the time to explain context to newcomers, it helps bridge knowledge gaps and allows everyone to contribute effectively.

🤖 GPT: This approach builds understanding but also patience and humility in experienced team members by reminding them to make knowledge accessible.

4. Creating a Positive Review Culture (33:47): In discussing code reviews, Fahller contrasted two styles: dismissive comments (e.g., "I don’t understand. Rewrite!") vs. constructive feedback (e.g., "Can you explain why you chose to do it this way?"). He emphasized that reviews should be treated as educational opportunities rather than judgment sessions.

Outcome: Constructive reviews foster a growth-oriented environment and allow both the reviewer and reviewee to learn. Constructive feedback motivates improvement, while dismissive comments discourage engagement.

🤖 GPT: A consistent, constructive review culture also promotes long-term trust and makes code quality a shared team responsibility.

5. Handling Toxicity in the Workplace (55:45):

In this segment, Björn Fahller tackled the issue of toxicity within teams and its corrosive effects on collaboration, morale, and individual well-being. He addressed specific toxic behaviors that often crop up in workplaces, describing them not as isolated incidents but as patterns that can erode trust and productivity if left unchecked. Fahller’s examples of toxic behavior included:

  • "The weekly dunce hat" – Singling out someone each week as a scapegoat or object of ridicule, effectively creating an atmosphere of shame and fear.
  • Blame-seeking – Looking for someone to hold responsible for problems, rather than investigating issues constructively or as a team.
  • Threats, pressure, fear, and bullying – Using intimidation tactics to push individuals into compliance, often stifling creativity, openness, and morale.
  • Ghosting – Ignoring someone’s contributions or input entirely, which Fahller noted can make people feel alienated and undervalued.
  • Stealing credit – Taking recognition for someone else’s work, which not only demoralizes the actual contributor but also creates a culture of mistrust.

Fahller stressed that these behaviors are not only demoralizing but actively prevent individuals from sharing ideas and asking questions openly. Such an environment can force people into silence and self-protection, hindering the team’s ability to learn from mistakes and innovate. He emphasized that the first step in combating toxicity is recognition—understanding and identifying toxic patterns when they appear.

"If you're not respected at work," Fahller advised, the first course of action is to try to find an ally. An ally can provide a supportive voice and help validate one's experiences, which can be especially important if toxic behavior is widespread or normalized within the team. An ally may be able to speak up on your behalf, lend credibility to your concerns, and offer support when you’re confronting challenging dynamics. This shared voice can help to bring attention to the toxicity and, ideally, drive change.

However, Fahller acknowledged that finding an ally may not always be enough. If a toxic environment persists despite attempts to address it, he advised a more decisive response: leaving. He argued that individuals should not allow themselves to be "ignored, threatened or made fun of," as staying in such an environment can be mentally and emotionally draining, ultimately leading to burnout and disengagement.

"If all else fails, go elsewhere. Don’t allow yourself to be ignored, threatened or made fun of."

This recommendation underscores Fahller's stance that no one should feel compelled to remain in an unchangeable toxic environment. He suggested that people value their self-respect and mental health over job stability if the work culture is irredeemably harmful.

Fahller’s advice reflected a pragmatic approach to toxicity: address it internally if possible, but recognize when to prioritize personal well-being over enduring a dysfunctional work environment. While leaving a job is often a difficult decision, Fahller's message was clear -- don’t compromise on respect and support. A healthy team environment where people feel safe and valued is essential not just for individual satisfaction but also for collective success.

2024-11-03 Nikhil Suresh - Skills that programmers need, to defend both their code and their careers - YouTube { www.youtube.com }


In his talk, Nikhil Suresh, the director of Hermit Tech, explores the challenges that software engineers face in the corporate world. He begins with an old animal fable about a scorpion and a frog to illustrate the dynamics between programmers and businesses.

"The scorpion wants to ship a web application but cannot program, so it finds a frog because frogs are incredible programmers."

The scorpion assures the frog that it won't interfere with his work. However, after some time, the scorpion hires an agile consultant and imposes new restrictions, disrupting the frog's workflow. This story mirrors how businesses often unknowingly hinder their own developers.

Nikhil emphasizes that most companies don't know much about software, making it difficult for programmers to clearly indicate their value. He refers to Sturgeon's Law, which states that "90% of everything is bad," highlighting the prevalence of low standards in the industry.

He shares personal experiences where previous engineers lacked basic competence, such as not setting primary keys in databases or causing exorbitant costs due to misconfigured systems. These anecdotes illustrate that businesses cannot tell the difference between good and bad programmers, leading to competent developers being undervalued.

Introducing the concepts of profit centers and cost centers, Nikhil explains that IT departments are often seen as cost centers, affecting how programmers are treated within organizations. He points out that being better at programming isn't always highly valued by companies because they may not see a direct link between technical skill and profit.

To navigate these challenges, Nikhil advises developers to never call themselves programmers. He argues that the term doesn't convey meaningful information and can lead to misconceptions.

"If you tell someone who doesn't program that you're a programmer, their first thought is like, 'Ah, one of those expensive nerds.'"

He recommends reading Patrick McKenzie's article "Don't Call Yourself a Programmer, and Other Career Advice", which offers insights into presenting oneself more effectively in the professional sphere.

Nikhil encourages developers to write about their experiences and share them online. By doing so, they can showcase their unique ideas and differentiate themselves in the field. He believes that your unique ideas are what differentiate you from others and that sharing them helps in building a personal brand.

He also suggests that programmers should read outside of IT and delve into the humanities. This broadens their perspectives and provides valuable analogies for complex ideas. Nikhil shares how his involvement in improvised theater and reading "Impro: Improvisation and the Theatre" by Keith Johnstone helped him understand status dynamics in professional interactions.

Understanding these dynamics allows developers to navigate job interviews and workplace relationships more effectively. Nikhil emphasizes the importance of taking control of your career and making decisions that enhance your value to both yourself and society.

In conclusion, Nikhil urges developers to recognize that technical skill isn't the main barrier to having a better career. Factors like communication, strategic thinking, and understanding corporate dynamics play crucial roles. By focusing on these areas, developers can transform their passion into something that has greater value for both themselves and the broader community.

2024-11-02 Get old, go slow, write code! - Tobias Modig - NDC Oslo 2024 - YouTube { www.youtube.com }


📝 Sustainable Software Development Careers: Aging, Quality, and Longevity in Tech

Introduction
In the fast-evolving world of software development, many professionals feel the pressure to stay young, move fast, and keep up with new trends. But does speed really equal success in this field? This post is for experienced developers, tech managers, and anyone considering a long-term career in software. We'll explore why sustainability in development—focusing on quality, experience, and career longevity—matters and how you can embrace aging as an asset, not a setback.

Why You Should Care
The tech industry often promotes rapid career progression and cutting-edge skills over stability and endurance. However, valuing experience, avoiding burnout, and emphasizing quality over speed are essential for creating durable, impactful software and ensuring personal career satisfaction.

Embracing Aging as a Developer

Many developers worry about becoming irrelevant as they age, yet experience can be a strength. Research shows the average age of developers is among the lowest across professional fields, meaning many leave the field early. However, experience contributes to problem-solving, architectural insights, and higher quality standards. Older developers often provide unique perspectives that younger professionals may lack, particularly in maintaining and improving code quality.

Slowing Down for Quality

Too many developers face intense pressure to deliver quickly, often sacrificing quality. This results in technical debt and rushed code that becomes difficult to maintain. The speaker argues that development is a marathon, not a sprint. Slowing down and building sustainable software creates long-term benefits, even if it appears slower at first. By prioritizing thoughtful coding and taking the time to address technical debt, developers can create resilient, maintainable systems.

Challenges with Traditional Career Progression

Many companies push experienced developers into management roles, which can leave skilled coders dissatisfied and underutilized. Known as the Peter Principle, this approach often results in skilled developers becoming ineffective managers. For those passionate about coding, staying in development roles—rather than climbing the corporate ladder—can offer fulfillment, especially if companies recognize and reward this choice.

Common Reasons Developers Leave the Field

Major reasons include burnout, shifting to roles with higher prestige, and losing the spark for coding. Additionally, aging can lead to insecurities about keeping up. To combat these trends, developers should prioritize work-life balance, take time to learn, and avoid the mindset that career progression has to mean management.

Practical Ways to Build a Sustainable Career

  • Commit to Continuous Learning: Attend conferences, read, and experiment with code to stay current.
  • Focus on Quality over Speed: Embrace practices like regular code reviews, refactoring, and retrospectives to build robust systems.
  • Build Team Trust and Psychological Safety: A supportive environment enhances productivity, allowing team members to grow together.
  • Incorporate Slack Time: Give yourself unstructured time to think, learn, and work creatively, helping avoid burnout and stagnation.

Let Experience Be Your Advantage

Staying relevant as a developer means focusing on the quality of your contributions, leveraging your experience to guide teams, and advocating for sustainable practices that benefit the entire organization. By valuing experience, resisting the rush, and maintaining passion, you can contribute meaningfully to tech at any age.

Quotes

"Getting old in software development is not a liability—it's an asset. Make those gray hairs your biggest advantage and let your experience shine through in quality code."

"Software development is not a sprint; it's a marathon. We need to slow down, find a sustainable pace, and stop rushing to deliver at the expense of quality."

"Don't let your career be dictated by the Peter Principle—just because you're a great developer doesn’t mean you’ll enjoy management. Stay with your passion if it’s coding."

"Poor quality code isn’t just a short-term fix; it’s a long-term burden. Building things right the first time is the fastest way to long-term success."

"There’s no need to be Usain Bolt in development; be more like a marathon runner. Set a steady, sustainable pace, focus on quality, and enjoy the journey."

2024-10-29 The Evolution of Functional Programming in C++ - Abel Sen - ACCU 2024 - YouTube { www.youtube.com }


2024-11-04 Functional C++ - Gašper Ažman - C++Now 2024 - YouTube { www.youtube.com }


This is the procedural version of the code: it is ugly, and the talk's premise is that it should be modernized with functional programming.

// procedural example
#include <cstdlib>  // std::getenv
#include <string>

auto is_hostname_in_args(int, char const* const*) -> bool;
auto get_hostname_from_args(int, char const* const*) -> char const*;

auto get_hostname(int argc, char const* const* argv, std::string default_hostname) -> std::string {
    // Split query / getter
    if (is_hostname_in_args(argc, argv)) {
        // Perhaps... might use optional here too?
        return get_hostname_from_args(argc, argv);
    }

    // Ad-hoc Maybe
    if (char const* maybe_host = std::getenv("SERVICE_HOSTNAME");
        maybe_host != nullptr && *maybe_host != '\0') {
        return maybe_host;
    }

    return default_hostname;
}

Unfortunately, I cannot reproduce the functional version from the talk, because I don't understand it.

2024-11-07 Reintroduction to Generic Programming for C++ Engineers - Nick DeMarco - C++Now 2024 - YouTube { www.youtube.com }


🔥🔥🔥2024-11-06 LEADERSHIP LAB: The Craft of Writing Effectively - YouTube { www.youtube.com }🔥🔥🔥


found in 2024-11-06 Blog Writing for Developers { rmoff.net }

Introduction

Writing isn’t just about sharing information; it’s about making an impact. In this insightful lecture, a distinguished writing instructor from the University of Chicago's Writing Program emphasizes that effective writing requires understanding your audience, establishing relevance, and creating a compelling narrative. This article captures the speaker’s key advice on improving writing by focusing on purpose, value, and the reader's needs.


  1. Focus on Value, Not Originality
  • Advice: The speaker challenges the idea that writing must always present something "new" or "original." Instead, writers should prioritize creating valuable content that resonates with their audience.
  • Application: Rather than striving for originality alone, focus on producing content that addresses the reader’s concerns or questions. A piece of writing is valuable if it enriches the reader’s understanding or helps solve a problem they care about.
  2. Define the Problem Clearly
  • Advice: To make a piece of writing compelling, start by establishing a problem that is relevant to your audience. A well-defined problem creates a sense of instability or inconsistency, which engages readers and positions the writer as a problem-solver.
  • Application: Use contrasting language to highlight instability—words like "but," "however," and "although" signal unresolved issues. This approach shifts the reader’s focus to the problem at hand, making them more receptive to the writer's proposed solution.
  3. Understand and Address Your Reader’s Needs
  • Advice: A writer’s task is to understand the specific needs and concerns of their reading community. This involves identifying problems that resonate with them and framing your thesis or solution in a way that is relevant to their lives or work.
  • Application: In academic and professional settings, locate problems in real-world contexts. Rather than presenting background information, articulate a challenge or inconsistency that is specific to the reader’s field or interests, making your argument compelling and directly relevant.
  4. Use the Language of Costs and Benefits
  • Advice: Writers should make it clear how the identified problem affects the reader directly. Frame issues in terms of "costs" and "benefits" to emphasize why addressing the problem is essential.
  • Application: Highlight the impact of ignoring the problem versus the benefits of solving it. This approach reinforces the relevance of your writing by aligning it with the reader’s motivations and concerns.
  5. Beware of the "Gap" Approach
  • Advice: Avoid using the concept of a "knowledge gap" as the sole justification for writing on a topic. While identifying gaps in research can work, it often lacks the urgency or impact required to engage readers fully.
  • Application: Rather than just pointing out missing information, emphasize the practical implications of filling that gap. Explain how the lack of certain knowledge creates instability or inconsistency in the field, making the need for your insights more compelling.
  6. Adopt a Community-Centric Perspective
  • Advice: Tailor your writing to the specific communities who will read it. Different communities (e.g., narrative historians vs. sociologists) have distinct approaches to problems and value different types of arguments.
  • Application: Define and understand the community of readers your work is meant to serve. Address their concerns directly and frame your argument in terms that align with their unique perspectives and values.
  7. Learn from Published Articles
  • Advice: Published work often contains subtle rhetorical cues about what resonates with readers in a specific field. Study these articles to understand the language, structure, and approach that successful writers use.
  • Application: Identify patterns in the language of published work within your target field. For instance, if a journal commonly uses cost-benefit language, incorporate it into your writing to align with reader expectations.
  8. Emphasize Function Over Form
  • Advice: Writing should serve a clear function beyond just following formal rules. Effective writing achieves its purpose by clearly communicating the problem and its significance to readers.
  • Application: Instead of focusing solely on rules or formalities, think about what your writing needs to accomplish for your audience. Make sure that every section and statement reinforces your overall argument and purpose.

2024-11-08 Developer Joy – How great teams get s%*t done - Sven Peters - NDC Oslo 2024 - YouTube { www.youtube.com }


In today’s fast-evolving tech landscape, “Developer Joy” is emerging as a crucial focus for engineering teams striving to deliver high-quality, innovative software. For those in software engineering or tech management, this concept brings a fresh perspective, shifting away from traditional productivity metrics and emphasizing a developer’s experience, satisfaction, and creativity. By focusing on Developer Joy, teams can foster an environment where developers not only perform optimally but also find deep satisfaction in their craft. This shift is more than just a trend; it’s a rethinking of how we define and sustain productivity in a complex, creative field like software development.

The Problem with Traditional Productivity Metrics

Traditional productivity measures, like lines of code or tasks completed, often fail to capture a developer's real impact. Software development, unlike factory work, requires creativity, problem-solving, and adaptability—traits that are poorly reflected in industrial-era metrics. Instead of simply measuring output, focusing on Developer Joy acknowledges the unique, non-linear nature of coding and innovation.

Developer Joy: A New Approach to Productivity

Developer Joy isn't about doing more in less time; it’s about creating an environment where developers thrive. When developers are joyful, they produce better code, collaborate more effectively, and sustain their motivation over time. Atlassian’s approach to Developer Joy incorporates several elements to support this environment:

  • High-quality Code: Developers enjoy working with well-structured, maintainable code.

  • Progressive Workflows: Fast, friction-free pipelines allow developers to take an idea from concept to deployment quickly.

  • Customer Impact: When developers know they’re making a meaningful difference for users, they feel a greater sense of pride and accomplishment.

Tools and Processes to Foster Developer Joy

To enable Developer Joy, teams at Atlassian have implemented practical solutions:

  • Constructive Code Reviews: By establishing a code review culture where feedback is respectful and constructive, teams can maintain high standards without discouraging or frustrating developers. Guidelines like assuming competence, offering clear reasoning, and avoiding dismissive comments make reviews both productive and uplifting.

  • Flaky Test Detection: The Confluence team developed an internal tool that identifies “flaky tests” (tests that fail intermittently) to save developers from unnecessary debugging. This tool boosts productivity by automating the detection and removal of unreliable tests.

  • The Punit Bot for Review Notifications: Timely code reviews are essential for maintaining team flow. The Punit Bot automatically notifies team members when their input is needed on pull requests, cutting down waiting times and keeping development on track.
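The flaky-test idea above boils down to a simple observation: a test that produces both passes and failures across repeated runs, with no code change, is unreliable. As a toy illustration (my own sketch, not Atlassian's internal tool, and ignoring real-world concerns like environment isolation and statistical thresholds), detection can be as simple as rerunning a test and checking for mixed outcomes:

```cpp
#include <functional>

// Toy sketch: a test is "flaky" if repeated runs of the same test
// yield both passes and failures. (Illustrative only; the Confluence
// team's tool is far more sophisticated.)
bool is_flaky(const std::function<bool()>& test, int runs = 20) {
    bool saw_pass = false;
    bool saw_fail = false;
    for (int i = 0; i < runs; ++i) {
        if (test()) saw_pass = true;
        else        saw_fail = true;
    }
    // Consistent pass or consistent fail is not flaky; a mix is.
    return saw_pass && saw_fail;
}
```

A real system would track outcomes across CI runs and quarantine offenders automatically, but the classification rule is the same mixed-outcome check.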

Cross-Functional, Autonomous Teams

Teams need the freedom to work independently while staying aligned on goals. By embedding key functions within each team (like design, QA, and operations), Atlassian ensures that teams can progress without external dependencies. This “stack interchange” model allows each team to flow without bottlenecks.

Quality Assistance over Quality Assurance

Developers at Atlassian don’t rely solely on QA engineers to validate code. Instead, they partner with QA in the planning stage, gaining insights on testing best practices and writing their own test cases. This approach, called “Quality Assistance,” keeps quality embedded throughout the process and gives developers more control over the software they release.

Collaborating with Product Teams

Effective collaboration with product teams is crucial. Atlassian integrates developers into the full product lifecycle—from understanding the problem to assessing impact after release. This holistic involvement reduces miscommunication, enables rapid adjustments based on early feedback, and fosters a sense of ownership and pride in the end product.

The Developer Joy Survey: Measuring What Matters

To ensure Developer Joy remains high, Atlassian conducts regular “Developer Joy Surveys,” asking developers about their satisfaction in areas such as tool access, wait times, autonomy, and overall work satisfaction. By measuring both satisfaction and importance, teams identify and address specific challenges to ensure joy remains a central part of their development culture.


Notable Quotes and Jokes

  • “Developer Joy is about creating an environment where developers thrive, not just survive.”
  • “If you can’t measure Developer Joy, you’re probably measuring the wrong thing.”
  • “Code reviews should be about learning, not earning jerk points.”
  • “Productivity isn’t about lines of code; it’s about finding joy in the code you write.”

2024-11-09 Herding cats: lessons from 15 years of managing engineers at Microsoft - Kevin Pilch - YouTube { www.youtube.com }


Introduction

Purpose and Relevance
This talk explores the nuances of managing software engineering teams. It’s particularly relevant for new or seasoned managers, especially those transitioning from technical roles to leadership. The speaker, Kevin Pilch, leverages his extensive experience managing engineering teams at Microsoft to provide insights into effective management strategies, challenges, and actionable advice.

Target Audience
Ideal for current and aspiring managers of software engineering teams, as well as individual contributors considering a management path.

Main Content

Coaching vs. Teaching
The emphasis here is on coaching engineers rather than simply teaching them. Coaching means asking questions that encourage team members to find solutions independently, fostering growth and engagement. By using the "ask solution" quadrant approach, managers can guide engineers toward problem-solving rather than directly offering answers, which enhances ownership and accountability.

Focus on Top Performers
Spend more time supporting top performers instead of focusing solely on underperformers. The impact of losing a high performer is significant—they are often highly sought after and can easily find other opportunities. Retaining skilled contributors by offering continuous support and new challenges is essential.

Importance of Self-Evaluation
The self-evaluation process is a valuable opportunity for engineers to reflect on their career paths, skill gaps, and accomplishments. By encouraging engineers to take ownership of self-assessments, managers promote introspection and personal growth, while also creating useful documentation for future managers and potential promotions.

Providing Clear Feedback
When giving performance feedback, it’s essential to avoid “weasel words” and sugarcoating, which soften the message and create misunderstandings. Use specific language that correlates to performance expectations—such as “lower than expected impact”—to ensure feedback is clear, actionable, and direct.

Encouraging Constructive Failure
Allow team members to experience failure on controlled projects to enhance learning and resilience. This approach lets engineers learn from mistakes without jeopardizing critical objectives. By creating “safe-to-fail” environments, managers can frame certain projects as experiments and define success metrics upfront, avoiding sunk cost fallacies and confirmation biases.

Task Assignment Using the ABC Framework
Assign tasks based on complexity relative to each team member’s skill level. Above-level tasks serve as stretch assignments to promote growth, current-level tasks reinforce skills, and below-level tasks include routine but necessary responsibilities that everyone shares. Balancing these types keeps team members challenged and engaged while ensuring essential work is completed.

Motivating Different Personality Types
The SCARF model—Status, Certainty, Autonomy, Relatedness, Fairness—can help recognize diverse motivators across the team. Managers should tailor interactions to each team member’s unique motivators, fostering a supportive environment that avoids triggering negative responses.

2024-11-12 Success On Your Own Terms - Todd Gardner - CPH DevFest 2024 - YouTube { www.youtube.com }


Defining Success on My Own Terms: Lessons from My Journey in Tech

For over 25 years, I've navigated the ever-changing landscape of the tech industry. This journey has been filled with successes, failures, and invaluable lessons that have shaped not only my career but also my understanding of what success truly means. If you're a developer, entrepreneur, or someone contemplating your own path in tech, perhaps my experiences can offer some insights.

The Evolution of Success

My definition of success has shifted throughout my career. It began with a desire for prestige, evolved into a quest for independence, and later transformed into valuing time above all else. I've come to realize that success isn't a fixed destination but a moving target that changes as we grow.

"The definition of success for me has shifted throughout my career. It used to just mean prestige. Then it meant independence, and then it meant time, and it's probably going to change again."

Building Request Metrics

I founded Request Metrics with the goal of addressing a critical problem: web performance. Initially, we focused on client-side observability, aiming to help developers monitor their websites and applications. However, we soon discovered that web performance is a complex issue, laden with constantly changing metrics and definitions.

The Challenge of Web Performance

Developers often struggle with understanding and improving web performance. The industry's metrics seem to continually shift, making it hard to pin down what "fast" truly means. This confusion was costing businesses real money, especially as user expectations for speed grew.

"It turns out developers don't know how to make things fast, and it's a problem that got a lot more important recently because of a thing Google did called the Core Web Vitals."

Google's Core Web Vitals

The game changed when Google introduced Core Web Vitals—a set of metrics that directly impact search rankings. Suddenly, web performance wasn't just a technical concern but a business-critical issue. Companies that relied on SEO for visibility faced tangible consequences if their websites didn't meet these new standards.

"Google said, 'This is how fast you need to be,' and if you don't, you're going to lose page rank. So now this suddenly got way more... now there is a cost to do this. If you are an e-commerce store or you are a content publisher... you care a whole lot about the Core Web Vitals; you care about performance."

Pivoting to Solve Real Problems

Recognizing this shift, we pivoted Request Metrics to focus on helping businesses understand and improve their Core Web Vitals. We developed tools that provide clear, actionable insights into performance issues. By doing so, we addressed a real pain point, offering solutions that companies were willing to invest in to protect their search rankings and user experience.

"We started building a new thing that was all about the Core Web Vitals. It was like, 'This is the problem that we need to solve.' Businesses that depend on their SEO... it's not clear when they're about to lose their SEO ranking because of performance issues. So let's focus on that."

Lessons Learned

Throughout this journey, I've learned several key lessons.

Time is precious. Life is unpredictable, and opportunities can be fleeting. It's crucial to focus on what truly matters and act promptly.

"First, you don't have as much time as you think. This story can end for any one of us tomorrow... It might all be over tomorrow, so do what you think is important."

Embracing uncertainty is essential. Feeling unprepared is natural. Many successful endeavors begin without a clear roadmap. Confidence often comes from taking action and learning along the way.

"Don't worry if you don't know, if you don't feel confident in what you're doing. None of us know what we're doing when we start... They just started and figured it out as they went. You can do that too."

Building relationships is vital. Success isn't achieved in isolation. Cultivating strong relationships and working collaboratively can open doors you never knew existed.

"Remember, no matter what you do or what you want out of life, you need to build relationships with people around you. Don't isolate yourself and think you can solve it all by yourself. Those relationships... are going to pay huge dividends that you could never imagine."

Solving real problems should be a priority. Focus on creating solutions that address genuine needs. If your product solves a real problem, people are more likely to value and pay for it.

"Be sure to build products that actually solve real problems that cost people money. Otherwise, you might find yourself building something really cool that nobody is ever going to pay you for."

Adapting and evolving are necessary. Be prepared to change course. Flexibility is key to staying relevant and achieving long-term fulfillment.

"We found through this we found a problem that was costing money to real people, and this is the path that we're on right now... because now we're solving a problem for people that... it's cheaper to pay us to solve the problem than to deal with the risks."

Taking risks and shipping early can lead to growth. Don't wait for perfection. Launching early allows you to gather feedback and iterate, which is more valuable than holding back out of fear.

"If you're going to build something successful and durable... you're going to need people to help. And be sure to build products that actually solve real problems... But you won't hit them unless you ship something, and if you're not embarrassed of it, you're waiting too long. Just throw something together and get it out there and see if anybody cares."

Moving Forward

As I continue on this path, I understand that my definition of success will keep evolving. What's important is to remain true to oneself, prioritize meaningful work, and leverage relationships to create lasting impact.

2024-11-14 Windows: Under the Covers - From Hello World to Kernel Mode by a Windows Developer - YouTube { www.youtube.com }

image-20241116005812334

For programmers and tech enthusiasts, "Hello World" is a rite of passage, a first step in coding. But behind the simplicity of printing "Hello World" on the screen, there lies a deeply intricate process within the Windows operating system. This article uncovers the fascinating journey that a simple printf command in C takes, from the initial code execution to the text’s appearance on the screen, traversing multiple layers of software and hardware. If you're curious about what happens behind the scenes of an OS or want a glimpse into the hidden magic of programming, this guide is for you.

  1. Starting Point: Writing Hello World in C

    • The classic C code printf("Hello, World!"); initiates the journey. In this line, the printf function doesn't directly display text. Instead, it prepares data for output, setting off a series of calls to the OS to manage the display of the text.
  2. Processing printf: User Mode to Kernel Mode

    • The runtime library processes printf, identifying format specifiers and preparing raw text to be sent to the output. This initiates a function call, like WriteFile or WriteConsole, which interacts with Windows’ Win32 API—a vast interface linking programs to system resources.
    • Kernel32.dll: Despite its name, Kernel32.dll runs in user mode, providing system access without directly entering the kernel. The name is historical; it exposes the user-mode entry points for functions that ultimately need kernel resources, keeping the security boundary intact.
  3. Transitioning with System Calls

    • System calls serve as gates from user mode (where applications operate) to kernel mode (where core OS processes run). Windows dispatches these through the System Service Descriptor Table—historically via the int 2E interrupt, and on modern x64 systems via the syscall instruction—so the transition into kernel mode is controlled and only validated requests reach system resources.
  4. Windows Kernel Processing with ntoskrnl.exe

    • After the system call, ntoskrnl.exe checks permissions and validates parameters to ensure secure execution. This step guarantees the program isn’t making unauthorized access attempts, which fortifies Windows against possible exploits.
  5. Console Management through csrss.exe

    • The Client Server Runtime Subsystem (csrss.exe) manages console windows in user mode. csrss updates the display buffer, which holds the text data ready for rendering. It keeps a two-dimensional array of characters, handling all aspects like color, intensity, and style to maintain the console window’s appearance.
  6. Rendering Text with Graphics Device Interface (GDI)

    • GDI takes over for text rendering within the console, providing essential drawing properties like font and color. The console then relies on the Windows Display Driver Model (WDDM), which bridges communication between software and the graphics hardware.
  7. The GPU and Frame Buffer

    • The GPU receives the data, rendering the text by processing pixel-by-pixel instructions into the frame buffer. This buffer, a region of memory storing display data, holds the image of "Hello World" that will appear on screen. The GPU then sends this image to the display via HDMI or another interface.
  8. From Monitor to Visual Cortex

    • The display presents the text through LED pixels, and from there, light travels to the viewer’s eyes. Visual processing occurs in the brain's visual cortex, ultimately registering "Hello World" in the viewer's consciousness—a culmination of hardware, software, and human biology.

Notable Quotes and Jokes from Dave Plummer:

  • "Imagine the simplest Windows program you could write...but do you know how the magic happens?"
  • "Our journey begins in userland within the heart of your C runtime library."
  • "Calling printf is like sending a messenger on a long cross-country journey from high-level code to low-level bits and back again."
  • "When 'Hello World' pops up on the screen, you’re witnessing the endpoint of a complex, coordinated process..."

2024-11-14 In Prompts We Trust - Jiaranai Keatnuxsuo - CPH DevFest 2024 - YouTube { www.youtube.com }

image-20241116010954387

For those diving into AI applications, especially prompt engineering with generative AI, understanding trust-building and prompt precision is key to leveraging AI effectively. If you’re an AI practitioner, developer, or someone interested in optimizing how language models generate outputs, this guide explores techniques to achieve trustworthy and accurate AI responses. By improving prompt engineering skills, you’ll better navigate the complexities of AI interactions and make your AI applications more reliable, relevant, and valuable.

Core Techniques and Strategies in Prompt Engineering

When working with generative AI, the goal is to create prompts that elicit useful, accurate, and relevant responses. This requires understanding both the technical aspects of prompt engineering and the psychological aspects of trust. Here are key techniques for mastering this process:

The Importance of Trust in AI Outputs

Trust plays a central role in whether users accept or reject AI-generated outputs. As the speaker noted, “Trust is the bridge between the known and the unknown.” For AI to be effective, especially in high-stakes fields like medicine or government applications, users must feel confident in the system’s reliability and fairness. Factors that foster this trust include:

  • Accuracy: Ensuring the output is based on factual information and up-to-date sources.
  • Reliability: Confirming that outputs remain consistent across different scenarios.
  • Personalization: Tailoring responses to individual needs and contexts.
  • Ethics: Adhering to ethical guidelines, avoiding bias, and maintaining cultural sensitivity.

Precision in Prompt Engineering: Essential Techniques

To build trust, prompts need to be structured in a way that maximizes clarity and minimizes ambiguity. Key methods include:

  • Role Prompting: Assigning specific roles, such as “act as a coding assistant,” guides the model in responding within a particular expertise framework. As the speaker shared, “Role prompting is really good in terms of getting it to go find all those billions of web pages it was trained on.”

  • Chain of Thought Prompting: By instructing the model to provide step-by-step reasoning, this method helps in breaking down complex queries and reducing errors. For example, prompting the model to explain each step in a calculation avoids “error piling,” where initial mistakes skew subsequent responses.

  • System Messages: Used primarily by developers, system messages define overarching rules or tones for the AI. These instructions are hidden from the end-user but ensure the model stays consistent, ethical, and aligned with specific guidelines.

Handling AI’s Limitations: Mitigating Hallucinations and Bias

“Hallucination” refers to instances where AI generates plausible-sounding but incorrect information. The speaker explained, “We all think that hallucination is a bug; it’s actually not a bug—it’s a feature, depending on what you’re trying to do.” For applications where accuracy is crucial, employing techniques like Retrieval-Augmented Generation (RAG) helps ground AI responses by referencing reliable external sources.

Optimizing Prompt Parameters for Desired Outputs

Adjusting parameters such as temperature, frequency penalties, and presence penalties can enhance the creativity or precision of AI responses. For example, higher temperatures lead to more creative, varied outputs, while lower settings make responses more predictable and factual. As the speaker noted, “Every word in a prompt matters,” so these settings allow for fine-tuning responses to suit specific needs.

Recap & Call to Action

Effective prompt engineering isn’t just about crafting prompts—it’s about understanding trust and precision. Key strategies include role prompting, step-by-step guidance, and adjusting AI parameters to manage reliability and relevance. Remember, the goal is to enhance user trust by ensuring outputs are clear, relevant, and ethically sound. Try implementing these techniques in your next AI project to see how they impact the quality and trustworthiness of your results.

2024-11-14 Gwern Branwen - How an Anonymous Researcher Predicted AI's Trajectory { www.dwarkeshpatel.com }

image-20241114141855768

Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive.

In order to protect Gwern's anonymity, I proposed interviewing him in person, and having my friend Chris Painter voice over his words after. This amused him enough that he agreed.

2024-11-16 Modern & secure adaptive streaming on the Web - Katarzyna Dusza - CPH DevFest 2024 - YouTube { www.youtube.com }

image-20241115230122104

Introduction

In today’s streaming-centric world, the demand for smooth, high-quality, and secure content playback has never been higher. Whether it’s movies, music, or live broadcasts, users expect seamless experiences across multiple devices and network conditions. For developers and media engineers, understanding adaptive streaming and secure content delivery on the web is critical to meet these demands. This guide dives into adaptive streaming, DRM encryption, and decryption processes, providing the essential tools and concepts to ensure secure, efficient media delivery.

Who This Guide Is For

This guide is intended for software engineers, streaming platform developers, and media engineers focused on optimizing web streaming quality and security. Those interested in learning about adaptive bitrate streaming, DRM protocols, and encryption processes will find valuable insights and practical applications.

2024-11-16 Back to Basics: Unit Testing in C++ - Dave Steffen - CppCon 2024 - YouTube { www.youtube.com }

image-20241116002211334

Introduction

In modern software development, unit testing has become a foundational practice, ensuring that individual components of code—specifically functions—perform as expected. For C++ developers, unit testing offers a rigorous approach to quality control, catching bugs early and enhancing code reliability. This article covers the essentials of unit testing in C++, focusing on why and how to apply it effectively in your projects. Whether you’re an experienced developer or a newcomer in C++, this guide will clarify best practices and introduce powerful frameworks to streamline your testing efforts.

Core Concepts and Challenges in Unit Testing

Understanding Unit Testing in C++
Unit testing verifies the smallest unit of code, usually a function, to confirm it works as intended. Over the past decade, it has become essential for software development projects, preventing critical bugs from reaching production and reducing the risk of project failures. While the concept is straightforward, implementing effective unit tests in C++ brings unique challenges, such as determining what to test and choosing the right framework to manage tests efficiently.

Addressing Key Challenges

  1. Framework Selection: C++ offers various testing frameworks like Catch2, which simplifies setting up unit tests and provides structured error reporting.
  2. Consistent Definitions: Defining what qualifies as a unit test varies across the industry. This inconsistency can complicate efforts to standardize testing practices.
  3. Testing Complexity: Many projects require extensive, comprehensive testing to cover complex logic, edge cases, and integration points without compromising performance.

Implementing Unit Tests Effectively

Using a Framework
Frameworks like Catch2 streamline test organization, allowing developers to structure tests in isolated, repeatable units. They provide clear output, automated reporting, and enable testing of all components, highlighting each failure without halting the entire test process. The framework choice is critical in ensuring that tests are not only functional but also maintainable and understandable.

Structure and Placement of Tests
The closer tests are to the code they evaluate, the easier they are to maintain. Best practices recommend keeping test files within the same project structure, allowing for easy updates and reducing the chance of disconnects between tests and the code they assess.

Scientific Principles in Unit Testing

Effective unit testing is analogous to scientific experimentation. Each test is an “experiment” designed to verify code behavior by testing specific inputs and expected outcomes. Emphasizing falsifiability ensures that tests are objective and replicable, providing clear indications of any issues. Core scientific principles in testing include:

  1. Repeatability and Replicability: Tests should yield consistent results on repeated runs.
  2. Precision and Accuracy: Tests should be specific and unambiguous, with clear indications of success or failure.
  3. Thorough Coverage: Effective tests cover all code paths and edge cases, ensuring all possible scenarios are addressed.

Valid and Invalid Tests: Ensuring Accuracy

Accurate tests provide clear insights into code functionality. Avoid using the code’s output as its own test standard—known as circular logic—because it cannot reliably reveal bugs. Instead, source test expectations from reliable, external standards or reference calculations to ensure validity and rigor.

White Box vs. Black Box Testing Approaches

Two approaches define C++ unit testing:

  • White Box Testing: Tests directly access private code areas using workarounds like friend classes, allowing tests to examine internal states. However, this method ties tests closely to code structure, making future refactoring more challenging.
  • Black Box Testing: Tests only interact with public interfaces, testing expected behaviors from an end-user perspective. Black Box Testing is recommended for maintainability, as it allows refactoring without breaking tests by focusing on behavior rather than code internals.

Behavior-Driven Development (BDD) and Documentation

BDD guides developers to create tests focused on expected behaviors, providing intuitive documentation. Each test names and validates a specific behavior, such as "a new cup is empty," which makes understanding the code straightforward for future developers.

Designing Readable and Maintainable Tests

Readable and maintainable tests are simple and free of unnecessary complexity. Every unit test should focus on a single behavior, making tests easy to interpret and troubleshoot. This clarity is essential for enabling reviewers to understand test intentions without knowing the code intimately.

Test-Driven Development (TDD) and Its Role in Design

TDD reinforces software design by encouraging developers to write tests before code. Known as the Red-Green-Refactor cycle, TDD begins with writing a failing test (Red), creating code to make the test pass (Green), and refining the code (Refactor). This practice minimizes bugs from the outset, refines design, and builds a stable foundation of tests to verify code during refactoring.

· 36 min read

⌚ Nice watch!


2024-08-18 Burnout - When does work start feeling pointless? | DW Documentary - YouTube { www.youtube.com }

image-20240817174213143

High-Level Categories and Subcategories of Problems in the Transcript

1. Workplace Dysfunction

1.1 Bureaucracy and Sabotage

  • Problem: Office life has adopted tactics of sabotage (00:01:13) similar to a WWII manual, where inefficiency is encouraged through endless meetings, paperless offices, and waiting for decisions in larger meetings.

  • Root Cause: Bureaucratic processes have unintentionally adopted methods once used deliberately to disrupt efficiency.

  • Solution: Recognize the signs of sabotage in office routines and seek to streamline decision-making and reduce unnecessary meetings.

1.2 Administrative Bloat

  • Problem: Administrative jobs (00:03:28) have increased from 25% to 75% of the workforce. These include unnecessary supervisory, managerial, and clerical jobs.

  • Root Cause: Expansion of administrative roles rather than reducing workload with technology.

  • Solution: A shift towards more meaningful roles and reducing bureaucratic excess would help in streamlining operations.

2. Employee Burnout and Mental Health

2.1 Physical and Emotional Exhaustion

  • Problem: Burnout (00:10:11) manifests in intense physical exhaustion, to the point of difficulty performing basic tasks, and emotional breakdowns.

  • Root Cause: Overwork, perfectionism, and the pressure to perform.

  • Solution: Recognize the early signs of burnout, reduce workloads, and address stress proactively through support and time off.

2.2 Pluralistic Ignorance

  • Problem: Employees feel isolated, believing they are the only ones struggling (00:15:19), while everyone else seems fine.

  • Root Cause: Lack of open communication about stress and burnout in the workplace.

  • Solution: Encourage honest discussions about workplace difficulties to reduce isolation and collective burnout.

3. Managerial and Leadership Failures

3.1 Misaligned Management Expectations

  • Problem: Many managers are promoted based on tenure or individual performance (00:24:26), rather than leadership skills, leading to poor team management.

  • Root Cause: Promotions based on irrelevant criteria, such as tenure, rather than leadership capability.

  • Solution: Companies need to create pathways for individual contributors to be rewarded without forcing them into management roles.

3.2 Disconnect Between Managers and Employees

  • Problem: Managers often do not engage with employees on a personal level (00:26:32), leading to isolation and poor job satisfaction.

  • Root Cause: Lack of training for managers to build relationships with their teams.

  • Solution: Managers should be trained in emotional intelligence and encouraged to have personal conversations with employees.

4. Corporate Culture and Value Conflicts

4.1 Corporate Reorganizations

  • Problem: Reorganizations, layoffs, and restructuring cause ongoing stress for employees (00:34:28). People live in fear of losing their jobs despite hard work.

  • Root Cause: Frequent corporate restructuring often lacks a clear purpose beyond satisfying financial analysts or stockholders.

  • Solution: Limit reorganizations to only when necessary and focus on transparent communication to reduce employee anxiety.

4.2 Cynicism Due to Unfair Treatment

  • Problem: When workplaces are seen as unfair (00:46:43), cynicism grows, leading to a toxic environment.

  • Root Cause: Lack of transparency and fairness in company policies and actions, leading to distrust.

  • Solution: Implement fair policies and involve employees in decision-making to reduce feelings of exploitation.

5. Misalignment of Work and Purpose

5.1 Lack of Value in Work

  • Problem: Employees feel their work lacks social value (00:33:00). Despite hard work, they see no real-world impact or meaning.
  • Root Cause: The economic system rewards meaningless work more than jobs that provide immediate, tangible benefits to society.
  • Solution: Employers should align tasks with broader human values and ensure that workers understand the social impact of their contributions.

Summary of Key Problems and Solutions

  1. Workplace Dysfunction: Bureaucratic inefficiency, administrative bloat, and unnecessary meetings create a sense of sabotage in modern offices. Solution: Streamline decision-making and reduce bureaucratic roles.
  2. Employee Burnout: Burnout is widespread due to overwork, isolation, and emotional stress. Solution: Acknowledge the signs of burnout, reduce workload, and foster open communication.
  3. Managerial Failures: Many managers lack the skills to lead effectively, causing disengagement and poor team dynamics. Solution: Train managers in leadership and emotional intelligence.
  4. Corporate Culture: Frequent reorganizations and unfair treatment create cynicism and stress among employees. Solution: Ensure fair policies and minimize unnecessary restructurings.
  5. Lack of Meaningful Work: Employees feel disconnected from the social value of their work, seeing it as pointless. Solution: Align work tasks with human values and meaningful contributions.

The most critical issues are employee burnout and the disconnect between management and workers, both of which contribute to widespread dissatisfaction and inefficiency in workplaces. Addressing these through better leadership training, reducing unnecessary work, and improving workplace communication can lead to healthier, more engaged employees.

2024-10-13 How to Spend 14 Days in JAPAN 🇯🇵 Ultimate Travel Itinerary - YouTube { www.youtube.com }

image-20241013110107937

Here’s a streamlined travel plan for visiting some of Japan’s most iconic destinations, focusing on the essential experiences in each place. Follow this itinerary for a mix of history, nature, and food.

1. Shirakawago
Start your journey in Shirakawago, a mountain village known for its traditional Gassho-zukuri farmhouses and heavy winter snowfall. The buildings are arranged facing north to south to minimize wind resistance. Stay overnight in one of the farmhouses to fully experience the town.

  • Don't miss: The House of Pudding, serving Japan’s best custard pudding (2023 winner).

2. Takayama
Head to Takayama, a town in the Central Japan Alps, filled with traditional architecture and a retro vibe. Walk through the Old Town, and visit the Takayama Showa Museum, which perfectly captures Japan in the 1950s and 60s.

  • Must-try food: Hida Wagyu beef is a local specialty, available in street food stalls or restaurants. You can enjoy a stick of wagyu for around 600 yen.

3. Kyoto
Next, visit the cultural capital, Kyoto, and stay in a Machiya townhouse in the Higashiyama district for an authentic experience. Kyoto offers endless shrines and temples to explore.

  • Fushimi Inari Shrine: Famous for its 10,000 red Torii gates leading up Mount Inari. The gates are donated by businesses for good fortune.
  • Kinkakuji (Golden Pavilion): One of Kyoto’s most iconic landmarks, glistening in the sunlight.
  • Tenryuji Temple: A 14th-century Zen temple with a garden and pond, virtually unchanged for 700 years.

4. Nara
Travel to Nara, a smaller city where you can explore the famous Nara Park, home to 1,200 friendly deer. You can bow to the deer, and they'll bow back if they see you have crackers.

  • Todaiji Temple: Visit the 49-foot-tall Buddha and try squeezing through the pillar’s hole (said to grant enlightenment).
  • Yomogi Mochi: Don’t miss this chewy rice cake treat filled with red bean paste, but eat it carefully!

5. Osaka
End your trip in Osaka, known as the nation’s kitchen. Stay near Dotonbori to experience the neon lights and vibrant nightlife.

  • Takoyaki: Grab some fried octopus balls, Osaka’s most famous street food, but be careful—they’re hot!
  • Osaka Castle: Explore this iconic castle, though the interior is a modern museum.

This travel plan covers historical landmarks, must-try local foods, and unique cultural experiences, offering a comprehensive taste of Japan.

2024-10-12 How to Delete Code - Matthew Jones - ACCU 2024 - YouTube { www.youtube.com }

image-20241012110250287

Quote from attendee:

"Code is a cost. Code is not an asset. We should have less of it, not more of it."

Other thoughts on this topic:

Martin Fowler (Agile advocate and software development thought leader) has expressed similar thoughts in his writings. In his blog post "Code as a Liability," he explains that every line of code comes with maintenance costs, and the more code you have, the more resources are needed to manage it over time:

"The more code you have, the more bugs you have. The more code you have, the harder it is to make changes."

John Ousterhout, a professor and computer scientist, has echoed this in his book "A Philosophy of Software Design." He talks about code complexity and how more code often means more complexity, which in turn leads to more problems in the future:

"The most important thing is to keep your code base as simple as possible."

(GPT Summary)

Cppcheck - A tool for static C/C++ code analysis

  1. Dead Code Identification and Removal

    • Importance of removing dead code: Dead code clutters the codebase, adds complexity, and increases maintenance costs. Action: Actively look for dead functions or features that are no longer in use. For example, if a feature has been deprecated but not fully removed, ensure its code is deleted.
    • Techniques for identifying dead code: Use tools like static analysis, manual code review, or testing. Action: Rename the suspected dead function, rebuild, and let the compiler flag errors where the function is still being used.
    • Using static analysis and compilers: These tools help identify unreachable or unused code. Action: Regularly run tools like Cppcheck or the Clang Static Analyzer in your CI pipeline to detect dead code.
    • Renaming functions to detect dead code: A simple way to identify unused code. Action: Rename a function (e.g., myFunction to myFunction_old), and see if it causes errors during the build process. If not, the function is likely dead and can be safely removed.
    • Deleting dead features and their subtle dependencies: Features often have dependencies that may be missed. Action: When removing a dead feature, check for subtle references, such as menu items, command-line flags, or other parts of the system that may still rely on it.
  2. Caution with Large Codebase Changes

    • Taking small, careful steps: Removing too much at once can lead to major issues. Action: Remove a small function or part of the code, test, and repeat. For example, instead of removing an entire module, start with one function.
    • Avoiding aggressive feature removal: Over-removal can cause unexpected failures. Action: Approach code deletion incrementally. Don’t aim to delete an entire feature at once; instead, tease out its components slowly to avoid breaking dependencies.
    • Moving code to reduce scope: If code is not needed at the global scope, move it to a more local context. Action: Move public functions from header files to .cpp files and see if any errors occur. This can help isolate the function’s scope and make it easier to remove later.
    • Risk of breaking builds: Avoid breaking the build with massive deletions. Action: Ensure you take incremental steps, test continuously, and use atomic commits to revert small changes if needed.
  3. Refactoring Approaches

    • Iterative refactoring and deletion: Refactor code in small steps to ensure stability. Action: When removing a dead function, check what other code depends on it. If a function calling it becomes unused, continue refactoring iteratively.
    • Refactoring legacy code: Legacy code can often hide dead functions. Action: Slowly reduce the scope of legacy functions by moving them to lower levels (like .cpp files) to see if their usage drops. If not used anymore, delete them.
    • Using unit tests for refactoring: Ensure that code works after refactoring. Action: Wrap legacy string classes or custom utility functions in unit tests, then replace the core logic with modern STL alternatives. If the tests pass, the old code can be removed safely.
    • Replacing custom features with third-party libraries: Many custom solutions from the past can now be replaced by modern libraries. Action: If you have a custom logger class, consider replacing it with a more standardized and robust library like spdlog.
  4. Working with Tools

    • Using plugins or IDEs: Most modern IDEs can help identify dead code. Action: Use Visual Studio or IntelliJ plugins that flag unreachable code or highlight unused functions.
    • Leveraging Compiler Explorer: Use online tools to isolate and test specific snippets of code. Action: If you can’t refactor in the main codebase, copy the function into Compiler Explorer (godbolt.org) and experiment with it there before making changes.
    • Setting compiler flags: Enable warnings for unreachable or unused code. Action: Use -Wall or -Wextra in GCC or Clang to flag potentially dead code. For example, set -Wextra in your build system to catch unused variables and unreachable code.
    • Running static analysis tools: Integrate tools like CPPCheck into your CI pipeline. Action: Add CPPCheck to Jenkins and run it with --enable=unusedFunction to detect dead functions across multiple translation units.
  5. Source Control Best Practices

    • Atomic commits: Always break down deletions into small, reversible changes. Action: Commit changes one at a time and with meaningful messages, such as "Deleted unused function myFunction()." This allows you to easily revert just one commit if needed.
    • Small steps and green builds: Ensure the build passes after each commit. Action: Commit your changes, wait for the CI pipeline to return a green build, and only proceed if everything passes.
    • Keeping history in the main branch: Deleting code in a branch risks losing history. Action: Perform deletions in the main branch with proper commit messages. In Git, avoid squashing commits when merging deletions, as this may obscure your work history.
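A sketch of the atomic-commit workflow with Git (repository and file names are hypothetical):

```shell
# One change per commit, each with a descriptive message, so any single
# deletion can be reverted without touching unrelated work.
git init -q demo
git -C demo config user.email dev@example.com
git -C demo config user.name Dev
printf 'int myFunction() { return 1; }\n' > demo/legacy.cpp
git -C demo add legacy.cpp
git -C demo commit -qm "Add legacy.cpp"

# The deletion is its own atomic commit.
rm demo/legacy.cpp
git -C demo add -A
git -C demo commit -qm "Delete unused function myFunction()"

# If the deletion proves wrong, revert exactly that one commit.
git -C demo revert -n HEAD
test -f demo/legacy.cpp && echo restored
```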
  6. Communication and Collaboration

    • Educating teams about dead code: Not everyone understands the importance of cleaning up dead code. Action: When you find dead code, educate the team by documenting what you’ve removed and why.
    • Communicating when deleting shared code: Deleting code that others may rely on needs consensus. Action: Start a conversation with the team and document the code you intend to delete. Make sure the removal won’t disrupt anyone’s work.
    • Seasonal refactoring: Pick quieter periods like holidays for large-scale refactoring. Action: Plan code cleanups during slower times (e.g., Christmas or summer) when fewer developers are working. For example, take the three days between Christmas and New Year to remove unused code while avoiding merge conflicts.
  7. Handling Legacy Features

    • Addressing dead features tied to legacy systems: These can be tricky to remove without causing issues. Action: Mark features as deprecated first, communicate with stakeholders, and plan their removal after a safe period.
    • Managing end-of-life features carefully: Inform customers and stakeholders before removing any external-facing features. Action: Announce the feature’s end-of-life, allow time for feedback, and only remove the feature after this period (e.g., six months).
  8. Miscellaneous Code Cleanup

    • Removing unnecessary includes: Many includes are added but never removed. Action: Comment out all include statements at the top of a file, then add them back one by one to see which ones are actually needed.
    • Deleting repeated or needless code: Repeated code should be factored into functions or libraries. Action: If you find duplicated code, refactor it into a helper function or a shared library to reduce repetition.
  9. Comments in Code

    • Avoiding inane comments: Comments that explain obvious code operations are distracting. Action: Delete comments like “// increment i by 1” that explain simple logic you can deduce from reading the code.
    • Recognizing risks in outdated comments: Old comments can hide the fact that code has changed. Action: When refactoring, ensure comments are either updated or removed to avoid misleading information about the code’s purpose.
    • Focusing on clean code: Let the code speak for itself. Action: Favor well-written, self-explanatory code that requires minimal commenting. For instance, use descriptive function names like calculateTotal() instead of adding comments like “// This function calculates the total.”
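The naming advice in one line of code (the names are illustrative): with a descriptive name, the comment would add nothing.

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// The name says what the function does, so no
// "// This function calculates the total" comment is needed.
int calculateTotal(const std::vector<int>& prices) {
    return std::accumulate(prices.begin(), prices.end(), 0);
}
```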
  10. When to Delete Code

    • Timing deletions carefully: Avoid risky deletions right before a release. Action: Plan large code cleanups in advance, and avoid removing any code near a major product release when stability is crucial.
    • Refactoring during quiet periods: Use downtimes, such as post-release, for cleanup. Action: After a major release or during holidays, revisit old tasks marked for deletion.
    • Tracking deletions in the backlog: Use a backlog to schedule code deletions that can’t be done immediately. Action: Create a "technical debt" section in your backlog and record all dead code identified for future cleanup.
  11. Final Thoughts on Refactoring

    • Challenging bad habits: Sometimes teams resist deleting old code. Action: Slowly introduce refactoring practices, starting small to show the benefits.
    • Measuring and recording progress: Keep track of all dead code and document changes. Action: Use tools like Jira to track deletions and improvements in code health.
    • Deleting responsibly: Don’t delete code just for the sake of it. Action: Ensure that deleted code is truly unused and won’t cause issues down the line. For example, test thoroughly before removing any core functionality.

2024-09-29 Insights From an L7 Meta Manager: Interviews, Onboarding, and Building Trust - YouTube { www.youtube.com }

image-20241013110406428

High-Level Categories of Problems and Solutions

1. Onboarding and Adjustment in New Senior Roles

  • Problem: Senior engineers often struggle when transitioning to new companies, particularly in adjusting to different company cultures and technical structures.

    • Context: Moving between large tech companies like Amazon and Meta presents challenges due to different coding practices (e.g., service-oriented architecture vs. monorepo) and operational structures.

    • Root Cause: A mismatch between previous experiences and new company environments.

    • Solution: Avoid trying to change the new environment immediately. Instead, focus on learning and adapting to the culture. Build trust with the team over six to seven months before attempting major changes.

    • Timestamp: 00:03:30

    • Quote:

      "If you go join another company, you've got a lot to learn, you've got a lot of relationships to build, and you ultimately need to figure out how to generalize your skill set."

2. Building Trust and Relationships in Senior Roles

  • Problem: Senior engineers often fail to invest time in building relationships and trust with new teams.

    • Context: New senior engineers may rush into projects without first establishing rapport with their colleagues.

    • Root Cause: Lack of emphasis on trust-building leads to resistance from teams.

    • Solution: Dedicate the first few months to relationship-building and understanding the team’s dynamics. Don’t attempt large projects right away.

    • Timestamp: 00:05:00

    • Quote:

      "If you rush that process, you're going to be in for a hell of a lot of resistance."

3. Poor Ramp-up Periods for New Engineers

  • Problem: New hires are often not given enough time to ramp up before being evaluated in performance reviews.

    • Context: Lack of structured ramp-up time for new senior hires can lead to poor performance evaluations early on.

    • Root Cause: Managers failing to allocate sufficient time for new employees to learn and adapt.

    • Solution: Managers should provide clear onboarding timelines (6-7 months) for engineers to integrate into teams, with gradual increases in responsibility.

    • Timestamp: 00:09:00

    • Quote:

      "The main thing that we did is just basically give them a budget of some time... to build up their skill set and trust with the team."

4. Mistakes in Adapting to New Cultures

  • Problem: Senior engineers often try to change new environments too quickly, leading to friction.

    • Context: Engineers accustomed to one type of tech stack or organizational process may attempt to enforce old methods in a new setting.

    • Root Cause: Engineers feel uncomfortable in the new culture and attempt to recreate their old environment.

    • Solution: Focus on understanding the reasons behind the new company's practices before suggesting any changes.

    • Timestamp: 00:07:00

    • Quote:

      "Failure mode... is to try to change everything... and that's almost always the wrong approach."

Performance Reviews and Evaluations

5. Misunderstanding the Performance Review Process

  • Problem: Engineers sometimes misunderstand how they are evaluated in performance reviews, especially during their first year.

    • Context: There’s often confusion about how contributions during the onboarding period are assessed.

    • Root Cause: Lack of transparency or communication from managers regarding performance criteria.

    • Solution: Managers must clarify performance expectations and calibration processes, while engineers should ask for regular feedback to stay on track.

    • Timestamp: 00:10:00

    • Quote:

      "Some managers just don't do a good job of actually setting the stage for new hires."

6. Lack of Visibility in Performance Reviews

  • Problem: Senior engineers often fail to showcase their work to the broader team, limiting their visibility in performance reviews.

    • Context: In larger organizations, a single manager is not solely responsible for performance evaluations. Feedback from other team members and leadership is critical.

    • Root Cause: Not socializing work with peers or senior leadership.

    • Solution: Regularly communicate your contributions to multiple stakeholders, not just your direct manager.

    • Timestamp: 00:14:00

    • Quote:

      "Socialize the work that you're doing with those other people... it's even better if you've had a chance to actually talk with them."

7. Taking on Projects Too Early

  • Problem: Engineers may overestimate their readiness and take on large projects too soon after joining a new company.

    • Context: Jumping into big projects without adequate preparation can lead to mistakes and strained relationships.

    • Root Cause: Lack of patience and eagerness to prove oneself.

    • Solution: Focus on smaller tasks and gradually scale up responsibility after establishing trust and familiarity with the environment.

    • Timestamp: 00:06:30

    • Quote:

      "Picking up a massive project as soon as you join a company is probably not the best idea."

Behavioral and Technical Interviews

8. Lack of Depth in Behavioral Interviews

  • Problem: Engineers often struggle with behavioral interviews, particularly when it comes to self-promotion and clearly discussing their impact.

    • Context: Senior engineers may downplay their role in leading large projects, failing to convey their leadership and influence.

    • Root Cause: Engineers often feel uncomfortable talking about their own contributions.

    • Solution: Engineers need to learn how to take credit for their work and articulate the complexity of their projects in interviews.

    • Timestamp: 00:19:00

    • Quote:

      "If you simply talk about your team and you aren't framing this as you driving, it doesn't demonstrate the level that I'm looking for."

9. Over-Reliance on Rehearsed Answers in Design Interviews

  • Problem: In design interviews, engineers sometimes rely on rehearsed answers, which doesn’t showcase their real problem-solving abilities.

    • Context: Instead of improvising, engineers often recite previously learned solutions that don't apply to the specific design problem at hand.

    • Root Cause: A lack of confidence in applying their experience to new problems.

    • Solution: Approach design problems creatively by focusing on unique elements of the task and how past experience can offer novel solutions.

    • Timestamp: 00:17:00

    • Quote:

      "You're really supposed to be scribbling outside the lines."

Key Problems and Their Solutions Summary:

  1. Onboarding and Adjustment: Senior engineers often face challenges adapting to new company cultures. Solution: Focus on learning the environment, and avoid trying to change it too quickly.
  2. Trust and Relationships: Lack of relationship-building leads to resistance. Solution: Take time to build rapport and trust with the team before diving into big projects.
  3. Performance Reviews: New hires may not understand performance expectations. Solution: Ensure transparency in review processes and socialize your contributions with key stakeholders.
  4. Interviews: Engineers may struggle in behavioral and design interviews. Solution: Take ownership of your contributions and avoid relying on rehearsed answers.

These are the most critical problems discussed in the transcript, with clear, actionable advice for each.

2024-09-24 LLMs gone wild - Tess Ferrandez-Norlander - NDC Oslo 2024 - YouTube { www.youtube.com }

Tess Ferrandez-Norlander (works at Microsoft)

image-20240923230759509

image-20240923231052659

2024-09-24 Overview - Chainlit { docs.chainlit.io }

image-20240923232539521

image-20240924110712641

2024-09-24 2406.04369 RAG Does Not Work for Enterprises { arxiv.org }

image-20240924111707846

2024-09-26 Your website does not need JavaScript - Amy Kapernick - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20240926005552838

No JS (amyskapers.dev)

image-20240926235229401

2024-09-27 amykapernick/no_js { github.com }

2024-08-24 Reducing C++ Compilation Times Through Good Design - Andrew Pearcy - ACCU 2024 - YouTube { www.youtube.com }

image-20241013111059590

  1. Precompiled Headers: One of the most effective methods is using precompiled headers (PCH). This technique involves compiling the header files into an intermediate form that can be reused across different compilation units. By doing so, you significantly reduce the need to repeatedly process these files, cutting down the overall compilation time. Tools like CMake can automate this by managing dependencies and ensuring headers are correctly precompiled and reused across builds.

  2. Parallel Compilation: Another approach is parallel compilation. Tools like Make, Ninja, and distcc allow you to compile multiple files simultaneously, taking advantage of multi-core processors. For instance, using the -j flag in make or ninja enables you to specify the number of jobs (i.e., compilation tasks) to run in parallel, which can dramatically reduce the time it takes to compile large projects.

  3. Unity Builds: Unity builds are another technique where multiple source files are compiled together as a single compilation unit. This reduces the overhead caused by multiple compiler invocations and can be particularly useful for large codebases. However, unity builds can introduce some challenges, such as longer error messages and potential name collisions, so they should be used selectively.

  4. Code Optimization: Structuring your code to minimize dependencies can also be highly effective. Techniques include forward declarations, splitting projects into smaller modules with fewer interdependencies, and replacing heavyweight standard library headers with lighter alternatives when possible. By reducing the number of dependencies that need to be recompiled when a change is made, you can significantly decrease compile times.

  5. Caching Compilation Results: Tools like ccache store previous compilation results, which can be reused if the source files haven’t changed. This approach is particularly useful in development environments where small, incremental changes are frequent.
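The parallel-compilation point can be demonstrated with a toy Makefile (the project layout is invented; `-j` is the actual flag both Make and Ninja use to set the number of parallel jobs):

```shell
# Three independent "compile" rules stand in for real compiler calls.
mkdir -p proj
printf 'all: a.o b.o c.o\n%%.o: %%.c\n\tcp $< $@\n' > proj/Makefile
touch proj/a.c proj/b.c proj/c.c

# -j3 lets Make run the three rules concurrently instead of serially.
make -C proj -j3
ls proj
```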

Here is the detailed digest from Andrew Pearcy's talk on "Reducing Compilation Times Through Good Design", along with the relevant project homepages and tools referenced throughout the discussion.

Video Title: Reducing Compilation Times Through Good Design

Andrew Pearcy, an engineering team lead at Bloomberg, outlines strategies for significantly reducing C++ compilation times. The talk draws from his experience of cutting build times from one hour to just six minutes, emphasizing practical techniques applicable in various C++ projects.

Motivation for Reducing Compilation Times

Pearcy starts by explaining the critical need to reduce compilation times. Long build times lead to context switching, reduced productivity, and delays in CI pipelines, affecting both local development experience and time to market. Additionally, longer compilation times make adopting static analysis tools like Clang-Tidy impractical due to the additional overhead. Reducing compilation time also optimizes resource utilization, especially in large companies where multiple machines are involved.

Overview of the C++ Compilation Model

He recaps the C++ compilation model, breaking it down into phases: pre-processing, compilation, and linking. The focus is primarily on the first two stages. Pearcy notes that large header files and unnecessary includes can significantly inflate the amount of code the compiler must process, which in turn increases build time.

Quick Wins: Build System, Linkers, and Compiler Caching

1. Build System:

  • Ninja: Pearcy recommends using Ninja instead of Make for better dependency tracking and faster incremental builds. Ninja was designed for Google's Chromium project and can often be an order of magnitude faster than Make. It utilizes all available cores by default, improving build efficiency.
  • Ninja Documentation: Ninja Build System

2. Linkers:

  • LLD and Mold: He suggests switching to LLD, a faster alternative to the default linker, LD. Mold, a modern linker written by Rui Ueyama (who also worked on LLD), is even faster but consumes more memory; it is open source on Unix platforms, while the macOS and Windows versions are paid products.
  • LLD: LLVM Project - LLD
  • Mold: Mold: A Modern Linker

3. Compiler Caching:

  • Ccache: Pearcy strongly recommends Ccache for caching compilation results to speed up rebuilds by avoiding recompilation of unchanged files. This tool can be integrated into CI pipelines to share cache across users, which can drastically reduce build times.
  • Ccache: Ccache

Detailed Techniques to Reduce Build Times

1. Forward Declarations:

  • Pearcy emphasizes the use of forward declarations in headers to reduce unnecessary includes, which can prevent large headers from being included transitively across multiple translation units. This reduces the amount of code the compiler needs to process.
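A minimal sketch of the forward-declaration technique (class and function names are hypothetical). A header that only passes `Widget` by pointer or reference can get away with the one-line declaration; only the implementation file needs the full definition:

```cpp
#include <cassert>

// --- header-style region: a forward declaration is enough here ---
class Widget;                 // no #include of the full class needed
int idOf(const Widget* w);    // declarable with the incomplete type

// --- .cpp-style region: the definition lives where members are used ---
class Widget {
public:
    explicit Widget(int id) : id_(id) {}
    int id() const { return id_; }
private:
    int id_;
};

int idOf(const Widget* w) { return w->id(); }
```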

2. Removing Unused Includes:

  • He discusses the challenge of identifying and removing unused includes, mentioning tools like Include What You Use and Graphviz to visualize dependencies and find unnecessary includes.
  • Include What You Use: Include What You Use
  • Graphviz: Graphviz

3. Splitting Protocol and Implementation:

  • To reduce dependency on large headers, he suggests the Pimpl (Pointer to Implementation) Idiom or creating interfaces that hide the implementation details. This technique helps in isolating the implementation in a single place, reducing the amount of code the compiler needs to process in other translation units.
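A condensed single-file sketch of the Pimpl idiom (class names are hypothetical). Clients see only a pointer to a forward-declared `Impl`, so changes to `Impl`'s fields do not force client recompilation:

```cpp
#include <cassert>
#include <memory>

// --- header-style region ---
class Engine {
public:
    Engine();
    ~Engine();                    // defined where Impl is complete
    int rpm() const;
private:
    struct Impl;                  // forward-declared implementation
    std::unique_ptr<Impl> impl_;  // the only member clients ever see
};

// --- .cpp-style region: Impl can change without touching clients ---
struct Engine::Impl {
    int idleRpm = 900;
};

Engine::Engine() : impl_(std::make_unique<Impl>()) {}
Engine::~Engine() = default;
int Engine::rpm() const { return impl_->idleRpm; }
```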

4. Precompiled Headers (PCH):

  • Using precompiled headers for frequently included but rarely changed files, such as standard library headers, can significantly reduce build times. However, he warns against overusing PCHs as they can lead to diminishing returns if too many headers are precompiled.
  • CMake added support for PCH in version 3.16, allowing easy integration into the build process.
  • CMake Precompiled Headers: CMake Documentation
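A minimal CMake sketch of this (target and header names are hypothetical; `target_precompile_headers` is the real command added in CMake 3.16):

```cmake
add_executable(app main.cpp)

# Precompile heavy, rarely changing headers once and reuse the result
# in every translation unit of the target.
target_precompile_headers(app PRIVATE
    <vector>
    <string>
    "common_types.h"   # project-wide header included almost everywhere
)
```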

5. Unity Build:

  • Pearcy introduces Unity builds, where multiple translation units are combined into a single one, reducing redundant processing of headers and improving build times. This technique is particularly effective in reducing overall build times but can introduce issues like naming collisions in anonymous namespaces.
  • CMake provides built-in support for Unity builds, with options to batch files to balance parallelization and memory usage.
  • Unity Build Documentation: CMake Unity Builds
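And the corresponding CMake sketch for a Unity build (target and file names are hypothetical; `UNITY_BUILD` and `UNITY_BUILD_BATCH_SIZE` are the real target properties):

```cmake
add_library(core src/a.cpp src/b.cpp src/c.cpp)

set_target_properties(core PROPERTIES
    UNITY_BUILD ON            # merge sources into jumbo translation units
    UNITY_BUILD_BATCH_SIZE 8  # sources per unit: trades parallelism for memory
)
```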

2024-07-26 Turbocharged: Writing High-Performance C# and .NET Code - Steve Gordon - NDC Oslo 2024 - YouTube

image-20241013111239413

Turbocharging Your .NET Code with High-Performance APIs

Steve, a Microsoft MVP and engineer at Elastic, discusses various high-performance APIs in .NET that can optimize application performance. The session covers measuring and improving performance, focusing on execution time, throughput, and memory allocations.

Performance in Application Code

Performance is measured by how quickly code executes, the throughput (how many tasks an application can handle in a given timeframe), and memory allocations. High memory allocations can lead to frequent garbage collections, impacting performance. Steve emphasizes that performance optimization is contextual, meaning not every application requires the same level of optimization.

Optimization Cycle

The optimization cycle involves measuring current performance, making small changes, and re-measuring to ensure improvements. Tools like Visual Studio profiling, PerfView, and JetBrains products are useful for profiling and measuring performance. BenchmarkDotNet is highlighted for micro-benchmarking, providing precise measurements by running benchmarks multiple times to get accurate data.

High-Performance Code Techniques

  1. Span<T>: A type that provides a read/write view over contiguous memory, allowing for efficient slicing and memory operations. It is highly efficient with constant-time operations for slicing.
  2. Array Pool: A pool for reusing arrays to avoid frequent allocations and deallocations. Using the ArrayPool<T>.Shared pool allows for efficient memory reuse, reducing short-lived allocations.
  3. System.IO.Pipelines: Optimizes reading and writing streams by managing buffers and minimizing overhead. It is particularly useful in scenarios like high-performance web servers.
  4. System.Text.Json: A high-performance JSON API introduced in .NET Core 3. It includes low-level Utf8JsonReader and Utf8JsonWriter for zero-allocation JSON parsing, as well as higher-level APIs for serialization and deserialization.

Examples and Benchmarks

Steve presents examples of using these APIs in real-world scenarios, demonstrating significant performance gains. For instance, using Span<T> and ArrayPool in a method that processes arrays and messages led to reduced execution time and memory allocations. Switching to System.IO.Pipelines and System.Text.Json resulted in similar improvements.

"Slicing is really just changing the view over an existing block of memory... it's a constant time, constant cost operation."

"Measure your code, don’t assume, don’t make assumptions with benchmarks, it’s dangerous."

Conclusion

Optimizing .NET code with high-performance APIs requires careful measurement and iterative improvements. While not all applications need such optimizations, those that do can benefit from significant performance gains. Steve concludes by recommending the book "Pro .NET Memory Management" for a deeper understanding of memory management in .NET.

2024-07-07 Theo - t3.gg: My Spiciest Take On Tech Hiring - YouTube { www.youtube.com/@t3dotgg }

2024-07-07 Haskell for all: My spiciest take on tech hiring

image-20240707101322999

High-Level Categories of Problems

  1. Tech Hiring Process Issues

    • Too Many Interviews: Problem: Candidates face multiple rounds of interviews (up to seven), causing frustration and inefficiency. Many find it counterproductive to go through so many technical interviews. Root Cause: Overly complex hiring processes that assume more interviews lead to better candidates. Advice: Implement a streamlined process with just one technical interview and one non-technical interview, each lasting no more than one hour. Long interview processes are unnecessary and may filter out good candidates.

    • Interview Redundancy: Problem: The same type of technical questions are asked repeatedly across different interviews, leading to duplication. Root Cause: Lack of coordination among interviewers and reliance on similar types of technical questions. Advice: Ensure each interviewer asks unique, relevant questions and does not rely on others to gather the same information. Interviewers should bear ultimate responsibility for gathering critical data.

    • Bias in Hiring: Problem: Interview processes are biased because hiring managers may already have preferred candidates (referrals, strong portfolios) before the process begins. Root Cause: Pre-existing relationships with candidates or prior work experience influence decisions. Advice: Avoid dragging out the process to mask biases; shorter, efficient interviews can make the bias more visible but manageable. Long processes don't necessarily filter out bias.

    • Long Interview Processes Favor Privilege: Problem: Prolonged interview panels select for candidates who can afford to take time off work, favoring those from more privileged backgrounds. Root Cause: Candidates from less privileged backgrounds cannot afford to engage in drawn-out interviews. Advice: Shorten the interview length and focus on relevant qualifications. Ensure accessibility for all candidates by keeping the process simple.

  2. Interview Process Structure

    • Diffusion of Responsibility: Problem: In group interview settings, responsibility for hiring decisions is diffused, leading to poor or delayed decision-making. Root Cause: No single person feels accountable for making the final decision. Advice: Assign ownership of decisions by giving specific interviewers responsibility for crucial aspects of the process. This reduces the likelihood of indecision and delayed outcomes.

    • Hiring Based on Team Fit vs. Technical Ability: Problem: Emphasis on technical abilities often overshadows the importance of team compatibility. Root Cause: Focus on technical skills without considering cultural and interpersonal dynamics within the team. Advice: Ensure that interviews assess not only technical competence but also how well candidates fit into the team dynamic. Incorporate group discussions or casual settings (e.g., lunch meetings) to gauge team vibe.

    • Ambiguity in Interviewer Opinions: Problem: Some interviewers avoid committing to clear opinions about candidates, preferring neutral stances. Root Cause: Lack of confidence or fear of being overruled by the majority. Advice: Use a rating system (e.g., 1–4 scale) that forces interviewers to choose a strong opinion, either in favor of or against a candidate.

  3. Candidate Experience and Behavior

    • Negative Behavior in Interviews: Problem: Candidates who perform well technically but exhibit unprofessional behavior (e.g., showing up late or hungover) can still pass through the hiring process. Root Cause: Strong technical performance may overshadow concerns about professionalism and reliability. Advice: Balance technical performance with non-technical evaluations. Weigh behaviors such as punctuality and professional demeanor just as heavily as coding skills.

    • Take-Home Tests and Challenges: Problem: Some candidates view take-home challenges as extra, unnecessary work, while others see them as a chance to showcase skills. Root Cause: Different candidates have different preferences and responses to technical assessments. Advice: Offer take-home tests as an option, but don't make them mandatory. Adjust the evaluation method based on candidate preferences to ensure both parties feel comfortable.

  4. Systemic Issues in the Hiring Process

    • Healthcare Tied to Jobs: Problem: In the U.S., job-based healthcare forces candidates to accept positions they might not want or complicates transitions between jobs. Root Cause: The healthcare system is tied to employment, making job transitions risky. Advice: There's no direct solution provided here, but highlighting the need for systemic changes in healthcare could make the hiring process more equitable.

    • Lack of Feedback to Candidates: Problem: Many companies avoid giving feedback to candidates after interviews, leaving them unsure of their performance. Root Cause: Fear of legal liability or workload concerns. Advice: Provide constructive feedback to candidates, even if they aren't selected. It helps build long-term relationships and contributes to a positive company reputation. Some of the best connections come from transparent feedback post-interview.

  5. Hiring for Senior Positions

    • Senior Candidates Have Low Tolerance for Long Processes: Problem: Highly qualified senior candidates are more likely to decline long and drawn-out interview processes. Root Cause: Senior candidates, due to their experience and expertise, are less willing to tolerate inefficient processes. Advice: Streamline the process for senior roles. Keep interviews short, efficient, and focused on relevant discussions. High-level candidates prefer concise assessments over lengthy ones.

  6. Hiring on Trust vs. Formal Interviews

    • Hiring Based on Relationships: Problem: Engineers with pre-existing relationships or referrals are more likely to be hired than those without, bypassing formal interviews. Root Cause: Prior work relationships build trust, which can overshadow the need for formal vetting. Advice: Trust-based hiring should be encouraged when there is prior working experience with the candidate. However, make efforts to balance trust with fairness by including formal evaluations where necessary.

Key Problems Summary

  • The length and complexity of the hiring process discourages many strong candidates, particularly senior-level applicants. Simplifying the process to two interviews (one technical and one non-technical) is recommended.
  • Bias in the hiring process, particularly when managers have pre-existing relationships with candidates, leads to unfair outcomes.
  • Long interview processes favor privileged candidates who can afford to take time off, disadvantaging those from less privileged backgrounds.
  • Providing feedback to candidates is crucial for building long-term relationships and ensuring a positive hiring experience, yet it's often avoided due to legal concerns.
  • Team fit is just as important as technical skills, and companies should incorporate group interactions to assess interpersonal dynamics.

Most Critical Issues and Solutions

  • Problem: Too many technical interviews create frustration and inefficiency. Solution: Use just one technical and one non-technical interview, and assign responsibility for gathering all relevant information during these sessions.

  • Problem: Bias due to pre-existing relationships. Solution: Shorten the process to expose bias more clearly and rely on trust-based hiring only when balanced with formal interviews.

  • Problem: Lack of feedback to candidates. Solution: Provide constructive feedback to help candidates improve and establish long-term professional relationships.

· 15 min read

Good Reads

2024-08-27 Four Lessons from 2023 That Forever Changes My Software Engineering Career | by Yifeng Liu | Medium { medium.com }

This past year, four key lessons transformed my approach to software engineering.

First, I learned that execution is as important as the idea itself. Inspired by Steve Jobs, who highlighted the gap between a great idea and a great product, I focused on rapid prototyping to test feasibility and internal presentations to gather feedback. I kept my manager informed to ensure we were aligned and honest about challenges.

Second, I realized that trust and credibility are fragile but crucial. As a senior engineer, I'm expected to lead by solving complex issues and guiding projects. I saw firsthand how failing to execute or pushing unrealistic timelines could quickly erode trust within my team.

The third lesson was about the importance of visibility. I understood that hard work could go unnoticed if I didn’t make it visible. I began taking ownership of impactful projects and increased my public presence through presentations and updates. I also honed my critical thinking to offer valuable feedback and identify improvement opportunities.

Finally, I learned to focus on changing myself rather than others. I used to try to change my team or company, but now I realize it’s more effective to work on my growth and influence others through my actions. Understanding the company’s culture and my colleagues' aspirations helped me align my efforts with my career goals.

These lessons have reshaped my career and how I approach my role as an engineer.

2024-08-28 Just use fucking paper, man - Andy Bell { andy-bell.co.uk }

27th of August 2024

I’ve tried Notion, Obsidian, Things, Apple Reminders, Apple Notes, Jotter and endless other tools to keep me organised and sure, Notion has stuck around the most because we use it for client stuff, but for todo lists, all of the above are way too complicated.

I’ve given up this week and gone back to paper and a pencil and I feel unbelievably organised and flexible, day-to-day. It’s because it’s simple. There’s nothing fancy. No fancy pen or anything like that either. Just a notebook and a pencil.

I’m in an ultra busy period right now so for future me when you inevitably get back to this situation: just. use. fucking. paper.

2024-08-29 The slow evaporation of the free/open source surplus – Baldur Bjarnason { www.baldurbjarnason.com }

I've been thinking a lot about the state of Free and Open Source Software (FOSS) lately. My concern is that FOSS thrives on surplus—both from the software industry and the labor of developers. This surplus has been fueled by high margins in the tech industry, easy access to investment, and developers who have the time and financial freedom to contribute to FOSS projects. However, I'm worried that these resources are drying up.

High interest rates are making investments scarcer, particularly for non-AI software, which doesn't really support open-source principles. The post-COVID economic correction is leading to layoffs and higher coder unemployment, which means fewer people have the time or incentive to contribute to FOSS. OSS burnout is another issue, with fewer fresh developers stepping in to replace those who are exhausted by maintaining projects that often lack supportive communities.

Companies are also cutting costs and questioning the value of FOSS. Why invest in open-source projects when the return on investment is uncertain? The rise of LLM-generated code is further disconnecting potential contributors from FOSS projects, weakening the communities that sustain them.

My fear is that FOSS is entering a period of decline. As the industry and labor surpluses shrink, FOSS projects might suffer from neglect, security issues, or even collapse. While some of this decline might be a necessary correction, it's hard not to worry about the future of the FOSS ecosystem, especially when we don't know which parts are sustainable and which are not.

2024-08-29 Why does getting a job in tech suck right now? (Is it AI?!?) – r y x, r { ryxcommar.com }

image-20240915141710361

2024-08-31 Using Fibonacci Numbers to Convert from Miles to Kilometers and Vice Versa { catonmat.net }

Take two consecutive Fibonacci numbers, for example 5 and 8.

And you're done converting. No kidding – there are 8 kilometers in 5 miles. To convert back just read the result from the other end – there are 5 miles in 8 km!

Another example.

Let's take two consecutive Fibonacci numbers 21 and 34. What this tells us is that there are approximately 34 km in 21 miles and vice versa. (The exact answer is 33.79 km.)

Mind = blown. Completely.

2024-09-11 The Art of Finishing | ByteDrum { www.bytedrum.com }

The article explores the challenge of unfinished projects and the cycle of starting with enthusiasm but failing to complete them. The author describes this as the Hydra Effect—each task completed leads to new challenges. Unfinished projects feel full of potential, but fear of imperfection or even success prevents many developers from finishing.

"An unfinished project is full of intoxicating potential. It could be the next big thing... your magnum opus."

However, leaving projects incomplete creates mental clutter, making it hard to focus and learn key lessons like optimization and refactoring. Finishing is crucial for growth, both technically and professionally.

"By not finishing, you miss out on these valuable learning experiences."

To break this cycle, the author offers strategies: define "done" early, focus on MVP (Minimum Viable Product), time-box projects, and separate ideation from implementation. Practicing small completions and using accountability are also recommended to build the habit of finishing.

The article emphasizes that overcoming the Hydra Effect requires discipline but leads to personal and professional growth.

2024-09-11 Improving Application Availability: The Basics | by Mario Bittencourt | SSENSE-TECH | Aug, 2024 | Medium { medium.com }

In this article, I introduce the essentials of application availability and how to approach high availability. High availability is measured by uptime percentage. Achieving 99.999% availability (five nines) means accepting no more than about 5 minutes of downtime per year, which requires automation to detect and fix issues fast.

I discuss redundancy as a key strategy to improve availability by using backups for connectivity, compute resources, and persistence. If one component fails, the system switches to a secondary option. However, redundancy adds both cost and complexity. More components require advanced tools, like load balancers, to manage failures, but these solutions introduce their own reliability concerns.

Not every part of an application needs the same availability target. In an e-commerce system, for instance, I categorize components into tiers:

  • T1 (website and payments) must stay available at all times.
  • T2 (order management) allows some downtime.
  • T3 (fulfillment) can tolerate longer outages.
  • T4 (ERP) has the least strict requirements.

"Your goal is to perform an impact analysis and classify each component in tiers according to its criticality and customer impact."

By setting different availability targets for each tier, you can reduce costs while focusing on the most important parts of your system.

"All strategies to improve availability come with trade-offs, usually involving higher costs and complexity."

This sets the stage for future discussions on graceful degradation, asynchronous processing, and disaster recovery strategies.

2024-09-12 A Bunch of Programming Advice I’d Give To Myself 15 Years Ago | Marcus' Blog { mbuffett.com }

If the team is constantly tripping over a recurring issue, it's crucial to fix the root cause, rather than repeatedly patching symptoms. The author mentions, "I decided to fix it, and it took ten minutes to update our subscription layer to call subscribers on the main thread instead," thereby removing the cause of crashes, streamlining the codebase, and reducing mental overhead.

Pace versus quality must be balanced based on context. In low-risk environments, it's okay to ship faster and rely on guardrails; in high-risk environments (like handling sensitive data), quality takes precedence. "You don’t need 100% test coverage or an extensive QA process, which will slow down the pace of development," when bugs can be fixed easily.

Sharpening your tools is always worth it. Being efficient with your IDE, shortcuts, and dev tools will pay off over time. Fast typing, proficiency in the shell, and knowing browser tools matter. Although people warn against over-optimizing configurations, "I don’t think I’ve ever seen someone actually overdo this."

When something is hard to explain, it's likely incidental complexity. Often, complexity isn't inherent but arises from the way things are structured. If you can't explain why something is difficult, it’s worth simplifying. The author reflects that "most of the complexity I was explaining was incidental... I could actually address that first."

Solve bugs at a deeper level, not just by patching the immediate issue. If a React component crashes due to null user data, you could add a conditional return, but it’s better to prevent the state from becoming null in the first place. This creates more robust systems and a clearer understanding of how things work.

Investigating bugs should include reviewing code history. The author discovered a memory leak after reviewing commits, realizing the issue stemmed from recent code changes. Git history can be essential for debugging complex problems that aren't obvious through logs alone.

Write bad code when needed to get feedback. Perfect code takes too long and may not be necessary in every context. It's better to ship something that works, gather feedback, and refine it. "If you err on the side of writing perfect code, you don’t get any feedback."

Make debugging easier by building systems that streamline the process. Small conveniences like logging state diffs after every update or restricting staging environment parallelism to 1 can save huge amounts of time. The author stresses, "If it’s over 50%, you should figure out how to make it easier."

Working on a team means asking questions when needed. Especially in the first few months, it's faster to ask a coworker for a solution than spending hours figuring it out solo. Asking isn’t seen as a burden, so long as it’s not something trivial that could be self-solved in minutes.

Maintaining a fast shipping cadence is critical in startups and time-sensitive projects. Speed compounds over time, and improving systems, reusable patterns, and processes that support fast shipping is essential. "Shipping slowly should merit a post-mortem as much as breaking production does."

This article reaction and discussion on youtube:

2024-09-12 Theo Unexpected Lessons I've Learned After 15 Years Of Coding - YouTube { www.youtube.com }

2024-09-14 We need to talk about "founder mode" - YouTube { www.youtube.com }

"Stop hiring for the things you don't want to do. Hire for the things you love to do so you're forced to deal with the things you don't want to do.

This is some of the best advice I've been giving lately. Early on, I screwed up by hiring an editor because I didn't like editing. Since I didn't love editing, I couldn't be a great workplace for an editor—I couldn't relate to them, and they felt alone. My bar for a good edit was low because I just wanted the work off my plate.

But when I started editing my own stuff, I got pretty good and actually started to like it. Now, I genuinely think I'll stop recording videos before I stop editing them. By doing those things myself, I ended up falling in love with them.

Apply this to startups: If you're a founder who loves coding, hire someone to do it so you can't focus all your time on it. Focus on the other crucial parts of your business that need your attention.

Don't make the mistake of hiring to avoid work. Embrace what you love, and let it force you to grow in areas you might be neglecting."

Original post: 2024-09-14 Founder Mode { paulgraham.com }

Theo

Breaking Through Organizational Barriers: Connect with the Doers, Not Just the Boxes

In large organizations, it's common to encounter roadblocks where teams are treated as "black boxes" on the org chart. You might hear things like, "We can't proceed because the XYZ team isn't available," or "They need more headcount before tackling this."

Here's a strategy that has made a significant difference for me:

Start looking beyond the org chart and reach out directly to the individuals who are making things happen.

How to find them?

  • Dive into GitHub or project repositories: See who's contributing the most code or making significant updates.
  • Identify the most driven team members: Every team usually has someone who's more passionate and proactive.
  • Reach out and build a connection: They might appreciate a collaborative partner who shares their drive.

Why do this?

  • Accelerate Progress: Bypass bureaucratic delays and get projects moving.
  • Build Valuable Relationships: These connections can lead to future opportunities, referrals, or even partnerships.
  • Expand Your Influence: Demonstrating initiative can set you apart and open doors within the organization.

Yes, there are risks. Your manager might question why you're reaching out independently, or you might face resistance. But consider the potential rewards:

  • Best Case: You successfully collaborate to solve problems, driving innovation and making a real impact.
  • Worst Case: Even if you face pushback, you've connected with someone valuable. If either of you moves on, that relationship could lead to exciting opportunities down the line.

2024-09-15 Why Scrum is Stressing You Out - by Adam Ard { rethinkingsoftware.substack.com }

📌 Sprints never stop. Sprints in Scrum are constant, unlike the traditional Waterfall model where high-pressure periods are followed by low-pressure times. Sprints create ongoing, medium-level stress, which is more damaging long-term than short-term, intense stress. Long-term stress harms both mental and physical health. Advice: Build in deliberate breaks between sprints. Allow teams time to recover, reflect, and recalibrate before the next sprint. Introduce buffer periods for less intense work or creative activities.

🔖 Sprints are involuntary. Sprints in a Scrum environment are often imposed on developers, leaving them no control over the process or duration. Lack of autonomy leads to higher stress, similar to studies where forced activity triggers stress responses in animals. Control over work processes can reduce stress and improve job satisfaction. Advice: Involve the team in the sprint planning process and give them a say in determining task durations, sprint length, and workload. Increase autonomy to reduce stress by tailoring the Scrum process to fit the team’s needs rather than rigidly following preset rules.

😡 Sprints neglect key supporting activities. Scrum focuses on completing tasks within sprint cycles but doesn’t allocate enough time for essential preparatory activities like brainstorming and research. The lack of preparation time creates stress and leads to suboptimal work because thinking and doing cannot be entirely separated. Advice: Allocate time within sprints for essential preparation, brainstorming, and research. Set aside dedicated periods for planning, learning, or technical exploration, rather than expecting full-time execution during the sprint.

🍷 Most Scrum implementations devolve into “Scrumfall.” Scrum is often mixed with Waterfall-like big-deadline pressures, which cancel out the benefits of sprints and increase stress. When major deadlines approach, Scrum practices are suspended, leading to a high-stress environment combining the worst aspects of both methodologies. Advice: Resist combining Waterfall-style big deadlines with Scrum. Manage stakeholder expectations upfront and break larger goals into smaller deliverables aligned with sprint cycles. Stick to Agile principles and avoid falling back into the big-bang, all-at-once delivery mode.

2024-09-15 HOW TO SUCCEED IN MRBEAST PRODUCTION (leaked PDF) { simonwillison.net }

The MrBeast definition of A, B and C-team players is one I haven’t heard before:

A-Players are obsessive, learn from mistakes, coachable, intelligent, don’t make excuses, believe in Youtube, see the value of this company, and are the best in the goddamn world at their job. B-Players are new people that need to be trained into A-Players, and C-Players are just average employees. […] They arn’t obsessive and learning. C-Players are poisonous and should be transitioned to a different company IMMEDIATELY. (It’s okay we give everyone severance, they’ll be fine).

I’m always interested in finding management advice from unexpected sources. For example, I love The Eleven Laws of Showrunning as a case study in managing and successfully delegating for a large, creative project.

Newsletters

2024-09-11 The web's clipboard { newsletter.programmingdigest.net }

2024-09-12 JavaScript Weekly Issue 704: September 12, 2024 { javascriptweekly.com }

· 15 min read

The Talk

2024-09-01 Investigating Legacy Design Trends in C++ & Their Modern Replacements - Katherine Rocha C++Now 2024 - YouTube { www.youtube.com }

Katherine Rocha

image-20240901150003068

GPT generated content (close to the talk content)

This digest is a breakdown of the talk, which covered various advanced C++ programming techniques and concepts. Each point from the talk is described in detail below, followed by C++ code examples that illustrate the discussed concepts.


1. SFINAE and Overload Resolution

The talk begins with a discussion on the use of SFINAE (Substitution Failure Is Not An Error) and its role in overload resolution. SFINAE is a powerful C++ feature that allows template functions to be excluded from overload resolution based on specific conditions, enabling more precise control over which function templates should be used.

Key Points:

  • SFINAE is used to selectively disable template instantiation based on the properties of template arguments.
  • Overload resolution in C++ allows for multiple functions or operators with the same name to be defined, as long as their parameters differ. The compiler decides which function to call based on the arguments provided.

C++ Example:

#include <type_traits>
#include <iostream>

// Template function enabled only for arithmetic types using SFINAE
template <typename T>
typename std::enable_if<std::is_arithmetic<T>::value, T>::type
add(T a, T b) {
    return a + b;
}

// Overload for non-arithmetic types is explicitly deleted, so any call
// with non-arithmetic arguments fails at compile time
template <typename T>
typename std::enable_if<!std::is_arithmetic<T>::value, T>::type
add(T a, T b) = delete;

int main() {
    std::cout << add(5, 3) << std::endl;    // OK: int is arithmetic
    // std::cout << add("Hello", "World");  // Error: const char* is not arithmetic
    return 0;
}

2. Compile-Time Error Messages

The talk transitions into how to improve compile-time error messages using static_assert and custom error handling in templates. By using these techniques, developers can provide clearer error messages when certain conditions are not met during template instantiation.

Key Points:

  • Use static_assert to enforce conditions at compile time, ensuring that the program fails to compile if certain criteria are not met.
  • Improve the readability of error messages by providing meaningful feedback directly in the code.

C++ Example:

#include <iostream>
#include <type_traits>

template<typename T>
void check_type() {
    static_assert(std::is_integral<T>::value, "T must be an integral type");
}

int main() {
    check_type<int>();       // OK
    // check_type<double>(); // Compile-time error: T must be an integral type
    return 0;
}

3. Concepts in C++20

The talk explores Concepts, a feature introduced in C++20, which allows developers to specify constraints on template arguments more succinctly and expressively compared to SFINAE. Concepts help in making templates more readable and the error messages more comprehensible.

Key Points:

  • Concepts define requirements for template parameters, making templates easier to understand and use.
  • Concepts improve the clarity of both template definitions and error messages.

C++ Example:

#include <concepts>
#include <type_traits>
#include <iostream>

template<typename T>
concept Arithmetic = std::is_arithmetic_v<T>;

template<Arithmetic T>
T add(T a, T b) {
    return a + b;
}

int main() {
    std::cout << add(5, 3) << std::endl;    // OK: int is arithmetic
    // std::cout << add("Hello", "World");  // Error: concept 'Arithmetic' not satisfied
    return 0;
}

4. Polymorphism and CRTP

The talk covers polymorphism and the Curiously Recurring Template Pattern (CRTP), a technique where a class template is derived from itself. CRTP allows for static polymorphism at compile time, which can offer performance benefits over dynamic polymorphism.

Key Points:

  • Runtime Polymorphism: Achieved using inheritance and virtual functions, but comes with runtime overhead due to the use of vtables.
  • CRTP: A pattern that enables polymorphism at compile-time, avoiding the overhead of vtables.

C++ Example:

#include <iostream>

// CRTP Base class
template<typename Derived>
class Base {
public:
    void interface() {
        static_cast<Derived*>(this)->implementation();
    }

    static void staticInterface() {
        Derived::staticImplementation();
    }
};

class Derived1 : public Base<Derived1> {
public:
    void implementation() {
        std::cout << "Derived1 implementation" << std::endl;
    }

    static void staticImplementation() {
        std::cout << "Derived1 static implementation" << std::endl;
    }
};

class Derived2 : public Base<Derived2> {
public:
    void implementation() {
        std::cout << "Derived2 implementation" << std::endl;
    }

    static void staticImplementation() {
        std::cout << "Derived2 static implementation" << std::endl;
    }
};

int main() {
    Derived1 d1;
    d1.interface();
    Derived1::staticInterface();

    Derived2 d2;
    d2.interface();
    Derived2::staticInterface();

    return 0;
}

5. Deducing this in C++23

The discussion moves to deducing this, a feature introduced in C++23 that allows for more expressive syntax when working with member functions, particularly in the context of templates.

Key Points:

  • Deducing this (explicit object parameters) lets a member function take the object it is called on as an explicit, deduced parameter.
  • A single function template can then serve const, non-const, lvalue, and rvalue objects, removing duplicated overloads and simplifying patterns such as CRTP.

C++ Example:

#include <iostream>

class MyClass {
public:
    // C++23 explicit object parameter ("deducing this"): the type and
    // value category of the object are deduced at each call site, so one
    // definition serves const, non-const, lvalue, and rvalue objects.
    template <typename Self>
    void print(this Self&& self) {
        std::cout << "MyClass instance" << std::endl;
    }
};

int main() {
    MyClass obj;
    obj.print();   // Self deduced as MyClass&

    const MyClass cobj{};
    cobj.print();  // Self deduced as const MyClass&
    return 0;
}

6. Design Methodologies: Procedural, OOP, Functional, and Data-Oriented Design

The final section of the talk compares various design methodologies including Procedural, Object-Oriented Programming (OOP), Functional Programming (FP), and Data-Oriented Design (DOD). Each paradigm has its strengths and use cases, and modern C++ often blends these methodologies to achieve optimal results.

Key Points:

  • Procedural Programming: Focuses on a sequence of steps or procedures to accomplish tasks.
  • Object-Oriented Programming (OOP): Organizes code around objects and data encapsulation.
  • Functional Programming (FP): Emphasizes immutability and function composition.
  • Data-Oriented Design (DOD): Focuses on data layout in memory for performance, often used in game development.

C++ Example (Object-Oriented):

#include <iostream>
#include <vector>

class Telemetry {
public:
    virtual ~Telemetry() = default;  // virtual destructor: safe delete via base pointer
    virtual void process() const = 0;
};

class InstantaneousEvent : public Telemetry {
public:
    void process() const override {
        std::cout << "Processing instantaneous event" << std::endl;
    }
};

class LongTermEvent : public Telemetry {
public:
    void process() const override {
        std::cout << "Processing long-term event" << std::endl;
    }
};

void processEvents(const std::vector<Telemetry*>& events) {
    for (const auto& event : events) {
        event->process();
    }
}

int main() {
    std::vector<Telemetry*> events = { new InstantaneousEvent(), new LongTermEvent() };
    processEvents(events);

    for (auto event : events) {
        delete event;
    }

    return 0;
}

C++ Example (Functional Programming):

#include <iostream>
#include <vector>
#include <algorithm>

struct Event {
    int time;
    bool isLongTerm;
};

void processEvents(const std::vector<Event>& events) {
    std::for_each(events.begin(), events.end(), [](const Event& event) {
        if (event.isLongTerm) {
            std::cout << "Processing long-term event at time " << event.time << std::endl;
        } else {
            std::cout << "Processing instantaneous event at time " << event.time << std::endl;
        }
    });
}

int main() {
    std::vector<Event> events = { {1, false}, {2, true}, {3, false} };
    processEvents(events);
    return 0;
}

C++ Example (Data-Oriented Design):

#include <iostream>
#include <vector>

struct TelemetryData {
    std::vector<int> instantaneousTimes;
    std::vector<int> longTermTimes;
};

void processInstantaneous(const std::vector<int>& times) {
    for (int time : times) {
        std::cout << "Processing instantaneous event at time " << time << std::endl;
    }
}

void processLongTerm(const std::vector<int>& times) {
    for (int time : times) {
        std::cout << "Processing long-term event at time " << time << std::endl;
    }
}

int main() {
    TelemetryData data = {
        { 1, 3, 5 },  // instantaneousTimes
        { 2, 4, 6 }   // longTermTimes
    };

    processInstantaneous(data.instantaneousTimes);
    processLongTerm(data.longTermTimes);

    return 0;
}

GPT generated content (with a bit of "hallucinations")

Here's the expanded digest with essential text and detailed code examples for each point, focusing on modern replacements for legacy C++ practices.


Legacy Pointers vs. Smart Pointers

Legacy Practice: Use of raw pointers, manual memory management, and explicit new and delete. This can lead to memory leaks, dangling pointers, and undefined behavior.

Modern Replacement: Use smart pointers like std::unique_ptr, std::shared_ptr, and std::weak_ptr to manage dynamic memory automatically.

// Legacy code
class LegacyClass {
    int* data;
public:
    LegacyClass() { data = new int[10]; }
    ~LegacyClass() { delete[] data; }  // easy to get wrong: a shallow copy would double-delete
};

// Modern code
#include <memory>

class ModernClass {
    std::unique_ptr<int[]> data;
public:
    ModernClass() : data(std::make_unique<int[]>(10)) {}
    // Destructor not needed, as std::unique_ptr handles memory automatically
};

Key Insight: Using smart pointers reduces the need for manual memory management, preventing common errors like memory leaks and dangling pointers.


C-Style Arrays vs. STL Containers

Legacy Practice: Use of C-style arrays, which require manual memory management and do not provide bounds checking.

Modern Replacement: Use std::vector for dynamic arrays or std::array for fixed-size arrays. These containers handle memory management internally and offer bounds checking.

// Legacy code
int arr[10];
for (int i = 0; i < 10; ++i) {
    arr[i] = i * 2;
}

// Modern code
#include <vector>
#include <array>

std::vector<int> vec(10);
for (int i = 0; i < 10; ++i) {
    vec[i] = i * 2;
}

std::array<int, 10> arr2;
for (int i = 0; i < 10; ++i) {
    arr2[i] = i * 2;
}

Key Insight: STL containers provide better safety and ease of use compared to traditional arrays, and should be the default choice in modern C++.


Manual Error Handling vs. Exceptions and std::expected

Legacy Practice: Return codes or error flags to indicate failures, which can be cumbersome and error-prone.

Modern Replacement: Use exceptions for error handling, which separate normal flow from error-handling code. Use std::expected (from C++23) for functions that can either return a value or an error.

// Legacy code
int divide(int a, int b, bool& success) {
    if (b == 0) {
        success = false;
        return 0;
    }
    success = true;
    return a / b;
}

// Modern code with exceptions
#include <stdexcept>

int divide(int a, int b) {
    if (b == 0) throw std::runtime_error("Division by zero");
    return a / b;
}

// Modern code with std::expected (C++23)
#include <expected>
#include <string>

std::expected<int, std::string> divide(int a, int b) {
    if (b == 0) return std::unexpected("Division by zero");
    return a / b;
}

Key Insight: Exceptions and std::expected offer more explicit and manageable error handling, improving code clarity and robustness.


Void Pointers vs. Type-Safe Programming

Legacy Practice: Use of void* for generic programming, leading to unsafe code and difficult debugging.

Modern Replacement: Use templates for type-safe generic programming, ensuring that code is checked at compile time.

// Legacy code
void process(void* data, int type) {
    if (type == 1) {
        int* intPtr = static_cast<int*>(data);
        // Process int
    } else if (type == 2) {
        double* dblPtr = static_cast<double*>(data);
        // Process double
    }
}

// Modern code
template <typename T>
void process(T data) {
    // Process data safely with type known at compile time
}

int main() {
    process(10);   // Automatically deduces int
    process(5.5);  // Automatically deduces double
}

Key Insight: Templates provide type safety, ensuring errors are caught at compile time and making code easier to maintain.


Inheritance vs. Composition and Type Erasure

Legacy Practice: Deep inheritance hierarchies, which can lead to rigid designs and hard-to-maintain code.

Modern Replacement: Favor composition over inheritance. Use type erasure (e.g., std::function, std::any) or std::variant to achieve polymorphism without inheritance.

// Legacy code
class Base {
public:
    virtual ~Base() = default;
    virtual void doSomething() = 0;
};

class Derived : public Base {
public:
    void doSomething() override {
        // Implementation
    }
};

// Modern code using composition
#include <functional>

class Action {
    std::function<void()> func;
public:
    Action(std::function<void()> f) : func(std::move(f)) {}
    void execute() { func(); }
};

int main() {
    Action a([]() { /* Implementation */ });
    a.execute();
}

// Modern code using std::variant
#include <variant>
#include <string>

using MyVariant = std::variant<int, double, std::string>;

void process(const MyVariant& v) {
    std::visit([](auto&& arg) {
        // Implementation for each type
    }, v);
}

Key Insight: Composition and type erasure lead to more flexible and maintainable designs than traditional deep inheritance hierarchies.


Global Variables vs. Dependency Injection

Legacy Practice: Use of global variables for shared state, which can lead to hard-to-track bugs and dependencies.

Modern Replacement: Use dependency injection to provide dependencies explicitly, improving testability and modularity.

// Legacy code
int globalCounter = 0;

void increment() {
    globalCounter++;
}

// Modern code using dependency injection
#include <iostream>

class Counter {
    int count;
public:
    Counter() : count(0) {}
    void increment() { ++count; }
    int getCount() const { return count; }
};

void useCounter(Counter& counter) {
    counter.increment();
}

int main() {
    Counter c;
    useCounter(c);
    std::cout << c.getCount();
}

Key Insight: Dependency injection enhances modularity and testability by explicitly providing dependencies rather than relying on global state.


Macros vs. constexpr and Inline Functions

Legacy Practice: Extensive use of macros for constants and inline code, which can lead to debugging challenges and obscure code.

Modern Replacement: Use constexpr for compile-time constants and inline functions for inline code, which are type-safe and easier to debug.

// Legacy code
#define SQUARE(x) ((x) * (x))

// Modern code using constexpr
constexpr int square(int x) {
    return x * x;
}

// Legacy code using macro for constant
#define MAX_SIZE 100

// Modern code using constexpr
constexpr int maxSize = 100;

Key Insight: constexpr and inline functions offer better type safety and are easier to debug compared to macros, making the code more maintainable.


Manual Resource Management vs. RAII (Resource Acquisition Is Initialization)

Legacy Practice: Manual resource management, requiring explicit release of resources like files, sockets, and memory.

Modern Replacement: Use RAII, where resources are tied to object lifetime and automatically released when the object goes out of scope.

// Legacy code
#include <cstdio>

void readLegacy() {
    FILE* file = fopen("data.txt", "r");
    if (file) {
        // Use file
        fclose(file);  // must be called on every path, or the handle leaks
    }
}

// Modern code using RAII with std::ifstream
#include <fstream>

void readModern() {
    std::ifstream file("data.txt");
    if (file.is_open()) {
        // Use file
    }
}  // File is automatically closed when it goes out of scope

Key Insight: RAII automates resource management, reducing the risk of resource leaks and making code more reliable.


Explicit Loops vs. Algorithms and Ranges

Legacy Practice: Manual loops for operations like filtering, transforming, or accumulating data.

Modern Replacement: Use STL algorithms (std::transform, std::accumulate, std::copy_if) and ranges (C++20) to express intent more clearly and concisely.

// Legacy code
std::vector<int> vec = {1, 2, 3, 4, 5};
std::vector<int> result;

for (auto i : vec) {
    if (i % 2 == 0) result.push_back(i * 2);
}

// Modern code using algorithms
#include <algorithm>
#include <vector>

std::vector<int> vec = {1, 2, 3, 4, 5};
std::vector<int> result;

std::transform(vec.begin(), vec.end(), std::back_inserter(result),
               [](int x) { return x % 2 == 0 ? x * 2 : 0; });
// Remove the 0 sentinels left by odd inputs (assumes 0 is not a valid value)
result.erase(std::remove(result.begin(), result.end(), 0), result.end());

// Modern code using ranges (C++20)
#include <ranges>

auto result = vec | std::views::filter([](int x) { return x % 2 == 0; })
                  | std::views::transform([](int x) { return x * 2; });

Key Insight: STL algorithms and ranges make code more expressive and concise, reducing the likelihood of errors and enhancing readability.


Manual String Manipulation vs. std::string and std::string_view

Legacy Practice: Use of char* and manual string manipulation with functions like strcpy, strcat, and strcmp.

Modern Replacement: Use std::string for dynamic strings and std::string_view for non-owning string references, which offer safer and more convenient string handling.

// Legacy code
#include <cstring>

char str1[20] = "Hello, ";
char str2[] = "world!";
strcat(str1, str2);
if (strcmp(str1, "Hello, world!") == 0) {
    // Do something
}

// Modern code using std::string
#include <string>

std::string str1 = "Hello, ";
std::string str2 = "world!";
str1 += str2;
if (str1 == "Hello, world!") {
    // Do something
}

// Modern code using std::string_view (C++17)
#include <string_view>

std::string_view strView = str1;
if (strView == "Hello, world!") {
    // Do something
}

Key Insight: std::string and std::string_view simplify string handling, provide better safety, and eliminate the risks associated with manual C-style string manipulation.


Threading with Raw Threads vs. std::thread and Concurrency Utilities

Legacy Practice: Creating and managing threads manually using platform-specific APIs, which can be error-prone and non-portable.

Modern Replacement: Use std::thread and higher-level concurrency utilities like std::future, std::async, and std::mutex to manage threading in a portable and safe way.

// Legacy code (Windows example)
#include <windows.h>

DWORD WINAPI threadFunc(LPVOID lpParam) {
    // Thread code
    return 0;
}

HANDLE hThread = CreateThread(NULL, 0, threadFunc, NULL, 0, NULL);

// Modern code using std::thread
#include <thread>

void threadFunc() {
    // Thread code
}

std::thread t(threadFunc);
t.join(); // Wait for thread to finish

// Modern code using std::async
#include <future>

auto future = std::async(std::launch::async, threadFunc);
future.get(); // Wait for async task to finish

Key Insight: std::thread and other concurrency utilities provide a portable and higher-level interface for multithreading, reducing the complexity and potential errors associated with manual thread management.


Function Pointers vs. std::function and Lambdas

Legacy Practice: Use of function pointers to pass functions as arguments or store them in data structures, which can be cumbersome and less flexible.

Modern Replacement: Use std::function to store callable objects, and lambdas to create inline, anonymous functions.

// Legacy code (assumes a function like: void someFunction(int);)
void (*funcPtr)(int) = someFunction;
funcPtr(10);

// Modern code using std::function and lambdas
#include <functional>
#include <iostream>

std::function<void(int)> func = [](int x) { std::cout << x << std::endl; };
func(10);

Key Insight: std::function and lambdas offer a more flexible and powerful way to handle functions as first-class objects, making code more modular and expressive.

· 15 min read

[[TOC]]

How the things work

2024-08-31 Hypervisor From Scratch - Part 1: Basic Concepts & Configure Testing Environment | Rayanfam Blog { rayanfam.com }

Hypervisor From Scratch

The source code for Hypervisor From Scratch is available on GitHub :

[https://github.com/SinaKarvandi/Hypervisor-From-Scratch/]

2024-08-31 Reversing Windows Internals (Part 1) - Digging Into Handles, Callbacks & ObjectTypes | Rayanfam Blog { rayanfam.com }

2024-08-31 A Tour of Mount in Linux | Rayanfam Blog { rayanfam.com }

image-20240830200258339

2024-09-01 tandasat/Hypervisor-101-in-Rust: { github.com }

The materials of "Hypervisor 101 in Rust", a one-day long course, to quickly learn hardware-assisted virtualization technology and its application for high-performance fuzzing on Intel/AMD processors.

https://tandasat.github.io/Hypervisor-101-in-Rust/

image-20240901010106576

SAML

2024-09-02 A gentle introduction to SAML | SSOReady { ssoready.com }

image-20240901234406239

2024-09-02 Visual explanation of SAML authentication { www.sheshbabu.com }

image-20240901233107815

🤔 Tricks!

2024-09-02 saving my git email from spam { halb.it }

GitHub has a cool option that replaces your private email with a noreply GitHub email, which looks like this: 14497532+username@users.noreply.github.com. You just have to enable “keep my email address private” in the email settings. You can read the details in the GitHub guide for setting your email privacy.

With this solution your email will remain private without losing precious green squares in the contribution graph.

CRDT

2024-09-01 Movable tree CRDTs and Loro's implementation – Loro { loro.dev }

This article introduces the implementation difficulties and challenges of Movable Tree CRDTs in collaborative editing, and how Loro implements them and sorts child nodes.

Art and Assets

2024-09-01 Public Work by Cosmos { public.work }

image-20240901005017480

Game Theory 101

2024-09-01 ⭐️ Game Theory 101 (#1): Introduction - YouTube { www.youtube.com }

image-20240901010905811

2024-09-01 Finding Nash Equilibria through Simulation { coe.psu.ac.th }

image-20240901011057303

Emacs

2024-09-01 A Simple Guide to Writing & Publishing Emacs Packages { spin.atomicobject.com }

image-20240901153404884

2024-09-01 Emacs starter kit { emacs-config-generator.fly.dev }

image-20240901153233791

2024-09-01 dot-files/emacs-blog.org at 1b54fe75d74670dc7bcbb6b01ea560c45528c628 · howardabrams/dot-files { github.com }

image-20240901152917238

2024-08-31 ⭐️ The Organized Life - An Expert‘s Guide to Emacs Org-Mode – TheLinuxCode { thelinuxcode.com }

2024-08-31 ⭐️ Mastering Organization with Emacs Org Mode: A Complete Guide for Beginners – TheLinuxCode { thelinuxcode.com }

image-20240830193810145

2024-08-30 chrisdone-archive/elisp-guide: A quick guide to Emacs Lisp programming { github.com }

image-20240830134758680

2024-08-30 Getting Started With Emacs Lisp Hands On - A Practical Beginners Tutorial – Ben Windsor – Strat at an investment bank { benwindsorcode.github.io }

image-20240830135224690

Retro / Fun

2024-08-30 VisiCalc - The Early History - Peter Jennings { benlo.com }

image-20240830135448117

2024-09-01 paperclips { www.decisionproblem.com }

image-20240901153052859

2024-09-02 Seiko Originals: The UC-2000, A Smartwatch from 1984 – namokiMODS { www.namokimods.com }

image-20240901235821210

Inspiration

2024-09-02 Navigating Corporate Giants Jeffrey Snover and the Making of PowerShell - CoRecursive Podcast { corecursive.com }

image-20240902001457920

I joined Microsoft at a time when the company was struggling to break into the enterprise market. While we dominated personal computing, our tools weren’t suitable for managing large data centers. I knew we needed a command-line interface (CLI) to compete with Unix, but Microsoft’s culture was deeply rooted in graphical user interfaces (GUIs). Despite widespread skepticism, I was determined to create a tool that could empower administrators to script and automate complex tasks.

My first major realization was that traditional Unix tools wouldn’t work on Windows because Unix is file-oriented, while Windows is API-oriented. This led me to focus on Windows Management Instrumentation (WMI) as the backbone for our CLI. Despite this, I faced resistance from within. The company only approved a handful of commands when we needed thousands. To solve this, I developed a metadata-driven architecture that allowed us to efficiently create and scale commands, laying the foundation for PowerShell.

However, getting others on board was a challenge. When I encountered a team planning to port a Unix shell to Windows, I knew they were missing the bigger picture. To demonstrate my vision, I locked myself away and wrote a 10,000-line prototype of what would become PowerShell. This convinced the team to embrace my approach.

I was able to show them and they said, ‘Well, what about this?’ And I showed them. And they said, ‘What about that?’ And I showed them. Their eyes just got big and they’re like, ‘This, this, this.’

Pursuing this project meant taking a demotion, a decision that was financially and personally difficult. But I was convinced that PowerShell could change the world, and that belief kept me going. To align the team, I wrote the Monad Manifesto, which became the guiding document for the project. Slowly, I convinced product teams like Active Directory to support us, which helped build momentum.

The project faced another major challenge during Microsoft’s push to integrate everything with .NET. PowerShell, built on .NET, was temporarily removed from Windows due to broader integration issues. It took years of persistence to get it back in, but I eventually succeeded.

PowerShell shipped with Windows Vista, but I continued refining it through multiple versions, despite warnings that focusing on this project could harm my career. Over time, PowerShell became a critical tool for managing data centers and was instrumental in enabling Microsoft’s move to the cloud.

In the end, the key decisions—pushing for a CLI, accepting a demotion, and persisting through internal resistance—led to PowerShell's success and allowed me to make a lasting impact on how Windows is managed.

2024-09-02 Netflix/maestro: Maestro: Netflix’s Workflow Orchestrator { github.com }

image-20240901234630103

2024-09-01 The Scale of Life { www.thescaleoflife.com }

image-20240901153703324

2024-09-01 opslane/opslane: Making on-call suck less for engineers { github.com }

image-20240901152737861

2024-09-01 Azure Quantum | Learn with quantum katas { quantum.microsoft.com }

image-20240901152236367

2024-09-01 microsoft/QuantumKatas: Tutorials and programming exercises for learning Q# and quantum computing { github.com }

2024-09-01 EP122: API Gateway 101 - ByteByteGo Newsletter { blog.bytebytego.com }

2024-09-01 pladams9/hexsheets: A basic spreadsheet application with hexagonal cells inspired by: http://www.secretgeek.net/hexcel. { github.com }

image-20240901010426062

2024-09-01 Do Quests, Not Goals { www.raptitude.com }

The other problem with goals is that, outside of sports, “goal” has become an uninspiring, institutional word. Goals are things your teachers and managers have for you. Goals are made of quotas and Key Performance Indicators. As soon as I write the word “goals” on a sheet of paper I get drowsy.

image-20240901005313993

Here are some of the quests people took on:

  • Declutter the whole house
  • Record an EP
  • Prep six months’ worth of lessons for my students
  • Set up an artist’s workspace
  • Finish two short stories
  • Gain a basic knowledge of classical music
  • Fill every page in a sketchbook with drawings
  • Complete a classical guitar program
  • Make an “If I get hit by a bus” folder for my family

2024-08-30 oTranscribe { otranscribe.com }

image-20240830135922316

Security

2024-08-31 The State of Application Security 2023 • Sebastian Brandes • GOTO 2023 - YouTube { www.youtube.com }

image-20240830192609064

Sebastian, co-founder of Hey Hack, a Danish startup focused on web application security, presented findings from a large-scale study involving the scanning of nearly 4 million hosts globally. The study uncovered widespread vulnerabilities in web applications, including file leaks, dangling DNS records, vulnerable FTP servers, and persistent cross-site scripting (XSS) issues.

Key findings include:

  • File leaks: 29% of organizations had exposed sensitive data like source code, passwords, and private keys.
  • Dangling DNS records: Risks of subdomain takeover attacks due to outdated DNS entries.
  • Vulnerable FTP servers: 7.9% of servers running ProFTPD 1.3.5 were at risk due to a file copy module vulnerability.
  • XSS vulnerabilities: 4% of companies had known XSS issues, posing significant security risks.

Sebastian stressed that web application firewalls (WAFs) are not foolproof and cannot replace fixing underlying vulnerabilities. He concluded by emphasizing the importance of early investment in application security during the development process to prevent future attacks.

"We’ve seen lots of leaks or file leaks that are sitting out there—files that you probably would not want to expose to the public internet."

"Web application firewalls can maybe do something, but they’re not going to save you. It’s much, much better to go ahead and fix the actual issues in your application."

2024-08-30 BeEF - The Browser Exploitation Framework Project { beefproject.com }

image-20240830140152625

2024-08-31 stack-auth/stack: Open-source Clerk/Auth0 alternative { github.com }

Stack Auth is a managed user authentication solution. It is developer-friendly and fully open-source (licensed under MIT and AGPL).

Stack gets you started in just five minutes, after which you'll be ready to use all of its features as you grow your project. Our managed service is completely optional and you can export your user data and self-host, for free, at any time.

image-20240830194951803

Markdown

2024-09-02 romansky/dom-to-semantic-markdown: DOM to Semantic-Markdown for use in LLMs { github.com }

image-20240901232517227

C || C++

2024-09-02 Faster Integer Parsing { kholdstare.github.io }

image-20240901233314132

2024-09-01 c++ - What is the curiously recurring template pattern (CRTP)? - Stack Overflow { stackoverflow.com }

image-20240901144719965

image-20240901144828823

The Era of AI

2024-09-02 txtai { neuml.github.io }

txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.

image-20240901235351463

2024-09-02 Solving the out-of-context chunk problem for RAG { d-star.ai }

Many of the problems developers face with RAG come down to this: Individual chunks don’t contain sufficient context to be properly used by the retrieval system or the LLM. This leads to the inability to answer seemingly simple questions and, more worryingly, hallucinations.

Examples of this problem

  • Chunks oftentimes refer to their subject via implicit references and pronouns. This causes them to not be retrieved when they should be, or to not be properly understood by the LLM.
  • Individual chunks oftentimes don’t contain the complete answer to a question. The answer may be scattered across a few adjacent chunks.
  • Adjacent chunks presented to the LLM out of order cause confusion and can lead to hallucinations.
  • Naive chunking can lead to text being split “mid-thought” leaving neither chunk with useful context.
  • Individual chunks oftentimes only make sense in the context of the entire section or document, and can be misleading when read on their own.

2024-08-30 MahmoudAshraf97/whisper-diarization: Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper { github.com }

2024-08-30 openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision { github.com }

2024-08-30 ggerganov/whisper.cpp: Port of OpenAI's Whisper model in C/C++ { github.com }

2024-09-01 microsoft/semantic-kernel: Integrate cutting-edge LLM technology quickly and easily into your apps { github.com }

2024-09-01 How to add genuinely useful AI to your webapp (not just chatbots) - Steve Sanderson - YouTube { www.youtube.com }

image-20240901012420483

The talk presented here dives into the integration of AI within applications, particularly focusing on how developers, especially those familiar with .NET and web technologies, can leverage AI to enhance user experiences. Here are the key takeaways and approaches from the session:

Making Applications Intelligent: The speaker discusses various interpretations of making an app "intelligent." It’s not just about adding a chatbot. While chatbots can create impressive demos quickly, they may not necessarily be useful in production. For AI to be genuinely beneficial, it must save time, improve job performance, and be accurate. The speaker challenges developers to quantify these benefits rather than rely on assumptions.

"If you try to put it into production, are people going to actually use it? Well, maybe it depends... does this thing actually save people time and enable them to do their job better than they would have otherwise?"

Patterns of AI Integration: The speaker introduces several UI-level AI enhancements such as Smart Components. These are experiments allowing developers to add AI to the UI layer without needing to rebuild the entire app. An example given is a Smart Paste feature that allows users to paste large chunks of text, which AI then parses and fills out the corresponding fields in a form. This feature improves user efficiency by reducing the need for repetitive and mundane tasks.

Another example is the Smart ComboBox, which uses semantic search to match user input with relevant categories, even when the exact terms do not appear in the list. This feature is particularly useful in scenarios where users may not know the exact terminology.

Deeper AI Integration: Moving beyond UI enhancements, the speaker explores deeper layers of AI integration within traditional web applications like e-commerce platforms. For instance, AI can be used to:

  • Semantic Search: Improve search functionality so that users don't need to know the exact phrasing.
  • Summarization: Automatically generate descriptive titles for support tickets to help staff quickly identify issues.
  • Classification: Automatically categorize support tickets to streamline workflows and save staff time.
  • Sentiment Analysis: Provide sentiment scores to help staff prioritize urgent issues.

"I think even in this very traditional web application, there's clearly lots of opportunity for AI to add a lot of genuine value that will help your staff actually be more productive."

Data and AI Integration: The talk also delves into the importance of data in AI applications. The speaker introduces the Semantic Kernel, a .NET library for working with AI, and demonstrates how to generate data using LLMs (Large Language Models) locally on the development machine using Ollama. The process involves creating categories, products, and related data (like product manuals) in a structured manner.

Data Ingestion and Semantic Search: The speaker showcases how to ingest unstructured data, such as PDFs, and convert them into a format that AI can use for semantic search. Using the PDFPig library, the speaker demonstrates extracting text from PDFs, chunking it into smaller, meaningful fragments, and then embedding these chunks into a semantic space. This allows for efficient, relevant searches within the data, enhancing the AI’s ability to provide accurate information quickly.

Implementing Inference with AI: As the talk progresses, the speaker moves on to implementing AI-based inference within a Blazor application. By integrating summarization directly into the workflow, the application can automatically generate summaries of customer interactions, helping support staff to quickly understand the context of a ticket without reading through the entire conversation history.

"I want to generate an updated summary for it... Generate a summary of the entire conversation log at that point."

Function Calling and RAG (Retrieval-Augmented Generation): The speaker discusses a more complex AI pattern—RAG—which involves the AI model retrieving specific data to answer queries. While standard RAG implementations rely on specific AI platforms, the speaker demonstrates a custom approach that works across various models, including locally run models like Ollama. This approach involves checking if the AI has enough context to answer a question and then retrieving relevant information if needed.

Job interview / Algorithms

2024-09-01 Understanding B-Trees: The Data Structure Behind Modern Databases - YouTube { www.youtube.com }

image-20240901011314149

Editing Distance

2024-09-02 Needleman–Wunsch algorithm - Wikipedia { en.wikipedia.org }

2024-09-02 Levenshtein distance - Wikipedia { en.wikipedia.org }

function LevenshteinDistance(char s[1..m], char t[1..n]):
    // for all i and j, d[i,j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t
    declare int d[0..m, 0..n]

    set each element in d to zero

    // source prefixes can be transformed into empty string by
    // dropping all characters
    for i from 1 to m:
        d[i, 0] := i

    // target prefixes can be reached from empty source prefix
    // by inserting every character
    for j from 1 to n:
        d[0, j] := j

    for j from 1 to n:
        for i from 1 to m:
            if s[i] = t[j]:
                substitutionCost := 0
            else:
                substitutionCost := 1

            d[i, j] := minimum(d[i-1, j] + 1,                   // deletion
                               d[i, j-1] + 1,                   // insertion
                               d[i-1, j-1] + substitutionCost)  // substitution

    return d[m, n]

· 13 min read

Newsletters

2024-08-26 JavaScript Weekly Issue 701: August 22, 2024 { javascriptweekly.com }

Good Reads

2024-08-26 ⭐️ On Writing Well | nikhil.bafna { zodvik.com }

image-20240825174540032

Tech Talks

2024-08-30 Messaging: The fine line between awesome and awful - Laila Bougria - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20240829180120559

Here's a digest of the talk:

I started with a light-hearted introduction about my cultural background and how it relates to having a siesta after lunch, which isn’t an option today since I'm giving this talk. About a decade ago, I was working on a project where we were building a retail system from scratch for a client. Initially, we created a monolithic architecture, which worked well for a while. However, as the business grew, we faced challenges. We saw increased demand and the architecture started showing its limitations. We experienced issues like failed requests, high strain on the database, and even system crashes.

Given the new demands, we decided to evolve our architecture by moving to a message-based system. We hoped this would solve our problems by improving performance, increasing resilience, and allowing easier scaling. However, we quickly realized that the transition wasn’t as smooth as expected. Instead of getting faster, the system became slower, and we started experiencing issues with UI inconsistency. Customers reported cases where the system didn't reflect their actions, leading to confusion and a poor user experience. We also encountered duplicate messages and messages arriving out of order, which led to significant failures and side effects in the system.

One critical lesson we learned was the importance of understanding the shift from synchronous to asynchronous communication. In a synchronous system, there's a direct, immediate response. But in an asynchronous system, messages might take a while to process, leading to delays and out-of-order execution. This can cause unexpected behaviors in the system, making troubleshooting a lot more challenging.

To address the issues with communication patterns, we explored different messaging patterns like one-way communication, request-response, and publish-subscribe. Each has its use case, but we learned that choosing the right pattern is crucial for system stability. For instance, publish-subscribe can be overused, leading to what I call the "passive-aggressive publisher" problem, where a service publishes an event expecting others to act on it, but without direct control, this can cause problems.

A key takeaway is that decoupling doesn’t happen automatically in a message-based system. It requires deliberate effort to identify service boundaries and manage coupling properly. When splitting a monolith, it’s crucial to ask the right questions about the domain and not just accept the default ordering of processes. For example, questioning whether the order in which tasks are executed is necessary can help in finding opportunities for parallel execution, thereby improving efficiency.

We also found that managing SLA (Service Level Agreements) became essential in an asynchronous environment. We started using delayed messages to ensure that tasks were completed within an acceptable time frame. This helped us recover gracefully from both technical and business failures, like handling payment processing delays or credit card issues.

In the end, it’s not just about transitioning to a new architecture but about understanding the trade-offs and challenges that come with it. The key is to balance the benefits of decoupling with the need to maintain order and consistency in the system. By carefully choosing the right communication patterns and managing the inevitable coupling, we can build systems that are both scalable and resilient, even in the face of growing demand.

This journey taught us that evolving a system architecture isn’t just about adopting new technologies but also about adapting our approach to fit the new reality. And sometimes, the lessons learned the hard way are the most valuable ones.

“One of the things we also observed is that sometimes we would receive duplicate messages, and the thing is, we didn’t really account for that. So that’s when we started to see failures and even side effects sometimes.”

“If you need a response with any data to continue when you publish an event—no. Then again, passive-aggressive communication and finally if you need any control over who receives or subscribes to that event—also not a good fit.”

The talk emphasizes the importance of thoughtful architecture decisions, especially when transitioning to a message-based system, and the need for continuous collaboration with business stakeholders to align the system’s behavior with business requirements.

Not financial advice

2024-08-30 Ditch Banks — Go With Money Market Funds and Treasuries { thefinancebuff.com }

2024-08-30 Ditch banks – Go with money market funds and treasuries | Hacker News { news.ycombinator.com }

image-20240829181735734

Inspiration

2024-08-30 YTCH { ytch.xyz }

https://news.ycombinator.com/item?id=41247023 If YouTube had actual channels

image-20240829190610309

2024-08-30 GlyphDrawing.Club -blog { blog.glyphdrawing.club }

image-20240829185659044

2024-08-30 Vanilla JSX { vanillajsx.com }

2024-08-30 VanillaJSX.com | Hacker News { news.ycombinator.com }

image-20240829181223208

2024-08-30 Blender Shortcuts { hollisbrown.github.io }

image-20240829181030345

🏴‍☠️ Borrow it!

2024-08-30 clemlesne/scrape-it-now: A website to scrape? There's a simple way. { github.com }

image-20240829180644387

⭐️ Simplify HTML / Reader view

2024-08-30 aaronsw/html2text: Convert HTML to Markdown-formatted text. { github.com }

2024-08-30 Tracking supermarket prices with playwright { www.sakisv.net }

image-20240829190924245

The Era of AI

2024-08-26 chartdb/chartdb: Free and Open-source database diagrams editor, visualize and design your DB with a single query. { github.com }

Open-source database diagrams editor. No installation • No database password required.

image-20240825174357928

2024-08-30 Deep Live Cam: Real-Time Face Swapping and One-Click Video Deepfake Tool { deeplive.cam }

image-20240829192202923

WebDev

Charts

2024-08-26 Let’s Make A Bar Chart Tutorial | Vega { vega.github.io }

image-20240825174836815

CSS

2024-08-30 CSS Grid Areas { ishadeed.com }

image-20240829181923758

Keyboard / Game Pad

2024-08-26 jamiebuilds/tinykeys: A tiny (~650 B) & modern library for keybindings. { github.com }

A tiny (~650 B) & modern library for keybindings. See Demo

import { tinykeys } from "tinykeys" // Or `window.tinykeys` using the CDN version

tinykeys(window, {
  "Shift+D": () => {
    alert("The 'Shift' and 'd' keys were pressed at the same time")
  },
  "y e e t": () => {
    alert("The keys 'y', 'e', 'e', and 't' were pressed in order")
  },
  "$mod+([0-9])": event => {
    event.preventDefault()
    alert(`Either 'Control+${event.key}' or 'Meta+${event.key}' were pressed`)
  },
})

2024-08-30 alvaromontoro/gamecontroller.js: A JavaScript library that lets you handle, configure, and use gamepads and controllers on a browser, using the Gamepad API { github.com }

Styles

2024-08-26 Newspaper Style Design { codepen.io }

image-20240825175903485

JavaScript / DOM

2024-08-30 Patterns for Memory Efficient DOM Manipulation with Modern Vanilla JavaScript – Frontend Masters Boost { frontendmasters.com }

image-20240829184512754

This article focuses on optimizing DOM manipulation using modern vanilla JavaScript to enhance performance and reduce memory usage in web applications. Understanding and applying these low-level techniques can be crucial in scenarios where performance is a priority, such as in large projects like Visual Studio Code, which relies heavily on manual DOM manipulation for efficiency.

The article begins with an overview of the Document Object Model (DOM), explaining that it is a tree-like structure where each HTML element represents a node. The common DOM APIs like querySelector(), createElement(), and appendChild() are introduced, emphasizing that while frameworks like React or Angular abstract these details, knowing how to manipulate the DOM directly can lead to performance gains.

A significant point is the trade-off between using frameworks and manual DOM manipulation. While frameworks simplify development, they can also introduce performance overhead through unnecessary re-renders and excessive memory usage. The article argues that in performance-critical applications, direct DOM manipulation can prevent these issues by reducing the garbage collector's workload.

To optimize DOM manipulation, several tips are provided:

  • Hiding or showing elements is preferred over creating and destroying them dynamically. This approach keeps the DOM more static, leading to fewer garbage collection calls and reduced client-side logic complexity.
  • For example, instead of dynamically creating an element with JavaScript, it’s more efficient to toggle its visibility with classes (el.classList.add('show') or el.style.display = 'block').

Other techniques discussed include:

  • Using textContent instead of innerText for reading content from elements, as it is faster and avoids forcing a reflow.
  • insertAdjacentHTML is preferred over innerHTML because it inserts content without destroying existing DOM elements first.
  • For the fastest performance, the <template> tag combined with appendChild or insertAdjacentElement is recommended for creating and inserting new DOM elements efficiently.

The article also covers advanced techniques for managing memory:

  • WeakMap and WeakRef are used to avoid memory leaks by ensuring that references to DOM nodes are properly garbage collected when the nodes are removed from the DOM.
  • Proper cleanup of event listeners is emphasized, including methods like removeEventListener, using the once parameter, and employing event delegation to minimize the number of event listeners in dynamic components.

For handling multiple event listeners, the AbortController is introduced as a method to unbind groups of events easily. This can be particularly useful when needing to clean up or cancel multiple event listeners at once.

The article wraps up with profiling and debugging advice. It recommends using Chrome DevTools for memory profiling and JavaScript execution time analysis to ensure that DOM operations do not lead to performance bottlenecks or memory leaks.

"Efficient DOM manipulation isn’t just about using the right methods—it’s also about understanding when and how often you’re interacting with the DOM."

The key takeaway is that while frameworks provide convenience, understanding and utilizing these low-level DOM manipulation techniques can significantly enhance the performance of web applications, particularly in performance-sensitive scenarios.

TypeScript

2024-08-26 gruhn/typescript-sudoku: Playing Sudoku in TypeScript while the type checker highlights mistakes. { github.com }

image-20240825175100097

Markdown

2024-08-26 Getting Started | Milkdown { milkdown.dev }

image-20240825180829277

  • 📝 WYSIWYG Markdown - Write markdown in an elegant way
  • 🎨 Themable - Create your own theme and publish it as an npm package
  • 🎮 Hackable - Create your own plugin to support your awesome idea
  • 🦾 Reliable - Built on top of prosemirror and remark
  • Slash & Tooltip - Write faster than ever, enabled by a plugin.
  • 🧮 Math - LaTeX math equations support via math plugin
  • 📊 Table - Table support with fluent ui, via table plugin
  • 🍻 Collaborate - Shared editing support with yjs
  • 💾 Clipboard - Support copy and paste markdown, via clipboard plugin
  • 👍 Emoji - Support emoji shortcut and picker, via emoji plugin

SteamDeck

2024-08-30 mikeroyal/Steam-Deck-Guide: Steam Deck Guide. Learn all about the Tools, Accessories, Games, Emulators, and Gaming Tips that will make your Steam Deck an awesome Gaming Handheld or a Portable Computer Workstation. { github.com }

image-20240829191917324

Job Interview Prep

2024-08-30 Visual Data Structures Cheat-Sheet - by Nick M { photonlines.substack.com }

image-20240829185934397

image-20240829190010682

image-20240829190043550

image-20240829190132236

Workplace

2024-08-30 The Science of Well-Being | Coursera { www.coursera.org }

image-20240829191217188

The Science of Well-Being course by Yale University challenges common assumptions about happiness and teaches evidence-based strategies for improving well-being.

It explains that external factors like wealth have less impact on long-term happiness than we often believe.

Hedonic adaptation shows that people quickly return to a baseline level of happiness after changes in their lives, highlighting the need for sustainable sources of well-being.

Practices like gratitude, mindfulness, and meditation are introduced to help shift focus and improve emotional regulation.

The course emphasizes the importance of social connections and forming healthy habits as key components of happiness.

2024-08-30 Your life, your volume | Loop Earplugs { www.loopearplugs.com }

Unfortunately, this is not sponsored content. Seriously, my colleague Lisi recommended these.

image-20240829182035762

Burnout

Burnout can manifest in different ways depending on the underlying causes. Here’s an expanded explanation of the two types of burnout mentioned:

1. Burnout from Boredom and Routine:

This type of burnout occurs when tasks become monotonous, and there’s a lack of challenge or variety in the work. Over time, this can lead to a sense of disengagement and apathy.

Tips to Mitigate This Type of Burnout:

  • Introduce Variety: Rotate tasks, take on new projects, or explore different aspects of your role to break the monotony.
  • Set Personal Goals: Establishing new challenges or learning opportunities can reinvigorate your sense of purpose.
  • Take Breaks: Step away from work periodically to reset your mind and come back with fresh energy.
  • Seek Feedback: Regularly ask for feedback to ensure you’re growing and improving in your role, which can make work more engaging.
  • Incorporate Creativity: Find ways to add a creative touch to your work, even in routine tasks, to make them more interesting.

2. Burnout from Too Many Changes and Uncertainty:

This type of burnout arises when there’s a constant state of flux, leading to stress and anxiety due to the unpredictability of work.

Tips to Mitigate This Type of Burnout:

  • Prioritize and Organize: Break down tasks into manageable steps and prioritize them to regain a sense of control.
  • Embrace Flexibility: Accept that change is inevitable and try to adapt by being flexible and open to new approaches.
  • Develop Coping Strategies: Practice stress-relief techniques like mindfulness, deep breathing, or exercise to manage anxiety.
  • Seek Support: Talk to colleagues, supervisors, or a professional about your concerns to gain perspective and support.
  • Focus on What You Can Control: Concentrate on aspects of your work where you can make an impact, rather than worrying about uncertainties beyond your control.

General Tips to Combat Burnout:

  • Maintain Work-Life Balance: Ensure you’re taking time for yourself outside of work to recharge.
  • Regular Exercise and Healthy Eating: Physical well-being can greatly influence mental health and resilience.
  • Limit Overtime: Avoid consistently working long hours, which can lead to exhaustion.
  • Take Vacations: Time away from work is crucial for long-term productivity and well-being.
  • Seek Professional Help: If burnout becomes overwhelming, don’t hesitate to consult with a mental health professional.

Personal Blogs

2024-08-26 Articles { codinghelmet.com }

Zoran Horvat

image-20240825181357093


📚️ Good Reads

2024-06-16 ✏️ How to Build Anything Extremely Quickly - Learn How To Learn

Found in Programming Digest.

Outline speedrunning algorithm:

  1. Make an outline of the project

  2. For each item in the outline, make an outline. Do this recursively until the items are small

  3. Fill in each item as fast as possible

    • You’ll get more momentum by speedrunning it, which feels great, and will make you even more productive

    • DO NOT PERFECT AS YOU GO. This is a huge and common mistake.

    • Finally, once completely done, go back and perfect

    • Color the title text, figure out if buttons should have 5% or 6% border radius, etc

    • Since you’re done, you’ll be less stressed, have a much clearer mind, and design your project better

    • And hey, you’ll enjoy the whole process more, and end up making more things over the long run, causing you to learn/grow more

image-20240806224912272

2024-06-18 A Long Guide to Giving a Short Academic Talk - Benjamin Noble

Anatomy of a Short Talk

Short academic talks tend to follow a standard format:

  • Motivation of the general idea. This can take the form of an illustrative example from the real world or it can highlight a puzzle or gap in the existing scholarship.
  • Ask the research question and preview your answer.
  • A few brief references to the literature you’re speaking to.
  • Your theoretical innovation.
  • An overview of the data underlying the result.
  • Descriptive statistics (if relevant).
  • (Maybe the statistical approach or model, but only if it’s something impressive and/or non-standard. The less Greek the better.)
  • Statistical results IN FIGURE FORM! No regression tables please.
  • Conclusion that restates your main finding. Then, briefly reference your other results (which you have in your appendix slides and would be happy to discuss further in Q&A), and highlight the broader implications of your research.

image-20240806225128203

2024-06-26 What's hidden behind "just implementation details" | nicole@web

Found in Programming Digest: Always Measure One Level Deeper

image-20240806225325353

2024-06-29 A Bunch of Programming Advice I’d Give To Myself 15 Years Ago - Marcus' Blog

If you (or your team) are shooting yourselves in the foot constantly, fix the gun

Regularly identify and fix recurring issues in your workflow or codebase to simplify processes and reduce errors. Don't wait for an onboarding or major overhaul to address these problems.

Assess the trade-off you’re making between quality and pace, make sure it’s appropriate for your context

Evaluate the balance between speed and correctness based on the project's impact and environment. In non-critical applications, prioritize faster shipping and quicker fixes over exhaustive testing.

Spending time sharpening the axe is almost always worth it

Invest time in becoming proficient with your tools and environment. Learn shortcuts, become a fast typist, and know your editor and OS well. This efficiency pays off in the long run.

If you can’t easily explain why something is difficult, then it’s incidental complexity, which is probably worth addressing

Simplify or refactor complex code that can't be easily explained. This reduces future maintenance and makes your system more robust.

Try to solve bugs one layer deeper

Address the root cause of bugs rather than applying superficial fixes. This approach results in a cleaner, more maintainable system.

Don’t underestimate the value of digging into history to investigate some bugs

Use version control history to trace the origin of bugs. Tools like git bisect can be invaluable for pinpointing changes that introduced issues.

Bad code gives you feedback, perfect code doesn’t. Err on the side of writing bad code

Write code quickly to get feedback, even if it’s not perfect. This helps you learn where to focus your efforts and improves overall productivity.

Make debugging easier

Implement debugging aids such as user data replication, detailed tracing, and state debugging. These tools streamline the debugging process and reduce time spent on issues.

When working on a team, you should usually ask the question

Don’t hesitate to ask more experienced colleagues for help. It’s often more efficient than struggling alone and fosters a collaborative environment.

Shipping cadence matters a lot. Think hard about what will get you shipping quickly and often

Optimize your workflow to ensure frequent and fast releases. Simplify processes, use reusable patterns, and maintain a system free of excessive bugs to improve shipping speed.

2024-06-30 How Does Facebook Manage to Serve Billions of Users Daily?

Found in 2024-06-30 Programming Digest: The Itanic Saga

You might be wondering, “Well, can’t we just query the database to get the posts that should be shown in the feed of a user?”. Of course, we can – but it won’t be fast enough. The database is more like a warehouse, where the data is stored in a structured way. It’s optimized for storing and retrieving data, but not for serving data fast.

The cache is more like a shelf, where the data is stored in a way that it can be retrieved quickly.

2024-07-15 How To Know When It's Time To Go

Found in 2024-07-15 Ten Years with Microservices :: Programming Digest

I retired in 2021 after 40 years as a programmer, not because I couldn't keep up but because I lost interest. Careers evolve, and everyone eventually reaches a point where they can no longer continue as they have. This isn't just about retirement; it can happen anytime. Some people become obsolete due to outdated technology, lose passion, or are forced out by market changes.

Sustaining a long programming career is challenging due to rapid technological shifts. Many of my peers either moved into management or became obsolete. It's essential to be honest with yourself about your ability to keep up and your job satisfaction. Sometimes, leaving programming or transitioning to a different field can bring greater fulfillment.

"Are you keeping up to date sufficiently to continue the job? Is the job even interesting anymore, or is there something else you would rather do?"

Making informed career decisions is crucial. Age and ability are not necessarily correlated, and personal fulfillment should take priority over financial reasons. Even in retirement, I continue to write code for my generative art practice, finding joy in the complexity and creativity it offers.

"Programming can be a fun career, a horrible nightmare, or something in between, and it never stands still."

Evaluate your career honestly, be open to change, and explore new opportunities when the current path no longer suits you.

2024-07-18 ‼️ Panic! at the Tech Job Market ‼️

Warning: this post is long, but a pleasure to read. I used Microsoft Edge TTS to listen to it and spent two good hours.

“I have the two qualities you require to see absolute truth: I am brilliant and unloved.”

"By the power of drawing two lines, we see correlation is causation and you can’t argue otherwise: interest rates go up, jobs go down."

"Nepo companies are the most frustrating because they suck up all the media attention for being outsized celebrity driven fads."

"Initial growth companies are the worst combination of high-risk, low-reward effort-vs-compensation tradeoffs."

"Modern tech hiring... has become a game divorced from meaningfully judging individual experience and impact."

"You must always open your brain live in front of people to dump out immediate answer to a series of pointless problems."

"Your job is physically impossible. You will always feel drained and incompetent because you can’t actually do everything everyday."

"AWS isn’t hands off 'zero-experience needed magic cloud'; AWS is actually 'datacenter as a service.'"

"The company thought they had 10,000 users per day... but my internal metrics showed only 300 users per day actually used the backend APIs."

"Most interview processes don’t even consider a person’s actual work and experience and capability."

"At some point, a switch flipped in the tech job market and 'programmer jobs' just turned into zero-agency task-by-task roles working on other people’s ideas under other people’s priorities to accomplish other people’s goals."

🎯 How things work

2024-07-15 How SQL Query works? SQL Query Execution Order for Tech Interview - DEV Community

Found in 2024-07-15 Ten Years with Microservices :: Programming Digest

image-20240806225527901

📢 Good Talks

2024-07-13 What you can learn from an open-source project with 300 million downloads - Dennis Doomen - YouTube

image-20240806225712599

Best Practices for Maintaining Fluent Assertions and Efficient Project Development

This talk covers effective techniques and tools for maintaining Fluent Assertions and managing development projects efficiently. It explores the use of GitHub for version control, emphasizing templates, change logs, and semantic versioning. The speaker also shares insights on tools like Slack, GitKraken, PowerShell, and more, highlighting their roles in streamlining workflows, ensuring code quality, and enhancing collaboration. Ideal for developers and project managers aiming to optimize their development processes and maintain high standards in their projects.

Tools discussed:

Project Management and Collaboration Tools

GitHub: GitHub hosts repositories, tracks issues, and integrates with various tools for maintaining projects. It supports version control and collaboration on code, providing features like pull requests, branch management, and GitHub Actions for CI/CD. Example output: Issues, pull requests, repository branches.

Development and Scripting Tools

Windows Terminal: Windows Terminal integrates various command-line interfaces like PowerShell and Bash into a single application, allowing for a seamless command-line experience. Example output: Command outputs from PowerShell, CMD, and Bash.

PowerShell: PowerShell is a scripting and automation framework from Microsoft, offering a command-line shell and scripting language for system management and automation tasks. Example output: Script execution results, system management tasks.

PSReadLine: PSReadLine enhances the PowerShell command-line experience with features like syntax highlighting, history, and better keyboard navigation. Example output: Enhanced command history navigation, syntax-highlighted command input.

vors/ZLocation: ZLocation is a command-line tool that allows quick navigation to frequently accessed directories by typing partial directory names. Example output: Instantly switching to a frequently used directory.

Git and Version Control Tools

GitHub Flow Like a Pro with these 13 Git Aliases | You’ve Been Haacked: Git Extensions/Aliases simplify Git command-line usage by providing shorthand commands and scripts to streamline common Git tasks. Example output: Simplified Git commands like git lg for a condensed log view.
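
For illustration, a couple of aliases of this kind in `~/.gitconfig` (the names and flags here are examples, not necessarily the thirteen from the post):

```ini
[alias]
    # short status and checkout shorthands
    st = status -sb
    co = checkout
    # condensed, graphical log view
    lg = log --graph --oneline --decorate --all
```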

GitKraken: GitKraken is a graphical interface for Git that provides a visual overview of your repository, including branches, commits, and merges, making it easier to manage complex Git workflows. Example output: Visual representation of branch history and commit graphs.

JetBrains Rider: JetBrains Rider is an IDE specifically designed for .NET development, providing advanced coding assistance, refactoring, and debugging features to enhance productivity. Example output: Code completion suggestions, integrated debugging sessions.

Code Quality and Formatting Tools

EditorConfig: EditorConfig helps maintain consistent coding styles across different editors and IDEs by defining coding conventions in a simple configuration file. Example output: Automatically formatted code based on .editorconfig settings.

Sergio0694/PolySharp: PolySharp allows the use of newer C# syntax features in older .NET versions, enabling modern coding practices in legacy projects. Example output: Code using new C# syntax features in older .NET environments.

Build and Deployment Tools

Nuke: Nuke is a build automation system for .NET that uses C# for defining build steps and pipelines, providing flexibility and type safety. Example output: Automated build and deployment steps written in C#.

GitVersion: GitVersion generates version numbers based on Git history, branch names, and tags, ensuring consistent and semantically correct versioning. Example output: Semantic version numbers automatically updated in the project.

Dependency Management and Security Tools

Dependabot: Dependabot automatically scans repositories for outdated dependencies and creates pull requests to update them, helping to keep dependencies up to date and secure. Example output: Pull requests for dependency updates with detailed change logs.

CodeQL: CodeQL is a code analysis tool integrated with GitHub that scans code for security vulnerabilities and other issues, providing detailed reports and alerts. Example output: Security alerts and code scanning reports.

Testing and Benchmarking Tools

Stryker.NET: Stryker.NET is a mutation testing tool for .NET that modifies code to check if tests detect the changes, ensuring comprehensive test coverage. Example output: Mutation testing reports showing test effectiveness.

ArchUnit: ArchUnit checks architecture rules in Java projects, ensuring that dependencies and structure conform to specified rules. (Similar tools exist for .NET). Example output: Reports on architecture rule violations.

Documentation Tools

Docusaurus: Docusaurus helps build project documentation websites easily, providing a platform for creating and maintaining interactive, static documentation. Example output: Interactive documentation websites generated from markdown files.

Miscellaneous Tools

CSpell: CSpell is an NPM package used for spell checking in code projects, ensuring textual accuracy in code comments, strings, and documentation. Example output: Spell check reports highlighting errors and suggestions.

2024-07-14 Failure & Change: Principles of Reliable Systems • Mark Hibberd • YOW! 2018 - YouTube

image-20240806225909552

Mark Hibberd's talk "Failure & Change: Principles of Reliable Systems" at YOW! 2018 explores building and operating reliable software systems, focusing on understanding and managing failures in complex and large-scale systems.

Reliability is defined as consistently performing well. Using airline engines as an example, Hibberd illustrates how opting for fewer engines can sometimes be safer due to lower failure probability and fewer knock-on effects. The key is to control the scope and consequences of failures.

"We need to be resilient to failure by controlling the scope and consequences of our failure."

Redundancy and independence are crucial. Redundancy should be managed carefully to maintain reliability, avoiding tightly coupled systems where a single failure can cascade into multiple failures. Service granularity helps manage failures effectively by breaking down systems into smaller, independent services, each handling specific responsibilities and passing values around to maintain independence.

"Service granularity gives us this opportunity to trade the likelihood of a failure for the consequences of a failure."

In operations, it's essential to implement health checks and monitoring to detect failures early and route around them aggressively to prevent overload and cascading failures. Using circuit breakers to cut off communication to failing services allows them to recover.

Designing systems with independent services is key. Services should operate independently, using shared values rather than shared states or dependencies. For example, an online chess service can be broken down into services for pairing, playing, history, and analysis, each maintaining independence.

Operational strategies include implementing timeouts and retries to handle slow responses and prevent overloads, and deploying new versions gradually to test against real traffic and verify responses. Proxies can interact with unreliable code to maintain a reliable view of data.

"Timeouts are so important that we probably should have some sort of government-sponsored public service announcement."

Handling change in complex systems involves accommodating changes without significant disruptions through continuous deployment and rolling updates. Techniques like in-production verification and routing requests to both old and new versions during deployment help ensure reliability.

Data management is also crucial. Separating data storage from application logic helps maintain reliability during changes. Avoid coupling data handling directly with services to facilitate easier updates and rollbacks.

"We want to create situations where we can gracefully roll things out and flatten out this time dimension."

Hibberd emphasizes making informed trade-offs in architecture, redundancy, and granularity to enhance the reliability of software systems. Continuous monitoring, strategic failure handling, and incremental deployment are essential to ensure systems remain resilient and reliable despite inevitable failures and changes.

🤖 The Era of AI

2024-07-01 The limitations of LLMs, or why are we doing RAG? | EDB

image-20240806230126557

Despite powerful capabilities across many tasks, Large Language Models (LLMs) are not know-it-alls. If you've used ChatGPT or other models, you'll have experienced how they can't reasonably answer questions about proprietary information. What's worse, it isn't just that they don't know about proprietary information; they are unaware of their own limitations, and even if they were aware, they still wouldn't have access to that information. That's where approaches like Retrieval Augmented Generation (RAG) come in, giving LLMs the ability to incorporate new and proprietary information into their answers.
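
The retrieval step that RAG adds can be sketched with a toy example. This is an assumption-laden illustration: a bag-of-words vector and cosine similarity stand in for a real embedding model and vector store:

```javascript
// Toy retrieval step of RAG: a real system would use an embedding
// model and a vector database; here bag-of-words vectors and cosine
// similarity stand in for both.
const docs = [
  "our refund policy allows returns within 30 days",
  "the office wifi password rotates every monday",
];

function vectorize(text, vocab) {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  return vocab.map((v) => words.filter((w) => w === v).length);
}

function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}

function retrieve(query) {
  const vocab = [...new Set(
    docs.concat(query).join(" ").toLowerCase().split(/\W+/).filter(Boolean)
  )];
  const q = vectorize(query, vocab);
  let best = 0, bestScore = -1;
  docs.forEach((d, i) => {
    const score = cosine(vectorize(d, vocab), q);
    if (score > bestScore) { bestScore = score; best = i; }
  });
  return docs[best];
}

const query = "what is the refund policy";
const context = retrieve(query);
// The retrieved text is then prepended to the LLM prompt:
const prompt = `Answer using this context:\n${context}\n\nQuestion: ${query}`;
console.log(context);
```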

2024-06-18 What Is ChatGPT Doing … and Why Does It Work?—Stephen Wolfram Writings { writings.stephenwolfram.com }

image-20240806230337671

It’s Just Adding One Word at a Time

That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text. I should say at the outset that I’m going to focus on the big picture of what’s going on—and while I’ll mention some engineering details, I won’t get deeply into them. (And the essence of what I’ll say applies just as well to other current “large language models” [LLMs] as to ChatGPT.)

The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”
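
That loop (score candidates for the next word, append one, repeat) can be shown with a toy bigram model, a stand-in for the transformer ChatGPT actually uses:

```javascript
// Toy "next word" model: counts which word follows which in a tiny
// corpus, then greedily extends a prompt one word at a time.
// Real LLMs run the same loop with a neural network scoring candidates.
const corpus = "the cat sat on the mat the cat ate the fish".split(" ");

const follows = {};
for (let i = 0; i < corpus.length - 1; i++) {
  const [w, next] = [corpus[i], corpus[i + 1]];
  (follows[w] ??= {})[next] = (follows[w][next] ?? 0) + 1;
}

function continueText(words, steps) {
  const out = [...words];
  for (let s = 0; s < steps; s++) {
    const options = follows[out[out.length - 1]];
    if (!options) break; // no known continuation
    // greedy: take the most frequent continuation
    const next = Object.entries(options).sort((a, b) => b[1] - a[1])[0][0];
    out.push(next);
  }
  return out.join(" ");
}

console.log(continueText(["the"], 3));
```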

2024-06-22 Practical Applications of Generative AI: How to Sprinkle a Little AI in Your App - Phil Haack - YouTube

Be Positive

  • ✅ Do this: "Explain how to implement a sorting algorithm."
  • ❌ Don't do this: "Don't talk about unrelated algorithms."
  • Example: Nike was on the right track when they said, "Just do it." Telling a prompt what not to do can lead it to do just that.

Give the Model an Out

  • ✅ Do this: "If you don't know the answer, it's okay to say 'I don't know.'"
  • ❌ Don't do this: "You must provide an answer for every question."
  • Let the model say 'I don’t know' to reduce hallucinations.

Break Complex Tasks into Subtasks

  • ✅ Do this: "Write three statements for and against using AI in education. Then use those statements to write an essay."
  • ❌ Don't do this: "Write an essay on AI in education."
  • Example: For an essay, ask the AI to write three statements for and against a point. Then have it use those statements to write the essay.

Ask for Its Chain of Thought

  • ✅ Do this: "Explain why you think using AI can improve customer service."
  • ❌ Don't do this: "Just tell me how AI can improve customer service without any explanation."
  • Ask it to explain its reasoning. Lately, it seems GPT-4 does this automatically.

Check the Model’s Comprehension

  • ✅ Do this: "Do you understand the task of generating a summary of this article?"
  • ❌ Don't do this: "Summarize this article without confirming if you understood the task."
  • "Do you understand the task?"

Links

2024-07-31 Building A Generative AI Platform

image-20240806230638016

(found in 2024-07-31 Programming Digest)

After studying how companies deploy generative AI applications, I noticed many similarities in their platforms. This post outlines the common components of a generative AI platform, what they do, and how they are implemented. I try my best to keep the architecture general, but certain applications might deviate. This is what the overall architecture looks like.

2024-08-05 📌 How I Use "AI" (nicholas.carlini.com)

image-20240806230826595

  • To build complete applications for me
  • As a tutor for new technologies
  • To get started with new projects
  • To simplify code
  • For monotonous tasks
  • To make every user a "power user"
  • As an API reference


Good ideas

2024-05-11 SET.DO | AI-Powered To-do List That Gets Things Done

SET.DO researches, schedules and organizes tasks for you. Spend time doing, not planning.

image-20240609054343587

2024-05-11 Timesy: A Distraction-Free Online Timer

image-20240609055545329

Fun

2024-05-12 One Minute Park

One Minute Park is a project offering one-minute videos of parks from around the world, aiming to eventually cover all minutes in a day. Users can contribute by filming 60-second park videos, ensuring steady, unedited footage, and uploading them.

image-20240609055055245

Math!

2024-05-12 Immersive Math

image-20240609055333451

Preface

A few words about this book.

Chapter 1: Introduction

How to navigate, notation, and a recap of some math that we think you already know.

Chapter 2: Vectors

The concept of a vector is introduced, and we learn how to add and subtract vectors, and more.

Chapter 3: The Dot Product

A powerful tool that takes two vectors and produces a scalar.

Chapter 4: The Vector Product

In three-dimensional spaces you can produce a vector from two other vectors using this tool.

Chapter 5: Gaussian Elimination

A way to solve systems of linear equations.

Chapter 6: The Matrix

Enter the matrix.

Chapter 7: Determinants

A fundamental property of square matrices.

Chapter 8: Rank

Discover the behaviour of matrices.

Chapter 9: Linear Mappings

Learn to harness the power of linearity...

Chapter 10: Eigenvalues and Eigenvectors

This chapter has a value in itself.

Web development

2024-04-19 HyperFormula (v2.7.0)

Found in: https://javascriptweekly.com/issues/684

HyperFormula is a headless spreadsheet built in TypeScript, serving as both a parser and evaluator of spreadsheet formulas. It can be integrated into your browser or utilized as a service with Node.js as your back-end technology.

image-20240609062100269

2024-03-28 Write OpenAPI with TypeSpec

Github: microsoft/typespec

image-20240609062212387

Algorithms

2024-03-28 Binary array set

Despite the lack of deletion functionality, the data structure is still useful in applications that only add and test but don’t delete – for example, breadth-first search maintains an ever-growing set of visited nodes that shouldn’t be revisited. To compare time complexities with a popular alternative: a balanced binary search tree takes worst-case Θ(log n) time for each of adding, testing, or removing one element.
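
A hedged sketch of the structure (following the standard formulation of sorted power-of-two runs merged like binary-counter carries; details may differ from the linked article's code):

```javascript
// Binary array set: a list of sorted arrays whose sizes are distinct
// powers of two. Adding works like incrementing a binary counter:
// a new 1-element array "carries" by merging with equal-sized arrays.
class BinaryArraySet {
  constructor() { this.levels = []; } // levels[i] is null or holds 2^i items

  add(x) {
    let carry = [x];
    let i = 0;
    while (this.levels[i]) {
      carry = merge(this.levels[i], carry); // merge two sorted arrays
      this.levels[i] = null;
      i++;
    }
    this.levels[i] = carry;
  }

  has(x) {
    // binary-search each non-empty level: O(log^2 n) worst case
    return this.levels.some((arr) => arr && binarySearch(arr, x));
  }
}

function merge(a, b) {
  const out = [];
  let i = 0, j = 0;
  while (i < a.length || j < b.length) {
    if (j >= b.length || (i < a.length && a[i] <= b[j])) out.push(a[i++]);
    else out.push(b[j++]);
  }
  return out;
}

function binarySearch(arr, x) {
  let lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] === x) return true;
    if (arr[mid] < x) lo = mid + 1; else hi = mid - 1;
  }
  return false;
}

const set = new BinaryArraySet();
[5, 3, 8, 1].forEach((x) => set.add(x));
console.log(set.has(3), set.has(7)); // true false
```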

2024-04-19 Visualizing Algorithms

Found in: https://javascriptweekly.com/issues/684

This fantastic post is now ten years old, but I revisited it recently and it’s such a joy. Mike Bostock (of D3.js fame) visually guides us through some algorithms using both demos and code.

image-20240609062647379

2024-04-17 Solving the minimum cut problem for undirected graphs

In the study "Deterministic Near-Linear Time Minimum Cut in Weighted Graphs," the new approach to solving the minimum cut problem in weighted graphs hinges on an advanced form of cut-preserving graph sparsification. This technique meticulously reduces the original graph into a sparser version by strategically creating well-connected clusters of nodes that align with potential minimum cuts. These clusters are then contracted into single nodes, effectively simplifying the graph's complexity while maintaining the integrity of its critical structural properties. This method allows the algorithm to maintain deterministic accuracy and operate efficiently, providing a significant improvement over previous methods that were either limited to simpler graphs or relied on probabilistic outcomes.

image-20240609062816506

2024-04-02 Implementing Dijkstra's algorithm for finding the shortest path between two nodes using PriorityQueue in .NET 9

image-20240609062946764
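
For reference, the same algorithm in a compact JavaScript sketch (a sorted-array queue stands in for .NET's PriorityQueue; this is an illustration, not the article's code):

```javascript
// Dijkstra's shortest path with a naive priority queue (a re-sorted
// array; .NET 9's PriorityQueue plays this role in the linked article).
// graph: { node: [[neighbor, weight], ...] }
function dijkstra(graph, start, goal) {
  const dist = { [start]: 0 };
  const queue = [[0, start]]; // [distance, node]

  while (queue.length) {
    queue.sort((a, b) => a[0] - b[0]);          // cheapest entry first
    const [d, node] = queue.shift();
    if (node === goal) return d;                // first pop of goal is minimal
    if (d > (dist[node] ?? Infinity)) continue; // skip stale entries
    for (const [next, w] of graph[node] ?? []) {
      const nd = d + w;
      if (nd < (dist[next] ?? Infinity)) {
        dist[next] = nd;
        queue.push([nd, next]);
      }
    }
  }
  return Infinity; // goal unreachable
}

const graph = {
  A: [["B", 1], ["C", 4]],
  B: [["C", 2], ["D", 5]],
  C: [["D", 1]],
};
console.log(dijkstra(graph, "A", "D")); // 4 (A -> B -> C -> D)
```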

Interviews

2024-05-22 14 Patterns to Ace Any Coding Interview Question | HackerNoon

2024-05-17 Software Engineer interviews: Everything you need to prepare | Tech Interview Handbook

image-20240516230705440

2024-05-17 Algorithms Course - Graph Theory Tutorial from a Google Engineer - YouTube

2024-05-17 Graph Algorithms for Technical Interviews - Full Course - YouTube

2024-05-11 How do you guys get good at DP? : r/leetcode

2024-05-11 DP for Beginners Problems | Patterns | Sample Solutions - LeetCode Discuss

2024-05-11 Dynamic Programming - Learn to Solve Algorithmic Problems & Coding Challenges - YouTube 5 hours of video

2024-05-11 neetcode.io Practice this is the list of problems to practice

2024-04-20 Blind 75 - evansoohoo.github.io

image-20240609063232624

🌟 2024-04-20 Design Pinterest - TianPan.co

Software Design common interview questions and answers

2024-04-20 GitHub - donnemartin/system-design-primer: Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.

2024-04-10 The Amazon Leadership Principles - A Complete Interview Guide

This article provides an in-depth guide to understanding and preparing for the behavioral interview process at Amazon, focusing on the 16 Amazon Leadership Principles. These principles are integral to Amazon's hiring process and are used to evaluate candidates across all levels and job families.

Amazon Leadership Culture

  • Decentralization: Amazon operates with little centralization; each group functions like a startup, establishing its processes and best practices while adhering to the leadership principles.
  • Bar Raisers: A select group of experienced Amazonians who deeply understand the leadership principles and ensure that new hires align with them.

Understanding the Leadership Principles

  • Importance: The leadership principles are used daily for hiring, feedback, and decision-making.

  • Preparation: Candidates should thoroughly understand and reflect on these principles to succeed in interviews.

The 16 Amazon Leadership Principles

  1. Customer Obsession: Prioritizing customer needs and making decisions that benefit them, even at the expense of short-term profits.
  2. Ownership: Thinking long-term, acting on behalf of the entire company, and taking responsibility for outcomes.
  3. Invent and Simplify: Encouraging innovation and simplicity, and being open to ideas from anywhere.
  4. Are Right, A Lot: Having good judgment and being open to diverse perspectives to challenge one's beliefs.
  5. Learn and Be Curious: Continuously learning and exploring new possibilities.
  6. Hire and Develop the Best: Focusing on raising performance bars and developing leaders within the organization.
  7. Insist on the Highest Standards: Maintaining high standards and continually raising the bar for quality.
  8. Think Big: Encouraging bold thinking and looking for ways to serve customers better.
  9. Bias for Action: Valuing speed and taking calculated risks without extensive study.
  10. Frugality: Accomplishing more with less and being resourceful.
  11. Earn Trust: Listening attentively, speaking candidly, and treating others respectfully.
  12. Dive Deep: Staying connected to details, auditing frequently, and being skeptical when metrics differ from anecdotes.
  13. Have Backbone; Disagree and Commit: Challenging decisions respectfully and committing fully once a decision is made.
  14. Deliver Results: Focusing on key business inputs, delivering with the right quality and in a timely manner.
  15. Strive to be Earth's Best Employer: Creating a productive, diverse, and just work environment, leading with empathy, and focusing on employees' growth.
  16. Success and Scale Bring Broad Responsibility: Recognizing the impact of Amazon's actions and striving to make better decisions for customers, employees, partners, and the world.

Domain Design

2024-04-28 Moving IO to the edges of your app: Functional Core, Imperative Shell - Scott Wlaschin - YouTube

image-20240428111330599

image-20240428111411872

2024-04-27 Architecture Modernization: Aligning Software, Strategy, and Structure - Nick Tune - YouTube

image-20240426230555264

2024-04-27 Hannes Lowette - Build software like a bag of marbles, not a castle of LEGO® - YouTube

image-20240426215615553

· 32 min read

Good reads

2024-02-29 All you need is Wide Events, not “Metrics, Logs and Traces”

The article, authored by Ivan Burmistrov on February 15, 2024, presents a critique of the current observability paradigm in the tech industry, which is traditionally built around metrics, logs, and traces. Burmistrov argues that this model, despite being widely adopted and powered by OpenTelemetry, contributes to a state of confusion regarding its components and their respective roles in observability.

Burmistrov suggests a shift towards a simpler, more unified approach to observability, advocating for the use of Wide Events. This concept is exemplified by Scuba, an observability system developed at Meta (formerly Facebook), which Burmistrov praises for its simplicity, efficiency, and ability to handle the exploration of data without preconceived notions about what one might find—effectively addressing the challenge of unknown unknowns.

Key points highlighted in the article include:

  • Observability's Current State: The article starts with a reflection on the confusion surrounding basic observability concepts like traces, spans, and logs, attributed partly to OpenTelemetry's complex presentation of these concepts.
  • The Concept of Wide Events: Burmistrov introduces Wide Events as a more straightforward and flexible approach to observability. Wide Events are essentially collections of fields and values, akin to a JSON document, that encompass all relevant information about a system's state or event without the need for predefined structures or classifications.
  • Scuba - An Observability Paradise: The author shares his experiences with Scuba at Meta, highlighting its capability to efficiently process and analyze Wide Events. Scuba allows users to "slice and dice" data, exploring various dimensions and metrics to uncover insights about anomalies or issues within a system, all through a user-friendly interface.
  • Post-Meta Observability Landscape: Upon leaving Meta, Burmistrov expresses disappointment with the external observability tools, which seem to lack the simplicity and power of Scuba, emphasizing the industry's fixation on the traditional trio of metrics, logs, and traces.
  • Advocacy for Wide Events: The article argues that Wide Events can encapsulate the functionalities of traces, logs, and metrics, thereby simplifying the observability landscape. It suggests that many of the current observability practices could be more naturally and effectively addressed through Wide Events.
  • Call for a Paradigm Shift: Burmistrov calls for observability vendors to adopt and promote simpler, more intuitive systems like Wide Events. He highlights Honeycomb and Axiom as examples of platforms moving in this direction, encouraging others to follow suit to demystify observability and enhance its utility.

2024-02-29 Scheduling Internals

This post delves into the complex and fascinating world of concurrency, aiming to elucidate its mechanisms and how various programming models and languages implement it. The author seeks to demystify concurrency by answering key questions and covering topics such as the difference between concurrency and parallelism, the concept of coroutines, and the implementation of preemptive and non-preemptive schedulers. The discussion spans several programming languages and systems, including Node.js, Python, Go, Rust, and operating system internals, offering a comprehensive overview of concurrency's theoretical foundations and practical applications.

Concurrency vs. Parallelism: The post distinguishes between concurrency — the ability to deal with multiple tasks at once — and parallelism — the ability to execute multiple tasks simultaneously. This distinction is crucial for understanding how systems can perform efficiently even on single-core processors by managing tasks in a way that makes them appear to run in parallel.

Threads and Async I/O: Initially, the text explores the traditional approach of creating a thread per client for concurrent operations and quickly transitions into discussing the limitations of this method, such as the overhead of context switching and memory allocation. The narrative then shifts to asynchronous I/O operations as a more efficient alternative, highlighting non-blocking I/O and the use of event loops to manage concurrency without the heavy costs associated with threads.

Event Loops and Non-Preemptive Scheduling: The author introduces event loops as a core concept in managing asynchronous operations, particularly in environments like Node.js, which uses libuv as its underlying library. By employing an event loop, applications can handle numerous tasks concurrently without dedicating a separate thread to each task, leading to significant performance gains and efficiency.
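
The non-preemptive model described here can be sketched with plain Python generators, where each task runs until it voluntarily yields control back to the loop (a toy illustration of the cooperative idea, not how libuv actually works):

```python
from collections import deque

def run_event_loop(tasks):
    """Cooperative round-robin loop: each task is a generator that
    hands control back to the loop at every yield point."""
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)            # run the task until it yields
            ready.append(task)    # still alive: reschedule it
        except StopIteration:
            pass                  # task finished; drop it

log = []

def worker(name, steps):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                     # voluntarily hand control back

run_event_loop([worker("a", 2), worker("b", 2)])
print(log)  # tasks interleave: ['a:0', 'b:0', 'a:1', 'b:1']
```

Because no thread is parked per task, thousands of such "tasks" cost only a little memory each; the flip side is that a task which never yields starves every other task, which is exactly why preemptive scheduling (discussed next in the post) exists.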

Preemptive Scheduling: Moving beyond cooperative (non-preemptive) scheduling, where tasks must yield control voluntarily, the discussion turns to preemptive scheduling. This model allows the system to interrupt and resume tasks autonomously, ensuring a more equitable distribution of processing time among tasks, even if they don't explicitly yield control.

Coroutines and Their Implementation: Coroutines are presented as a flexible way to handle concurrency, with the post explaining the difference between stackful and stackless coroutines. Stackful coroutines, similar to threads but more lightweight, have their own stack, allowing for traditional programming models. In contrast, stackless coroutines, used in languages like Python and Rust, break tasks into state machines and require tasks to be explicitly marked as asynchronous.

Scheduling Algorithms: The article covers various scheduling algorithms used by operating systems and programming languages to manage task execution, including FIFO, Round Robin, and more sophisticated algorithms like those used by Linux (CFS and SCHED_DEADLINE) and Go's scheduler. These algorithms determine how tasks are prioritized and executed, balancing efficiency and fairness.
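
Two of the simpler policies mentioned are easy to simulate. A toy sketch (dimensionless burst times, no I/O or priorities) that also shows FIFO is just Round Robin with an unbounded quantum:

```python
def round_robin(burst_times, quantum):
    """Simulate Round Robin scheduling; return completion time per task.

    burst_times: dict mapping task name -> total CPU time needed.
    """
    remaining = dict(burst_times)
    queue = list(burst_times)     # ready queue, in arrival order
    clock = 0
    finish = {}
    while queue:
        task = queue.pop(0)
        slice_ = min(quantum, remaining[task])
        clock += slice_           # task runs for one time slice
        remaining[task] -= slice_
        if remaining[task] == 0:
            finish[task] = clock  # done: record completion time
        else:
            queue.append(task)    # preempted: back of the queue
    return finish

bursts = {"t1": 3, "t2": 1, "t3": 2}
print(round_robin(bursts, quantum=1))   # {'t2': 2, 't3': 5, 't1': 6}
print(round_robin(bursts, quantum=10))  # FIFO order: {'t1': 3, 't2': 4, 't3': 6}
```

Note how the short task `t2` finishes far sooner under a small quantum, at the cost of more context switches — the fairness/overhead trade-off the real algorithms (CFS, Go's scheduler) tune much more carefully.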

Multi-Core Scheduling: Lastly, the post touches on the challenges and strategies for scheduling tasks across multiple CPU cores, including task stealing, which allows idle cores to take on work from busier ones, optimizing resource utilization and performance across the system.

This comprehensive overview of concurrency aims to provide readers with a solid understanding of how modern systems achieve high levels of efficiency and responsiveness. Through detailed explanations and examples, the post illuminates the intricate mechanisms that allow software to handle multiple tasks simultaneously, whether through managing I/O operations, leveraging coroutines, or employing advanced scheduling algorithms.

2024-03-01 You’ve just inherited a legacy C++ codebase, now what?

Inheriting a legacy C++ codebase often feels like a daunting task, presenting a blend of complexity, idiosyncrasies, and challenges. This article delineates a strategic approach to revitalize such a codebase, focusing on minimizing effort while maximizing security, developer experience, correctness, and performance. The process emphasizes practical, incremental improvements over sweeping changes, aiming for a sustainable engineering practice.

Key Steps to Revitalize a Legacy C++ Codebase:

  1. Initial Setup and Minimal Changes: Start by setting up the project locally with the least amount of changes. Resist the urge for major refactorings at this stage.
  2. Trim the Fat: Remove all unnecessary code and features that do not contribute to the core functionality your project or company advertises.
  3. Modernize the Development Process: Integrate modern development practices like Continuous Integration (CI), linters, fuzzers, and auto-formatters to improve code quality and developer workflow.
  4. Incremental Code Improvements: Make small, incremental changes to the codebase, ensuring it remains functional and more maintainable after each iteration.
  5. Consider a Rewrite: If feasible, contemplate rewriting parts of the codebase in a memory-safe language to enhance security and reliability.

Strategic Considerations for Effective Management:

  • Get Buy-in: Before diving into technical improvements, secure support from stakeholders by clearly articulating the benefits and the sustainable approach of your plan.
  • Support and Documentation: Ensure the codebase can be built and tested across all supported platforms, documenting the process to enable easy onboarding and development.
  • Performance Optimization: Identify and implement quick wins to speed up build and test times without overhauling existing systems.
  • Quality Assurance Enhancements: Adopt linters and sanitizers to catch and fix bugs early, and integrate these tools into your CI pipeline to maintain code quality.
  • Code Health: Regularly prune dead code, simplify complex constructs, and upgrade to newer C++ standards when it provides tangible benefits to the project.

Technical Insights:

  • Utilize compiler warnings and tools like cppcheck to identify and remove unused code.
  • Incorporate clang-tidy and cppcheck for static code analysis, balancing thoroughness with the practicality of fixing identified issues.
  • Use clang-format to enforce a consistent coding style, minimizing diffs and merge conflicts.
  • Apply sanitizers (e.g., -fsanitize=address,undefined) to detect and address subtle bugs and memory leaks.
  • Implement a CI pipeline to automate testing, linting, formatting, and other checks, ensuring code quality and facilitating reproducible builds across environments.

2024-03-07 Making CRDTs 98% More Efficient | jakelazaroff.com

This article explores the process of making Conflict-free Replicated Data Types (CRDTs) significantly more efficient, reducing their size by nearly 98% through a series of compression techniques. Starting from a state-based CRDT for a collaborative pixel art editor that initially required a whopping 648kb to store the state of a 100x100 image, the author demonstrates a methodical approach to compressing this data to just about 14kb. The journey to this substantial reduction involves several steps, each building upon the previous to achieve more efficient storage.

Hex Codes: The initial step was converting RGB values to hex codes, which compacted the representation of colors from up to thirteen characters to a maximum of eight, or even five if the channel values are identical.
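
This first step is purely a representation change; a sketch of the kind of conversion described, assuming `#rrggbb`/`#rgb` forms (the article's exact handling of alpha channels is not shown here):

```python
def rgb_to_hex(r, g, b):
    """Compact an rgb(r, g, b) triple into a hex code, collapsing
    #rrggbb to the short #rgb form when each channel's digits repeat."""
    full = f"#{r:02x}{g:02x}{b:02x}"
    if full[1] == full[2] and full[3] == full[4] and full[5] == full[6]:
        return f"#{full[1]}{full[3]}{full[5]}"
    return full

print(rgb_to_hex(255, 0, 0))   # "#f00" instead of "rgb(255, 0, 0)"
print(rgb_to_hex(18, 52, 86))  # "#123456"
```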

UUID Table: A significant improvement came from replacing repetitive UUIDs in each pixel's data with indices to a central UUID table, saving considerable space due to the reduction from 38 characters per UUID to much smaller indices.

Palette Table: Similar to the UUID table, a palette table was introduced to replace direct color values with indices, optimizing storage for images with limited color palettes.

Run-Length Encoding (RLE): For the spatial component, RLE was applied to efficiently encode sequences of consecutive blank spaces, drastically reducing the space needed to represent unoccupied areas of the canvas.
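
The idea behind this step fits in a few lines — a generic run-length encoder over pixel slots, not the article's exact wire format:

```python
def rle_encode(values):
    """Collapse consecutive repeats into (count, value) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] is not None and runs[-1][1] == v:
            runs[-1] = (runs[-1][0] + 1, v)   # extend the current run
        else:
            runs.append((1, v))               # start a new run
    return runs

def rle_decode(runs):
    return [v for count, v in runs for _ in range(count)]

row = [None, None, None, None, "px1", "px2", None, None]
encoded = rle_encode(row)
print(encoded)  # [(4, None), (1, 'px1'), (1, 'px2'), (2, None)]
assert rle_decode(encoded) == row
```

For a mostly blank canvas, long runs of `None` (unoccupied pixels) collapse to a single pair, which is where the bulk of the spatial savings comes from.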

Binary Encoding: Transitioning from JSON to a binary format offered a major leap in efficiency. This approach utilizes bytes directly for storage, significantly compacting data representation. The binary format organizes data into chunks, each dedicated to specific parts of the state, such as UUIDs, color palettes, and pixel data.

Run-Length Binary Encoding: The final and most significant compression came from applying run-length encoding within the binary format, further optimizing the storage of writer IDs, colors, and timestamps separately. This approach significantly reduced redundancy and exploited patterns within each category of data, ultimately achieving the goal of reducing the CRDT's size by 98%.

2024-03-08 Fundamentals of Data Visualization: 29 Telling a story and making a point

Effective data visualization is more than just presenting data; it's about telling a story that resonates with the audience. This approach bridges the gap between complex insights and audience understanding, making abstract data engaging and accessible.

Key Elements of Storytelling in Data Visualization:

  • Narrative Structure: A well-constructed story, whether based on the Opening-Challenge-Action-Resolution format or other structures, captivates by guiding the audience from a set-up through a challenge, towards a resolution.
  • Visualization Sequence: Rather than relying on a single static image, a sequence of visualizations can more effectively convey the narrative arc, illustrating the journey from problem identification to solution.
  • Clarity and Simplicity: Visualizations should be straightforward, avoiding unnecessary complexity to ensure the audience can easily grasp the core message. This is akin to "making a figure for the generals," emphasizing clear and direct communication.
  • Memorability through Visual Elements: Employing techniques like isotype plots, which use pictograms or repeated images to represent data magnitudes, can make data visualizations more memorable without sacrificing clarity.
  • Diversity in Visualization: Utilizing a variety of visualization types within a narrative helps maintain audience interest and differentiates between narrative segments, ensuring each part contributes uniquely to the overarching story.
  • Progression from Raw Data to Derived Quantities: Starting with visualizations close to the raw data establishes a foundation for understanding, onto which more abstract, derived data representations can build, highlighting key insights and trends.

2024-03-12 Breaking Down Tasks - Jacob Kaplan-Moss

In a management group, someone asked for resources on teaching planning. I shared a link to this series on estimation, but quickly they came back and told me that there was something missing. The previous parts in this series assume you’re starting with a clearly defined task list, but the people this manager is teaching aren’t there yet. They need help with an earlier step: “breaking down” a project into a clearly defined set of tasks.

Bonus: estimating this project

Because this is a series on estimation, it seems reasonable to complete the work and produce an estimate for this project:

| Task | Complexity | Uncertainty | Expected (days) | Worst-case (days) |
| --- | --- | --- | --- | --- |
| 1. model data | x-small | low | 0.5 | 0.5 |
| 2a. weekly view | x-small | low | 0.5 | 0.5 |
| 2b. home page view | x-small | low | 0.5 | 0.5 |
| 2c. monthly view | x-small | low | 0.5 | 0.5 |
| 2d. browsing | small | low | 1 | 1.1 |
| 3. dynamic week | small | low | 1 | 1.1 |
| 4a. streak calculation | medium | moderate | 3 | 4.5 |
| 4b. streak display | x-small | low | 0.5 | 0.5 |
| 4c. streak recalculation | medium | low | 3 | 3.3 |
| 5a. freeze accumulation | medium | moderate | 3 | 4.5 |
| 5b. prevent double accumulation | small | extreme | 1 | 5 |
| 5c. freeze spending | small | moderate | 1 | 1.5 |
| **Total:** | | | **15.5 days** | **23.5 days** |
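
The totals are straight sums over the task rows; a quick sanity check of the arithmetic (values copied from the estimate above):

```python
# (expected, worst-case) days per task, copied from the estimate above
tasks = {
    "1. model data": (0.5, 0.5),
    "2a. weekly view": (0.5, 0.5),
    "2b. home page view": (0.5, 0.5),
    "2c. monthly view": (0.5, 0.5),
    "2d. browsing": (1, 1.1),
    "3. dynamic week": (1, 1.1),
    "4a. streak calculation": (3, 4.5),
    "4b. streak display": (0.5, 0.5),
    "4c. streak recalculation": (3, 3.3),
    "5a. freeze accumulation": (3, 4.5),
    "5b. prevent double accumulation": (1, 5),
    "5c. freeze spending": (1, 1.5),
}
expected = sum(e for e, _ in tasks.values())
worst = round(sum(w for _, w in tasks.values()), 1)  # round away float noise
print(expected, worst)  # 15.5 23.5
```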

2024-03-13 🍀 40 years of programming

10 PRINT "HELLO"
20 GOTO 10

In April, 1984, my father bought a computer for his home office, a Luxor ABC-802, with a Z80 CPU, 64 kilobytes of RAM, a yellow-on-black screen with 80 by 25 text mode, or about 160 by 75 pixels in graphics mode, and two floppy drives. It had BASIC in its ROM, and came with absolutely no games. If I wanted to play with it, I had to learn how to program, and write my own games. I learned BASIC, and over the next few years would learn Pascal, C, and more. I had found my passion. I was 14 years old and I knew what I wanted to do when I grew up.

When I was learning how to program, I thought it was important to really understand how computers work, how programming languages work, and how various tools like text editors work. I wanted to hone my craft and produce the finest code humanly possible. I was wrong.

On doing work

When making a change, make only one change at a time. If you can, split the change you're making into smaller partial changes. Small changes are easier to understand and less likely to be catastrophic.

Automate away friction: running tests, making a release, packaging, delivery, deployment, etc. Do this from as early on as feasible. Set up a pipeline where you can make a change and make sure the software still works and willing users can start using the changed software. The smoother you can make this pipeline, the easier it will be to build the software.

Developing a career

You can choose to be a deep expert on something very specific, or to be a generalist, or some mix. Choose wisely. There may not be any wrong choice, but every choice has consequences.

Be humble. Be Nanny, not Granny. People may respect the powerful witch more, but they like the kind one better.

Be open and honest. Treat others fairly. You don't have to believe in karma for it to work, so make it work for you, not against you.

Help and lift up others. But at the same time, don't allow others to abuse or take advantage of you. You don't need to accept bullshit. Set your boundaries.

Ask for help when you need it, or when you get stuck. Accept help when offered.

I am not the right person to talk about developing a career, but when I've done the above, things have usually ended up going well.

Algorithms

2024-03-14 The Myers diff algorithm: part 1 – The If Works

2024-03-14 The Myers diff algorithm: part 2 – The If Works

2024-03-14 Quick binary diffs with XDelta – The If Works

Formats

2024-03-12 JSON Canvas — An open file format for infinite canvas data.

An open file format for infinite canvas data.

Infinite canvas tools are a way to view and organize information spatially, like a digital whiteboard. Infinite canvases encourage freedom and exploration, and have become a popular interface pattern across many apps.

The JSON Canvas format was created to provide longevity, readability, interoperability, and extensibility to data created with infinite canvas apps. The format is designed to be easy to parse and give users ownership over their data. JSON Canvas files use the .canvas extension.

JSON Canvas was originally created for Obsidian. JSON Canvas can be implemented freely as an import, export, and storage format for any app or tool. This site, and all the resources associated with JSON Canvas are open source under the MIT license.

Rust

2024-03-03 joaocarvalhoopen/How_to_learn_modern_Rust: A guide to the adventurer.

This guide provides a roadmap for learning Rust, a systems programming language known for its safety, concurrency, and performance features. It systematically covers everything from basic concepts to advanced applications in Rust programming.

Getting Started with Rust

  • Explore the reasons behind Rust's popularity among developers.
  • Engage with introductory videos and tutorials to get a handle on Rust's syntax and foundational concepts.
  • Deep dive into "The Rust Programming Language Book" for an extensive understanding.

Advancing Your Knowledge

  • Tackle text processing in Rust and understand Rust's unique memory management system with lifetimes and ownership.
  • Delve into Rust's mechanisms for polymorphism and embrace test-driven development (TDD) for robust software development.
  • Discover the nuances of systems programming and how to use Rust for writing compilers.

Specialized Development

  • Explore the capabilities of Rust in WebAssembly (WASM) for developing web applications.
  • Apply Rust in embedded systems for creating efficient and safe firmware.

Expanding Skills and Community Engagement

  • Investigate how Rust can be utilized in web frameworks, SQL databases, and for rapid prototyping projects.
  • Learn about interfacing Rust with Python to enhance performance.
  • Connect with the Rust community through the Rust Foundation, blogs, and YouTube channels for insights and updates.

Practical Applications

  • Experiment with GUI and audio programming using Rust to build interactive applications.
  • Dive into the integration of machine learning in Rust projects.
  • Undertake embedded projects on hardware platforms like Raspberry Pi and ESP32 for hands-on learning.

C && C++

2024-03-09 GitHub - pocoproject/poco: The POCO C++ Libraries are powerful cross-platform C++ libraries for building network- and internet-based applications that run on desktop, server, mobile, IoT, and embedded systems.

The POCO C++ Libraries are powerful cross-platform C++ libraries for building network- and internet-based applications that run on desktop, server, mobile, IoT, and embedded systems.

📺 Günter Obiltschnig - 10 years of Poco C++ Libraries - Meeting C++ 2015 Lightning Talks - YouTube

2024-03-12 rkaehn/cr_task.h: Header-only library for asynchronous tasks in C

2024-03-13 Syllo/nvtop: GPU & Accelerator process monitoring for AMD, Apple, Huawei, Intel, NVIDIA and Qualcomm

2024-03-22 The Real C++ Killers (Not You, Rust)

Security

2024-03-02 Use KeePassXC to sign your git commits

2024-03-02 microsoft/Security-101: 7 Lessons, Kick-start Your Cybersecurity Learning.

2024-03-04 Identity, authentication, and authorisation from the ground up

In a detailed exploration of identity, authentication, and authorization, this article delves into the intricate mechanisms that applications utilize to authenticate users. The text breaks down the complex topic into digestible segments, each addressing a different aspect of the authentication process, from traditional passwords to cutting-edge WebAuthn standards. It not only clarifies the distinctions between identity, authentication, and authorization but also highlights the challenges and trade-offs associated with various authentication methods. The article emphasizes the importance of choosing the right authentication strategy to balance security concerns with user experience.

  1. Authentication Basics: Authentication is the process of verifying a user's identity, typically through something the user knows (like a password), owns (like a phone), or is (biometric data). The article sets the stage by explaining how critical authentication is in the digital realm, affecting both user access and system security.
  2. Knowledge-based Authentication: This traditional method relies on passwords, PINs, or passphrases. However, it's fraught with challenges such as secure storage, vulnerability to attacks, and user inconvenience due to forgotten passwords. The process involves hashing passwords for secure storage, yet it's still vulnerable to various attacks and creates friction for users.
  3. Ownership-based Authentication: This method involves verifying something the user owns, like an email inbox or phone number, often through one-time passwords (OTPs) or hardware like YubiKeys. Although more secure and user-friendly than knowledge-based methods, it still has drawbacks, including potential delays in OTP delivery and security concerns with SMS-based authentication.
  4. WebAuthn and Public-key Cryptography: A modern approach to authentication, WebAuthn uses public-key cryptography to enable secure, passwordless authentication. It leverages the concept of a public/private key pair, where the private key is securely stored on the user's device, and the public key is shared with the service. This method significantly enhances security and user experience by eliminating passwords and reducing phishing risks.
  5. Multi-factor Authentication and Biometrics: The article discusses how WebAuthn can be combined with biometrics or other forms of verification for multi-factor authentication, providing an additional layer of security and convenience.
  6. Cross-device Authentication Challenges: While WebAuthn offers a streamlined authentication process, managing authentication across multiple devices presents challenges, including the risk of losing access if a device is lost.
  7. Identity-based Authentication: This method relies on third-party identity providers like Google or Facebook to verify user identity. While convenient, it introduces the risk of access being revoked by the identity provider, highlighting the need for user-owned identity solutions.
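
The one-time passwords mentioned under ownership-based authentication (point 3) are typically HMAC-based. A minimal HOTP sketch per RFC 4226, which is what authenticator apps build on (TOTP simply derives the counter from the current time):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"                      # RFC 4226 test secret
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

Both sides hold the shared secret and a counter, so possession of the enrolled device is what's being proven — unlike WebAuthn (point 4), where the server never holds anything secret, only the public half of the key pair.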

The article concludes by acknowledging the ongoing innovation in authentication technologies and the quest for secure, user-friendly methods that respect individual sovereignty. It underscores the evolving landscape of digital authentication and the importance of staying informed about these developments to ensure secure and efficient access to digital services.

Software Design

2024-02-25 Larger Scale Software Development (and a Big Trap) - YouTube

WebComponents

2024-03-01 lamplightdev - Streaming HTML out of order without JavaScript

This analysis explores a technique for streaming HTML content out-of-order using Shadow DOM, illustrated through a demo where an app shell is rendered first, followed by content that loads asynchronously and out of sequence. The method, which doesn't rely on JavaScript or any specific framework, leverages the advantages of streaming HTML from the server to the browser in chunks, allowing for immediate rendering of parts of the page, and the Declarative Shadow DOM to manage content in isolation and out of order.

Key Concepts and Techniques

  • Streaming HTML: A method where HTML is sent in chunks from the server to the browser as it's generated, improving perceived load times by showing content progressively.

  • Shadow DOM: A web standard for encapsulating parts of a DOM to keep features private to a component. This can be used with any HTML element to create isolated sections of the DOM.

  • Declarative Shadow DOM (DSD): A browser feature that allows Shadow DOMs to be created on the server side without JavaScript, enabling the browser to render them directly.

Implementation Details

  1. Server Support: A server capable of streaming responses, such as Hono, is required. The technique is not limited to JavaScript-based servers and can be applied across various backend technologies.
  2. Templating with Streaming Support: Utilizing a templating language or library that supports streaming, like SWTL, simplifies the process by handling asynchronous data and streaming seamlessly.
  3. Declarative Shadow DOM for Order-Independent Rendering: By employing DSD, developers can specify how parts of the page should be encapsulated and loaded without relying on JavaScript, ensuring content loads correctly regardless of the order it's streamed.

2024-03-05 Web Components Will Outlive Your JavaScript Framework | jakelazaroff.com

The article by Jake Lazaroff discusses the lasting value of web components over the transient nature of JavaScript frameworks. It starts with the author's project experience, opting for vanilla JS web components for a blog post series on CRDTs to include interactive demos. This decision was guided by the principle that the examples, although built with HTML, CSS, and JS, were content, not code, emphasizing their portability and independence from specific tech stacks or frameworks.

Key Takeaways:

  • Web Components offer a robust solution for creating reusable and encapsulated HTML elements, ensuring content portability across different platforms and frameworks.
  • Markdown and plain text files have facilitated content migration and compatibility across various content management systems, highlighting the shift towards more flexible and framework-agnostic content strategies.
  • The encapsulation and isolation provided by shadow DOM in web components are crucial for maintaining consistent styles and behaviors, analogous to native web elements.
  • Choosing vanilla JavaScript and standard web technologies over frameworks or libraries can mitigate dependencies and maintenance challenges, promoting longevity and stability in web development.
  • The resilience of the web as a platform is underscored by its ability to preserve backward compatibility, ensuring that even the earliest websites remain functional on modern browsers.

See also:

2024-03-05 WebComponents Will Outlive Your JavaScript Framework | Prime Reacts - YouTube

Fun / Art

2024-02-26 eyes

Animated eyes

2024-03-01 GitHub - SuperTux/supertux: SuperTux source code

SuperTux is a jump'n'run game with strong inspiration from the Super Mario Bros. games for the various Nintendo platforms.

Run and jump through multiple worlds, fighting off enemies by jumping on them, bumping them from below or tossing objects at them, grabbing power-ups and other stuff on the way.

CSS

2024-03-23 magick.css

Show HN: magick.css – Minimalist CSS for Wizards

2024-03-01 How To Center a Div

For a long time, centering an element within its parent was a surprisingly tricky thing to do. As CSS has evolved, we've been granted more and more tools we can use to solve this problem. These days, we're spoiled for choice!

I decided to create this tutorial to help you understand the trade-offs between different approaches, and to give you an arsenal of strategies you can use, to handle centering in all sorts of scenarios.

Honestly, this turned out to be way more interesting than I initially thought 😅. Even if you've been using CSS for a while, I bet you'll learn at least 1 new strategy!

2024-03-04 CSS for printing to paper

At work, one of the things I do pretty often is write print generators in HTML to recreate and replace forms that the company has traditionally done handwritten on paper or in Excel. This allows the company to move into new web-based tools where the form is autofilled by URL parameters from our database, while getting the same physical output everyone's familiar with.

This article explains some of the CSS basics that control how your webpages look when printed, and a couple of tips and tricks I've learned that might help you out.

sample_cheatsheet.html

<!DOCTYPE html>
<html>
<style>
@page
{
    size: Letter portrait;
    margin: 0;
}

html
{
    box-sizing: border-box;
}

*, *:before, *:after
{
    box-sizing: inherit;
}

html,
body
{
    margin: 0;
    background-color: lightblue;
}

header
{
    background-color: white;
    max-width: 8.5in;
    margin: 8px auto;
    padding: 8px;
}

article
{
    background-color: white;
    padding: 0.5in;
    width: 8.5in;
    height: 11in;

    /* For centering the page on the screen during preparation */
    margin: 8px auto;
}

@media print
{
    html,
    body
    {
        background-color: white !important;
    }

    body > header
    {
        display: none;
    }

    article
    {
        margin: 0 !important;
    }
}
</style>

<body>
<header>
    <p>Some help text to explain the purpose of this generator.</p>
    <p><button onclick="return window.print();">Print</button></p>
</header>

<article>
    <h1>Sample page 1</h1>
    <p>sample text</p>
</article>

<article>
    <h1>Sample page 2</h1>
    <p>sample text</p>
</article>
</body>
</html>

Dev Tools

2024-03-02 Textadept

Textadept is a fast, minimalist, and remarkably extensible cross-platform text editor.

2024-03-02 orbitalquark/textadept: Textadept is a fast, minimalist, and remarkably extensible cross-platform text editor for programmers.

2024-02-28 Testcontainers

Testcontainers is an open source framework for providing throwaway, lightweight instances of databases, message brokers, web browsers, or just about anything that can run in a Docker container.

2024-02-29 Show HN: SQL workbench in the browser | Hacker News

The Hacker News thread showcases a vibrant discussion among developers who are exploring the potential of WebAssembly (WASM) for various database and data visualization projects. These projects leverage WASM to run complex applications directly in the browser, eliminating the need for server-side processing and enabling powerful data manipulation and analysis capabilities client-side.

9dev shared their experience of getting sidetracked while developing a file browser for managing database files using the WASM build of SQLite. This detour led to the creation of a multi-modal CSV file editor capable of displaying CSV files as sortable tables, powered by a streaming, web worker-based parser.

Simonw discussed utilizing a WASM build of Python and SQLite to run the Datasette server-side web application entirely in the browser. This setup allows executing SQL queries against data files, such as a parquet file containing AWS edge locations, demonstrating a novel approach to processing and analyzing data client-side.

Tobilg introduced the SQL Workbench, built on DuckDB WASM, Perspective.js, and React, supporting queries on remote and local data (Parquet, CSV, JSON), data visualizations, and sharing of queries via URL. A tutorial blog post was mentioned for guidance on common usage patterns, signaling a resource for developers interested in in-browser data engineering.

The discussion also touched on Perspective.js, highlighted by paddy_m as a powerful and fast table library primarily used in finance, and dav43, who integrated it into datasette.io as a plugin to handle large datasets. This conversation underscores the utility and versatility of Perspective.js in data-intensive applications.

2024-02-29 Paste to Markdown

2024-02-29 GitHub - euangoddard/clipboard2markdown: Convert rich-text on your clipboard to markdown

2024-02-29 pql

pql is an open-source pipelined query language that translates to SQL and is written in Go

users
| where eventTime > minus(now(), toIntervalDay(1))
| project user_id, user_email

users
| project user_id=id, user_email
| as userTable
| join kind=leftouter (
    workspace_members
) on user_id

Hmm... reminds me... Kusto ;)

Why did we build pql? Splunk, Sumologic, and Microsoft all have proprietary languages similar to pql. Open source databases can't compete because they all support SQL. pql is meant to bridge that gap by providing a simple but powerful interface.

Test Automation

2024-03-08 Ultimate Guide to Visual Testing with Playwright

Found in https://javascriptweekly.com/issues/678

2024-03-14 lavague-ai/LaVague: Automate automation with Large Action Model framework

Automate automation with Large Action Model framework

2024-03-14 The Playwright Test Generator

I don't know why I’ve not linked this before, as it’s so useful. Playwright isn’t just a library for controlling browsers from JavaScript, but also includes a tool for generating tests and page navigation code from your own interactions. Hit record, do stuff, and code is written. Found in: 2024-03-15 JavaScript Weekly Issue 679: March 14, 2024

Software!

2024-03-01 Welcome | Superset

Apache Superset™ is an open-source modern data exploration and visualization platform.

2024-03-05 Puter The Internet OS

Puter is a privacy-first personal cloud to keep all your files, apps, and games in one secure place, accessible from anywhere at any time.

2024-03-05 HeyPuter/puter: The Internet OS!

Show HN: 3 years and 1M users later, I just open-sourced my "Internet OS"

2024-03-08 BlockNote - Javascript Block-Based React rich text editor

Found in https://javascriptweekly.com/issues/678

A 'Notion-Like' Block-Based Text Editor — 0.12.0 is a significant release for this ProseMirror and TipTap-based editor that lets you drag and drop blocks, add real-time collaboration, add customizable ‘slash command’ menus, and more. It has an all new homepage, too, along with new examples.

2024-03-12 Kdenlive 24.02.0 released - Kdenlive

The Era of AI

2024-02-26 Reddit: How advanced are your prompt techniques? : ChatGPTPro

Zaki1052

I'm guessing you're thinking of Chain of Thought, and the research is a bit outdated but still applicable. Here are some links i put on github if you want to do some reading. The main idea behind it is the whole "let's think step by step to verify your answer", extrapolated to the process of:

  1. Assigning an expert role
  2. Iterating a purpose or task
  3. describing the process needed to complete the task
  4. leaving room for correction/error-checking
  5. restating the objective as an overall goal

You'll usually want things like "Stop and think carefully out loud about the best way to solve this problem. verify your answer step by step in a systematic process, and periodically review your thinking, backtracking on any possible errors in reasoning, and creating a new branch when needed." This is the very broad concept behind Tree of Thought, which is said to be CoT's successor. Personally, I'll sometimes include a little preamble in chat that seems to mitigate some of the issues from their obscenely long system pre-prompt, which mine goes something like:

Before you begin, take a deep breath and Think Carefully.

You MUST be accurate & able to help me get correct answers; the Stakes are High & Need Compute!

Your systematic step-by-step process and self-correction via Tree of Thoughts will enhance the quality of responses to complex queries.

All adopted EXPERT Roles = Qualified Job/Subject Authorities.

Take multiple turns as needed to comply with token limits; interrupt yourself to ask to continue, and do not condense responses unless specifically asked.

Optimize!

Otherwise, I like to follow the usual role and tone modifiers, with controls for verbosity and other small prompt-engineering techniques.

## **Custom Instructions**

- **Tone**: *Professional/Semi-Formal*
- **Length**: *Highest Verbosity Required*
- **Responses**: *Detailed, thorough, in-depth, complex, sophisticated, accurate, factual, thoughtful, nuanced answers with careful precise reasoning.*
- **Personality**: *Intelligent, logical, analytical, insightful, helpful, honest, proactive, knowledgeable, meticulous, informative, competent.*

## Methods

- *Always*: Assume **Roles** from a **Mixture of Experts**
- (e.g. Expert Java programmer/developer, Chemistry Tutor, etc.)
- allows you to *best complete tasks*.
- **POV** = *Advanced Virtuoso* in queried field!
- Set a **clear objective**

### Work toward goal

- Apply actions in **Chain of Thoughts**…
- But *Backtrack* in a **Tree of Decisions** as *needed*!

### Accuracy

- *Reiterate* on Responses
- *Report* & **Correct Errors** - *Enhance Quality*!
- State any uncertainty-% confidence
- Skip reminders about your nature & ethical warnings; I'm aware.

#### Avoid Average Neutrality

- Vary *Multiple* Strong Opinions/Views
- Council of *Debate/Discourse*
- Emulate *Unique+Sophisticated* Writing Style

### Verbosity Adjusted with “V=#” Notation

- V1=Extremely Terse
- V2=Concise
- *DEFAULT: V3=Detailed!*
- V4=Comprehensive
- V5=Exhaustive+Nuanced Detail; Maximum Depth/Breadth!
- If omitted, *extrapolate*-use your best judgment.

### Other

- Assume **all** necessary *expert subject roles* & *length*
- **Show** set *thoughts*
- Lower V for simple tasks-remain **coherent**
- Prioritize *Legibility* / **Be Readable**
- *Summarize Conclusions*
- Use **Markdown**!

## **Important**: *Be*

- *Organic+Concise>Expand*
- **Direct**-NO generic filler/fluff.
- **Balance** *Complexity & Clarity*
- **ADAPT!**
- Use **HIGH EFFORT**!
- *Work/Reason* **Systematically**!
- **Always** *Think Step by Step* & *Verify Processes*!

My Custom GPTs, for example, all follow a relatively similar format (pastebin links to the prompts):

Hope that gives you an idea of what I mean. The GPTs themselves are linked here and I have the full file of my instructions I use with the API here, to give you a reference point of my usual structure: https://github.com/Zaki-1052/GPTPortal/blob/main/public/instructions.md

2024-02-28 RecurseChat

2024-02-28 Show HN: I made an app to use local AI as daily driver | Hacker News

Was testing apps like this if anyone is interested:

Best / Easy to use:

- https://lmstudio.ai

- https://msty.app

- https://jan.ai

More complex / Unpolished UI:

- https://gpt4all.io

- https://pinokio.computer

- https://www.nvidia.com/en-us/ai-on-rtx/chat-with-rtx-generat...

- https://github.com/LostRuins/koboldcpp

Misc:

- https://faraday.dev (AI Characters):

No UI / Command line (not for me):

- https://ollama.com

- https://privategpt.dev

- https://serge.chat

- https://github.com/Mozilla-Ocho/llamafile

Pending to check:

- https://recurse.chat

Feel free to recommend more!

· 19 min read

Well folks, brace yourselves for what might just be the laziest link dump in the history of link dumps. I've got to admit, this one's a real gem of laziness, and for that, I offer my sincerest apologies. I wish I could say I had a good excuse, but the truth is, I was just too lazy to do any better. So, without further ado, here's a collection of my thoughts and ideas that may not be my finest work, but hey, we all have our lazy days, right? Thanks for sticking with me through this lazy adventure!

Good reads

Joe Armstrong, one of the creators of Erlang, said:

The most reliable parts are not inside the system, they are outside the system. The most reliable part of a computer system is the power switch. You can always turn it off. The next most reliable part is the operating system. The least reliable part is the application

2024-02-16 The Three Virtues of a GREAT Programmer

According to Larry Wall(1), the original author of the Perl programming language, there are three great virtues of a programmer; Laziness, Impatience and Hubris

  1. 💎 Laziness: The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful and document what you wrote so you don't have to answer so many questions about it.
  2. 💎 Impatience: The anger you feel when the computer is being lazy. This makes you write programs that don't just react to your needs, but actually anticipate them. Or at least pretend to.
  3. 💎 Hubris: The quality that makes you write (and maintain) programs that other people won't want to say bad things about.

2024-02-06 Command Line Interface Guidelines

An open-source guide to help you write better command-line programs, taking traditional UNIX principles and updating them for the modern day.

2024-02-08 A Distributed Systems Reading List

This document, curated by Fred Hebert in 2019 and later updated, serves as a comprehensive reading list and primer on distributed systems. It provides foundational theory, practical considerations, and insights into complex topics within the field. Intended for quick reference and discovery, it outlines the basics and links to seminal papers and resources for deeper exploration.

Foundational Theory

  • Models: Discusses synchronous, semi-synchronous, and asynchronous models, with explanations on message delivery bounds and their implications for system design.
  • Theoretical Failure Modes: Covers fail-stop, crash, omission, performance, and Byzantine failures, highlighting the complexity of handling faults in distributed environments.
  • Consensus: Focuses on the challenge of achieving agreement across nodes, introducing concepts like strong and t-resilient consensuses.
  • FLP Result: An influential 1985 paper by Fischer, Lynch, and Patterson stating that achieving consensus is impossible in a purely asynchronous system with even one possible failure.
  • Fault Detection: Explores strong and weak fault detectors and their importance following the FLP result.
  • CAP Theorem: Explains the trade-offs between consistency, availability, and partition tolerance in distributed systems, including refinements like Yield/Harvest models and PACELC.

Practical Matters

  • End-to-End Argument in System Design: Highlights the necessity of end-to-end acknowledgments for reliability.
  • Fallacies of Distributed Computing: Lists common misconceptions that lead to design flaws in distributed systems.
  • Common Practical Failure Modes: Provides an informal list of real-world issues, including netsplits, asymmetric netsplits, split brains, and timeouts.
  • Consistency Models: Describes various levels of consistency, from linearizability to eventual consistency, and their implications for system behavior.
  • Database Transaction Scopes: Discusses transaction isolation levels in popular databases like PostgreSQL, MySQL, and Oracle.
  • Logical Clocks: Introduces mechanisms like Lamport timestamps and Vector Clocks for ordering messages or state transitions.
  • CRDTs (Conflict-Free Replicated Data Types): Explains data structures that ensure operations can never conflict, no matter the order of execution.
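
The CRDT bullet is easy to make concrete with the smallest example, a grow-only counter. The sketch below is a generic illustration in Go, not code from the reading list:

```go
package main

import "fmt"

// GCounter is a grow-only counter CRDT: each replica increments only its
// own slot, and merge takes the element-wise maximum, so merging is
// commutative, associative, and idempotent; delivery order never matters.
type GCounter map[string]int

// Inc records an increment on the replica identified by id.
func (g GCounter) Inc(id string) { g[id]++ }

// Value is the sum over all replicas' slots.
func (g GCounter) Value() int {
	total := 0
	for _, n := range g {
		total += n
	}
	return total
}

// Merge folds another replica's state into this one via element-wise max.
func (g GCounter) Merge(other GCounter) {
	for id, n := range other {
		if n > g[id] {
			g[id] = n
		}
	}
}

func main() {
	a, b := GCounter{}, GCounter{}
	a.Inc("a")
	a.Inc("a")
	b.Inc("b")
	a.Merge(b) // converges to the same value regardless of merge order
	fmt.Println(a.Value()) // 3
}
```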

Other Interesting Material

Links to reviews, protocol introductions (Raft, Paxos, ZAB), and influential papers like the Dynamo paper are provided for further exploration of distributed systems.

The document concludes with a recommendation for "Designing Data-Intensive Applications" by Martin Kleppmann, noted as a comprehensive resource that ties together various aspects of distributed systems. However, it's suggested that readers may benefit from foundational knowledge and discussions to fully grasp the material.

2024-01-05 Managing superstars can drive you crazy - by Anton Zaides

Managing Talented Developers:

  • Challenge: "The most talented developers are the hardest to manage."
  • Strategy: Instead of hiring multiple average engineers, consider hiring fewer top-tier engineers for better results.

Challenges with 'Superstars':

  • Promotion Pressure: A team full of superstars may constantly seek promotions, creating management difficulties.
  • Expectations: Superstars expect continuous challenges and significant projects.

Types of Developers:

  1. Low Ability + High Confidence: Difficult to work with due to overestimation of their abilities.
  2. High Ability + Low Confidence: Talented developers in need of mentorship.
  3. Low Ability + Low Confidence: May perform better in a different environment.
  4. High Ability + High Confidence: A positive challenge, expecting growth and opportunities.

Managing Rockstars:

  • Avoid Overpromising: Don't promise promotions you can't guarantee.
  • Listen to Advice: Consider their suggestions but maintain your decision-making authority.
  • Avoid Micromanagement: Trust them to manage their work and approach you when needed.

Effective Strategies:

  • Set Clear Goals: Define specific targets for promotion opportunities.
  • Delegate Challenging Tasks: Assign visible and difficult tasks to lay the groundwork for promotion.
  • Provide Unfiltered Feedback: Give honest feedback to help them grow.

Advice from Superstars:

  • Jordan Cutler: Help them focus on the right things and avoid being vague in feedback.
  • Raviraj Achar: Protect them from burnout and prevent them from disrespecting the team.

Crossplatform

2024-01-31 quickemu-project/quickemu: Quickly create and run optimised Windows, macOS and Linux desktop virtual machines.

Quickly create and run optimised Windows, macOS and Linux desktop virtual machines.

Kubernetes

2024-02-07 Learnings From Our 8 Years Of Kubernetes In Production — Two Major Cluster Crashes, Ditching Self-Managed, Cutting Cluster Costs, Tooling, And More | by Anders Jönsson | Feb, 2024 | Medium

Anders Jönsson's article on Medium delves into Urb-it's eight-year journey with Kubernetes, including the shift from AWS to Azure Kubernetes Service (AKS), lessons from two major cluster crashes, and various operational insights. Here's a simplified digest of the key points:

Early Adoption and Transition

  • Chose Kubernetes early for scalability and container orchestration.
  • Initially self-hosted on AWS, later migrated to AKS for better integration and ease of management.

Major Cluster Crashes

  • First Crash: Due to expired certificates, requiring a complete rebuild.
  • Second Crash: Caused by a bug in kube-aws, leading to another certificate expiration issue.

Key Learnings

  • Kubernetes Complexity: Requires dedicated engineers due to its complexity.
  • Updates: Keeping Kubernetes and Helm up-to-date is critical.
  • Helm Charts: Adopted a centralized Helm chart approach for efficiency.
  • Disaster Recovery: Importance of a reliable cluster recreation method.
  • Secrets Backup: Essential strategies for backing up and storing secrets.
  • Vendor Strategy: Shifted from vendor-agnostic to fully integrating with AKS for benefits in developer experience and cost.
  • Observability and Security: Stressed on comprehensive monitoring, alerting, and strict security measures.

Operational Insights

  • Monitoring and Alerting: Essential for maintaining cluster health.
  • Logging: Consolidating logs with a robust trace ID strategy is crucial.
  • Security Practices: Implementing strict access controls and security measures.
  • Tooling: Utilizing tools like k9s for managing Kubernetes resources more efficiently.

Infrastructure and Tooling Setup

  • AKS Adoption: Offered better integration with Azure services.
  • Elastic Stack: Transitioned to ELK stack for logging.
  • Azure Container Registry: Switched for better integration with Azure.
  • CI/CD with Drone: Highlighted its support for container-based builds.

Golang

2024-02-09 How I write HTTP services in Go after 13 years by Mat Ryer

Mat Ryer, in his blog post on Grafana, shares his refined approach to writing HTTP services in Go after 13 years of experience. This article is an evolution of his practices influenced by discussions, the Go Time podcast, and maintenance experiences. The post is aimed at anyone planning to write HTTP services in Go, from beginners to experienced developers, highlighting the shift in Mat's practices over time and emphasizing testing, structuring, and handling services for maintainability and efficiency.

Key Takeaways and Practices:

  1. Server Construction with NewServer:
  • Approach: The NewServer function is central, taking all dependencies as arguments to return an http.Handler, ensuring clear dependency management and setup of middleware for common tasks like CORS and authentication.
  func NewServer(logger *Logger, config *Config, commentStore *commentStore) http.Handler {
      mux := http.NewServeMux()
      // Configuration and middleware setup
      return handler
  }

Routing with routes.go:

  • Purpose: Centralizes API route definitions, making it easy to see the service's API surface and ensuring that route setup is consistent and manageable.
  • Implementation Strategy: Dependencies are explicitly passed to handlers, maintaining type safety and clarity in handler dependencies.

Simplified main Function:

  • Design: Encapsulates the application's entry point, focusing on setup and graceful shutdown, facilitated by a run function that encapsulates starting the server and handling OS signals.
  func main() {
      if err := run(); err != nil {
          // Handle error
      }
  }

Middleware and Handler Patterns:

  • Middleware: Adopts the adapter pattern for middleware, allowing pre- and post-processing around handlers for concerns like authorization, without cluttering handler logic.
  • Handlers: Emphasizes returning http.Handler from functions, allowing for initialization and setup to be done within the handler's closure for isolation and reusability.

Error Handling and Validation:

  • Strategy: Uses detailed error handling and validation within handlers and middleware, ensuring robustness and reliability of the service by catching and properly managing errors.

Testing:

  • Philosophy: Prioritizes comprehensive testing, covering unit to integration tests, to ensure code reliability and ease of maintenance. The structure of the codebase, particularly the use of run function, facilitates testing by mimicking real-world operation.

Performance Considerations:

  • Optimizations: Includes strategies for optimizing service performance, such as deferring expensive setup until necessary (using sync.Once for lazily initializing components) and ensuring quick startup and graceful shutdown for better resource management.
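
The sync.Once trick can be sketched like this; the generic wrapper is my own illustration, not code from the article:

```go
package main

import (
	"fmt"
	"sync"
)

// lazy defers an expensive constructor until first use, so startup stays
// fast and the cost is paid only if the component is actually needed.
type lazy[T any] struct {
	once  sync.Once
	build func() T
	v     T
}

// get builds the value exactly once, even under concurrent callers.
func (l *lazy[T]) get() T {
	l.once.Do(func() { l.v = l.build() })
	return l.v
}

func main() {
	builds := 0
	tmpl := lazy[string]{build: func() string {
		builds++ // stands in for parsing templates, opening pools, etc.
		return "expensive value"
	}}
	_ = tmpl.get()
	_ = tmpl.get()
	fmt.Println(builds) // 1: the constructor ran only on first use
}
```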

Linux

2024-02-15 systemd by example - Part 1: Minimization - Sebastian Jambor's blog

Jambor shares his journey to understand systemd, a crucial system and service manager for Linux, by starting with the simplest setup possible and gradually adding complexity. The post encourages hands-on experimentation by running systemd in a container, avoiding risks to the host system.

The article concludes with a functioning, minimal systemd setup comprised of six unit files. This foundational knowledge serves as a platform for further exploration and understanding of systemd's more complex features.

All examples, including unit files and Docker configurations, are available on systemd-by-example.com, facilitating hands-on learning and experimentation.
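
For reference, a service unit at its most minimal looks roughly like this; the unit and command names are placeholders, not ones from the post:

```ini
# /etc/systemd/system/hello.service (illustrative name)
[Unit]
Description=Minimal example service

[Service]
ExecStart=/usr/bin/env sleep infinity

[Install]
WantedBy=multi-user.target
```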

The Era of AI

2024-02-21 Let's build the GPT Tokenizer - YouTube

Let's Build the GPT Tokenizer [video]

2024-02-21 Let's Build the GPT Tokenizer video | Hacker News

Let's build GPT: from scratch, in code, spelled out.

https://www.youtube.com/watch?v=kCc8FmEb1nY

2024-02-21 Neural Networks: Zero To Hero Video Lectures

A course by Andrej Karpathy on building neural networks, from scratch, in code.

We start with the basics of backpropagation and build up to modern deep neural networks, like GPT. In my opinion language models are an excellent place to learn deep learning, even if your intention is to eventually go to other areas like computer vision because most of what you learn will be immediately transferable. This is why we dive into and focus on language models.

Prerequisites: solid programming (Python), intro-level math (e.g. derivative, gaussian).

The spelled-out intro to neural networks and backpropagation: building micrograd

This is the most step-by-step spelled-out explanation of backpropagation and training of neural networks. It only assumes basic knowledge of Python and a vague recollection of calculus from high school.

The spelled-out intro to language modeling: building makemore

We implement a bigram character-level language model, which we will further complexify in followup videos into a modern Transformer language model, like GPT. In this video, the focus is on (1) introducing torch.Tensor and its subtleties and use in efficiently evaluating neural networks and (2) the overall framework of language modeling that includes model training, sampling, and the evaluation of a loss (e.g. the negative log likelihood for classification).

2024-01-30 Anil-matcha/Free-GPT-Actions: A listing of Free GPT actions available for public use

A listing of Free GPT actions available for public use

2024-02-15 reorproject/reor: AI note-taking app that runs models locally.

Reor is an AI-powered desktop note-taking app: it automatically links related ideas, answers questions on your notes and provides semantic search. Everything is stored locally and you can edit your notes with an Obsidian-like markdown editor.

2024-01-24 Code Europe 2023 Closing Keynote by Andrei Alexandrescu (@NVIDIA) – C++hatGPT & AI Tools' Impact - YouTube

Skill, Test, Creativity

2024-01-27 rasbt/LLMs-from-scratch: Implementing a ChatGPT-like LLM from scratch, step by step

In Build a Large Language Model (from Scratch), you'll discover how LLMs work from the inside out. In this book, I'll guide you step by step through creating your own LLM, explaining each stage with clear text, diagrams, and examples.

Lifehack

2024-02-09 ⚫ Show HN: Improve cognitive focus in 1 minute

2024-02-09 Show HN: Improve cognitive focus in 1 minute | Hacker News

Fun

2024-02-09 The sinusoidal tetris | andreinc

2024-02-12 Balancing cube – Willem Pennings

2024-02-15 Gitlab Meeting Simulator 2024

Workplace / Job Interview

2024-02-09 kpsingh/SystemDesign: This repo will be having my learning regarding the Design Principles (Low Level Design) and System Design (High Level Design)

The GitHub repository "SystemDesign" by kpsingh focuses on the author's learning journey regarding Design Principles (Low Level Design) and System Design (High Level Design). It aims to delve into foundational concepts such as SOLID principles and design patterns, crucial for understanding both low and high-level design aspects in software engineering. For those interested in exploring the nuances of software design, this repository could serve as a valuable resource. More details can be found on GitHub.

2024-02-09 adityadev113/Interview-Preparation-Resources: StudyGuide for Software Engineer Interview

The GitHub repository "Interview-Preparation-Resources" by adityadev113 serves as a comprehensive guide for software engineer interview preparation, containing various resources collected during the author's own SDE interview preparation journey. This repository is intended to assist others on the same path by providing a wide range of materials related to behavioral interviews, computer networks, DBMS, data structures and algorithms, mock interviews, operating systems, system design, and more. Additionally, it includes specific documents like interview questions from Microsoft, important Java questions, and a roadmap for learning the MERN stack. The repository encourages community contributions to enrich the resources available for interview preparation. For more detailed information, visit GitHub.

2024-02-09 Interview-Preparation-Resources/Understanding Data Structures and Algorithms/Leetcode Patterns and Problems.md at main · adityadev113/Interview-Preparation-Resources

The document "Leetcode Patterns and Problems" in the "Interview-Preparation-Resources" repository provides a structured approach to solving Leetcode problems. It categorizes problems into specific patterns to help understand and tackle algorithmic challenges effectively, aiming to enhance problem-solving skills for technical interviews. For detailed patterns and problems, you can visit the [GitHub page](https://github.com/adityadev113/Interview-Preparation-Resources/blob/main/Understanding Data Structures and Algorithms/Leetcode Patterns and Problems.md).

2024-02-12 Finding a New Software Developer Job | Henrik Warne's blog

One section I added now was Behavioral Questions. These are questions of the form “Tell me about a time when you disagreed with a coworker. How did you resolve it?”. Typically, you should answer them using the STAR framework: Situation, Task, Action, Result, Reflection. In the past, I have failed interviews because of these questions – I hadn’t prepared, and couldn’t come up with good examples on the spot in the interviews.

This time I went through a good list of such questions (Rock the Behavioral Interview) from Leetcode, and thought about examples to use. Once I had good examples, I wrote the question and my answer down in the document. Before an interview, I would review what I had written down, so I would be able to come up with good examples. This worked well, I didn’t fail any interviews because of behavioral questions.

In the document I also wrote down little snippets of code in both Python and Go. I tried to cover many common patterns and idioms. I did this so I could refresh my memory and quickly come up with the right syntax in a coding interview. I ran all the snippets first, to see that I hadn’t made any mistake, and included relevant output. Reviewing these snippets before an interview made me feel calmer and more prepared.

I also watched a good video by Gergely Orosz, 🚩 Confessions from a Big Tech Hiring Manager: Tips for Software Engineering Interviews, on technical interviews in general. Some takeaways: be curious and collaborative, and ask questions.

C++

2024-02-09 Playing Video Games One Frame at a Time - Ólafur Waage - Meeting C++ 2023 - YouTube

2024-02-09 microsoft/Detours: Detours is a software package for monitoring and instrumenting API calls on Windows. It is distributed in source code form.

Detours is a software package for monitoring and instrumenting API calls on Windows. It is distributed in source code form.

2024-02-09 TerryCavanagh/VVVVVV: The source code to VVVVVV! http://thelettervsixtim.es/

This is the source code to VVVVVV, the 2010 indie game by Terry Cavanagh, with music by Magnus Pålsson. You can read the announcement of the source code release on Terry's blog!

2024-02-09 Playing Video Games One Frame at a Time - Ólafur Waage - Meeting C++ 2023 - YouTube

2024-02-09 VVVVVV on Steam $4.99

2024-01-06 Back to Basics: Iterators in C++ - Nicolai Josuttis - CppCon 2023 - YouTube

2024-02-18 All C++20 core language features with examples | Oleksandr Koval’s blog

2024-02-18 20 Smaller yet Handy C++20 Features - C++ Stories

Distributed Systems

Data Structures for Data-Intensive Applications: Tradeoffs and Design Guidelines

Manos Athanassoulis (Boston University, mathan@bu.edu), Stratos Idreos (Harvard University, stratos@seas.harvard.edu), and Dennis Shasha (New York University, shasha@cs.nyu.edu)

ABSTRACT

Key-value data structures constitute the core of any datadriven system. They provide the means to store, search, and modify data residing at various levels of the storage and memory hierarchy, from durable storage (spinning disks, solid state disks, and other non-volatile memories) to random access memory, caches, and registers. Designing efficient data structures for given workloads has long been a focus of research and practice in both academia and industry. This book outlines the underlying design dimensions of data structures and shows how they can be combined to support (or fail to support) various workloads. The book further shows how these design dimensions can lead to an understanding of the behavior of individual state-of-the-art data structures and their hybrids. Finally, this systematization of the design space and the accompanying guidelines will enable you to select the most fitting data structure or even to invent an entirely new data structure for a given workload.

Seattle

2024-02-09 🌎 nearbywiki Fairview Avenue North Bridge

Explore interesting places nearby listed on Wikipedia

Ideas

2024-02-24 Tommy's inclusive datepicker

2024-02-06 Kaptr.me - Capture, share and save data with live screenshots of any app or website

2024-02-07 Web based logs viewer UI for local development environment | Logdy

Dark Souls 2

2024-02-01 📌 Some more translated DS2 Kouryakubo Maps (WIP, edited for SotFS) : DarkSouls2

JP Kouryakubo Maps http://www.kouryakubo.com/darksouls2/index.html

2024-02-12 Dark Souls 2 Design Works : Free Download, Borrow, and Streaming : Internet Archive

Bigdata

2024-01-31 timeplus-io/proton: A streaming SQL engine, a fast and lightweight alternative to Apache Flink, 🚀 powered by ClickHouse.

CSS (Research)

2024-02-16 sass/sass: Sass makes CSS fun!

2024-02-16 💲📺 Creating a Living Style Guide with Sass and Vanilla JavaScript | Pluralsight

2024-02-16 Atomic Design Methodology | Atomic Design by Brad Frost

2024-02-20 You Might Not Need Sass: Modern CSS Techniques

2024-02-21 Stepping away from Sass

Web Dev Stuff

2024-01-28 In Loving Memory of Square Checkbox @ tonsky.me

2024-01-30 ⭐ Web Components in Earnest

Found in: 2024-01-30 JavaScript Weekly Issue 672: January 25, 2024

2024-01-30 How to start a React Project in 2024

Found in: 2024-01-30 JavaScript Weekly Issue 672: January 25, 2024

2024-01-30 pretty-ms 9.0: Convert Milliseconds to Readable Strings (e.g. 1337000000 → '15d 11h 23m 20s')

Found in: 2024-01-30 JavaScript Weekly Issue 672: January 25, 2024

2024-01-30 TypeSpec

Found in: 2024-01-30 JavaScript Weekly Issue 672: January 25, 2024 A language for concisely describing cloud service APIs and generating other API description languages (e.g. OpenAPI), client and service code, docs, and more. Formerly known as CADL. – GitHub repo.

2024-02-23 JavaScript Bloat in 2024 @ tonsky.me

Json to code

Convert JSON to classes/code

Lisp

2024-01-30 Colin Woodbury - A Tour of the Lisps