
· 31 min read

⌚ Nice watch!

2025-01-21 How I Built T3 Chat in 5 Days - YouTube { www.youtube.com }

GPT Summary: I developed T3 Chat, an AI chat app that emphasizes speed, usability, and an efficient local-first architecture, completing the project in just five days. Motivated by frustrations with existing AI tools, I aimed to create a responsive and seamless experience. Leveraging the DeepSeek V3 model for its speed and affordability, I found existing starter kits unsuitable for my needs, prompting me to build a custom solution. The app uses React, React Router, and Dexie.js for its database layer, enabling offline functionality and efficient synchronization between local and cloud data. Switching from server-driven routing to a client-side approach greatly improved navigation and responsiveness.

The development process included significant hurdles. I experimented with tools like Jazz for syncing but found its collaborative-first structure overly restrictive. Instead, I built a custom sync layer, tailoring the data flow to the app's requirements. Performance optimization was critical, with tools like React Scan helping to eliminate inefficiencies. Markdown chunking and memoized rendering were implemented to minimize unnecessary re-renders, ensuring a smooth user experience. Payments were integrated with Stripe, alongside an onboarding flow that uses inline messages to explain app features. Despite challenges, these deliberate engineering choices resulted in an app that is faster, more responsive, and better tailored to user needs than its competitors.

The tools used include DeepSeek V3, React, Dexie.js, React Router, React Scan, Tailwind CSS, Vercel AI SDK, and Stripe. Each played a vital role, though some, like Jazz and OpenAuth, required customization or replacement. The result is an AI chat app that outperforms existing alternatives by leveraging local-first architecture, advanced optimizations, and thoughtful design principles. This project demonstrates how targeted engineering and innovative thinking can create high-performing, user-focused applications.

2025-01-19 Locknote: Local Reasoning in C++ - Sean Parent - NDC TechTown 2024 - YouTube { www.youtube.com }

GPT Summary:

The speaker, Sean Parent, a senior principal scientist at Adobe, shared insights into improving software engineering practices with a focus on local reasoning—breaking down complex systems into manageable, verifiable components. He also delved into challenges around reasoning in C++, design principles, and strategies for building reliable systems.

Core Ideas from the Talk

The Root Cause of Software Failures

The talk began with an analysis of why large software systems fail. Despite many failures being attributed to management issues, the real challenges often stem from exceeding our ability to reason about systems. The software engineering crisis, a problem identified as early as 1968, persists because large systems become too complex to understand or verify.

"The greatest limitation in writing software is our ability to understand the systems we're creating."

Key failures include lack of tools, poor practices, and an over-reliance on free relationships (unmanaged dependencies between components).

Local Reasoning

Local reasoning is the ability to understand and verify a function or class independently of its broader context. This is enabled through clear APIs, preconditions, and postconditions, which define the contract between the client (caller) and the implementer. The talk focused on achieving local reasoning through careful structuring of functions, arguments, and classes.

API Contracts and Preconditions

Preconditions and postconditions define the expectations and guarantees of a function:

  1. Preconditions: Specify conditions the caller must meet before invoking the function.
  2. Postconditions: Describe the state after the function executes successfully.
  3. Invariant Conditions: Properties that must always hold true within the scope of a function or class.

Preconditions allow implementers to shift responsibility for ensuring valid input to the caller, simplifying function logic.

"Do not underestimate the power of a precondition. It lets the implementer focus on the valid cases."

Managing Function Arguments

Function arguments should follow a clear and consistent contract:

  • Let Arguments: Immutable references (e.g., const T&) that are not modified by the function.
  • In-Out Arguments: Mutable references (e.g., T&) that may be modified by the function.
  • Sink Arguments: R-value references (e.g., T&&) that are consumed by the function, leaving the caller responsible for ensuring proper ownership transfer.

General rules for arguments:

  • Non-const references must not be accessed by other threads during the function's execution.
  • Const references must not be written to during the function's execution.

These rules enforce memory safety and help prevent concurrency issues.

Avoiding Aliasing and the Law of Exclusivity

Aliasing (multiple references pointing to the same memory) is a major challenge in reasoning about code. C++ lacks built-in safeguards like Swift's exclusive access or Rust's borrow checker, so developers must enforce similar rules manually:

  • Ensure no overlapping projections (references to object parts) are passed to a function.
  • Projections are invalidated if the object they point to is modified.

"In C++, we have exactly the same rule. We just don't have the language facilities to enforce it."

Projections and Value Semantics

Projections (e.g., references to parts of an object) enable value semantics while maintaining efficiency. Rules for projections include:

  • Avoid overlapping mutable projections.
  • Projections are invalidated when the parent object is modified or destroyed.
  • Multiple non-overlapping projections may coexist safely.

Mutation and Independence

To simplify reasoning, objects must be independent under mutation:

  1. Disallow mutation (functional programming).
  2. Disallow sharing of mutable objects.
  3. Allow mutation only when there is no sharing (copy-on-write).

Encapsulation and whole-part relationships (e.g., an object fully owning its parts) are critical to maintaining independence.

Encapsulation of Relationships

Extrinsic relationships (connections between objects not captured by a whole-part hierarchy) are a primary source of complexity. These relationships should:

  • Be encapsulated within a managing class to enforce invariants (e.g., ensuring a pointer remains valid).
  • Be carefully tracked and invalidated when one side of the relationship changes.

"Containers are examples of classes that manage extrinsic relationships between their parts."

Complexity and Chaotic Systems

Complex systems often become chaotic: unpredictable and impossible to reason about. Examples like the three-body problem illustrate how simple rules and relationships can create unpredictable behaviors. Developers must avoid creating chaotic systems by:

  • Structuring code into hierarchies (e.g., whole-part relationships).
  • Encapsulating relationships within manageable, well-defined classes.
  • Simplifying or abstracting relationships to reduce interconnectedness.

Free Relationships

Free relationships (unmanaged dependencies) are inherently dangerous. The speaker recommends avoiding them entirely, except in cases where the relationships are monotonic:

  • Monotonic systems only move forward and never return to a previous state (e.g., immutable variables or conflict-free replicated data types).

Designing Reliable Code

The speaker provided concrete recommendations for designing reliable, predictable systems:

  • Use small, single-purpose functions: Each function should have a clear role with well-defined inputs and outputs.
  • Avoid modifying shared state: Treat shared state as immutable or use copy-on-write semantics.
  • Minimize sharing: Avoid passing shared pointers or references to mutable state in public interfaces.
  • Write testable, invariant-based classes: Ensure each class encapsulates its relationships and invariants.
  • Manage complexity with hierarchies: Use containment relationships to enforce structure and reasoning.

Guidelines for APIs

  1. Functions should clearly specify their scope and effects.
  2. Use projections to allow manipulation of object parts while preserving value semantics.
  3. Do not pass overlapping projections or invalid references.
  4. Favor references over shared pointers in function interfaces.

Addressing Systemic Complexity

When individual relationships between parts become too complex to reason about, step back and define the system as a whole. For example, instead of managing individual moves in chess, define the overall algorithm for playing the game.

Summary of Best Practices

  1. Write clear APIs with defined preconditions, postconditions, and invariants.
  2. Avoid shared state unless absolutely necessary, and treat shared state as immutable.
  3. Design objects for value semantics, ensuring independence and disjointness.
  4. Encapsulate relationships into classes to simplify reasoning.
  5. Use hierarchies and DAGs to structure complex systems.
  6. Minimize complexity by managing extrinsic relationships and avoiding chaotic loops.
  7. Build monotonic systems where possible to allow for distributed, predictable behavior.

"At some point, individual relationships become too complex. You have to step back and solve the system as a whole."

These principles form a cohesive strategy for creating systems that are easier to reason about, maintain, and scale. By adhering to local reasoning, managing complexity, and encapsulating relationships, developers can build reliable, efficient software.

whole / part snippet:

// Minimal stand-in definitions (my addition, not part of the original
// snippet) so the example is self-contained and compiles: a trivial
// "state" value and a "part" constructed from it.
struct state {
    int value = 0;
    bool operator==(const state&) const = default;
};

struct part {
    state s;
    explicit part(state st) : s{st} { }
    bool operator==(const part&) const = default;
};

/**
 * @class whole
 * @brief An example "whole" class that holds a "part" subobject.
 *
 * This class demonstrates a pattern where we:
 * - Disallow default construction.
 * - Provide an explicit constructor taking a required parameter.
 * - Use compiler-generated (default) copy/move constructors and assignment operators.
 * - Provide a default comparison operator.
 *
 * The goal is to ensure that any "whole" object is always in a valid and meaningful state,
 * and that all defaulted functions have consistent semantics with their subobjects.
 */
class whole {
    /**
     * @var _part
     * @brief The subobject/part that this "whole" manages.
     *
     * Storing the part as a member ensures that the "whole" is always composed of
     * a valid "part". We rely on the "part" type to provide its own correctness
     * and invariants.
     */
    part _part;

public:
    /**
     * @brief Delete the default constructor.
     *
     * Reason:
     * - We do not allow a "whole" to exist without explicitly providing
     *   a meaningful state for its subobject.
     * - Prevents accidental creation of a "whole" in an uninitialized or
     *   incomplete state.
     */
    whole() = delete;

    /**
     * @brief Construct a "whole" by providing a required state.
     *
     * @param s A "state" object that the "_part" subobject will be constructed with.
     * Reason:
     * - Ensures that each new "whole" has a valid "part" from the beginning.
     * - Marked explicit to prevent implicit conversions from state -> whole,
     *   forcing a clear constructor call.
     */
    explicit whole(state s)
        : _part{s}
    { }

    /**
     * @brief The copy constructor (defaulted).
     *
     * Reason:
     * - In most cases, compiler-generated copying does exactly what we want:
     *   memberwise copy of the subobjects.
     * - Making it explicit (optional choice here) can prevent some unintentional
     *   conversions, but primarily we're just acknowledging that it is defaulted.
     *   Note that an explicit copy constructor disables copy-initialization
     *   (e.g., `whole b = a;`); direct initialization (`whole b{a};`) still works.
     */
    explicit whole(const whole&) = default;

    /**
     * @brief The move constructor (defaulted, noexcept).
     *
     * Reason:
     * - We allow moving to be efficient and safe.
     * - noexcept helps with certain optimizations (e.g., containers can move
     *   elements instead of copying if they know it won't throw).
     */
    whole(whole&&) noexcept = default;

    /**
     * @brief Copy assignment operator (defaulted).
     *
     * Reason:
     * - Same rationale as the copy constructor: a simple memberwise copy
     *   from the other "whole" is typically correct and easiest to maintain.
     */
    whole& operator=(const whole&) = default;

    /**
     * @brief Move assignment operator (defaulted, noexcept).
     *
     * Reason:
     * - Same rationale as the move constructor: move semantics can improve
     *   performance, noexcept promises no exceptions are thrown.
     */
    whole& operator=(whole&&) noexcept = default;

    /**
     * @brief Equality comparison operator (defaulted).
     *
     * @return true if the two "whole" objects are equal, false otherwise.
     *
     * Reason:
     * - Defaulted comparison will do a memberwise comparison of "_part"
     *   (assuming "part" itself has an appropriate operator==).
     * - Makes it easy to compare two "whole" objects without manual checks.
     */
    bool operator==(const whole&) const = default;
};

2025-01-18 TPMs and the Linux Kernel: unlocking a better path to hardware security - Ignat Korchagin - YouTube { www.youtube.com }

GPT Summary:

  • Introduction to TPMs: Trusted Platform Modules (TPMs) are passive hardware security chips widely available in modern laptops and servers. They are primarily used for cryptographic key management, platform integrity, and remote attestation, providing hardware-backed security for sensitive operations.
  • TPMs in Application Development: Despite their ubiquity, TPMs are rarely used directly by applications. Developers face challenges such as complex interfaces, limited documentation, and the absence of seamless support in common libraries and tools.
  • Complexity of TPM Interaction: Using TPMs involves navigating multiple layers:
    1. Resource Managers: Necessary to serialize access to the TPM, which cannot handle multiple concurrent requests. Linux provides an in-kernel resource manager (/dev/tpmrm0) to simplify this.
    2. TPM Libraries: Competing implementations (Intel TSS and IBM TSS) have incompatible APIs, forcing developers to make early, limiting choices.
  • Linux Kernel Key Retention Service: A subsystem of the Linux kernel that securely stores cryptographic keys in kernel memory, ensuring their isolation from user-space processes. It supports multiple key types (e.g., user, logon, and trusted keys) and organizes keys into hierarchical key rings with fine-grained permissions.
  • Trusted Keys with TPM Integration: Trusted keys leverage TPMs to encrypt key material into "wrapped blobs," ensuring plaintext keys are never exposed to user space. The kernel automatically decrypts these blobs when needed, making it a lightweight software HSM.
  • Key Management Challenges: Current trusted key implementation requires applications to manage wrapped blobs manually, which complicates key recovery, persistence, and scaling, especially for stateless systems or devices with limited storage.
  • Key Derivation from TPMs: A proposed approach uses TPM seed values and application-specific metadata to deterministically derive cryptographic keys. This method eliminates the need for persistent key storage and enables scalable, reproducible key management.
  • Linux Crypto API and Kernel Key Store Integration: The Linux Crypto API allows applications to offload cryptographic operations to the kernel using cryptographic sockets. A recent patch integrates this API with the key store, enabling cryptographic operations using kernel-managed keys without exposing them to user space.
  • Request Key System Call Enhancements: The request_key syscall is extended to allow dynamic retrieval of application-specific keys. A plugin-based architecture lets the kernel call user-space helpers (e.g., TPM-aware plugins) to derive or retrieve keys as needed.
  • Stateless Key Derivation with TPMs: The stateless key derivation method uses TPMs to create keys tied to application metadata (e.g., executable paths, user IDs, or code hashes). These keys are reproducible and isolated by design, making them suitable for ephemeral or IoT systems.
  • Kernel-Based Key Derivation: A proposed kernel patch would eliminate user-space exposure of key material entirely by performing key derivation directly in the kernel, ensuring plaintext keys remain within secure kernel memory.
  • Limitations of Current TPM Integration: Existing systems primarily support symmetric key operations. Asymmetric key functionality, such as signing or private key decryption, remains under development and is expected in future kernel releases.
  • Improving Accessibility for Developers: By exposing TPM functionality through the Linux key retention service, developers can leverage hardware-backed security without needing to understand TPM internals, providing a more accessible pathway for application adoption.
  • Call for Community Feedback: The speaker sought input on the practicality of proposed solutions for IoT and stateless systems, emphasizing the importance of balancing security, scalability, and developer usability.

2025-01-18 Memory Safety: Rust vs. C - Robert Seacord - NDC TechTown 2024 - YouTube { www.youtube.com }

GPT Summary: Background and Context: The talk originates from a memory-safe languages panel led by government and industry stakeholders. There is a push, particularly from governments like the U.S. and other Five Eyes members, to migrate critical systems from C and C++ to "memory-safe languages" like Rust. The speaker, while defending C and C++, acknowledges biases and stresses the importance of fair evaluations between languages.

Challenges of Defining "Memory-Safe Languages": The panel has inconsistently defined key terms. A "low-level memory-safe language" (essentially Rust) is distinguished from garbage-collected ones like Java or Python. The main critique of C/C++ centers not on the inherent inability to ensure memory safety but on the lack of compiler-enforced memory safety, leaving discipline and external tools to fill the gap.

Types of Safety: The talk breaks down safety concerns into type safety, memory safety, and thread safety, each foundational to broader software security and functional safety. Functional safety ensures systems like brakes or airplane controls continue operating safely, even under partial failure.

Arguments for C/C++ in Safety-Critical Systems: Safety-critical systems in aerospace, automotive, and other domains rely on C/C++ due to decades of tooling, standards (e.g., ISO 26262), and expertise. The deterministic nature of these languages aligns with strict timing and behavior guarantees, which are harder to achieve with garbage collection or immature ecosystems.

Rust's Growing Role and Barriers: Rust, though promising, faces ecosystem maturity challenges. The availability of Rust-trained engineers, tooling gaps (e.g., in platforms like MathWorks), and reliance on interoperability with C APIs present barriers. Rust's adoption in safety-critical domains remains limited due to these hurdles and the immense cost of rewriting existing, battle-tested codebases.

Security Concerns Beyond Memory Safety: Eliminating memory safety issues does not address broader vulnerabilities like input validation, SQL injection, or business logic errors. For example, tools in C/C++ like AddressSanitizer (ASan) can address memory safety issues but are unsuitable for production. Security is a multi-faceted problem that Rust alone cannot solve.

Progress in C and C++: Modern updates in C (e.g., C23's checked integer operations) and C++ aim to close gaps in safety. Tools like UBSan, ASan, and static analysis have matured, enabling effective error detection and mitigation in development. The C/C++ ecosystem has advanced to rival or even surpass memory-safe languages in certain safety-critical applications.

Cost and Practicality of Transition: Rewriting massive C/C++ systems into Rust or any other language without adding new features is seen as economically unviable. Transition timelines are long, involving curriculum changes, workforce training, and standards development. Safety-critical systems tend to evolve incrementally rather than through wholesale rewrites.

Critique of the Panel's Conclusions: The speaker criticizes the panel's narrow focus on memory safety as overly simplistic. Broader issues like ecosystem maturity, tooling availability, and compatibility with safety standards make the wholesale dismissal of C and C++ impractical and misguided.

Closing Thoughts: The speaker emphasizes a balanced, pragmatic approach to safety and security. Transitioning to Rust or any new language must account for all engineering realities, including ecosystem readiness, regulatory compliance, and the multifaceted nature of software vulnerabilities.

2025-01-18 How Simple Is "As Simple As Possible"? - Rendle . - NDC Porto 2024 - YouTube { www.youtube.com }


Simplifying Software Development: A Rant on Doing Less, Better

This is a talk about how we’re overcomplicating software for no good reason. It’s about keeping things simple and just building systems that work instead of getting lost in trends, tools, and buzzwords.

The speaker kicks off with a nostalgic dive into the early days of coding on Tandon 286 machines, soldering RS232 cables by hand, and building monoliths that simply did the job. “We wrote software that people used, they got their jobs done, and went home happy. No internet, no GitHub repos -- just code that worked.”

Fast forward to today, and things are a mess. Microservices? Great if you’ve got a thousand developers and a global scale problem. If not, you’re probably just smashing your monolith into tiny, unmanageable pieces. “If you’ve got fewer than 100 programmers and you’re doing microservices, I will find you and kick your shins.”

The same critique extends to APIs. SOAP was overkill; REST simplified things (or did it?), but now we’re stuffing APIs with metadata, inventing gRPC, or obsessing over “hypermedia” that no one asked for. “Just send some JSON over HTTP and call it a day. We don’t need another doctoral thesis to justify URLs.”

And don’t even get started on frontends. React, Angular, Vue -- they’re all bloated monstrosities. “140MB of node modules to load a blank page? What are we even doing?” The solution? Go back to server-side rendering or use lightweight tools like HTMX. “We solved these problems years ago, but no -- let’s reinvent them with more JavaScript.”

On infrastructure, the speaker points to Kubernetes as a classic example of overengineering. “Most of us don’t need it, but we’re running it anyway because it sounds cool. Just use containers properly and let the cloud handle the rest.”

The takeaway is simple: stop making things harder than they need to be. “If you’re adding complexity just to look good on your CV, you’re doing it wrong. Just build stuff that works, keep it simple, and fix it when it breaks. Complexity isn’t clever; it’s stupid.”

The talk wraps with humor but drives the point home: "Stop overcomplicating things. Build what you need. Then go get a beer."

2025-01-13 Why You're Not Getting Promoted To Senior (ex-Amazon Principal) - YouTube { www.youtube.com }


To become a senior engineer, it is crucial to understand what blocks promotions and to take deliberate action to remove those barriers. Based on the advice in the talk, here are the key takeaways and actionable insights:

One common misconception is that excellent technical skills alone will secure a promotion. Many engineers hit a plateau despite being technically competent because they overlook critical non-technical factors. According to the speaker, three specific roadblocks hinder promotions, and addressing these can change your trajectory.

The first roadblock is ineffective delegation. Promotions often require demonstrating leadership, and delegation is a cornerstone of this. However, not all delegation styles are equally effective:

  • The Load Balancer: Merely distributing tasks among the team doesn’t showcase leadership or improve team capabilities. "Tasks come in, and you spread them out to others on your team." This approach doesn't reduce overall workload or scale your impact.
  • The Decomposer: Breaking down ambiguous problems into smaller, executable tasks is better, but it's still an expected responsibility at most levels. It doesn’t elevate you as a leader.
  • The Capability Multiplier: This is the ideal approach. By assigning challenging problems to team members and coaching them through the process, you scale your impact by developing the team. "You coach them up, tell them how you would handle the situation, and let them handle the problem on their own." The critical elements of this approach are:
    • Knowing your team’s capabilities to assign tasks slightly outside their comfort zone.
    • Investing time upfront to coach them while stepping away to give them ownership.
    • Accepting the possibility of failure as part of their growth.

Effective delegation demonstrates leadership by "creating copies of yourself," a trait highly valued in promotion decisions.

The second roadblock is a weak relationship with your manager. Promotions often hinge on managerial support. Managers are hesitant to risk promoting someone who might fail at the next level, as this reflects poorly on them and disrupts team dynamics. The speaker emphasized: "Your manager is the biggest roadblock to getting promoted to senior... They only do that for people that they trust."

To strengthen this relationship:

  • Clearly communicate your desire for promotion and ask for specific feedback.
  • Build trust by consistently delivering results and taking ownership of problems.
  • Repair any strained relationships or consider moving to a different team if necessary.

The third roadblock is failing to demonstrate leadership by owning problems. To advance to senior engineer, you must show initiative in solving team-level issues. The story of David, an engineer striving for promotion, illustrates this. Despite his technical excellence, his promotion was blocked because he raised problems without presenting solutions. Leadership involves not just identifying issues but also proactively addressing them.

For example:

  • If user adoption is low, suggest prioritizing features to improve engagement.
  • If defect rates are high, identify patterns or implement training for improvement.
  • If operational load is causing attrition, propose forming a task force to resolve it.

"High-level ICs are leaders that don't have direct reports. Leaders take ownership of problems and do something about them."

In summary, focus on scaling through delegation, building trust with your manager, and demonstrating leadership through problem ownership. These strategies will position you as a strong candidate for promotion to senior engineer.

CS50 Cybersecurity - Lecture 1 - Securing Data


Also:

2024-12-18 CS50 Cybersecurity - Lecture 2 - Securing Systems - YouTube { www.youtube.com }

1. Password Security and Hashing

  • Storing Passwords: Early systems stored plaintext passwords like "alice: apple," which is insecure.
  • Hashing Passwords: Converts passwords into fixed-length strings using a hash function.
  • Simple vs. Proper Hash Functions: Early functions were simplistic (A=1, B=2) but proper hashing creates cryptic outputs.
  • Salting Passwords: Adds a random value (salt) to passwords before hashing to produce unique hashes. Salt is stored alongside the hash.
  • Password Authentication: Input password is hashed and compared to the stored hash.
  • Hashing Vulnerabilities: Rainbow tables and identical-password leaks are mitigated by salting; brute-force attacks are slowed by using deliberately expensive hash functions.
  • Best Practices: Use industry standards like NIST guidelines, avoid creating custom hashing functions, and rely on proven libraries.

2. Encryption and Cryptography

  • Cryptography: Secures data in transit and at rest using encryption.
  • Types: Codes (word substitution) and ciphers (character manipulation).
  • Encryption Keys: Symmetric encryption uses the same key for encryption/decryption; asymmetric encryption uses a public-private key pair.
  • Encryption Algorithms: AES and Triple DES (symmetric); RSA, Diffie-Hellman (asymmetric).
  • Asymmetric Key Cryptography: Involves public and private keys; RSA encrypts with a public key, decrypts with a private key.
  • Key Exchange Problem: Diffie-Hellman allows two parties to establish a shared secret key without prior communication.
  • Encryption vs. Hashing: Hashing is one-way and irreversible; encryption is two-way and reversible.

3. Digital Signatures and Verification

  • Digital Signatures: Authenticate and verify the origin of a message or document.
  • How They Work: Message is hashed, then encrypted with a private key to create the signature, which is verified using the public key.
  • Purpose: Ensures integrity, authenticity, and non-repudiation of messages or contracts.

4. Public Key Infrastructure (PKI)

  • Concept: Relies on a trusted system to verify public keys belong to specific entities.
  • Key Roles: Public keys are shared, private keys are secret. Certificate Authorities (CAs) issue digital certificates to confirm key authenticity.

5. Passkeys and Passwordless Authentication

  • Passkeys: Replace passwords with biometrics (fingerprint, face scan) or PINs.
  • How Passkeys Work: Device generates a public-private key pair per website. Public key is shared, private key stays on the device.
  • Authentication Process: Websites send a challenge; user signs it with the private key. Website verifies using the stored public key.
  • Benefits: No need for passwords, increased security, supported by Apple, Google, and Microsoft.

6. Encryption in Transit vs. Encryption at Rest

  • Encryption in Transit: Protects data as it moves from point A to point B. Used in protocols like HTTPS. Prevents "man-in-the-middle" attacks but may allow the middle server (like Gmail) to see the data.
  • End-to-End Encryption (E2EE): Encrypts data so only the sender and recipient can see it. Used by WhatsApp and iMessage. Intermediaries can't decrypt it.
  • Encryption at Rest: Encrypts stored data on devices (like hard drives) to protect against theft or loss.

7. File Deletion and Secure Deletion

  • File Deletion: Deleting a file just removes references to it; data is still present until overwritten.
  • Secure Deletion: Overwrites 0s, 1s, or random bits to ensure no file remnants remain. Full-disk encryption makes secure deletion automatic.
  • Device Disposal: Use full-disk encryption to ensure data is unreadable when selling or giving away devices.

8. Ransomware Attacks

  • What is Ransomware?: Malware encrypts files and demands payment (often in Bitcoin) for the decryption key.
  • How it Works: Hackers encrypt system files and request payment to decrypt them.
  • Prevention: Use full-disk encryption and regular backups to prevent data loss.

9. Quantum Computing and Its Impact on Cybersecurity

  • Quantum Computing: Uses qubits that can be in multiple states simultaneously, increasing computational power exponentially.
  • Threat to Security: Could break current public-key algorithms like RSA (Shor's algorithm factors large numbers efficiently on a quantum computer).
  • Quantum-Safe Cryptography: Research is underway for "post-quantum cryptography" to withstand quantum attacks.

2024-12-20 Domain Modeling Made Functional - Scott Wlaschin - KanDDDinsky 2019 - YouTube { www.youtube.com }


GPT Summary: Functional Programming for Domain Modeling: Functional programming simplifies modeling by separating data and behavior. It uses composable types to reflect domain concepts clearly, allowing you to model workflows and real-world scenarios with precision.

Code Reflects the Domain: Code should represent the domain's shared mental model. Concepts like "suit" or "rank" in a card game are directly encoded into the structure, ensuring that the vocabulary in code matches the language of domain experts.

Static Typing as a Domain Modeling Tool: Types are not just for error-checking but are integral to domain modeling. They enforce rules at compile-time, reducing the need for defensive programming or runtime validation. This provides compile-time unit testing for domain correctness.

Composable Type Systems: Composable type systems build new types from smaller ones using "and" (records/tuples) and "or" (choices). These allow for flexible, modular designs that adapt to changing domain requirements.

Eliminating Null Values: Null values are error-prone and should be replaced with optional types (e.g., Option<T>), which explicitly represent the presence or absence of a value. This makes code safer and self-documenting.
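The idea can be sketched in Python (the lookup function and names here are my own illustration, assuming a mypy-style type checker):

```python
from typing import Optional

# Optional[str] makes "maybe absent" explicit in the signature, so the
# None branch must be handled at the call site instead of surfacing as a
# runtime error later.
def find_customer_name(customers: dict[int, str], customer_id: int) -> Optional[str]:
    return customers.get(customer_id)  # None when the id is unknown

name = find_customer_name({1: "Ada"}, 2)
greeting = "hello, guest" if name is None else f"hello, {name}"
print(greeting)  # hello, guest
```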

Replacing Primitive Types: Avoid using primitive types like string or int for domain-specific data. Instead, use wrappers (e.g., EmailAddress or CustomerID) to enforce constraints and ensure clarity.

Replacing Boolean Flags with Choices: Boolean flags are ambiguous and prone to misuse. Replace them with choice types (e.g., VerifiedEmail vs. UnverifiedEmail) to enforce business rules explicitly in the type system.
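A hedged Python sketch of the same idea (the talk uses F#; the email type names follow its example, the rest is my own illustration):

```python
from dataclasses import dataclass
from typing import Union

# Rather than an `is_verified: bool` flag on a single Email type, verified
# and unverified emails become distinct types, so a function can *require*
# a VerifiedEmail in its signature.
@dataclass(frozen=True)
class UnverifiedEmail:
    address: str

@dataclass(frozen=True)
class VerifiedEmail:
    address: str

EmailAddress = Union[UnverifiedEmail, VerifiedEmail]

def send_password_reset(email: VerifiedEmail) -> str:
    # Callable only with a VerifiedEmail: the rule "password resets
    # require a verified email" lives in the type, not in a runtime check.
    return f"reset link sent to {email.address}"

print(send_password_reset(VerifiedEmail("ada@example.com")))
```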

Immutability in Functional Programming: Immutability ensures that once data is validated and encapsulated, it cannot change. This eliminates repetitive validation and simplifies reasoning about state changes in the domain.

Rapid Feedback and Iteration: Collaborating with domain experts while modeling in code provides immediate feedback. This approach allows adjustments to domain understanding and code simultaneously, shortening feedback loops from weeks to minutes.

Modeling Constraints Explicitly: Use types to encode constraints, such as a string with a maximum length (String50) or a positive integer for quantities (OrderQuantity). This prevents invalid states and enforces constraints at the type level.
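A rough Python equivalent (String50 and OrderQuantity are the talk's example names; the exact bounds here are my assumptions):

```python
from dataclasses import dataclass

# Validation runs once, at construction; any value that exists afterwards
# is therefore known to satisfy its constraint.
@dataclass(frozen=True)
class String50:
    value: str
    def __post_init__(self) -> None:
        if not 0 < len(self.value) <= 50:
            raise ValueError("String50 must be 1-50 characters")

@dataclass(frozen=True)
class OrderQuantity:
    value: int
    def __post_init__(self) -> None:
        if not 1 <= self.value <= 99:
            raise ValueError("OrderQuantity must be between 1 and 99")

qty = OrderQuantity(10)  # ok: downstream code never re-checks the range
```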

Making Illegal States Unrepresentable: Design your system so that invalid states (e.g., an unverified email being treated as verified) cannot be represented in the code. This reduces the need for runtime validation and minimizes bugs.

Separating Domain Logic from Implementation Details: Keep domain logic independent of technical concerns like database schemas or persistence. This is often referred to as persistence ignorance.

Refactoring Towards Deeper Insight: As you learn more about the domain, refactor code to introduce new concepts (e.g., ShuffledDeck or VerifiedEmail). This process evolves the domain model to better reflect reality.

Explicitly Modeling Relationships and Constraints: For example, if an entity must have an email or a postal address, model this as a choice type (e.g., EmailOnly, AddressOnly, or Both). This avoids ambiguous states and ensures correctness.

Process Over Product: The modeling process itself -- collaborating with stakeholders, defining concepts, and refining understanding -- is as important as the resulting code. The shared mental model is the foundation of success.

Code as a Living Document: Code is the ultimate source of truth in functional modeling. Unlike UML diagrams or external documentation, code evolves with the domain and remains in sync with business logic.

Enforcing Business Rules in the Type System: Business rules like "password resets require a verified email" can be encoded directly in the type system. This eliminates the need for external checks and makes rules unbreakable.

Modeling Actions with Functions: Actions in the domain (e.g., dealing a card or verifying an email) are modeled as functions with explicit inputs and outputs, reflecting the transformation of domain state.

Avoiding Programmer Jargon in Domain Models: Terms like "base class," "factory," or "proxy" should not appear in the domain model. Use only terms that stakeholders understand.

Facilitating Non-Programmer Feedback: Modeling in code allows non-developers to participate in reviewing and refining the domain model, ensuring alignment between technical and business perspectives.

Domain-Driven Design and Functional Programming as Allies: Functional programming and domain-driven design complement each other, providing tools for creating robust, accurate, and easily understood models of complex domains.

Use of Algebraic Data Types (ADTs): ADTs like records and discriminated unions are powerful tools for expressing complex domain concepts naturally, allowing for greater expressiveness and error prevention.

Encapsulation of Validation: Validation is done at the boundaries of the system (e.g., API inputs) and not repeatedly in the domain logic. Once data is validated, it is immutable and safe to use.

Encouraging Collaboration with Shared Language: The modeling process ensures that all stakeholders -- developers, domain experts, and product owners -- share a common understanding of the system through a ubiquitous language.

Flexibility and Extensibility: The compositional approach makes it easier to adapt the domain model to new requirements without introducing significant complexity.

2024-12-20 Introduction to Wait-free Algorithms in C++ Programming - Daniel Anderson - CppCon 2024 - YouTube { www.youtube.com }

image-20250120202843266 GPT Summary: Concurrency Concepts and Lock-Free Programming: Concurrency issues arise when multiple threads access shared resources simultaneously, potentially causing errors. Lock-based programming avoids these problems but can degrade performance due to contention. Lock-free programming ensures system-wide progress but does not guarantee individual thread progress. Key tools include atomic operations like compare-and-swap (CAS), fetch-add, and fetch-sub.
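The CAS retry loop these primitives enable can be sketched in Python. This is purely illustrative: AtomicInt emulates CAS with a lock so the pattern is visible, whereas real lock-free code relies on hardware atomics such as C++ std::atomic.

```python
import threading

class AtomicInt:
    """Illustrative atomic integer; compare_and_swap is emulated with a
    lock purely so the CAS usage pattern reads clearly in Python."""

    def __init__(self, value: int = 0) -> None:
        self._value = value
        self._lock = threading.Lock()

    def load(self) -> int:
        return self._value

    def compare_and_swap(self, expected: int, new: int) -> bool:
        # Atomically: if the value still equals `expected`, store `new`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def increment(counter: AtomicInt) -> None:
    # The classic lock-free retry loop: read, compute, attempt to publish
    # via CAS, and retry if another thread won the race in between.
    while True:
        old = counter.load()
        if counter.compare_and_swap(old, old + 1):
            return

counter = AtomicInt()
threads = [threading.Thread(target=lambda: [increment(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.load())  # 4000
```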

Wait-Free Algorithms: Wait-free algorithms improve on lock-free by guaranteeing progress for all threads within a bounded number of steps. This is achieved through collaboration among threads instead of competition. The helping mechanism, where threads assist ongoing operations rather than blocking or overriding them, is central to wait-free design.

Sticky Counter as a Case Study: A wait-free counter that supports increment, decrement, and read operations was used to demonstrate wait-free algorithm design. Challenges like linearizability, handling "zero" states, and edge cases like thread descheduling were addressed using flag bits and the helping principle, ensuring correctness and bounded progress.

Design Challenges and Subtleties: Wait-free algorithms require significant redesign, as they must enable threads to detect and assist in-progress operations. Concepts like linearizability ensure that operations appear to happen in a sequential order, even if they overlap in execution. Testing and formal verification are critical for validating correctness, as subtle bugs can arise in complex concurrent systems.

Performance Implications: Wait-free algorithms perform better in high-contention scenarios, especially when operations like reads are frequent. However, performance depends on the workload. Benchmarks showed that while wait-free algorithms often outperform lock-free ones in certain workloads, lock-free approaches can be faster when contention is low or writes dominate.

Progress Guarantees and Practical Constraints: The talk clarified terms like blocking (no progress guarantee), lock-free (system-wide progress), and wait-free (per-thread progress). It emphasized that real-world constraints, such as thread scheduling and hardware architecture, must be considered when implementing concurrent algorithms.

2025-01-01 "Junior developers can't think anymore..." - YouTube { www.youtube.com }

image-20250101004520405

2025-01-02 12 Months After Layoff - The Blunt Truth - YouTube { www.youtube.com }

image-20250101193447997

¡ 35 min read

image-20241229142354666

Good Reads​

2024-12-28 Developer with ADHD? You’re not alone. - Stack Overflow {stackoverflow.blog}

Reddit: 2024-12-28 Got ADHD? Program computers? Even close with either? Talk about it here. {www.reddit.com}

Many developers with ADHD feel their job is a perfect fit for how they think and approach problems. “Coding can give ADHD brains exactly the kind of stimulation they crave,” explains full-stack developer Abbey Perini. “Not only is coding a creative endeavor that involves constantly learning new things, but also once one problem is solved, there’s always a brand new one to try.”

In addition to a revolving door of fresh challenges that can keep people with ADHD engaged, coding can reward and encourage a state of hyperfocus: a frequently cited symptom of ADHD that developer Neil Peterson calls “a state of laser-like concentration in which distractions and even a sense of passing time seem to fade away.”

2024-12-20 How types make hard problems easy • { mayhul.com }

image-20241219203410271

2024-12-16 The 70% problem: Hard truths about AI-assisted coding { addyo.substack.com }

Found in: programmingdigest issue 1798 { programmingdigest.net }

image-20241215212205226

The hidden cost of "AI Speed": When you watch a senior engineer work with AI tools like Cursor or Copilot, it looks like magic. They can scaffold entire features in minutes, complete with tests and documentation. But watch carefully, and you'll notice something crucial: They're not just accepting what the AI suggests. They're constantly:

  • Refactoring the generated code into smaller, focused modules
  • Adding edge case handling the AI missed
  • Strengthening type definitions and interfaces
  • Questioning architectural decisions
  • Adding comprehensive error handling

In other words, they're applying years of hard-won engineering wisdom to shape and constrain the AI's output. The AI is accelerating their implementation, but their expertise is what keeps the code maintainable.

The knowledge paradox: Here's the most counterintuitive thing I've discovered: AI tools help experienced developers more than beginners. This seems backward – shouldn't AI democratize coding?

The reality is that AI is like having a very eager junior developer on your team. They can write code quickly, but they need constant supervision and correction. The more you know, the better you can guide them.

This creates what I call the "knowledge paradox":

  • Seniors use AI to accelerate what they already know how to do
  • Juniors try to use AI to learn what to do
  • The results differ dramatically

I've watched senior engineers use AI to:

  • Rapidly prototype ideas they already understand
  • Generate basic implementations they can then refine
  • Explore alternative approaches to known problems
  • Automate routine coding tasks

Meanwhile, juniors often:

  • Accept incorrect or outdated solutions
  • Miss critical security and performance considerations
  • Struggle to debug AI-generated code
  • Build fragile systems they don't fully understand

2024-12-16 A 10 Year Retrospective of a Passionate Software Engineer | by Boris Cherkasky | Nov, 2024 | Medium { cherkaskyb.medium.com }

Found in: programmingdigest issue 1798 { programmingdigest.net }

image-20241215213500854

Take Ownership of Your Career No one is responsible for your career growth but you. While managers may offer guidance, the responsibility to seek opportunities, take initiative, and drive your own development is yours alone. Waiting for someone else to guide your progression will leave you stagnant.

"No one is responsible for your career path but you."

Seek Mentorship — It’s a Shortcut to Mastery A mentor can accelerate your growth by giving you insights, sharing their decision-making process, and exposing you to higher-level thinking. Actively seek out senior engineers, build relationships, and ask questions. This can be one of the most effective ways to "level up" faster than self-learning alone.

"Having a mentor is a force multiplier! It’s literally a means to learn faster, it’s a shortcut!"

Initiative is Always Rewarded No employer will think less of you for taking initiative. If something is blocked, find a way around it. Push for better solutions, offer new ideas, and take on challenges without being asked. This attitude of "full ownership" sets you apart. Engineers who "unblock themselves" — even by learning disciplines outside their core expertise — become the most valuable contributors.

"Nothing is more important than making your colleagues feel comfortable and safe working with you."

The Dunning-Kruger Effect is Real — Be Humble and Self-Aware At some point, you will overestimate your own skills. Recognizing this gap is essential for growth. Take feedback seriously, reflect on your mistakes, and focus on learning through deliberate practice. Switch from "just get it done" to "learn all you need to do it right." This mindset shift will elevate your skill set.

Master the "Glue Work" That Holds Teams Together It’s not enough to just write code. The ability to coordinate, track, and organize work is a rare and valuable skill. Acting as the "glue" between people, projects, and teams will make you indispensable. Track tickets, follow up on blockers, and ensure no one is left behind. Great engineers don’t just "code" — they also lead, unblock, and delegate.

Technical Excellence is Necessary, But Not Sufficient You can be a great coder, but without skills like communication, empathy, and coordination, you won’t become a senior engineer. Learn to bridge the gap between engineers, product managers, and customers. Senior engineers know how to translate customer requirements into engineering solutions and help their teams grow.

Learn to See the Business, Not Just the Code As you grow in your career, it’s not just about building "good" software — it’s about building software that drives business outcomes. Learn to ask, "How will this impact our KPIs?" and prioritize cost-efficient, high-impact solutions. This business-first mindset can distinguish you as a senior engineer and lead to better decision-making.

"At one point, we scratched the 'optimal solution' for a good enough, 10x cheaper solution. Engineering is all about tradeoffs."

Resilience and Observability Are Non-Negotiable Skills Handling production incidents teaches you to value system reliability, observability, and DevOps. As you progress, mastering monitoring, alerting, and on-call response will become essential. Developers who "speak infrastructure" become highly valuable, as they can ensure stability and avoid system failure.

"It became clear that being a developer that 'speaks' and understands infrastructure is a superpower, and a differentiating factor."

Continuous Learning is Not Optional The craft of software engineering evolves rapidly. Relying on daily work alone will not keep you at the top. You need to invest in side learning — read books like Clean Code and Designing Data-Intensive Applications, attend meetups, seek mentorship, and watch technical talks. Growth requires time and passion outside daily tasks.

"Learning through daily tasks is not enough for becoming a top-tier engineer. The craft and technology are just too complex and require a lot of passion and time."

Be a Decent Human Being — It Matters More Than You Think Nothing beats being a kind, respectful, and empathetic teammate. People remember how you make them feel. Psychological safety and trust are essential for high-performing teams. As you grow into senior roles, prioritize creating safe, welcoming environments where people can speak up, share ideas, and fail without fear of judgment.

"Nothing — and I mean it — Nothing! is more important than being a decent human being, a pleasant colleague, and a pragmatic engineer."

2024-12-15 Preferring throwaway code over design docs { softwaredoug.com }

image-20241215151801822

Another important point is on using PRs for documentation. They are one of the best forms of documentation for devs. They’re discoverable - one of the first places you look when trying to understand why code is implemented a certain way. PRs don’t profess to reflect the current state of the world, but a state at a point in time. A historical artifact. On the other hand, most design docs lie to you. They’re undead documentation. Unless you’re fastidious about keeping them up to date (most of us aren’t) they reflect an outdated view of reality.

2024-12-14 3 shell scripts: Kill weasel words, avoid the passive, eliminate duplicates { matt.might.net }

image-20241215151834173

2024-12-11 From where I left - antirez { antirez.com }

The blog post by Salvatore Sanfilippo (antirez) reflects on his journey with Redis, his departure, and his decision to return. He also shares insights into Redis's past, his thoughts on software licensing, and new technical concepts he's working on, such as vector sets for Redis. Below is a detailed digest of the key points from the article.

After leaving Redis about 4.4 years ago, Salvatore detached himself from the project's code, commits, and technical management. This detachment was not born out of resentment but rather a desire to explore other areas like writing and embedded projects, while also spending more time with family. He describes this period as a time to "hack randomly" and explore areas like neural networks and Telegram bots. However, this "random hacking" eventually left him feeling a lack of purpose, which reignited his desire to return to the tech world.

"Hacking randomly was cool but, in the long run, my feeling was that I was lacking a real purpose, and every day I started to feel a bigger urgency to be part of the tech world again."

Salvatore's return to Redis began during a trip to New York City with his 12-year-old daughter. Reflecting on life changes and purpose, he decided to re-engage with Redis. This led to a conversation with the new Redis CEO, Rowan Trollope, where they discussed Salvatore's possible role. He proposed becoming a bridge between Redis Labs and the Redis community, creating educational materials like demos, tutorials, and new design concepts. An agreement was quickly reached, allowing him to rejoin Redis in a part-time role.

"I wrote him an email saying: do you think I could be back in some kind of capacity? Rowan showed interest in my proposal, and quickly we found some agreement."

2024-12-10 What TDD is ACTUALLY Good For – Axol's Blog { theaxolot.wordpress.com }

In an earlier article, I tore through some terrible arguments used to advocate for TDD that I see all too often (even by experienced engineers). I said in that piece that I would eventually go through what I think are better arguments for TDD, so that’s what I’m gonna do now.

Brownfield work also lends itself well to TDD, but less so. It depends on the complexity of the new feature, and the extensibility of the codebase. You have to use your best judgment. If it seems like a feature requires significant changes to existing modules, I’d lean on traditional development. However, if you see a gentle path to implementing this new feature, you might reap more benefits with TDD.

Greenfield development is a big no-no for TDD (at first). I don’t care how confident you are in what your interfaces will be. You’re not that good. Everything you think you know will change in the exploratory phase of a new project as you code, and you’ll strain your sanity by rewriting tests over and over again. Don’t do this, no matter how much your TDD idol pontificates its benefits.

“BuT iF yOuR’e ReWrItInG yOuR tEsTs So MuCh, YoU’rE nOt PrAcTiSiNg TdD pRoPeRlY.”

2024-12-28 Ask HN: Are you unable to find employment? | Hacker News { news.ycombinator.com }

Yes, this is what everybody I know is experiencing right now.

Caveat lector: This is simply a retelling of my personal experience, YMMV. This is not advice.

What has consistently worked for me: I stopped applying for jobs, and redirected all that effort into creating and publishing open source projects that demonstrate competence in the areas of work I want. And, just as importantly, I contribute to big established open source projects in those areas too.

I did not apply for my current job (started 6 months ago): they solicited me, based on my open source work. All the best jobs I've had have been like that, this is the 3rd time it worked.

When I'm unemployed, I only apply for jobs I actually want, typically spending an hour each on 0-2 extremely targeted applications per week. But I treat churning out new open source stuff as my full time job until somebody notices. In addition to successfully landing me three great jobs over the past decade, this approach has made me a much much better programmer.

Also, I strongly believe spending hours a day writing new code will enhance your ability to pass technical interviews much more than gamified garbage like leetcode.

A huge part of making this work is not living a typical valley lifestyle: I plan my life around the median national salary for a software engineer, and when I'm making more than that it all goes straight into my savings. In the bay, that requires living frugally (by bay standards...), but I can't even begin to put into words how grateful I am to past-decade-me for living like that and giving today-me the freedom to turn down the bad jobs and wait for the good ones. Obviously, I don't have children.

I do a lot more open source than a typical programmer in the valley, but I don't think I'm "exceptional" in any sense: you just have to put in the work. I do feel like I was very lucky to start my career in an extremely open-source-centric role, and in fairness that gives me a leg up here which I am probably inclined to underestimate.

Working with People​

2024-12-05 How to Grow Professional Relationships | Tejas Kumar { tej.as }

In my career, I’ve worked with some extraordinary people while also encountering the barriers of exclusionary cliques and gatekeeping. These experiences prompted me to examine how professional relationships develop, leading to the creation of the TJS (The Journey to Synergy) Collaboration Model. This framework identifies seven stages that relationships can pass through, from competitive isolation to productive collaboration.

For those striving to build stronger, more impactful connections—whether in business, creative endeavors, or personal growth—this model offers a clear lens to understand where you stand and how to move forward.

image-20241204195601197

The 7 Stages of the TJS Collaboration Model: A Quick Digest

  1. Everything is a Competition Relationships are marked by exclusion and a zero-sum mindset. Gatekeeping and discrimination dominate, with little to no collaboration or shared goals.
  2. Coexist Acknowledgment of each other's existence without meaningful interaction. There’s mutual respect but little effort to engage, often due to differing goals, values, or personalities.
  3. Communicate Basic exchange of information occurs, but interactions remain shallow. Conversations may begin, but follow-through and deeper engagement are often lacking.
  4. Cooperate Parties work together on neutral, low-stakes tasks with transactional motives. Cooperation may lead to future opportunities but doesn’t yet involve deep trust or shared investment.
  5. Coordinate One party adopts the other’s goal and takes deliberate steps to align efforts. Trust begins to form as actions are coordinated for mutual benefit, laying the groundwork for deeper collaboration.
  6. Collaborate A shared project is created together, with both parties contributing equally and meaningfully. Trust, understanding, and synergy define this stage, as both sides grow from the partnership.
  7. We Are the Same A toxic state where boundaries dissolve, leading to unhealthy co-dependence. Individuality is lost, and relationships suffer from over-enmeshment and burnout.

2024-12-07 ✨ The 6 Mistakes You’re Going to Make as a New Manager – Terrible Software { terriblesoftware.org } { people management} {engineering management}

The right amount of engagement that you should have in your team’s projects is also a tricky subject. Lean in too much, and you’re micromanaging; lean out too much, and you appear disengaged.

To find the right balance, consider the concept of Guided Autonomy. This means setting clear goals and expectations, then stepping back and letting your team figure out how to achieve them.

As an individual contributor (IC), your work spoke for itself; people could easily see it. Plain and simple. As a manager, it’s less black and white, and surprisingly, for many new managers, part of your job now involves managing how others see you.

image-20241206174841674

2024-12-07 1 in 6 Companies Are Hesitant To Hire Recent College Graduates - Intelligent { www.intelligent.com }

image-20241207155333308 image-20241207155351964 image-20241207155416479

2024-12-09 The One Good Reason to Become a Manager (and All the Bad Ones) – Terrible Software { terriblesoftware.org }

image-20241208190207861

Espanso​

image-20241229145121604

2024-11-07 Espanso - A Privacy-first, Cross-platform Text Expander { espanso.org }

2024-11-07 espanso/SECURITY.md at master ¡ espanso/espanso { github.com }

2024-11-07 Using Espanso to boost Efficiency 🚤 | Alicia's Notes 🚀 — Than... { notes.aliciasykes.com }

Really good collection of examples.

# Outputs markdown link, with clipboard contents as the URL
- trigger: ":md-link"
  replace: "[$|$]({{clipboard}})"
  vars:
    - name: "clipboard"
      type: "clipboard"

# Creates an HTML anchor element, with clipboard contents as href
- trigger: ":html-link"
  replace: "<a href=\"{{clipboard}}\">$|$</a>"
  vars:
    - name: "clipboard"
      type: "clipboard"

# Outputs BB Code link, with clipboard contents as the URL
- trigger: ":bb-link"
  replace: "[url={{clipboard}}]$|$[/url]"
  vars:
    - name: "clipboard"
      type: "clipboard"

NiX​

2024-08-29 An unordered list of hidden gems inside NixOS — kokada { kokada.dev }

2024-12-04 Deploying Containers on NixOS { bkiran.com }

WebDev​

2024-12-09 JSON5 – JSON for Humans | JSON5 { json5.org }

JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain *by hand* (e.g. for config files). It is not intended to be used for machine-to-machine communication. (Keep using JSON or other file formats for that. 🙂)

{
  // comments
  unquoted: 'and you can quote me on that',
  singleQuotes: 'I can use "double quotes" here',
  lineBreaks: "Look, Mom! \
No \\n's!",
  hexadecimal: 0xdecaf,
  leadingDecimalPoint: .8675309, andTrailing: 8675309.,
  positiveSign: +1,
  trailingComma: 'in objects', andIn: ['arrays',],
  "backwardsCompatible": "with JSON",
}

2024-12-19 How To Create Multi-Step Forms With Vanilla JavaScript And CSS | CSS-Tricks { css-tricks.com }

image-20241219105619183

2024-10-10 Liskov's Gun: The parallel evolution of React and Web Components – Baldur Bjarnason { www.baldurbjarnason.com }

2024-10-09 Why Web Components Failed - YouTube { www.youtube.com }

2024-10-09 Web Components are not Framework Components — and That’s Okay • Lea Verou { lea.verou.me }

2024-10-09 JSON•Edit•React { carlosnz.github.io }

2024-10-09 CarlosNZ/json-edit-react: React component for editing/viewing JSON/object data { github.com }

2024-10-10 player.style - Video & audio player themes for every web player & framework { player.style }

Show HN: Winamp and other media players, rebuilt for the web with Web Components (player.style)

2024-10-10 Media Chrome Docs { www.media-chrome.org }

Inspiration!​

2024-12-29 How I Automated My Job Application Process. (Part 1) { blog.daviddodda.com }

Look, I'll be honest - job hunting sucks.

It's this soul-crushing cycle of copying and pasting the same information over and over again, tweaking your resume for the 100th time, and writing cover letters that make you sound desperate without actually sounding desperate.

But here's the thing: repetitive tasks + structured process = perfect automation candidate.

So I did what any sane developer would do - I built a system to automate the whole damn thing. By the end, I had sent out 250 job applications in 20 minutes. (The irony? I got a job offer before I even finished building it. More on that later.)

Let me walk you through how I did it.

image-20241228222712542

2024-12-29 I automated my job application process | Hacker News { news.ycombinator.com }

2024-12-28 How to Create HTML/ZIP/PNG Polyglot Files | Polyglot-HTML-ZIP-PNG { gildas-lormeau.github.io }

Github: gildas-lormeau/Polyglot-HTML-ZIP-PNG: Learn how to create HTML/ZIP/PNG polyglot files in JavaScript

How to Create HTML/ZIP/PNG Polyglot Files

This article is a summary of the presentation available here. The resulting demo file can be downloaded at the end of the article. The repository can be found at https://github.com/gildas-lormeau/Polyglot-HTML-ZIP-PNG.

Introduction

SingleFile, a tool for web archiving, commonly stores web page resources as data URIs. However, this approach can be inefficient for large resources. A more elegant solution emerges through combining the ZIP format’s flexible structure with HTML. We’ll then take it a step further by encapsulating this entire structure within a PNG file.
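The core trick can be sketched in Python (my own minimal illustration of a PNG/ZIP polyglot; the article's full HTML/ZIP/PNG construction is more involved): PNG parsers read from the start of a file, while ZIP readers locate the central directory from the end, so concatenating a valid PNG with a ZIP archive yields a file both accept.

```python
import io
import struct
import zipfile
import zlib

def png_chunk(tag: bytes, data: bytes) -> bytes:
    # length + tag + data + CRC32 of tag+data, per the PNG spec
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

# Hand-rolled 1x1 grayscale PNG: signature, IHDR, IDAT (one scanline:
# filter byte + one 8-bit pixel), IEND.
png = (b"\x89PNG\r\n\x1a\n"
       + png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))
       + png_chunk(b"IEND", b""))

# A small ZIP archive appended after the PNG bytes.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("hello.txt", "hello from the zip half")

polyglot = png + buf.getvalue()

# zipfile tolerates prepended data (the same mechanism self-extracting
# archives rely on), so the combined bytes still open as a ZIP:
with zipfile.ZipFile(io.BytesIO(polyglot)) as z:
    print(z.read("hello.txt").decode())  # hello from the zip half
```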

2024-12-26 Frontend Developer Roadmap: What is Frontend Development? { roadmap.sh }

by Kamran Ahmed (@kamrify) / X { x.com } kamranahmed.info

image-20241225211642337

image-20241225211806299

2024-12-22 Keeping a CHANGELOG at Work – code.dblock.org | tech blog { code.dblock.org }

image-20241222132225576

2024-12-22 Draw all roads in a city at once { anvaka.github.io }

City Roads: A tool to draw all roads in a city at once

image-20241222131852773

2024-12-21 Designing a calm web reader | James' Coffee Blog { jamesg.blog }

image-20241220205046939

Github: capjamesg/web-reader: A minimal web reader.

2024-12-21 Show HN: Artemis, a Calm Web Reader | Hacker News { news.ycombinator.com }

2024-12-21 Lenns.io - Lenns.io { lenns.io }

RSS The feed reader for people that want to be in control

2024-12-21 Instaloader — Download Instagram Photos and Metadata { instaloader.github.io }

2024-12-21 Grayjay App - Follow Creators Not Platforms { grayjay.app }

image-20241220203851851

2024-12-20 apankrat/nullboard: Nullboard is a minimalist kanban board, focused on compactness and readability. { github.com }

Nullboard is a minimalist take on a kanban board / a task list manager, designed to be compact, readable and quick in use.

image-20241219213319289

2024-12-20 mizu.js | Lightweight HTML templating library for any-side rendering { mizu.sh }

image-20241219180416729

2024-12-16 Alien Covenant (Movie Review) — Boy Drinks Ink { boydrinksink.com }

image-20241215230106026

image-20241215230148145

2024-12-16 Displaying Website Content on an E-Ink Display | Marios Fasold's Website { mfasold.net }

image-20241215225841255

2024-12-13 Perspective | Perspective { perspective.finos.org }

2024-12-13 finos/perspective: A data visualization and analytics component, especially well-suited for large and/or streaming datasets. { github.com }

Perspective is an interactive analytics and data visualization component, which is especially well-suited for large and/or streaming datasets. Use it to create user-configurable reports, dashboards, notebooks and applications, then deploy stand-alone in the browser, or in concert with Python and/or Jupyterlab.

Features

  • A fast, memory efficient streaming query engine, written in C++ and compiled for WebAssembly, Python and Rust, with read/write/streaming for Apache Arrow, and a high-performance columnar expression language based on ExprTK.
  • A framework-agnostic User Interface packaged as a Custom Element, powered either in-browser via WebAssembly or virtually via WebSocket server (Python/Node).
  • A JupyterLab widget and Python client library, for interactive data analysis in a notebook, as well as scalable production Voila applications.

Found in: ✉️ 2024-12-13 JavaScript Weekly Issue 716: December 12, 2024 { javascriptweekly.com }

image-20241212192520691

2024-12-13 Termo - An Easy to use terminal for your browser { termo.rajnandan.com }

Termo is a simple terminal emulator that can be used to create a terminal-like interface on your website. It is inspired by the terminal emulator on stripe.dev. It is a wrapper on top of xterm.js.

Found in: ✉️ 2024-12-13 JavaScript Weekly Issue 716: December 12, 2024 { javascriptweekly.com }

image-20241212192328911

2024-12-07 ibttf/interview-coder { github.com }

An invisible desktop application that will help you pass technical interviews

image-20241207133259801

2024-12-07 Install Docker natively on Android Phone and use it as a Home Server | CrackOverflow { crackoverflow.com }

In this tutorial, we will guide you through the process of installing Docker on your Android phone, specifically a OnePlus 6T running postmarketOS. I also wrote another blog post explaining how you can run this phone without a battery, allowing it to run indefinitely as long as it remains connected to a power source; if you’re interested, feel free to check it out! This guide can be adapted only for phones on the postmarketOS device list. Please note that this process will erase all data on your phone, so it’s important to use a device you don’t need. Let’s get started!

image-20241206175706311

Database​

2024-12-07 Brian Douglas' Tech Blog - Sensible SQLite defaults { briandouglas.ie }

SQLite is cool now: DHH uses it, and Laravel defaults to it. Here is a list of sensible defaults for using SQLite.

2024-12-20 vlcn-io/cr-sqlite: Convergent, Replicated SQLite. Multi-writer and CRDT support for SQLite { github.com }

"It's like Git, for your data."

CR-SQLite is a run-time loadable extension for SQLite and libSQL. It allows merging different SQLite databases together that have taken independent writes.

In other words, you can write to your SQLite database while offline. I can write to mine while offline. We can then both come online and merge our databases together, without conflict.

In technical terms: cr-sqlite adds multi-master replication and partition tolerance to SQLite via conflict free replicated data types (CRDTs) and/or causally ordered event logs.

2024-10-10 Optimizing Postgres table layout for maximum efficiency { r.ena.to }

When modeling a Postgres database, you probably don’t give much thought to the order of columns in your tables. After all, it seems like the kind of thing that wouldn’t affect storage or performance. But what if I told you that simply reordering your columns could reduce the size of your tables and indexes by 20%? This isn’t some obscure database trick — it’s a direct result of how Postgres aligns data on disk.

In this post, I’ll explore how column alignment works in Postgres, why it matters, and how you can optimize your tables for better efficiency. Through a few real-world examples, you’ll see how even small changes in column order can lead to measurable improvements.

2024-11-17 What I Wish Someone Told Me About Postgres | ChallahScript { challahscript.com }

I’ve been working professionally for the better part of a decade on web apps and, in that time, I’ve had to learn how to use a lot of different systems and tools. During that education, I found that the official documentation typically proved to be the most helpful.

Except…Postgres. It’s not because the official docs aren’t stellar (they are!)–they’re just massive. For the current version (17 at the time of writing), if printed as a standard PDF on US letter-sized paper, it’s 3,200 pages long. It’s not something any junior engineer can just sit down and read start to finish.

So I want to try to catalog the bits that I wish someone had just told me before working with a Postgres database. Hopefully, this makes things easier for the next person going on a journey similar to mine.

Note that many of these things may also apply to other SQL database management systems (DBMSs) or other databases more generally, but I’m not as familiar with others so I’m not sure what does and does not apply.

Math​

2024-12-18 ✨ How I Used Linear Algebra to Build an Interactive Diagramming Editor — and Why Matrix Math is Awesome | by Ivan Shubin | Dec, 2024 | ITNEXT { itnext.io }

image-20241217222229575

C || C++​

2024-12-27 C++ 'Type Erasure' Explained | Dave Kilian's Blog { davekilian.com } { 2014 }

I recently stumbled across this pattern on a Hacker News post. It’s a neat toy, but I had a hard time finding a good explanation (most of the information I found jumped straight into examples before really motivating what was going on). In this post, I’ll try to derive the pattern from first principles instead.

C++ mock libraries:

2024-12-21 Fixing C strings { thasso.xyz }

2024-09-29 It is never too late to write your own C/C++ command-line utilities – Daniel Lemire's blog { lemire.me }

You know those moments when your code feels sluggish, and you wonder if there’s a better way? Sometimes, there is. Daniel Lemire recently shared a cool story about swapping a Python script for a custom C++ utility and saving their company a ton of cash. The gist? Their Python script, used to process a JSON file every few seconds, was hogging a full CPU core. They reworked it into a C++ program using some smart libraries like simdjson, and the difference was night and day: over ten times faster, turning a snail into a lightning bolt.

Python is great for getting things up and running quickly, but when performance really matters—like shaving off milliseconds in a process that runs all day—C++ can be a game changer. It takes more effort to write, sure, but the payoff in speed and efficiency can be huge. Of course, it’s not all rainbows; setting up dependencies and dealing with compilation takes extra time. But tools like CMake and CPM are making that part a lot less painful these days.

Python’s convenience makes it perfect for many tasks, but when you’re pushing the limits of performance, don’t be afraid to roll up your sleeves and dive into C++. It’s a little extra work upfront, but when the results are this good, it’s worth it. Plus, you might even impress your team with how much you can squeeze out of your hardware. Sometimes, the old-school tools are still the best ones for the job.

Python comes with a lot of bundled functionality whereas C++ requires you to give more thought to dependencies. Thankfully CMake with CPM make recovering the dependencies painless:

include(cmake/CPM.cmake)                        # CPM: a thin package manager on top of FetchContent
CPMAddPackage("gh:fmtlib/fmt#11.0.2")           # formatting library
CPMAddPackage("gh:simdjson/simdjson@3.10.1")    # SIMD-accelerated JSON parsing
CPMAddPackage("gh:fastfloat/fast_float@6.1.6")  # fast string-to-float parsing
add_executable(main main.cpp)
target_link_libraries(main fmt::fmt simdjson::simdjson FastFloat::fast_float)

2024-11-30 Everything You Never Wanted To Know About Linker Script · mcyoung { mcyoung.xyz }

image-20241130125138039

2024-10-13 Every bug/quirk of the Windows resource compiler (rc.exe), probably - ryanliptak.com { www.ryanliptak.com }

2024-09-13 Safe C++ { safecpp.org }

Over the past two years, the United States Government has been issuing warnings about memory-unsafe programming languages with increasing urgency. Much of the country’s critical infrastructure relies on software written in C and C++, languages which are very memory unsafe, leaving these systems more vulnerable to exploits by adversaries.

2024-09-26 Embedded Scripting Languages { caiorss.github.io }

2024-09-29 Few lesser known tricks, quirks and features of C { jorenar.com }

2024-07-01 Writing GUI apps for Windows is painful - Samuel Tulach

2024-12-21 ysc3839/win32-darkmode: Example application shows how to use undocumented dark mode API introduced in Windows 10 1809. { github.com }

2024-07-06 How to implement a hash table (in C)

2024-07-26 GitHub - cameron314/concurrentqueue: A fast multi-producer, multi-consumer lock-free concurrent queue for C++11

2024-08-22 Do low-level optimizations matter? Faster quicksort with cmov (2020) { cantrip.org }

2024-08-22 A ToC of the 20 part linker essay LWN.net { lwn.net }

2024-10-27 Alternative operator representations - cppreference.com { en.cppreference.com }

2024-10-28 The Curiously Recurring Template Pattern (CRTP) - Fluent C++ { www.fluentcpp.com }

2024-11-07 marovira/lua: The Lua Programming Language with Modern CMake { github.com }

CMake: This is a bundle of the Lua Programming Language v5.4.4 that provides a modern CMake script for easy inclusion into projects and installation. For usage instructions, see the next section.

👂 The Ear of AI​

The Era of AI is now renamed to The Ear of AI because of a silly typo

2024-12-29 How I run LLMs locally - Abishek Muthian { abishekmuthian.com }

image-20241229133313757

2024-12-21 A Gentle Introduction to Graph Neural Networks { distill.pub }

This article explores and explains modern graph neural networks. We divide this work into four parts. First, we look at what kind of data is most naturally phrased as a graph, and some common examples. Second, we explore what makes graphs different from other types of data, and some of the specialized choices we have to make when using graphs. Third, we build a modern GNN, walking through each of the parts of the model, starting with historic modeling innovations in the field. We move gradually from a bare-bones implementation to a state-of-the-art GNN model. Fourth and finally, we provide a GNN playground where you can play around with a real-world task and dataset to build a stronger intuition of how each component of a GNN model contributes to the predictions it makes.

image-20241220205509409

2024-12-21 OpenAI o3 Breakthrough High Score on ARC-AGI-Pub { arcprize.org }

OpenAI's new o3 system - trained on the ARC-AGI-1 Public Training set - has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.

ARC-AGI serves as a critical benchmark for detecting such breakthroughs, highlighting generalization power in a way that saturated or less demanding benchmarks cannot. However, it is important to note that ARC-AGI is not an acid test for AGI – as we've repeated dozens of times this year. It's a research tool designed to focus attention on the most challenging unsolved problems in AI, a role it has fulfilled well over the past five years.

Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don't think o3 is AGI yet. o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.

2024-12-21 Building effective agents Anthropic { www.anthropic.com }

When building applications with LLMs, we recommend finding the simplest solution possible, and only increasing complexity when needed. This might mean not building agentic systems at all. Agentic systems often trade latency and cost for better task performance, and you should consider when this tradeoff makes sense.

image-20241220203730761

2024-12-15 They See Your Photos { theyseeyourphotos.com }

image-20241214194725429

2024-12-12 A ChatGPT clone, in 3000 bytes of C, backed by GPT-2 { nicholas.carlini.com }

image-20241211234622814

2024-11-26 Introducing the Model Context Protocol \ Anthropic { www.anthropic.com }

The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.

2024-11-23 pingcap/autoflow: pingcap/autoflow is a Graph RAG based and conversational knowledge base tool built with TiDB Serverless Vector Storage. Demo: https://tidb.ai { github.com }

image-20241122221752616

2024-11-22 AI and Everything Else - Benedict Evans | Slush 2023 - YouTube { www.youtube.com }

2024-11-22 Presentations — Benedict Evans { www.ben-evans.com }

image-20241121231940817

image-20241121232017319

2024-11-14 AI Makes Tech Debt More Expensive { www.gauge.sh }

GenAI can’t handle high complexity. If you’ve tried tools like Cursor or Aider for professional coding, you know that their performance is highly sensitive to the complexity of the code you’re working on. They provide a dramatic speedup when applying pre-existing patterns, and when making use of existing interfaces or module relationships. However, in ‘high-debt’ environments with subtle control flow, long-range dependencies, and unexpected patterns, they struggle to generate a useful response.

2024-11-07 TutoriaLLM/TutoriaLLM: Self-hosted environment for programming tutorial by LLM { github.com }

TutoriaLLM is a self-hosted programming learning platform for K-12 Education that can be used on the web. It is designed for those who create educational content and those who learn from it.

2024-10-12 AI Winter Is Coming { leehanchung.github.io }

2024-08-27 System Prompts - Anthropic { docs.anthropic.com }

2024-08-28 AI Apocalypse: 80% of Projects Crash and Burn, Billions Wasted says RAND Report - SalesforceDevops.net { Vernon Keenan / salesforcedevops.net }

The most common cause of AI project failure? It’s not the technology – it’s the people at the top. Business leaders often misunderstand or miscommunicate what problems need to be solved using AI. As one interviewee put it, “They think they have great data because they get weekly sales reports, but they don’t realize the data they have currently may not meet its new purpose.”

Many executives have inflated expectations of what AI can achieve, fueled by salespeople’s pitches and impressive demonstrations. They underestimate the time and resources required for successful AI implementation. One interviewee noted, “Often, models are delivered as 50 percent of what they could have been” due to shifting priorities and unrealistic timelines.

Data quality emerged as the second most significant hurdle. “80 percent of AI is the dirty work of data engineering,” an interviewee stated. “You need good people doing the dirty work—otherwise their mistakes poison the algorithms.”

2024-09-13 Notes on OpenAI’s new o1 chain-of-thought models { simonwillison.net }

2024-09-13 2205.11916 Large Language Models are Zero-Shot Reasoners { arxiv.org }

Let's think step-by-step

2024-09-18 zlwaterfield/scramble: Open-Source Grammarly Alternative { github.com }

A very simple Chromium and Firefox extension example to fix grammar or rewrite the text on any website

2024-09-18 WonderWorld { kovenyu.com }

Interactive Scene Generation

WonderWorld enables real-time rendering and fast scene generation, letting a user navigate existing scenes and specify where, and what, to generate next. Here are examples where a user specifies scene contents (via text) and locations (via camera movement) to create a virtual world. Videos here are accelerated.

2024-09-18 STORM { storm.genie.stanford.edu }

image-20241229151442927

2024-09-18 punnerud/Local_Knowledge_Graph { github.com }

image-20241229151731168

2024-09-01 Discussion thread Programming with ChatGPT | Hacker News { news.ycombinator.com }

simonw:

I'm increasingly building entire functional prototypes from start to finish using Claude 3.5 Sonnet. It's an amazing productivity boost. Here are a few recent examples:

  • Image Resize Quality Tool: This is a tool for dropping in an image and instantly seeing resized versions of that image at different JPEG qualities, each of which can be downloaded. I used to use the (much better) Squoosh for this, but my cut-down version is optimized for my workflow (picking the smallest JPEG version that remains legible). Notes and prompts on how I built it are available here.

  • django-http-debug: This is an actual open-source Python package I released that was mostly written for me by Claude. It's a webhooks debugger where you can set up a URL, and it will log all incoming requests to a database table for you. Notes on how I built it are available here.

  • datasette-checkbox: This is a Datasette plugin that adds toggle checkboxes to any table with is_ or has_ columns. An animated demo and prompts showing how I built the initial prototype can be found here.

  • Gemini BBox Tool: This is a tool for trying out Gemini 1.5 Pro's ability to return bounding boxes for items it identifies. You'll need a Gemini API key for this one, or you can check out the demo and notes here.

  • Gemini Chat Tool: This is a similar tool for trying out different Gemini models (Google released three more yesterday) with a streaming chat interface. Notes on how I built it are available here.

I still see some people arguing that LLM-assisted development like this is a waste of time, and they spend more effort correcting mistakes in the code than if they had written it from scratch themselves.

I couldn't disagree more. My development process has always started with prototypes, and the speed at which I can get a proof-of-concept prototype up and running with these tools is quite frankly absurd.

2024-06-27 dropofahat.zone

2024-06-27 I am using AI to drop hats outside my window onto New Yorkers | Hacker News

image-20241201142019035

2024-12-26 Apache NiFi { nifi.apache.org }

image-20241226141940077

image-20241226142027171

2024-12-26 Apache NiFi on Azure - Azure Architecture Center | Microsoft Learn { learn.microsoft.com }

image-20241226142318179

· 13 min read

⌚ Nice watch!​

2024-12-15 Dependency Injection in C++ - A Practical Guide - Peter Muldoon - C++Now 2024 - YouTube { www.youtube.com }

image-20241215015248533

Long talk! Only list of the topics covered. I personally want to focus on "Inheritance and Virtual Functions" and "Template-Based Dependency Injection" with concepts. Concepts look really cool.

Methods of Dependency Injection

  • Link-Time Dependency Injection
    • Overview and explanation
    • Issues with link-time DI (fragility, undefined behavior, ODR violations)
    • Reasons to avoid link-time DI in modern systems
  • Inheritance and Virtual Functions
    • Base class and derived classes for DI
    • Interface-based DI (abstract interfaces)
    • Drawbacks (interface bloat, large interface sizes, tight coupling)
  • Template-Based Dependency Injection
    • Using templates to achieve DI
    • Benefits of compile-time DI
    • Concepts (C++20) for template constraints
    • Pros and cons of using templates for DI
  • Type Erasure (std::function)
    • Using std::function for DI
    • Flexibility and run-time benefits
    • Overhead and runtime costs of std::function
  • Null Object Pattern
    • Creating "null" objects for dependency injection
    • Use cases and benefits
    • How to use null objects for testing
  • Setter Injection
    • Description of setter-based DI
    • Problems with setter injection (state mutation, initialization order issues)
    • Why setter injection is generally avoided
  • Method Injection
    • Description of method-level DI
    • Pros (clearer interfaces) and cons (interface bloat)
  • Constructor Injection
    • Constructor-level DI for immutability
    • Best practices for constructor injection
    • Drawbacks (API changes, large constructor argument lists)
  • Dependency Suppliers (Factory Functions)
    • Using supplier functions to control dependency injection
    • How dependency suppliers differ from service locators

2024-12-14 Master Tailwind CSS Crash Course 2024 | not a tutorial - YouTube { www.youtube.com }

by Ankita Kulkarni

image-20241214155934482

/* Introduction */
// This document serves as a comprehensive reference sheet for key Tailwind CSS concepts and utilities.
// Each section focuses on a major topic, providing a functional code sample that covers its subtopics.
// Use this guide as a quick reference for essential Tailwind features.

/* 1. Core Concepts of Tailwind CSS */
<div class="container mx-auto p-6">
<h1 class="text-4xl font-bold mb-4">Core Concepts of Tailwind CSS</h1>
<p class="text-gray-600">This paragraph demonstrates text utilities, margin, and padding.</p>
<button class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">
Click Me
</button>
</div>

/* 2. Responsive Design */
<div class="grid grid-cols-1 sm:grid-cols-2 md:grid-cols-3 gap-4 p-6">
<div class="bg-red-500 p-4">1</div>
<div class="bg-green-500 p-4">2</div>
<div class="bg-blue-500 p-4">3</div>
<div class="bg-yellow-500 p-4">4</div>
</div>

/* 3. Grid and Flexbox */
<div class="flex flex-col md:flex-row md:justify-between p-6">
<div class="bg-purple-500 p-4 flex-1">Flex Item 1</div>
<div class="bg-orange-500 p-4 flex-1">Flex Item 2</div>
<div class="bg-teal-500 p-4 flex-1">Flex Item 3</div>
</div>

<div class="grid grid-cols-2 md:grid-cols-4 gap-6 p-6">
<div class="bg-pink-500 h-20"></div>
<div class="bg-blue-500 h-20"></div>
<div class="bg-green-500 h-20"></div>
<div class="bg-red-500 h-20"></div>
</div>

/* 4. Padding, Margins, and Spacing */
<div class="p-10 m-10 bg-gray-100">
<h2 class="mb-6">Padding and Margin Example</h2>
<p class="py-4 px-6 bg-white shadow-lg rounded">This box has custom padding and margin.</p>
</div>

/* 5. Borders and Border Radius */
<div class="border-4 border-dashed border-blue-500 rounded-lg p-6 m-6">
<h2 class="text-xl font-bold">Dashed Border with Radius</h2>
<p class="mt-4">This container demonstrates border styles and border radius utilities.</p>
</div>

/* 6. Typography and Text Styling */
<div class="p-6">
<h1 class="text-4xl font-extrabold underline decoration-pink-500">H1 Header</h1>
<h2 class="text-3xl font-semibold mt-4">H2 Header</h2>
<p class="text-base text-gray-700 leading-relaxed mt-2">This is a paragraph demonstrating text styling like font size, color, and line height.</p>
</div>

/* 7. Customizing Colors */
<div class="bg-custom-purple text-white p-6">
<h2 class="text-xl">Custom Color</h2>
<p>Custom colors can be configured in tailwind.config.js</p>
</div>

/* 8. Box Shadows and Drop Shadows */
<div class="shadow-lg p-6 m-6 bg-white rounded-lg">
<h2 class="font-bold">Box Shadow Example</h2>
<p>This container has a large box shadow applied to it.</p>
</div>

/* 9. Customizing Animations and Transitions */
<button class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded transition duration-300 ease-in-out transform hover:scale-105">
Hover Me
</button>

/* 10. Images and Transformations */
<img src="/path/to/image.jpg" class="w-64 h-64 object-cover rounded-full transform rotate-12">

/* 11. State Management */
<input type="text" placeholder="Focus Me" class="focus:outline-none focus:ring-2 focus:ring-blue-500 p-2 border border-gray-300 rounded">

/* 12. Dark Mode in Tailwind */
<div class="dark:bg-gray-800 dark:text-white p-6">
<h2 class="text-xl">Dark Mode Example</h2>
<p>This text changes color in dark mode.</p>
</div>

/* 13. Filters and Effects */
<img src="/path/to/image.jpg" class="w-64 h-64 filter grayscale hover:grayscale-0 transition duration-300">

/* 14. Custom Utility Classes */
<div class="custom-button bg-blue-500 text-white font-bold py-2 px-4 rounded">
Custom Button
</div>

/* 15. Advanced Layout Techniques */
<div class="max-w-4xl mx-auto p-6">
<h2 class="text-2xl font-bold mb-4">Advanced Layout</h2>
<div class="flex justify-center">
<div class="w-1/2 bg-red-500 p-4">50% Width</div>
</div>
</div>

/* 16. Gradients and Backgrounds */
<div class="bg-gradient-to-r from-purple-400 via-pink-500 to-red-500 text-white p-6 rounded-lg">
<h2 class="text-xl font-bold">Gradient Background</h2>
<p>This container has a beautiful gradient background.</p>
</div>

/* 17. Customizing Layouts */
<div class="grid grid-cols-2 gap-4">
<div class="bg-blue-500 h-20"></div>
<div class="bg-green-500 h-20"></div>
<div class="bg-red-500 h-20"></div>
<div class="bg-yellow-500 h-20"></div>
</div>

/* 18. Project Walkthrough */
<div class="p-6">
<h2 class="text-2xl font-bold mb-4">Project Walkthrough</h2>
<p class="text-gray-600">This project demonstrates how all the Tailwind concepts come together to create a cohesive layout.</p>
</div>

/* 19. Additional Resources */
<div class="p-6 bg-gray-100">
<h2 class="text-xl font-bold">Resources</h2>
<ul class="list-disc pl-6">
<li>Official Tailwind CSS Documentation</li>
<li>VS Code Tailwind IntelliSense Plugin</li>
<li>Learning Responsive Design and Dark Mode</li>
</ul>
</div>

2024-12-10 What's a Tensor? - YouTube { www.youtube.com }

image-20241209163109409 image-20241209163220854

2024-11-28 Playing Game on the Mall Wall: Japanese Man's Super-sized Adventure! - YouTube { www.youtube.com }

image-20241127171556754

Nomad Push is a 38-year-old Japanese man who’s homeless and travels all over Japan. On his YouTube channel, he shares his daily life in a really honest and down-to-earth way. You’ll see him doing things like:

  • Sleeping in train stations
  • Exploring abandoned houses
  • Cooking simple meals in parks

Even though he’s dealing with tough times, his videos feel positive and show a side of life most people don’t get to see. A lot of people watching his channel say it’s inspiring, and he’s built a big community of fans who support him. When he hit 100,000 subscribers, another YouTuber, Oriental Pearl, even threw a celebration for him, which shows how much people believe in him.

If you’re learning Japanese, this channel is a goldmine. His videos are full of real Japanese conversations, and he adds subtitles to help viewers follow along. It’s great practice for understanding how people actually talk in Japan.

Nomad Push’s channel is like a window into his life and a journey across Japan at the same time. It’s simple, real, and worth checking out if you’re curious about a different way of seeing the world.

2024-12-02 A Day in the Life of a Japanese Hikikomori (Shut In) - YouTube { www.youtube.com }

image-20241201211155692

Inside the Life of Nito: A Hikikomori Turned Game Developer

Nito, a hikikomori living in Kobe, Japan, has spent the past decade in near-total isolation. Far from idle, he has dedicated the last five years to developing Pull Stay, an old-school beat-em-up game reflecting his experiences as a recluse. The protagonist, a hikikomori himself, battles societal judgment—a theme close to Nito’s heart. Using Unreal Engine, he has self-taught coding, 3D design, and storytelling to bring his vision to life.

A Creative Path Born from Setbacks After graduating from the University of Tokyo, Nito struggled to find his footing in traditional creative fields like writing and doujinshi (independent manga). He shifted to game development when tools like Unreal Engine became accessible. Despite the steep learning curve and his limited English skills, Nito found purpose in creating something meaningful on his own terms.

Breaking Stereotypes and Defying Odds Nito’s life defies the typical hikikomori stereotype of idleness and dependence. His determination and self-taught skills showcase resilience, proving isolation doesn’t equate to lack of ambition. Through Pull Stay, he turns personal struggles into a story that others can relate to and enjoy.

What’s Next? With Pull Stay nearing release on Steam, Nito hopes its success will enable him to collaborate with other creators and travel the world. If it doesn’t take off, he plans to use the game as a portfolio to break into the industry. For now, his story serves as an inspiring reminder of the power of creativity and persistence.

Support Nito by checking out Pull Stay on Steam or sharing his journey with others.

https://store.steampowered.com/app/1179890/Pull_Stay/

image-20241201211045494

2024-11-30 10% Of Engineers Should Get Fired - YouTube { www.youtube.com }

image-20241129170004213

What Are Ghost Engineers? Ghost engineers are unproductive employees contributing less than 10% of a median engineer’s output. They account for up to 10% of the workforce and cost companies $90 billion annually. These individuals often perform minimal tasks, such as making fewer than three commits a month or trivial changes, while collecting full salaries.

Key Insights:

  • Economic Impact: Eliminating ghost engineers could save companies billions and add $465 billion to market caps without reducing performance.
  • Remote Work Paradox: While top engineers excel remotely, the worst also thrive in remote settings. 14% of remote engineers are ghost engineers compared to 6% in-office.
  • Cultural Cost: Ghost engineers demoralize motivated teammates and occupy roles that could go to skilled newcomers.
  • Startups’ Advantage: Startups avoid this issue by demanding accountability from every team member, contributing to their ability to outperform larger organizations.

Why It Matters: Ghost engineers don’t just waste money—they stall innovation, hinder team dynamics, and damage the credibility of remote work. Companies have a unique chance during layoffs to address this inefficiency, open doors to fresh talent, and foster a culture of accountability.

The Way Forward: Fire unproductive workers, improve performance metrics, and rebuild trust in remote work by ensuring accountability. The tech industry’s future depends on tackling this hidden crisis.

Sources:

2024-11-30 Yegor Denisov-Blanch on X: "I’m at Stanford and I research software engineering productivity. We have data on the performance of >50k engineers from 100s of companies. Inspired by @deedydas, our research shows: ~9.5% of software engineers do virtually nothing: Ghost Engineers (0.1x-ers) https://t.co/uygyfhK2BW" / X { x.com }

image-20241129171111447

2024-11-30 Tech's $90B Ghost Engineer Problem: Stanford Study Finds 9.5... { socket.dev }

Das highlighted a few tools of the trade from the “quiet quitting” playbook:

  • “in a meeting” on slack
  • scheduled slack, email, code at late hours
  • private calendar with blocks
  • mouse jiggler for always online
  • “this will take 2 weeks” (1 day)
  • “oh, the spec wasn’t clear”
  • many small refactors
  • “build is having issues”
  • blocked by another team
  • will take time bcuz obscure tech reason like “race condition”
  • “can you create a jira for that?”

2024-12-07 Keynote: Advent of Code, Behind the Scenes - Eric Wastl - YouTube { www.youtube.com }

image-20241206204142279

image-20241206205615587

Hello friends! My name is Eric Wastl, and Advent of Code is a project I created to help programmers improve their skills through small, self-contained challenges. The puzzles start easy and get progressively harder, helping you learn new techniques and develop problem-solving skills. I believe the best way to learn is by solving specific problems, and this project reflects that. We even have C++ in Advent of Code, and I’ll touch on where and how during the talk. Drawing from my experience designing systems for ISPs, auction infrastructure, and marketplaces, Advent of Code is all about celebrating learning, curiosity, and the joy of programming for everyone, no matter their level.

2024-12-07 To Int or To Uint - Alex Dathskovsky - YouTube { www.youtube.com } {C++}

image-20241206224226483

image-20241207113322117

This talk provides valuable insights into handling integers in C++. Integers are fundamental in any program, but improper handling can lead to subtle bugs, undefined behavior, and poor performance. This content explores the complexities of signed and unsigned integers, common mistakes, and how to optimize performance. By understanding these nuances, you'll avoid common pitfalls, write more efficient code, and improve the overall robustness of your applications.

The Basics of Signed and Unsigned Integers

Representation in Memory

  • Unsigned Integers: Simple modulo-2^N representation. Overflow is well-defined: operations that exceed the maximum value wrap around predictably.
  • Signed Integers: Historically, C++ supported several representations, including one’s complement; since C++20, two’s complement is the standard. Overflow is undefined behavior, and operations involving signed integers require careful handling to avoid unexpected results.

Performance Considerations Signed integers often involve additional steps in assembly code, such as preserving the sign bit during division or right shifts. This makes operations on signed integers slower compared to their unsigned counterparts, especially in performance-critical code. For example, unsigned division by two can be replaced by a simple bit shift. Signed division, on the other hand, requires arithmetic shifts that preserve the sign bit, adding extra overhead.

Best Practices for Handling Integers

Use Fixed-Width Integer Types

Explicitly use types like int32_t, uint64_t, and size_t when appropriate. These make your code portable and clear about the expected range of values.

Prefer Signed Types Unless Unsigned Is Necessary

Unsigned integers should only be used when their wrapping behavior is explicitly desired. For most use cases, signed integers are safer and less prone to subtle bugs.

Leverage C++20 and C++23 Features

Modern C++ provides tools like std::ssize and type traits that simplify working with integers. Use these features to avoid common pitfalls and ensure correctness.

Treat Warnings as Errors

Enable strict compiler warnings (-Wall, -Wextra, and -Werror) and sanitizers to catch potential issues early. Compiler tools can often detect problems like signed-unsigned mismatches before they cause runtime errors.

Avoid Overusing auto

While auto simplifies code, it can obscure type information, leading to unexpected behavior. Be explicit with integer types, especially in loops and arithmetic operations.

Author: Alex Dathskovsky

2024-12-07 Demystifying CRTP in C++: What, Why, and How { www.cppnext.com }

2024-12-07 Exposing the not-so-secret practices of the cult of DDD - Chris Klug - - YouTube { www.youtube.com }

image-20241207150005664 image-20241207151010087

image-20241207151736502

image-20241207151910484

2024-12-07 Bosses Are FIRING Gen Z Workers Just Months After Hiring Them. - YouTube { www.youtube.com }

image-20241207154832440

Source: 2024-12-07 1 in 6 Companies Are Hesitant To Hire Recent College Graduates - Intelligent { www.intelligent.com }

· 42 min read

Good Reads

2024-12-01 Legacy Shmegacy - David Reis on Software { davidreiscto.substack.com }

image-20241201132743791

People call some code legacy when they are not happy with it. Usually it simply means they did not write it, so they don’t understand it and don’t feel safe changing it. Sometimes it also means the code has low quality or uses obsolete technologies. Interestingly, in most cases the legacy label is about the people who assign it, not the code it labels. That is, if the original authors were still around the code would not be considered legacy at all.

This model allows us to deduce the factors that encourage or prevent some code from becoming legacy:

  1. The longer programmers’ tenures, the less code will become legacy, since authors will be around to appreciate and maintain it.
  2. The more code is well architected, clear, and documented, the less it will become legacy, since there is a higher chance the author can transfer it to a new owner successfully.
  3. The more the company uses pair programming, code reviews, and other knowledge transfer techniques, the less code will become legacy, as people other than the author will have knowledge about it.
  4. The more the company grows junior engineers, the less code will become legacy, since the best way to grow juniors is to hand them ownership of components.
  5. The more a company uses simple standard technologies, the less likely code will become legacy, since knowledge about them will be widespread in the organization. Ironically if you define innovation as adopting new technologies, the more a team innovates the more legacy it will have. Every time it adopts a new technology, either it won’t work, and the attempt will become legacy, or it will succeed, and the old systems will.

The reason legacy code is so prevalent is that most teams are not good enough at all of the above to avoid it, but maybe you can be.

🥒 2024-12-01 Tech's $90B Ghost Engineer Problem: Stanford Study Finds 9.5... { socket.dev }

Beyond the economic and productivity concerns, ghost engineers pose significant security risks. Their lack of meaningful engagement can lead to a few critical issues: unreviewed or improperly tested code changes, unnoticed vulnerabilities, and outdated systems left unpatched. A disengaged engineer might also miss—or deliberately ignore—critical security protocols, creating potential entry points for malicious actors.

When these engineers aren't actively involved in maintaining secure practices, they can create blind spots in a company’s defense strategy, increasing the risk of breaches or compliance failures. Threat actors can exploit disengaged engineers through phishing, social engineering, or leveraging neglected updates and poorly reviewed code to infiltrate systems and compromise security. Addressing these gaps requires better oversight and collaborative practices.

Before you start side-eyeing your coworkers, it’s worth noting that measuring productivity in software engineering is notoriously tricky. Commit counts or hours logged are often poor indicators of true impact. Some high-performing engineers—the mythical “10x engineers”—produce significant results with fewer, well-thought-out contributions.

However, the “ghost engineer” trend exposes systemic inefficiencies in talent management and performance evaluation. Remote work policies, once heralded as a game-changer, are now under the microscope. They’ve enabled flexibility for many but have also given rise to the ghost engineering phenomenon. The tug-of-war over remote versus in-office work is likely to intensify as companies grapple with these kinds of leadership and accountability issues.

image-20241201002539567

2024-11-30 The deterioration of Google { www.baldurbjarnason.com }

I'm Baldur Bjarnason, a web developer and writer. In my latest essay, I wrote about the decline of Google and its impact on independent publishers.

image-20241130153023657

Here's a quick summary:

  1. Independent Publishers Struggling: Many independent sites are shutting down due to a lack of traffic from Google and Facebook.
  2. Google's Machine Learning Issues: Google's attempt to improve search results with machine learning has backfired, letting spam through and delisting quality content.
  3. Economic Impact: Even frugally run sites can't survive on the remaining traffic, leading to significant financial struggles for creators.
  4. Algorithm Black Box: Google's algorithm has become so complex that even their engineers can't fully understand or fix it.
  5. Monopoly Power: Google's monopoly allows it to capture value without improving product utility, leaving users with fewer alternatives.

2024-11-30 15 Lessons From 15 Years of Indie App Development { lukaspetr.com }

Hey there, I'm Lukas Petr, an indie iOS app developer from Prague. Over the past 15 years, I've learned a lot about the ups and downs of indie app development. Here are some key takeaways:

image-20241130153146170

  1. Enjoy the Process: Loving what you do is crucial. If you don't enjoy the journey, it will be tough to stick with it.
  2. Understand Your Motivation: Know why you're doing this. For me, it's about creating something meaningful and useful.
  3. Risk and Reward: The risk is high, but the reward of fulfilling work and ownership is worth it.
  4. Find Your Niche: Focus on what you believe in and what scratches your own itch.
  5. Provide Additional Value: Aim for sustainable value over time, not just quick gains.
  6. Wear Many Hats: Be prepared to handle everything from development to marketing.
  7. Reflect Regularly: Regular introspection helps you stay on track and improve.
  8. Learn and Apply Lessons: Keep evolving and improving based on your experiences.
  9. Find Support: Surround yourself with people who can help propel you forward.
  10. Luck: Sometimes, success involves a bit of luck, but you have to put yourself out there.

I hope you find these insights helpful. If you're pursuing any creative endeavor, I'm rooting for you! Feel free to reach out if you have any questions or comments.

2024-11-24 A career ending mistake — Bitfield Consulting { bitfieldconsulting.com }

A career-ending mistake isn't always a catastrophic error like shutting down a nuclear power station or deleting a production database; it's often subtler, like failing to plan for the end of your career. The article explores how many of us rush through our professional lives without a clear destination, highlighting that "career" itself can mean "to rush about wildly." It asks the critical questions: “Where do you want to end up? And is that where you're currently heading?” Instead of drifting, the piece advises us to define what we truly want, as "The indispensable first step to getting what you want is this: decide what you want." Whether you're content in your current role or seeking something more fulfilling, understanding your end goal and working intentionally toward it is key to avoiding a career that feels out of control.

Fun quote:

Engineering managers need a solid foundation of technical competence, to be sure, but the work itself is primarily about leading, supervising, hiring, and developing the skills of other technical people. It turns out those are all skills, too, and relatively rare ones.

Managing people is hard; much harder than programming. Computers just do what you tell them, whether that’s right or wrong (usually wrong). Anyone can get good at programming, if they’re willing to put in enough time and effort. I’m not sure anyone can get good at managing, and most don’t. Most managers are terrible.

That’s quite a sweeping statement, I know. (Prove me wrong, managers, prove me wrong.) But, really, would a car mechanic last long in the job if they couldn’t fit a tyre, or change a spark plug? Would a doctor succeed if they regularly amputated the wrong leg? We would hope not. But many managers are just as incompetent, in their own field, and yet they seem to get away with it.

2024-11-23 the tech utopia fantasy is over | ava's blog { blog.avas.space }

Growing up, I had a positive view of tech, believing it would bring comfort, less work, and personalized assistance. However, the reality has been different, with tech companies failing to deliver on their promises and instead contributing to issues like disinformation, economic inequality, and environmental harm. While there have been some benefits, such as increased political knowledge and social connections, the negatives now overshadow the positives. The tech utopia fantasy is truly dead to me.

2024-11-18 Good software development habits | Zarar's blog { zarar.dev }

  1. Keep Commits Small: Keep each commit focused on a single change to make it easier to track and revert issues. Code that compiles should be committable.
  2. Refactor Continuously: Follow Kent Beck's advice: make changes easy, then make the easy changes. Frequent, small refactorings prevent complex reworks.
  3. Deploy Regularly: Treat deployed code as the only true measure of progress. Frequent deployments ensure code reliability.
  4. Trust the Framework: Don’t test features already covered by the framework; focus on testing your unique functionality, especially with small components.
  5. Organize Independently: If a function doesn’t fit anywhere, create a new module. It’s better to separate logically independent code.
  6. Write Tests First (Sometimes): If unsure about an API’s design, start with tests to clarify requirements. TDD doesn’t have to be strict—write code in workable chunks.
  7. Avoid Duplication After the First Copy-Paste: If code is duplicated, it’s time for an abstraction. Consolidating multiple versions is harder than parameterizing one.
  8. Accept Design Change: Designs inevitably get outdated. Good software development is about adapting to change, not achieving a “perfect” design.
  9. Classify Technical Debt: Recognize three types of technical debt: immediate blockers, future blockers, and potential blockers. Minimize the first, address the second, and deprioritize the third.
  10. Prioritize Testability in Design: Hard-to-test code hints at design issues. Improve testability through smaller functions or test utilities to avoid skipping tests.

🔥2024-11-14 Lessons from my First Exit · mtlynch.io { mtlynch.io }

Selling my first business was a journey filled with excitement, stress, and invaluable lessons. I want to share my experiences to help other entrepreneurs who might be considering a similar path. This post is especially relevant for small business owners and startup founders looking to navigate the complexities of a business exit.


Quote:

Used dedicated accounts for the business

Part of what made TinyPilot’s ownership handoff smooth was that its accounts and infrastructure were totally separate from my other business and personal accounts:

  • I always sent emails related to the business from my @tinypilotkvm.com email address.
  • I always used @tinypilotkvm.com email addresses whenever signing up for services on behalf of TinyPilot.
  • I kept TinyPilot’s email in a dedicated Fastmail account.
    • This wasn’t true at the beginning. TinyPilot originally shared a Fastmail account with my other businesses, but I eventually migrated it to its own standalone Fastmail account.
  • I never associated my personal phone number with TinyPilot. Instead, I always used a dedicated Twilio number that forwarded to my real number.
  • All account credentials were in Bitwarden.

After closing, handing over control was extremely straightforward. I just added the new owner to Bitwarden, and they took over from there. There were a few hiccups around 2FA codes I’d forgotten to put in Bitwarden, but we worked those out quickly.


For example, TinyPilot uses the H.264 video encoding algorithm. It’s patented, so we had to get a license from the patent holder before we shipped that feature. During due diligence, we discovered that the patent license forbade me from transferring the license in an asset sale.

I immediately started imagining the worst possible outcome. What if the patent holder realizes they can block the sale, and they demand I pay them $100k? What if the patent holder just can’t be bothered to deal with a tiny business like mine, and they block the sale out of sheer indifference?

🔥 2024-11-08 Blog Writing for Developers { rmoff.net }

Like a favourite pair of jeans that’s well-worn, comfy, and slightly saggy round the arse, I have a go-to structure for writing. Come to think of it, I use it for lots of conference talks too. It looks like this:

  1. Tell them what you’re going to tell them
  2. Tell them
  3. Tell them what you told them

What this looks like in practice is something along these lines:

  1. An intro

    What is this thing, and why should the reader be interested?

    This could be a brief explanation of why I am interested in it, or why you would want to read my take on it. The key thing is you’re relating to your audience here. Not everyone wants to read everything you write, and that’s ok.

    Let people self-select out (or in, hopefully) at this stage, but make it nice and easy. For example, if you’re writing about data engineering, make it clear to the appdev crowd that they should move on as there’s nothing to see here (or stick around and learn something new, but as a visitor, not the target audience).

  2. The article itself

  3. A recap

    Make sure you don’t just finish your article with a figurative mic drop—tie it up nicely with a bow (a 🙇🏻 or a 🎀, either works).

    This is where marketing would like to introduce you to the acronym CTA (Call To Action) 😉. As an author you can decide how or if to weave that into your narrative.

    Either way, you’re going to summarise what you just did and give people something to do with it next. Are there code samples they can go and run or inspect? A new service to sign up for? A video to watch? Or just a general life reflection upon which to ponder.

2024-11-07 Monorepo - Our experience { ente.io }

We switched to a monorepo nine months ago, and it’s been working well for us. Before, we had multiple repositories, which made things like managing pull requests or syncing changes a hassle. With everything in one place now, the workflow feels smoother and simpler. It wasn’t a decision we overanalyzed; it just felt like the right time to try it, and we’ve been happy with the results.

The main pros? First, there’s less repetitive work. Instead of opening multiple pull requests across repos for a single change, now it’s just one. Submodules, which were always a pain to manage, are mostly gone. Everything that needs to work together stays in sync naturally. Refactoring has also become easier because we can see the whole picture in one place, which encourages code improvements over time. Plus, being in the same repo has made us feel more connected as a team. Even small things, like seeing everyone’s changes when pulling updates, help us stay in the loop without extra effort.

As for cons, we honestly haven’t found many. A common concern is that monorepos can get messy or slow as they grow, but for our small team, it hasn’t been an issue. We kept it simple—no strict rules, just “don’t touch the root folder”—and it’s been fine. It might not work the same for larger teams or projects with different dynamics, but for us, it’s been a clear win.

2024-10-14 LogLog Games { loglog.games }

I spent three years using Rust for game development, and after shipping a few games and writing over 100,000 lines of code, I’m stepping away from it. Rust has some great qualities—its performance is top-notch, and it often lets you refactor confidently. But for fast, iterative development, which is crucial for indie games, it just doesn't align well. The borrow checker and Rust’s strictness often force unnecessary refactoring, slowing down the process of prototyping and testing new ideas. Tools like hot reloading, essential for quick feedback loops, are either clunky or nonexistent in Rust. And while the language excels in many technical areas, its game development ecosystem is still young, with fragmented solutions and limited support for things like GUI and dynamic workflows.

For small teams like ours, the priority is delivering fun, polished games quickly. With Rust, I found myself spending more time fighting the language and its ecosystem than focusing on gameplay. Moving forward, we’re transitioning to tools that better support rapid iteration and creativity, even if they’re less "perfect" on paper.

2024-09-29 It's hard to write code for computers, but it's even harder to write code for humans ¡ Erik Bernhardsson { erikbern.com }

image-20241201135538368

Onboarding is Key: Users should get started quickly and see results fast. Fix: Simplify setup. Remove steps and make the tool easy to use immediately. For example, ensure API tokens are ready without extra configuration. The faster users see success, the more likely they’ll stick around.

Show Examples First: Abstract explanations confuse users. Fix: Use examples instead of long concepts. Show how the tool works with real use cases. When I write docs, I always start with practical examples users can copy and tweak.

Errors Need Solutions: Errors frustrate users. Fix: Make error messages helpful. Suggest fixes and show code snippets. A clear path back to success turns frustration into trust.

Avoid Too Many Ideas: Too much upfront information overwhelms users. Fix: Keep it simple. Focus on a few core ideas to start. When I design a tool, I aim for 3-5 basic concepts that cover most use cases. Fewer concepts, fewer headaches.

Use Familiar Terms: New words confuse people. Fix: Use common terms like "function" instead of inventing new ones. I think about how people already think about code and try to fit my tool into their existing mental model.

Flexibility Matters: Rigid tools frustrate creative users. Fix: Let users program their own solutions with APIs or scripts. Make everything programmable so users can adapt the tool to their needs.

Don’t Overdo Magic: Hidden behaviors often fail in edge cases. Fix: Keep defaults clear and reliable. Avoid adding unnecessary complexity. Unless I’m 99% sure a “magic” behavior will always work, I avoid it. Instead, I focus on being predictable.

Clarity Over Brevity: Short, clever code is hard to read. Fix: Write clear, readable code. Make it easy to follow. I remind myself: people read code far more than they write it.

2024-09-29 Too much efficiency makes everything worse: overfitting and the strong version of Goodhart’s law | Jascha’s blog { sohl-dickstein.github.io }

When you optimize too much, you can make things worse instead of better. This is the essence of the strong version of Goodhart’s Law: when a measure becomes the target, over-optimization can degrade what you originally cared about. This principle, often studied as "overfitting" in machine learning, also applies broadly to systems like education, economics, and governance.

The Problem: When proxies (measurements or secondary goals) are optimized too well, the actual outcomes worsen. For instance, standardized testing shifts focus from genuine learning to test preparation, undermining education. Similarly, rewarding scientists for publications incentivizes trivial or false findings over meaningful progress. Overfitting to proxies creates harmful side effects, from filter bubbles in social media to inequality in capitalism.

How to Fix It: Lessons from Machine Learning

  1. Better Alignment: Make proxies closer to real goals. In machine learning, this involves better data collection. In broader systems, it means crafting laws, incentives, and norms that encourage genuine outcomes, like prioritizing long-term learning over test scores.
  2. Regularization: Introduce penalties or costs for extreme behaviors. Just as machine learning uses mathematical constraints, systems can add friction:
    • Tax extreme wealth disparities or excessive lawsuits.
    • Impose costs for high-volume actions, like bulk emails or algorithmic trading.
    • Penalize complexity to discourage harmful optimization.
  3. Inject Noise: Add randomness to disrupt harmful optimization. Examples include:
    • Randomized selection in competitive admissions to reduce over-preparation.
    • Random trade processing delays to stabilize financial markets.
    • Unpredictable testing schedules to encourage holistic studying.
  4. Early Stopping: Halt optimization before it spirals out of control. In systems, this could mean:
    • Capping time spent on decision-making relative to its stakes.
    • Freezing certain information flows, like press blackouts before elections.
    • Splitting monopolies to prevent market over-consolidation.
  5. Restrict or Expand Capabilities:
    • Restrict: Limit system capacities to prevent runaway effects, like capping campaign finances or AI training resources.
    • Expand: In some cases, more capacity reduces trade-offs, such as developing clean energy or transparent information systems.

BibTeX entry for post:
@misc{sohldickstein20221106,
  author = {Sohl-Dickstein, Jascha},
  title = {{ Too much efficiency makes everything worse: overfitting and the strong version of Goodhart's law }},
  howpublished = "\url{https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart.html}",
  date = {2022-11-06}
}

2024-09-29 Measuring Developers' Jobs-to-be-done - by Abi Noda { substack.com }

2024-09-29 Measuring Developers' Jobs-to-be-done | Hacker News { news.ycombinator.com }

Google used to measure how well developer tools worked by evaluating how they supported certain tasks, like "debugging" or "writing code." However, this approach often lacked specificity that would be useful for tooling teams. For instance, "searching for documentation" is a common task, but the reason behind it—whether it's to "explore technical solutions" or "understand the context to complete a work item"—can meaningfully change a developer's experience and how well tools support them in achieving their goal.

To provide better insights, Google researchers identified the key goals developers are trying to achieve in their work and developed measurements for each goal. In this paper, they explain their process and share an example of how this new approach has benefited their teams.

image-20241201140955980

2024-10-05 Bureaucrat mode - by Andrew Chen - @andrewchen { andrewchen.substack.com }

As companies scale, they often shift from the agile, conviction-driven "Founder mode" to "Bureaucrat Mode," where decision-making slows, and processes dominate. While startups thrive on speed and direct action, large organizations tend to create committees, expand scopes, and reward consensus over outcomes. These tendencies, while rooted in good intentions like collaboration and stability, can cripple innovation and efficiency when scaled excessively.

The Problem: Bureaucrat Mode emerges as companies grow, driven by processes meant to manage complexity. However, these processes often become self-perpetuating, encouraging behaviors that prioritize internal metrics, visibility, and team expansion over meaningful results. Bureaucrats, focused on navigating processes rather than solving problems, replicate themselves by hiring others who thrive in such environments. This cycle of self-replication entrenches inefficiency and resistance to change.

image-20241201141318646

2024-10-10 How to make Product give a shit about your architecture proposal – Andy G's Blog { gieseanw.wordpress.com }

When dealing with Product teams about your architecture proposal, picture yourself as a plumber who's trying to sell different service packages. This analogy highlights how you should present your technical proposals to Product in a way that aligns with their focus on business value. They’re not interested in technical jargon; they want to know how your architecture decision translates into a return on investment.

Remember that Product people are looking for results. Instead of overwhelming them with details about OLTP systems or ETL processes, you need to frame your explanation as a negotiation — highlighting the costs and benefits of each option, just like the plumber did with his service packages.

"Product doesn’t give a shit about how your data is stored. Product cares about products."

The essence here is to avoid diving into the weeds of indexes or table joins until they understand the impact on their budget and timeline. When they ask, “Why is this so expensive?” that’s your cue to explain, in clear terms, the complexity involved in implementing things like OLAP systems or setting up ETL processes.

Approach your conversation by outlining different “packages” — starting with the 🥇 platinum package that covers all technical needs but at a higher cost. This sets the stage for a value discussion, where Product sees the full picture and starts to understand the trade-offs involved.

"Now you can (gently) talk to them about the difference between online transaction processing systems (OLTP) and online analysis processing systems (OLAP)."

The trick is to guide Product through a step-by-step decision-making process, laying out each feature as a line item on an invoice. This approach helps them grasp which elements of your proposal can be trimmed down or delayed to fit within their budget constraints. For example, if they can't afford a new OLAP system, offer scaled-down options, and negotiate on scope and time rather than quality.

🔥 One of the most crucial points is not to compromise on quality. In software development, you should avoid falling into the trap of lowering standards just to meet short-term goals. Sacrificing quality often leads to delivering subpar products that can damage customer satisfaction in the long run. As the article states, “What’s worse, delivering something a customer actually hates, or delivering nothing at all?” Maintaining a baseline of quality ensures that even with limited resources, you're delivering something worthwhile.

If the Product team suggests cutting corners to fit the project into a two-week sprint, resist the temptation. The iron triangle of software development — time, scope, and budget — should always consider quality as a non-negotiable factor.

Ultimately, you're helping Product to ruthlessly prioritize tasks to deliver the best possible outcomes within the given constraints. In these negotiations, scope will often be the main variable that can be adjusted to balance the budget and timeline. And when the tables turn, and it’s your idea that needs their buy-in, present it in terms of ROI to make a compelling case.

Think like a plumber: when you know the value of what you’re selling, it’s easier to convince others to invest in the right solution instead of a quick fix. Always push for a solution that maintains a minimum level of quality, even if it means delivering less within the same time frame.

2024-11-03 Get Me Out Of Data Hell — Ludicity { ludic.mataroa.blog }

📹 2024-11-03 Nikhil Suresh - Skills that programmers need, to defend both their code and their careers - YouTube { www.youtube.com }

This blog narrates an engineer's daily struggle with an overly complex and inefficient data warehouse system. Despite working within an ostensibly supportive team, the engineer describes their workplace as a "Pain Zone," rife with convoluted processes, unchecked errors, and cultural dissonance. Here’s a detailed breakdown of the main points:

The story begins with a ritual of starting the day with a senior engineering partner. Together, they embark on a shared mission to navigate the "Pain Zone," their term for the warehouse system plagued by unnecessary complexity. The data warehouse in question involves copying text files from different systems, and ideally, this process should require only ten steps. However, the engineer discovers over 104 discrete operations in the architecture diagram, a staggering example of the platform's inefficiency.

"Retrieve file. Validate file. Save file. Log what you did. Those could all be one point on the diagram...That's ten. Why are there a hundred and four?"

The engineer describes the necessity of "Pain Zone navigation," a practice where engineers rely on pair programming for moral support to withstand the psychological toll of working in such an environment. The issue isn’t only technical; it’s deeply cultural. A culture that demands velocity while disregarding craftsmanship fosters an atmosphere where complexity and inefficiency go unchallenged. This attitude, the author suggests, results in the degradation of code quality, with engineers penalized for trying to refactor code.

To illustrate the dysfunction further, the author recounts a routine task: checking if data from sources like Google Analytics is flowing correctly. What they find instead is garbled JSON strings dumped in the logs without logical structure, with 57,000 distinct entries where there should be fifty. This revelation shows that for over a year, the team has been collecting "total nonsense" in the logs.

"We only have two jobs. Get the data and log that we got the data. But the logs are nonsense, so we aren't doing the second thing, and because the logs are nonsense I don't know if we've been doing the first thing."

Rather than address this critical error, management insists on working with the erroneous logs to maintain "velocity," a term often implying efficiency but, in this case, prioritizing speed over accuracy. The author describes the frustration of being told to parse nonsensical data instead of fixing the core issues—a situation summarized by the team motto: "Stop asking questions, you're only going to hurt yourself."

The cultural disconnect deepens as the author tries to work with data from Twitter, only to find that log events lack an event ID. A supposed expert suggests using a column with ambiguous file path strings, each lacking logical identifiers, requiring complex regular expressions to infer events.

"I am expected to use regular expressions to construct a key in my query."

In yet another disheartening revelation, the author learns that the Validated: True log entries are merely hardcoded placeholders, not actual validation statuses. The logs fail to capture real system states, effectively undermining auditability.

By the end, the author reaches a breaking point, realizing their values diverge sharply from those of the organization. This disconnect prompts them to resign, choosing to invest their time in personal projects and consulting instead. In a closing reflection, they criticize the industry for investing in trendy tools like Snowflake and Databricks without hiring engineers who understand how to design simple, effective systems.

"I could build something superior to this with an ancient laptop, an internet connection, and spreadsheets. It would take me a month tops."

This piece is a critique of both overly complex architectures and a corporate culture that prioritizes speed over quality. It highlights the importance of valuing craftsmanship and straightforward design in building sustainable and efficient data systems.

2024-11-24 SciFi book: Manna – Table of Contents | MarshallBrain.com { marshallbrain.com } (RIP Marshall)

With half of the jobs eliminated by robots, what happens to all the people who are out of work? The book Manna explores the possibilities and shows two contrasting outcomes, one filled with great hope and the other filled with misery.

Join Marshall Brain, founder of HowStuffWorks.com, for a skillful step-by-step walk through of the robotic transition, the collapse of the human job market that results, and a surprising look at humanity’s future in a post-robotic world.

Then consider our options. Which vision of the future will society choose to follow?

image-20241124100143433

  • 😺 The building we exited was another one of the terrafoam projects. Terrafoam was a super-low-cost building material, and all of the welfare dorms were made out of it. (Chapter 4)

Newsletters​

2024-09-20 JavaScript Weekly Issue 705: September 19, 2024 { javascriptweekly.com }

2024-09-29 Digital signatures and how to avoid them { newsletter.programmingdigest.net }

2024-09-29 Implementing Blocked Floyd-Warshall algorithm { newsletter.csharpdigest.net }

2024-10-18 JavaScript Weekly Issue 709: October 17, 2024 { javascriptweekly.com }

2024-10-20 How Discord Reduced Websocket Traffic by 40% { newsletter.programmingdigest.net }

2024-10-27 A Brief Introduction to the .NET Muxer { newsletter.csharpdigest.net }

2024-10-27 That's Not an Abstraction { newsletter.programmingdigest.net }

2024-11-17 Exploring the browser rendering process { newsletter.programmingdigest.net }

2024-12-01 Legacy Shmegacy { newsletter.programmingdigest.net }

Working with People​

2024-11-23 Take the Thomas-Kilmann Instrument | Improve How You Resolve Conflict {kilmanndiagnostics.com}

image-20241123122604326

Related:

In conflict situations, individuals exhibit different behavioral strategies for managing disagreements. Conflict management models like the Thomas-Kilmann Conflict Mode Instrument (TKI) commonly identify five such strategies, beginning with avoiding:

Avoiding

  • Behavior: The individual sidesteps or withdraws from the conflict, neither pursuing their own concerns nor those of the other party.
  • When it's useful: When the conflict is trivial, emotions are too high for constructive dialogue, or more time is needed to gather information.
  • Risk: Prolonging the issue may lead to unresolved tensions or escalation.

Competing

  • Behavior: The individual seeks to win the conflict by asserting their own position, often at the expense of the other party.
  • When it's useful: When quick, decisive action is needed (e.g., in emergencies) or in matters of principle.
  • Risk: Can damage relationships and lead to resentment if overused or applied inappropriately.

Accommodating

  • Behavior: The individual prioritizes the concerns of the other party over their own, often sacrificing their own needs to maintain harmony.
  • When it's useful: To preserve relationships, resolve minor issues quickly, or demonstrate goodwill.
  • Risk: May lead to feelings of frustration or being undervalued if used excessively.

Compromising

  • Behavior: Both parties make concessions to reach a mutually acceptable solution, often splitting the difference.
  • When it's useful: When a quick resolution is needed and both parties are willing to make sacrifices.
  • Risk: May result in a suboptimal solution where neither party is fully satisfied.

Collaborating

  • Behavior: The individual works with the other party to find a win-win solution that fully satisfies the needs of both.
  • When it's useful: When the issue is important to both parties and requires creative problem-solving to achieve the best outcome.
  • Risk: Requires time and effort, which may not always be feasible in time-sensitive situations.

Each of these strategies has its strengths and limitations, and the choice of approach often depends on the context of the conflict, the relationship between the parties, and the desired outcomes.

Wellbeing​

2024-11-03 On Burnout, Mental Health, And Not Being Okay — Ludicity { ludic.mataroa.blog }

In this deeply personal blog post, the author reflects on the mental health struggles that many people face, sharing candid experiences with burnout and severe depression. They emphasize that everyone will have times when they are "Not Okay," and it's important to acknowledge this without shame. Through their own journey of overcoming hardship—ranging from academic pressures to toxic workplaces—they highlight the significance of seeking help, making lifestyle changes, and understanding that recovery is possible. The author encourages readers to care for themselves and others, reminding us that empathy and support can make a profound difference in navigating life's challenges.

✨ New wiki category:

2024-12-01 Psy-Burnout (mental wellbeing) { blog.zharii.com }

image-20241201143408170

Fun / Retro​

2024-11-23 calculatorwords.pdf 344 Words You Can Spell On a Calculator

Compiled by Jim Bennett 2014

image-20241123191431682

ALL NUMBERS ARE HERE

A sample of the list (word = calculator digits, read upside down):

BE = 38, BEE = 338, BEEBE = 38338, BEES = 5338, BEG = 638, BEIGE = 36138,
BIBLE = 37818, BOOGIE = 316008, EGGSHELL = 77345663, GIGGLE = 376616,
GOOGLE = 376006, HELLO = 0.7734, ILLEGIBLE = 378163771, OBOE = 3080,
SIZZLE = 372215

2024-11-23 Rendering “modern” Winamp skins in the browser / Jordan Eldredge { jordaneldredge.com }

image-20241122173439101

2024-11-11 Pieter.com - Pieter's Official Homepage { pieter.com }

image-20241110220126116

2024-11-07 MAX SIEDENTOPF — Passport Photos { maxsiedentopf.com }

image-20241106225118042

2024-10-13 stenzek/duckstation: Fast PlayStation 1 emulator for x86-64/AArch32/AArch64/RV64 { github.com }

DuckStation is a simulator/emulator of the Sony PlayStation(TM) console, focusing on playability, speed, and long-term maintainability. The goal is to be as accurate as possible while maintaining performance suitable for low-end devices. image-20241201135052551

2024-06-18 Where Did You Go, Ms. Pac-Man? — Thrilling Tales of Old Video Games

image-20241201141626837

2024-06-27 Liquid Layers

2024-06-27 Liquid Layers | Hacker News

image-20241201141748969

2024-06-27 Science Fiction Writer Robert J. Sawyer: WordStar: A Writer's Word Processor

image-20241201141845220

2024-06-28 Advent of Code 2023 Day 19: Aplenty - YouTube

Advent of Code in Excel image-20241201142055161

2024-08-29 Web Design Museum - Discover old websites, apps and software { www.webdesignmuseum.org }

image-20241201142134615

2024-09-19 crowdwave.com { www.crowdwave.com }

Show HN: I made crowdwave – imagine Twitter/Reddit but every post is a voicemail

image-20241201142255701

2024-08-28 Monkeytype | A minimalistic, customizable typing test { monkeytype.com }

image-20240827232641786

Inspiration!​

2024-12-01 Andrew Ayer in the Fediverse { follow.agwa.name }

I honestly liked the design and layout

image-20241130200134882

2024-12-01 To the Fediverse! { www.fediverse.to } image-20241130200321657

2024-12-01 Pleroma — a lightweight fediverse server { pleroma.social }

image-20241130200618867

2024-12-01 src/App.scss · develop · Pleroma / pleroma-fe · GitLab { git.pleroma.social } Some good examples for using css variables with scss image-20241130201016998

2024-11-30 GitHub - tldraw/make-real: Draw a ui and make it real {github.com}

2024-11-30 make real • tldraw {makereal.tldraw.com}

2024-11-30 GitHub - SawyerHood/draw-a-ui: Draw a mockup and generate html for it {github.com} ✨FORK SOURCE✨

image-20241130115413381 image-20241130115438468

2024-11-30 tldraw | Steve Ruiz | Substack {tldraw.substack.com}

image-20241130151615673

2024-11-27 Text Blaze: Snippets and Templates for Chrome {blaze.today}

image-20241127150049722

2024-11-26 Monocle · Access and transform immutable data { www.optics.dev }

image-20241125204603802

2024-08-28 The Monospace Web { owickstrom.github.io }

image-20241201135223177

image-20241201135322097

2024-11-24 triyanox/lla: A modern alternative to ls { github.com }

aww! ls with plugins!

image-20241124135202355

2024-11-24 I made an ls alternative for my personal use | Hacker News { news.ycombinator.com }

elashri: There seems to be a lot of projects that is now competing to replace ls (for people preferences)

For reference, those are the ones I am familiar with. They are somehow active in contrast to things like exa which is not maintained anymore.

eza: (https://github.com/eza-community/eza)

lsd: (https://github.com/Peltoche/lsd)

colorls: (https://github.com/athityakumar/colorls)

g: (https://github.com/Equationzhao/g)

ls++: (https://github.com/trapd00r/LS_COLORS)

logo-ls: (https://github.com/canta2899/logo-ls) - this is forked because main development stopped 4 years ago.

Any more?

Personally I prefer eza and wrote a zsh plugin that is basically aliases that matches what I have from my muscle memory.

2024-11-24 Frosted Glass from Games to the Web - tyleo.com { www.tyleo.com }

image-20241123212224379

2024-11-20 WebVM - Linux virtualization in WebAssembly { webvm.io }

WebVM is a virtual Linux environment running in the browser via WebAssembly.

WebVM is powered by the CheerpX virtualization engine, which enables safe, sandboxed execution of x86 binaries, fully client-side.

CheerpX includes an x86-to-WebAssembly JIT compiler, a virtual block-based file system, and a Linux syscall emulator.

[News] WebVM 2.0: A complete Linux Desktop Environment in the browser: https://labs.leaningtech.com/blog/webvm-20

Try out the new Alpine / Xorg / i3 WebVM: https://webvm.io/alpine.html

2024-11-08 Home: Mushroom Color Atlas { www.mushroomcoloratlas.com }

image-20241107222737497

2024-11-07 Your Hacker News { yourhackernews.com }

image-20241106225716990

2024-11-07 Aesop's Fables Interactive Book | Read.gov - Library of Congress { read.gov }

image-20241106224639993

image-20241106224700805

2024-11-07 McMaster-Carr { www.mcmaster.com }

McMaster-Carr’s website, www.mcmaster.com, is renowned for its speed, achieved through minimalist design, server-side rendering, and strategic use of technology like ASP.NET and JavaScript libraries. Prefetching techniques preload pages as users hover, ensuring near-instant navigation, while CDNs cache content globally to reduce latency. This streamlined, user-focused approach lets customers quickly access and order from McMaster-Carr’s extensive catalog, making it a leader in industrial supply and a favorite for its seamless, efficient experience.

image-20241106224136854

2024-10-05 Methods of Mandarin { isaak.net }

I got pretty good in Mandarin within 12 months of rigorous part-time study. I'm not even close to perfectly fluent, but I got far into intermediate fluency. Read my personal story of learning Mandarin here: isaak.net/mandarin

This post on my Methods of Mandarin (MoM) is for fellow language learners and autodidacts. This isn't a thorough how-to guide. I won't be holding your hand. It's more like a personal notebook of what worked for me. I'm sharing my personal Anki deck and then I'll describe all my methods and tips. People's styles and methods differ.

2024-08-29 sjpiper145/MakerSkillTree: A repository of Maker Skill Trees and templates to make your own. { github.com }

image-20241201134630051

2024-09-18 Dune Shell { adam-mcdaniel.github.io }

image-20241201134722315

2024-09-19 Comic Mono | comic-mono-font { dtinth.github.io }

image-20241201134806840

2024-09-20 Math4Devs: List of mathematical symbols with their JavaScript equivalent. { math4devs.com }

image-20241201134847457

· 15 min read

⌚ Nice watch!​

2024-11-24 Keynote: The Aging Programmer - Kate Gregory - YouTube { www.youtube.com }

image-20241123231812184

Maintain vision health by getting regular eye check-ups, using appropriate glasses, and addressing night driving challenges with clean windshields and adaptive lighting.

Build physical strength and stamina by incorporating strength training (e.g., push-ups, squats) and aerobic activities like walking or biking into daily life.

Reduce pain and joint issues with anti-inflammatories like naproxen as needed and by focusing on flexibility and range-of-motion exercises.

Protect hearing through regular hearing tests starting at age 50, using hearing aids if necessary, and avoiding loud environments or overly high headphone volumes.

Improve nutrition by prioritizing fruits, vegetables, and whole foods while limiting ultra-processed items. Eat meals made with care and hydrate appropriately.

Enhance sleep quality by focusing on creating a comfortable sleep environment (“sleep joy”) and getting the amount of rest your body needs without guilt.

Safeguard brain health using organizational strategies, pursuing lifelong learning, and embracing new tools and technologies to stay sharp.

Foster emotional resilience by prioritizing gratitude and optimism, avoiding unnecessary negativity, and working toward a calm and joyful outlook.

Adapt to changes in ability by recognizing limitations as they arise and addressing them proactively with tools, technology, and support systems.

Combat workplace biases against older programmers by emphasizing your experience, exploring consulting or freelancing, and pushing back against assumptions about learning capacity.

Plan for retirement by calculating your financial “number,” balancing saving with enjoying the present, and planning meaningful activities to avoid boredom and isolation.

Improve work-life balance through flexible work arrangements, prioritizing health, and focusing on work that aligns with your values and passions.

Build relationships by maintaining friendships across generations and engaging with new communities through hobbies, volunteering, or neighborhood activities.

Prevent loneliness by cultivating social engagement in retirement through structured activities, regular interactions, or volunteering.

Develop healthy habits by avoiding smoking, using sunscreen, and embracing preventive measures like vaccinations.

Incorporate joy and play into daily life through hobbies, nature, and small pleasures, focusing on activities that spark happiness and relaxation.

Create a lasting legacy by organizing and preserving personal and professional projects, ensuring they are meaningful and accessible for others.

Handle loss and change by accepting the inevitability of loss while actively seeking new experiences and connections to balance those losses.

Address unexpected challenges by consulting professionals for new or worsening health issues, as not all problems stem from aging.

Reflect on life purpose and make choices that align with long-term happiness and fulfillment.

Exercise regularly to support both physical and mental well-being.

Save for the future while enjoying life in the present.

Stay socially engaged through hobbies, work, or volunteering.

Eat a balanced diet and focus on whole foods for overall health.

Adapt to limitations by embracing tools and strategies that maintain independence.

Build friendships across generations for mutual support and enrichment.

Cultivate a sense of purpose through meaningful work or activities.

Kate Gregory’s message emphasizes that aging well—whether as a programmer or in any field—requires proactive effort, adaptability, and a focus on joy and purpose.

2024-11-23 My Own Nightmare HR Manager Story (Tip: Every Company Has An A-Hole) - YouTube { www.youtube.com }

image-20241123000830712

In every workplace, you’ll encounter a corporate jerk—the kind of person who thrives on creating chaos, manipulating others, and throwing people under the bus. These individuals are frustrating, but they don’t have to define your career. Let me share a condensed version of my experience dealing with one and the key strategies I used to handle it.

I took on a senior recruiter role with an RPO organization, filling high-level positions nationwide. Before my official role started, I was asked to temporarily support a chaotic plant with high turnover. From the start, the HR manager at the plant undermined my work, deviated from processes, and made false accusations to my boss about my performance. Despite the challenges, I stayed professional and focused on achieving results.

Later, when assigned to the same plant for senior-level roles, the HR manager again tried to sabotage me. This time, I was ready. Armed with detailed documentation of every interaction, I exposed her dishonesty, which damaged her credibility. Though the plant's issues persisted, I didn’t let her behavior derail me. Shortly after, I moved on to a better opportunity, taking invaluable lessons with me.

Lessons Learned

  1. Document Everything: Keep detailed records of all interactions and deliverables. These become your safety net against false accusations.
  2. Maintain Professionalism: Stay composed and formal in your interactions. Don’t stoop to their level.
  3. Set Boundaries: Be clear about your role and responsibilities. Don’t let others exploit your flexibility.
  4. Don’t Internalize Their Behavior: Their actions are a reflection of their own issues, not your worth or abilities.

Corporate jerks are an unavoidable reality in most workplaces, but they don’t have to define your career. Use strategy, stay professional, and remember: you’re in control of your trajectory—not them. When necessary, don’t hesitate to move on to an environment where you can thrive.

2024-11-23 JavaScript in places you didn’t expect - YouTube { www.youtube.com }

image-20241122165048524

JavaScript is everywhere—from browsers to unexpected platforms like game consoles and operating systems. Despite its quirks and criticisms, its versatility has made it indispensable. This post is for developers and tech enthusiasts curious about how JavaScript extends beyond typical web applications, influencing industries like gaming, desktop environments, and more.

JavaScript Beyond Browsers

JavaScript is not just a browser language anymore. From GNOME’s desktop environment in Linux, which is almost 50% JavaScript, to Windows 11’s React Native-powered start menu and recommended sections, it’s embedded in operating systems. Even the PlayStation 5 relies heavily on React Native for its interface.

JavaScript in Gaming Consoles

Microsoft’s Xbox and Sony’s PlayStation both integrate React Native into their systems. Historically, web technologies like HTML were also used (e.g., Nintendo Wii’s settings menu), showing a longstanding trend of leveraging web tech for ease of development in consoles.

Gaming and UI Layers

Even major game titles like Battlefield 1 use JavaScript and React for their UI layers, thanks to tools like MobX for state management. Developers appreciate its flexibility in managing complex UI interactions over building bespoke solutions.

Game Development: JavaScript vs. C++

Vampire Survivors showcases a fascinating dual approach: its browser-based JavaScript version serves as the prototype, while a team ports it to C++ for consoles. This method ensures performance optimization without sacrificing the rapid development benefits of JS.

React’s Evolution and Adaptation

React Lua, originally a Roblox project, brings React’s paradigms to Lua-based environments. This shows how React’s influence transcends JavaScript, becoming a staple for creating UIs even in non-JS ecosystems.

Why JavaScript?

JavaScript enables faster iteration, broader developer accessibility, and reduced specialization needs. Whether it’s GNOME choosing it for extensibility or game studios adopting React for UI efficiency, its ubiquity stems from practical needs.

2024-11-18 The Most Important API Design Guideline - No, It's Not That One - Jody Hagins - C++Now 2024 - YouTube { www.youtube.com }

This talk is fun, though more theoretical and philosophical than practical.

image-20241117212533364 image-20241117223655034


📝 Property-Based Testing for Joining an Array to a String with Delimiter in C++

Definition
Property-based testing involves specifying general properties a function should satisfy for a wide range of inputs. In this example, we will test a function that joins an array of strings with a delimiter into a single string. The properties we want to validate are:

  1. The delimiter should only appear between elements, not at the start or end.
  2. If the array has one element, the result should be the element itself without the delimiter.
  3. An empty array should produce an empty string.

C++ Code Example using rapidcheck

Here’s a property-based test using the rapidcheck library in C++ to test a join function that joins a vector of strings with a specified delimiter:

#include <rapidcheck.h>
#include <string>
#include <vector>
#include <sstream>

// Function to join array with a delimiter
std::string join(const std::vector<std::string>& elements, const std::string& delimiter) {
    std::ostringstream os;
    for (size_t i = 0; i < elements.size(); ++i) {
        os << elements[i];
        if (i != elements.size() - 1) { // Avoid trailing delimiter
            os << delimiter;
        }
    }
    return os.str();
}

int main() {
    rc::check("Joining should produce a correctly delimited string",
              [](const std::vector<std::string>& elements, const std::string& delimiter) {
        std::string result = join(elements, delimiter);

        // Property 1: The delimiter should appear only between elements
        if (elements.size() > 1 && !delimiter.empty()) {
            // Splitting is only unambiguous when no element contains the delimiter,
            // so discard generated cases that would make the check ill-defined
            for (const auto& e : elements) {
                RC_PRE(e.find(delimiter) == std::string::npos);
            }

            // Split result by delimiter and check the components match the input
            std::vector<std::string> parts;
            std::string::size_type start = 0, end;
            while ((end = result.find(delimiter, start)) != std::string::npos) {
                parts.push_back(result.substr(start, end - start));
                start = end + delimiter.length();
            }
            parts.push_back(result.substr(start));

            // Assert parts match elements
            RC_ASSERT(parts == elements);
        }

        // Property 2: If there's only one element, the result should match that element directly
        if (elements.size() == 1) {
            RC_ASSERT(result == elements[0]);
        }

        // Property 3: If the array is empty, the result should be an empty string
        if (elements.empty()) {
            RC_ASSERT(result.empty());
        }
    });

    return 0;
}

Notice that rc::check runs the property 100 times with different randomly generated inputs. On failure, it shrinks the counterexample and prints the failing input along with the random seed, so the run can be reproduced for debugging.

Explanation of Example

  1. Property 1: Ensures that if multiple elements are joined with a delimiter, the delimiter only appears between elements, not at the start or end.
  2. Property 2: Checks that if the array has only one element, the function returns the element itself without any delimiter.
  3. Property 3: Confirms that if the input array is empty, the output string is empty.

This approach guarantees that the join function works as expected across diverse inputs, making it more robust against edge cases such as empty arrays, single-element arrays, and unusual delimiter values.

Links

2024-11-18 emil-e/rapidcheck: QuickCheck clone for C++ with the goal of being simple to use with as little boilerplate as possible. { github.com }


2024-11-19 Stop Solving Problems for Your Development Team! - YouTube { www.youtube.com }

image-20241118223517019

image-20241118223553330


For technical leaders, the balance between leading effectively and empowering their team can be challenging. Whether you’re a software engineer managing junior developers or a product owner guiding associates, the traditional approach of “just give the answer” can lead to dependency and frustration for both you and your team. This post explores the value of coaching-driven leadership—a method that empowers your team to become self-sufficient, creative problem-solvers. If you’re in any technical or managerial role, understanding how to guide without micromanaging is essential. Learn how adopting a coaching approach can transform your team’s efficiency, autonomy, and collaboration.

The Shift from Solving Problems to Empowering People

A coaching-based leadership style redefines how leaders approach problem-solving with their teams. Instead of quickly providing answers to move tasks along, this approach encourages team members to develop the skills to tackle issues independently, ultimately creating a more resilient and capable workforce. Below are some key insights and advice on how to lead through empowerment:

Encouraging Self-Reliance Instead of Dependency

  • Why It Matters: When leaders constantly solve problems for others, it builds dependency. Empowering team members to find their own solutions helps reduce your stress and increases their confidence.
  • How to Do It: Encourage team members to exhaust all possible resources and approaches before coming to you. Ask questions like, “How would you solve this if I weren’t available?” This encourages them to think independently.

Asking Powerful, Resourceful Questions

  • Why It Matters: A quick solution often leads to repeated questions. When leaders ask resourceful questions, they prompt team members to analyze and solve problems on their own.
  • How to Do It: Instead of offering solutions, ask questions that challenge their thought processes. Examples include:
    • “What other approaches have you considered❓️”
    • “Can this problem be broken down into smaller tasks❓️”

This approach builds critical thinking and problem-solving skills.

Fostering a Growth-Oriented Mindset

  • Why It Matters: Viewing team members as capable individuals with potential is essential. By recognizing and nurturing their strengths, leaders can help people grow into their roles more effectively.
  • How to Do It: Reframe your thinking to see team members as resourceful and capable. Focus on their potential and ask questions that encourage them to broaden their perspectives, such as, “What new solutions might you try if you had more resources?”

Prioritizing Long-Term Gains Over Short-Term Fixes

  • Why It Matters: Quick answers may solve today’s problem, but they build future dependency. Investing in a coaching style fosters autonomy, saving time and stress in the long run.
  • How to Do It: Resist the urge to provide immediate solutions. Instead, encourage team members to analyze challenges thoroughly, which leads to more sustainable growth and resilience.

Practical Applications of Coaching in Technical Leadership

For leaders looking to implement these coaching principles, here are specific areas where a coaching mindset can be applied effectively:

  • Code Reviews: Instead of dictating how code should look, ask questions about their logic and problem-solving approach. This not only ensures quality but also deepens their understanding.
  • Design and Project Reviews: Use design critiques as opportunities to help team members articulate their design choices, fostering a culture of open dialogue and improvement.
  • Debugging and Troubleshooting: When assisting with debugging, ask team members to consider alternative solutions or explain their thought process rather than simply fixing the problem.
  • Project Planning: Encourage team members to independently explore solutions to potential obstacles by asking them to consider all options and resources available.

2024-11-24 How regexes got catastrophic - YouTube { www.youtube.com }

image-20241124120931384

Introduction​

Regular expressions (regexes) are a foundational tool in programming, celebrated for their ability to match patterns efficiently and elegantly. However, their widespread use has exposed critical flaws in how they are implemented in most programming environments. What begins as a theoretical marvel often translates into real-world inefficiencies and vulnerabilities, leading to catastrophic outcomes like server crashes from regex denial of service (ReDoS) attacks.

This post unpacks the evolution of regex algorithms, contrasts their efficiency, and explores how poor implementation choices have led to systemic issues. Whether you're a systems programmer, web developer, or curious about computational theory, understanding regex's hidden complexities will change how you approach pattern matching.

1. The Two Faces of Regex Algorithms

Regex engines typically rely on two main algorithms: the lockstep algorithm (also known as Thompson's algorithm) and backtracking. Here's how they stack up:

  • Lockstep Algorithm: Also known as Thompson's algorithm, it advances all possible paths through the regex simultaneously, one input character at a time. Its worst-case cost is bounded by pattern size times input length (linear in the input for a fixed pattern), so it never suffers exponential blowup.
  • Backtracking Algorithm: While intuitive and flexible (especially for complex features like backreferences and capturing groups), backtracking scales exponentially in the worst case. This flaw enables catastrophic backtracking, where a regex takes impractically long to resolve, even on short inputs.

2. Exponential Backtracking in Practice

Using backtracking means every possible path through a regex is explored individually. When paths multiply exponentially—such as in nested structures or poorly constructed patterns—the execution time balloons. For instance:

  • A regex engine using backtracking may take 24 ticks to match a complex string, compared to only 18 ticks with the lockstep algorithm.

3. Historical Decisions with Long-Lasting Impacts

The dominance of backtracking stems from historical choices made during the development of early Unix utilities:

  • Ken Thompson, who brought regular expressions into practical programming (the formalism itself comes from Stephen Kleene), implemented a lockstep-based engine in the 1960s. However, later tools like ed and grep shifted to backtracking, prioritizing simplicity and flexibility over performance.

This decision, compounded by the introduction of features like backreferences and greedy quantifiers, locked most regex engines into backtracking implementations. Over time, these became embedded in standard libraries across programming languages, making lockstep a rarity.

4. Regex Denial of Service (ReDoS)

The vulnerability of backtracking manifests starkly in ReDoS attacks:

  • A specially crafted regex input can force an engine to explore every possible path, consuming excessive CPU cycles and halting services.
  • Examples include outages at Stack Exchange (2016) and Cloudflare (2019) due to poorly constructed regexes handling unexpected inputs.

5. Features That Complicate Performance

While features like capturing groups, backreferences, and non-greedy modifiers add functionality, they exacerbate backtracking's inefficiencies. For instance:

  • Capturing groups in backtracking engines are straightforward but introduce state-tracking complexities in lockstep implementations.
  • Backreferences break the theoretical constraints of regular languages, making efficient lockstep implementations infeasible.

6. Modern Solutions

Some modern regex engines, like Google's RE2, abandon backtracking altogether, focusing on performance and predictability. RE2 enforces strict adherence to regular language constraints, ensuring linear or quadratic time complexity.

While sacrificing backreferences and some advanced features, engines like RE2 are critical for applications requiring robust and reliable performance, such as large-scale web services.

· 40 min read

⌚ Nice watch!​

In this blog post, I'll be sharing a collection of videos with concise content digests. These summaries extract the key points, focusing on the problem discussed, its root cause, and the solution or advice offered. I find this approach helpful because it allows me to retain the core information long after watching the video. This section will serve as a dedicated space for these "good watches," presenting only the most valuable videos and their takeaways in one place.

2024-10-16 Can Chinese Speakers Read Japanese? - YouTube { www.youtube.com }

image-20241016003254921

image-20241016003344065

2024-11-03 Keynote: Learning Is Teaching Is Sharing: Building a Great Software Development Team - Björn Fahller - YouTube { www.youtube.com }

image-20241102181531241

We attended a talk by Björn Fahller at ACCU 2024, focusing on how learning, teaching, and sharing are interdependent and critical to team success and personal growth. Below are key steps and ideas that were covered, with some outcomes noted and a few clarifications where needed.

1. Emphasizing Open Sharing for Safety and Improvement (13:52-14:36): Fahller shared an anecdote from 1968 about Swedish military aviation, highlighting the importance of allowing team members to communicate openly, especially about mistakes or difficulties, without fear of punishment. This approach encourages honesty and helps prevent repeated mistakes.

"Military aviation is dangerous... let them openly, and without risk for punishment, share the problems they face while flying."

Outcome: Building a safe environment for sharing leads to a culture where team members can discuss failures without fear, helping the team learn from each experience and improve.

🤖 GPT: Fahller’s translation suggests he views open communication as essential to growth and trust in teams, especially in high-stakes fields.

2. Encouraging Question-Asking and Knowledge Sharing (20:00): In discussing "Sharing is Caring," Fahller emphasized the need for team members to bring up issues or observations that might seem trivial to ensure continuous improvement. He gave examples from aviation, such as pointing out gusts of wind affecting landing, to show how small insights can contribute to collective knowledge.

Outcome: Actively sharing observations improves understanding and may reveal underlying problems that would otherwise go unnoticed. Open communication is key to refining processes.

🤖 GPT: Fahller’s examples reinforce the idea that even seemingly minor details should be voiced -- they may be crucial in the big picture.

3. Addressing Information Overload in Teams (37:52): New team members often feel overwhelmed by the volume of information shared by experienced team members. Fahller suggested that newcomers should ask experienced members to slow down, provide context, and "paint the scene" so they can understand the background of the tasks.

"Ask them to paint the scene. What are they trying to achieve? What is it that is not working?"

Outcome: When we take the time to explain context to newcomers, it helps bridge knowledge gaps and allows everyone to contribute effectively.

🤖 GPT: This approach builds understanding but also patience and humility in experienced team members by reminding them to make knowledge accessible.

4. Creating a Positive Review Culture (33:47): In discussing code reviews, Fahller contrasted two styles: dismissive comments (e.g., "I don’t understand. Rewrite!") vs. constructive feedback (e.g., "Can you explain why you chose to do it this way?"). He emphasized that reviews should be treated as educational opportunities rather than judgment sessions.

Outcome: Constructive reviews foster a growth-oriented environment and allow both the reviewer and reviewee to learn. Constructive feedback motivates improvement, while dismissive comments discourage engagement.

🤖 GPT: A consistent, constructive review culture also promotes long-term trust and makes code quality a shared team responsibility.

5. Handling Toxicity in the Workplace (55:45):

In this segment, Björn Fahller tackled the issue of toxicity within teams and its corrosive effects on collaboration, morale, and individual well-being. He addressed specific toxic behaviors that often crop up in workplaces, describing them not as isolated incidents but as patterns that can erode trust and productivity if left unchecked. Fahller’s examples of toxic behavior included:

  • "The weekly dunce hat" – Singling out someone each week as a scapegoat or object of ridicule, effectively creating an atmosphere of shame and fear.
  • Blame-seeking – Looking for someone to hold responsible for problems, rather than investigating issues constructively or as a team.
  • Threats, pressure, fear, and bullying – Using intimidation tactics to push individuals into compliance, often stifling creativity, openness, and morale.
  • Ghosting – Ignoring someone’s contributions or input entirely, which Fahller noted can make people feel alienated and undervalued.
  • Stealing credit – Taking recognition for someone else’s work, which not only demoralizes the actual contributor but also creates a culture of mistrust.

Fahller stressed that these behaviors are not only demoralizing but actively prevent individuals from sharing ideas and asking questions openly. Such an environment can force people into silence and self-protection, hindering the team’s ability to learn from mistakes and innovate. He emphasized that the first step in combating toxicity is recognition—understanding and identifying toxic patterns when they appear.

"If you're not respected at work," Fahller advised, the first course of action is to try to find an ally. An ally can provide a supportive voice and help validate one's experiences, which can be especially important if toxic behavior is widespread or normalized within the team. An ally may be able to speak up on your behalf, lend credibility to your concerns, and offer support when you’re confronting challenging dynamics. This shared voice can help to bring attention to the toxicity and, ideally, drive change.

However, Fahller acknowledged that finding an ally may not always be enough. If a toxic environment persists despite attempts to address it, he advised a more decisive response: leaving. He argued that individuals should not allow themselves to be "ignored, threatened or made fun of," as staying in such an environment can be mentally and emotionally draining, ultimately leading to burnout and disengagement.

"If all else fails, go elsewhere. Don’t allow yourself to be ignored, threatened or made fun of."

This recommendation underscores Fahller's stance that no one should feel compelled to remain in an unchangeable toxic environment. He suggested that people value their self-respect and mental health over job stability if the work culture is irredeemably harmful.

Fahller’s advice reflected a pragmatic approach to toxicity: address it internally if possible, but recognize when to prioritize personal well-being over enduring a dysfunctional work environment. While leaving a job is often a difficult decision, Fahller's message was clear -- don’t compromise on respect and support. A healthy team environment where people feel safe and valued is essential not just for individual satisfaction but also for collective success.

2024-11-03 Nikhil Suresh - Skills that programmers need, to defend both their code and their careers - YouTube { www.youtube.com }

image-20241102200908952

In his talk, Nikhil Suresh, the director of Hermit Tech, explores the challenges that software engineers face in the corporate world. He begins with an old animal fable about a scorpion and a frog to illustrate the dynamics between programmers and businesses.

"The scorpion wants to ship a web application but cannot program, so it finds a frog because frogs are incredible programmers."

The scorpion assures the frog that it won't interfere with his work. However, after some time, the scorpion hires an agile consultant and imposes new restrictions, disrupting the frog's workflow. This story mirrors how businesses often unknowingly hinder their own developers.

Nikhil emphasizes that most companies don't know much about software, making it difficult for programmers to clearly demonstrate their value. He refers to Sturgeon's Law, which states that "90% of everything is bad," highlighting the prevalence of low standards in the industry.

He shares personal experiences where previous engineers lacked basic competence, such as not setting primary keys in databases or causing exorbitant costs due to misconfigured systems. These anecdotes illustrate that businesses cannot tell the difference between good and bad programmers, leading to competent developers being undervalued.

Introducing the concepts of profit centers and cost centers, Nikhil explains that IT departments are often seen as cost centers, affecting how programmers are treated within organizations. He points out that being better at programming isn't always highly valued by companies because they may not see a direct link between technical skill and profit.

To navigate these challenges, Nikhil advises developers to never call themselves programmers. He argues that the term doesn't convey meaningful information and can lead to misconceptions.

"If you tell someone who doesn't program that you're a programmer, their first thought is like, 'Ah, one of those expensive nerds.'"

He recommends reading Patrick McKenzie's article "Don't Call Yourself a Programmer, and Other Career Advice", which offers insights into presenting oneself more effectively in the professional sphere.

Nikhil encourages developers to write about their experiences and share them online. By doing so, they can showcase their unique ideas and differentiate themselves in the field. He believes that your unique ideas are what differentiate you from others and that sharing them helps in building a personal brand.

He also suggests that programmers should read outside of IT and delve into the humanities. This broadens their perspectives and provides valuable analogies for complex ideas. Nikhil shares how his involvement in improvised theater and reading "Impro: Improvisation and the Theatre" by Keith Johnstone helped him understand status dynamics in professional interactions.

Understanding these dynamics allows developers to navigate job interviews and workplace relationships more effectively. Nikhil emphasizes the importance of taking control of your career and making decisions that enhance your value to both yourself and society.

In conclusion, Nikhil urges developers to recognize that technical skill isn't the main barrier to having a better career. Factors like communication, strategic thinking, and understanding corporate dynamics play crucial roles. By focusing on these areas, developers can transform their passion into something that has greater value for both themselves and the broader community.

2024-11-02 Get old, go slow, write code! - Tobias Modig - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20241116004657646

📝 Sustainable Software Development Careers: Aging, Quality, and Longevity in Tech

Introduction
In the fast-evolving world of software development, many professionals feel the pressure to stay young, move fast, and keep up with new trends. But does speed really equal success in this field? This post is for experienced developers, tech managers, and anyone considering a long-term career in software. We'll explore why sustainability in development—focusing on quality, experience, and career longevity—matters and how you can embrace aging as an asset, not a setback.

Why You Should Care
The tech industry often promotes rapid career progression and cutting-edge skills over stability and endurance. However, valuing experience, avoiding burnout, and emphasizing quality over speed are essential for creating durable, impactful software and ensuring personal career satisfaction.

Embracing Aging as a Developer

Many developers worry about becoming irrelevant as they age, yet experience can be a strength. Research shows the average age of developers is among the lowest across professional fields, meaning many leave the field early. However, experience contributes to problem-solving, architectural insights, and higher quality standards. Older developers often provide unique perspectives that younger professionals may lack, particularly in maintaining and improving code quality.

Slowing Down for Quality

Too many developers face intense pressure to deliver quickly, often sacrificing quality. This results in technical debt and rushed code that becomes difficult to maintain. The speaker argues that development is a marathon, not a sprint. Slowing down and building sustainable software creates long-term benefits, even if it appears slower at first. By prioritizing thoughtful coding and taking the time to address technical debt, developers can create resilient, maintainable systems.

Challenges with Traditional Career Progression

Many companies push experienced developers into management roles, which can leave skilled coders dissatisfied and underutilized. Known as the Peter Principle, this approach often results in skilled developers becoming ineffective managers. For those passionate about coding, staying in development roles—rather than climbing the corporate ladder—can offer fulfillment, especially if companies recognize and reward this choice.

Common Reasons Developers Leave the Field

Major reasons include burnout, shifting to roles with higher prestige, and losing the spark for coding. Additionally, aging can lead to insecurities about keeping up. To combat these trends, developers should prioritize work-life balance, take time to learn, and avoid the mindset that career progression has to mean management.

Practical Ways to Build a Sustainable Career

  • Commit to Continuous Learning: Attend conferences, read, and experiment with code to stay current.
  • Focus on Quality over Speed: Embrace practices like regular code reviews, refactoring, and retrospectives to build robust systems.
  • Build Team Trust and Psychological Safety: A supportive environment enhances productivity, allowing team members to grow together.
  • Incorporate Slack Time: Give yourself unstructured time to think, learn, and work creatively, helping avoid burnout and stagnation.

Let Experience Be Your Advantage

Staying relevant as a developer means focusing on the quality of your contributions, leveraging your experience to guide teams, and advocating for sustainable practices that benefit the entire organization. By valuing experience, resisting the rush, and maintaining passion, you can contribute meaningfully to tech at any age.

Quotes

"Getting old in software development is not a liability—it's an asset. Make those gray hairs your biggest advantage and let your experience shine through in quality code."

"Software development is not a sprint; it's a marathon. We need to slow down, find a sustainable pace, and stop rushing to deliver at the expense of quality."

"Don't let your career be dictated by the Peter Principle—just because you're a great developer doesn’t mean you’ll enjoy management. Stay with your passion if it’s coding."

"Poor quality code isn’t just a short-term fix; it’s a long-term burden. Building things right the first time is the fastest way to long-term success."

"There’s no need to be Usain Bolt in development; be more like a marathon runner. Set a steady, sustainable pace, focus on quality, and enjoy the journey."

2024-10-29 The Evolution of Functional Programming in C++ - Abel Sen - ACCU 2024 - YouTube { www.youtube.com }

image-20241116004141231

2024-11-04 Functional C++ - Gašper Ažman - C++Now 2024 - YouTube { www.youtube.com }

image-20241103203715424

This is the procedural version of the code: it is ugly and is the candidate for modernization with functional programming.

// procedural example
auto is_hostname_in_args(int, char const* const*) -> bool;
auto get_hostname_from_args(int, char const* const*) -> char const*;

auto get_hostname(int argc, char const* const* argv, std::string default_hostname) -> std::string {
    // Split query / getter
    if (is_hostname_in_args(argc, argv)) {
        // Perhaps... might use optional here too?
        return get_hostname_from_args(argc, argv);
    }

    // Ad-hoc Maybe
    if (char const* maybe_host = getenv("SERVICE_HOSTNAME");
        maybe_host != nullptr && *maybe_host != '\0') {
        return maybe_host;
    }

    return default_hostname;
}

Unfortunately, I cannot provide the functional version, because I don't understand it.

2024-11-07 Reintroduction to Generic Programming for C++ Engineers - Nick DeMarco - C++Now 2024 - YouTube { www.youtube.com }

image-20241107003250333

🔥🔥🔥2024-11-06 LEADERSHIP LAB: The Craft of Writing Effectively - YouTube { www.youtube.com }🔥🔥🔥

image-20241105224810433

found in 2024-11-06 Blog Writing for Developers { rmoff.net }

Introduction

Writing isn’t just about sharing information; it’s about making an impact. In this insightful lecture, a distinguished writing instructor from the University of Chicago's Writing Program emphasizes that effective writing requires understanding your audience, establishing relevance, and creating a compelling narrative. This article captures the speaker’s key advice on improving writing by focusing on purpose, value, and the reader's needs.


  1. Focus on Value, Not Originality
  • Advice: The speaker challenges the idea that writing must always present something "new" or "original." Instead, writers should prioritize creating valuable content that resonates with their audience.
  • Application: Rather than striving for originality alone, focus on producing content that addresses the reader’s concerns or questions. A piece of writing is valuable if it enriches the reader’s understanding or helps solve a problem they care about.
  2. Define the Problem Clearly
  • Advice: To make a piece of writing compelling, start by establishing a problem that is relevant to your audience. A well-defined problem creates a sense of instability or inconsistency, which engages readers and positions the writer as a problem-solver.
  • Application: Use contrasting language to highlight instability—words like "but," "however," and "although" signal unresolved issues. This approach shifts the reader’s focus to the problem at hand, making them more receptive to the writer's proposed solution.
  3. Understand and Address Your Reader’s Needs
  • Advice: A writer’s task is to understand the specific needs and concerns of their reading community. This involves identifying problems that resonate with them and framing your thesis or solution in a way that is relevant to their lives or work.
  • Application: In academic and professional settings, locate problems in real-world contexts. Rather than presenting background information, articulate a challenge or inconsistency that is specific to the reader’s field or interests, making your argument compelling and directly relevant.
  4. Use the Language of Costs and Benefits
  • Advice: Writers should make it clear how the identified problem affects the reader directly. Frame issues in terms of "costs" and "benefits" to emphasize why addressing the problem is essential.
  • Application: Highlight the impact of ignoring the problem versus the benefits of solving it. This approach reinforces the relevance of your writing by aligning it with the reader’s motivations and concerns.
  5. Beware of the "Gap" Approach
  • Advice: Avoid using the concept of a "knowledge gap" as the sole justification for writing on a topic. While identifying gaps in research can work, it often lacks the urgency or impact required to engage readers fully.
  • Application: Rather than just pointing out missing information, emphasize the practical implications of filling that gap. Explain how the lack of certain knowledge creates instability or inconsistency in the field, making the need for your insights more compelling.
  6. Adopt a Community-Centric Perspective
  • Advice: Tailor your writing to the specific communities who will read it. Different communities (e.g., narrative historians vs. sociologists) have distinct approaches to problems and value different types of arguments.
  • Application: Define and understand the community of readers your work is meant to serve. Address their concerns directly and frame your argument in terms that align with their unique perspectives and values.
  7. Learn from Published Articles
  • Advice: Published work often contains subtle rhetorical cues about what resonates with readers in a specific field. Study these articles to understand the language, structure, and approach that successful writers use.
  • Application: Identify patterns in the language of published work within your target field. For instance, if a journal commonly uses cost-benefit language, incorporate it into your writing to align with reader expectations.
  8. Emphasize Function Over Form
  • Advice: Writing should serve a clear function beyond just following formal rules. Effective writing achieves its purpose by clearly communicating the problem and its significance to readers.
  • Application: Instead of focusing solely on rules or formalities, think about what your writing needs to accomplish for your audience. Make sure that every section and statement reinforces your overall argument and purpose.

2024-11-08 Developer Joy – How great teams get s%*t done - Sven Peters - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20241107182436885

image-20241107182600363

In today’s fast-evolving tech landscape, “Developer Joy” is emerging as a crucial focus for engineering teams striving to deliver high-quality, innovative software. For those in software engineering or tech management, this concept brings a fresh perspective, shifting away from traditional productivity metrics and emphasizing a developer’s experience, satisfaction, and creativity. By focusing on Developer Joy, teams can foster an environment where developers not only perform optimally but also find deep satisfaction in their craft. This shift is more than just a trend; it’s a rethinking of how we define and sustain productivity in a complex, creative field like software development.

The Problem with Traditional Productivity Metrics

Traditional productivity measures, like lines of code or tasks completed, often fail to capture a developer's real impact. Software development, unlike factory work, requires creativity, problem-solving, and adaptability—traits that are poorly reflected in industrial-era metrics. Instead of simply measuring output, focusing on Developer Joy acknowledges the unique, non-linear nature of coding and innovation.

Developer Joy: A New Approach to Productivity

Developer Joy isn't about doing more in less time; it’s about creating an environment where developers thrive. When developers are joyful, they produce better code, collaborate more effectively, and sustain their motivation over time. Atlassian’s approach to Developer Joy incorporates several elements to support this environment:

  • High-quality Code: Developers enjoy working with well-structured, maintainable code.

  • Progressive Workflows: Fast, friction-free pipelines allow developers to take an idea from concept to deployment quickly.

  • Customer Impact: When developers know they’re making a meaningful difference for users, they feel a greater sense of pride and accomplishment.

Tools and Processes to Foster Developer Joy

To enable Developer Joy, teams at Atlassian have implemented practical solutions:

  • Constructive Code Reviews: By establishing a code review culture where feedback is respectful and constructive, teams can maintain high standards without discouraging or frustrating developers. Guidelines like assuming competence, offering clear reasoning, and avoiding dismissive comments make reviews both productive and uplifting.

  • Flaky Test Detection: The Confluence team developed an internal tool that identifies “flaky tests” (tests that fail intermittently) to save developers from unnecessary debugging. This tool boosts productivity by automating the detection and removal of unreliable tests.

  • The Punit Bot for Review Notifications: Timely code reviews are essential for maintaining team flow. The Punit Bot automatically notifies team members when their input is needed on pull requests, cutting down waiting times and keeping development on track.

Cross-Functional, Autonomous Teams

Teams need the freedom to work independently while staying aligned on goals. By embedding key functions within each team (like design, QA, and operations), Atlassian ensures that teams can progress without external dependencies. This “stack interchange” model allows each team to flow without bottlenecks.

Quality Assistance over Quality Assurance

Developers at Atlassian don’t rely solely on QA engineers to validate code. Instead, they partner with QA in the planning stage, gaining insights on testing best practices and writing their own test cases. This approach, called “Quality Assistance,” keeps quality embedded throughout the process and gives developers more control over the software they release.

Collaborating with Product Teams

Effective collaboration with product teams is crucial. Atlassian integrates developers into the full product lifecycle—from understanding the problem to assessing impact after release. This holistic involvement reduces miscommunication, enables rapid adjustments based on early feedback, and fosters a sense of ownership and pride in the end product.

The Developer Joy Survey: Measuring What Matters

To ensure Developer Joy remains high, Atlassian conducts regular “Developer Joy Surveys,” asking developers about their satisfaction in areas such as tool access, wait times, autonomy, and overall work satisfaction. By measuring both satisfaction and importance, teams identify and address specific challenges to ensure joy remains a central part of their development culture.


Notable Quotes and Jokes

  • “Developer Joy is about creating an environment where developers thrive, not just survive.”
  • “If you can’t measure Developer Joy, you’re probably measuring the wrong thing.”
  • “Code reviews should be about learning, not earning jerk points.”
  • “Productivity isn’t about lines of code; it’s about finding joy in the code you write.”

2024-11-09 Herding cats: lessons from 15 years of managing engineers at Microsoft - Kevin Pilch - YouTube { www.youtube.com }

image-20241109130107457

image-20241109130841936

Introduction

Purpose and Relevance
This talk explores the nuances of managing software engineering teams. It’s particularly relevant for new or seasoned managers, especially those transitioning from technical roles to leadership. The speaker, Kevin Pilch, leverages his extensive experience managing engineering teams at Microsoft to provide insights into effective management strategies, challenges, and actionable advice.

Target Audience
Ideal for current and aspiring managers of software engineering teams, as well as individual contributors considering a management path.

Main Content

Coaching vs. Teaching
The emphasis here is on coaching engineers rather than simply teaching them. Coaching means asking questions that encourage team members to find solutions independently, fostering growth and engagement. By using the "ask solution" quadrant approach, managers can guide engineers toward problem-solving rather than directly offering answers, which enhances ownership and accountability.

Focus on Top Performers
Spend more time supporting top performers instead of focusing solely on underperformers. The impact of losing a high performer is significant—they are often highly sought after and can easily find other opportunities. Retaining skilled contributors by offering continuous support and new challenges is essential.

Importance of Self-Evaluation
The self-evaluation process is a valuable opportunity for engineers to reflect on their career paths, skill gaps, and accomplishments. By encouraging engineers to take ownership of self-assessments, managers promote introspection and personal growth, while also creating useful documentation for future managers and potential promotions.

Providing Clear Feedback
When giving performance feedback, it’s essential to avoid “weasel words” and sugarcoating, which soften the message and create misunderstandings. Use specific language that correlates to performance expectations—such as “lower than expected impact”—to ensure feedback is clear, actionable, and direct.

Encouraging Constructive Failure
Allow team members to experience failure on controlled projects to enhance learning and resilience. This approach lets engineers learn from mistakes without jeopardizing critical objectives. By creating “safe-to-fail” environments, managers can frame certain projects as experiments and define success metrics upfront, avoiding sunk cost fallacies and confirmation biases.

Task Assignment Using the ABC Framework
Assign tasks based on complexity relative to each team member’s skill level. Above-level tasks serve as stretch assignments to promote growth, current-level tasks reinforce skills, and below-level tasks include routine but necessary responsibilities that everyone shares. Balancing these types keeps team members challenged and engaged while ensuring essential work is completed.

Motivating Different Personality Types
The SCARF model—Status, Certainty, Autonomy, Relatedness, Fairness—can help recognize diverse motivators across the team. Managers should tailor interactions to each team member’s unique motivators, fostering a supportive environment that avoids triggering negative responses.

2024-11-12 Success On Your Own Terms - Todd Gardner - CPH DevFest 2024 - YouTube { www.youtube.com }

image-20241111212449932

Defining Success on My Own Terms: Lessons from My Journey in Tech

For over 25 years, I've navigated the ever-changing landscape of the tech industry. This journey has been filled with successes, failures, and invaluable lessons that have shaped not only my career but also my understanding of what success truly means. If you're a developer, entrepreneur, or someone contemplating your own path in tech, perhaps my experiences can offer some insights.

The Evolution of Success

My definition of success has shifted throughout my career. It began with a desire for prestige, evolved into a quest for independence, and later transformed into valuing time above all else. I've come to realize that success isn't a fixed destination but a moving target that changes as we grow.

"The definition of success for me has shifted throughout my career. It used to just mean prestige. Then it meant independence, and then it meant time, and it's probably going to change again."

Building Request Metrics

I founded Request Metrics with the goal of addressing a critical problem: web performance. Initially, we focused on client-side observability, aiming to help developers monitor their websites and applications. However, we soon discovered that web performance is a complex issue, laden with constantly changing metrics and definitions.

The Challenge of Web Performance

Developers often struggle with understanding and improving web performance. The industry's metrics seem to continually shift, making it hard to pin down what "fast" truly means. This confusion was costing businesses real money, especially as user expectations for speed grew.

"It turns out developers don't know how to make things fast, and it's a problem that got a lot more important recently because of a thing Google did called the Core Web Vitals."

Google's Core Web Vitals

The game changed when Google introduced Core Web Vitals—a set of metrics that directly impact search rankings. Suddenly, web performance wasn't just a technical concern but a business-critical issue. Companies that relied on SEO for visibility faced tangible consequences if their websites didn't meet these new standards.

"Google said, 'This is how fast you need to be,' and if you don't, you're going to lose page rank. So now this suddenly got way more... now there is a cost to do this. If you are an e-commerce store or you are a content publisher... you care a whole lot about the Core Web Vitals; you care about performance."

Pivoting to Solve Real Problems

Recognizing this shift, we pivoted Request Metrics to focus on helping businesses understand and improve their Core Web Vitals. We developed tools that provide clear, actionable insights into performance issues. By doing so, we addressed a real pain point, offering solutions that companies were willing to invest in to protect their search rankings and user experience.

"We started building a new thing that was all about the Core Web Vitals. It was like, 'This is the problem that we need to solve.' Businesses that depend on their SEO... it's not clear when they're about to lose their SEO ranking because of performance issues. So let's focus on that."

Lessons Learned

Throughout this journey, I've learned several key lessons.

Time is precious. Life is unpredictable, and opportunities can be fleeting. It's crucial to focus on what truly matters and act promptly.

"First, you don't have as much time as you think. This story can end for any one of us tomorrow... It might all be over tomorrow, so do what you think is important."

Embracing uncertainty is essential. Feeling unprepared is natural. Many successful endeavors begin without a clear roadmap. Confidence often comes from taking action and learning along the way.

"Don't worry if you don't know, if you don't feel confident in what you're doing. None of us know what we're doing when we start... They just started and figured it out as they went. You can do that too."

Building relationships is vital. Success isn't achieved in isolation. Cultivating strong relationships and working collaboratively can open doors you never knew existed.

"Remember, no matter what you do or what you want out of life, you need to build relationships with people around you. Don't isolate yourself and think you can solve it all by yourself. Those relationships... are going to pay huge dividends that you could never imagine."

Solving real problems should be a priority. Focus on creating solutions that address genuine needs. If your product solves a real problem, people are more likely to value and pay for it.

"Be sure to build products that actually solve real problems that cost people money. Otherwise, you might find yourself building something really cool that nobody is ever going to pay you for."

Adapting and evolving are necessary. Be prepared to change course. Flexibility is key to staying relevant and achieving long-term fulfillment.

"We found through this we found a problem that was costing money to real people, and this is the path that we're on right now... because now we're solving a problem for people that... it's cheaper to pay us to solve the problem than to deal with the risks."

Taking risks and shipping early can lead to growth. Don't wait for perfection. Launching early allows you to gather feedback and iterate, which is more valuable than holding back out of fear.

"If you're going to build something successful and durable... you're going to need people to help. And be sure to build products that actually solve real problems... But you won't hit them unless you ship something, and if you're not embarrassed of it, you're waiting too long. Just throw something together and get it out there and see if anybody cares."

Moving Forward

As I continue on this path, I understand that my definition of success will keep evolving. What's important is to remain true to oneself, prioritize meaningful work, and leverage relationships to create lasting impact.

2024-11-14 Windows: Under the Covers - From Hello World to Kernel Mode by a Windows Developer - YouTube { www.youtube.com }

image-20241116005812334

For programmers and tech enthusiasts, "Hello World" is a rite of passage, a first step in coding. But behind the simplicity of printing "Hello World" on the screen, there lies a deeply intricate process within the Windows operating system. This article uncovers the fascinating journey that a simple printf command in C takes, from the initial code execution to the text’s appearance on the screen, traversing multiple layers of software and hardware. If you're curious about what happens behind the scenes of an OS or want a glimpse into the hidden magic of programming, this guide is for you.

  1. Starting Point: Writing Hello World in C

    • The classic C code printf("Hello, World!"); initiates the journey. In this line, the printf function doesn't directly display text. Instead, it prepares data for output, setting off a series of calls to the OS to manage the display of the text.
  2. Processing printf: User Mode to Kernel Mode

    • The runtime library processes printf, identifying format specifiers and preparing raw text to be sent to the output. This initiates a function call, like WriteFile or WriteConsole, which interacts with Windows’ Win32 API—a vast interface linking programs to system resources.
    • Kernel32.dll: Despite its name, Kernel32.dll operates in user mode, providing system access without directly tapping into the kernel. The name is historical; it forwards calls that need kernel resources through well-defined system calls, keeping the security boundary between user and kernel mode intact.
  3. Transitioning with System Calls

    • System calls serve as gates from user mode (where applications operate) to kernel mode (where core OS processes run). Here, Windows consults the System Service Descriptor Table (SSDT) and crosses into kernel mode through a trap instruction (historically int 2Eh, and the faster syscall/sysenter instructions on modern CPUs), ensuring only validated requests reach system resources.
  4. Windows Kernel Processing with ntoskrnl.exe

    • After the system call, ntoskrnl.exe checks permissions and validates parameters to ensure secure execution. This step guarantees the program isn’t making unauthorized access attempts, which fortifies Windows against possible exploits.
  5. Console Management through csrss.exe

    • The Client Server Runtime Subsystem (csrss.exe) manages console windows in user mode. csrss updates the display buffer, which holds the text data ready for rendering. It keeps a two-dimensional array of characters, handling all aspects like color, intensity, and style to maintain the console window’s appearance.
  6. Rendering Text with Graphics Device Interface (GDI)

    • GDI takes over for text rendering within the console, providing essential drawing properties like font and color. The console then relies on the Windows Display Driver Model (WDDM), which bridges communication between software and the graphics hardware.
  7. The GPU and Frame Buffer

    • The GPU receives the data, rendering the text by processing pixel-by-pixel instructions into the frame buffer. This buffer, a region of memory storing display data, holds the image of "Hello World" that will appear on screen. The GPU then sends this image to the display via HDMI or another interface.
  8. From Monitor to Visual Cortex

    • The display presents the text through LED pixels, and from there, light travels to the viewer’s eyes. Visual processing occurs in the brain's visual cortex, ultimately registering "Hello World" in the viewer's consciousness—a culmination of hardware, software, and human biology.

Notable Quotes and Jokes from Dave Plummer:

  • "Imagine the simplest Windows program you could write...but do you know how the magic happens?"
  • "Our journey begins in userland within the heart of your C runtime library."
  • "Calling printf is like sending a messenger on a long cross-country journey from high-level code to low-level bits and back again."
  • "When 'Hello World' pops up on the screen, you’re witnessing the endpoint of a complex, coordinated process..."

2024-11-14 In Prompts We Trust - Jiaranai Keatnuxsuo - CPH DevFest 2024 - YouTube { www.youtube.com }

image-20241116010954387 For those diving into AI applications, especially prompt engineering with generative AI, understanding trust-building and prompt precision is key to leveraging AI effectively. If you’re an AI practitioner, developer, or someone interested in optimizing how language models generate outputs, this guide explores techniques to achieve trustworthy and accurate AI responses. By improving prompt engineering skills, you’ll better navigate the complexities of AI interactions and make your AI applications more reliable, relevant, and valuable.

Core Techniques and Strategies in Prompt Engineering

When working with generative AI, the goal is to create prompts that elicit useful, accurate, and relevant responses. This requires understanding both the technical aspects of prompt engineering and the psychological aspects of trust. Here are key techniques for mastering this process:

The Importance of Trust in AI Outputs

Trust plays a central role in whether users accept or reject AI-generated outputs. As the speaker noted, “Trust is the bridge between the known and the unknown.” For AI to be effective, especially in high-stakes fields like medicine or government applications, users must feel confident in the system’s reliability and fairness. Factors that foster this trust include:

  • Accuracy: Ensuring the output is based on factual information and up-to-date sources.
  • Reliability: Confirming that outputs remain consistent across different scenarios.
  • Personalization: Tailoring responses to individual needs and contexts.
  • Ethics: Adhering to ethical guidelines, avoiding bias, and maintaining cultural sensitivity.

Precision in Prompt Engineering: Essential Techniques

To build trust, prompts need to be structured in a way that maximizes clarity and minimizes ambiguity. Key methods include:

  • Role Prompting: Assigning specific roles, such as “act as a coding assistant,” guides the model in responding within a particular expertise framework. As the speaker shared, “Role prompting is really good in terms of getting it to go find all those billions of web pages it was trained on.”

  • Chain of Thought Prompting: By instructing the model to provide step-by-step reasoning, this method helps in breaking down complex queries and reducing errors. For example, prompting the model to explain each step in a calculation avoids “error piling,” where initial mistakes skew subsequent responses.

  • System Messages: Used primarily by developers, system messages define overarching rules or tones for the AI. These instructions are hidden from the end-user but ensure the model stays consistent, ethical, and aligned with specific guidelines.

Handling AI’s Limitations: Mitigating Hallucinations and Bias

“Hallucination” refers to instances where AI generates plausible-sounding but incorrect information. The speaker explained, “We all think that hallucination is a bug; it’s actually not a bug—it’s a feature, depending on what you’re trying to do.” For applications where accuracy is crucial, employing techniques like Retrieval-Augmented Generation (RAG) helps ground AI responses by referencing reliable external sources.

Optimizing Prompt Parameters for Desired Outputs

Adjusting parameters such as temperature, frequency penalties, and presence penalties can shift AI responses toward creativity or precision. For example, higher temperatures produce more creative, varied outputs, while lower settings make responses more predictable and factual. As the speaker noted, “Every word in a prompt matters,” and these settings let you fine-tune responses to suit specific needs.

Recap & Call to Action

Effective prompt engineering isn’t just about crafting prompts—it’s about understanding trust and precision. Key strategies include role prompting, step-by-step guidance, and adjusting AI parameters to manage reliability and relevance. Remember, the goal is to enhance user trust by ensuring outputs are clear, relevant, and ethically sound. Try implementing these techniques in your next AI project to see how they impact the quality and trustworthiness of your results.

2024-11-14 Gwern Branwen - How an Anonymous Researcher Predicted AI's Trajectory { www.dwarkeshpatel.com }

image-20241114141855768

Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive.

In order to protect Gwern's anonymity, I proposed interviewing him in person, and having my friend Chris Painter voice over his words after. This amused him enough that he agreed.

2024-11-16 Modern & secure adaptive streaming on the Web - Katarzyna Dusza - CPH DevFest 2024 - YouTube { www.youtube.com }

image-20241115230122104

Introduction

In today’s streaming-centric world, the demand for smooth, high-quality, and secure content playback has never been higher. Whether it’s movies, music, or live broadcasts, users expect seamless experiences across multiple devices and network conditions. For developers and media engineers, understanding adaptive streaming and secure content delivery on the web is critical to meet these demands. This guide dives into adaptive streaming, DRM encryption, and decryption processes, providing the essential tools and concepts to ensure secure, efficient media delivery.

Who This Guide Is For

This guide is intended for software engineers, streaming platform developers, and media engineers focused on optimizing web streaming quality and security. Those interested in learning about adaptive bitrate streaming, DRM protocols, and encryption processes will find valuable insights and practical applications.

2024-11-16 Back to Basics: Unit Testing in C++ - Dave Steffen - CppCon 2024 - YouTube { www.youtube.com }

image-20241116002211334

Introduction

In modern software development, unit testing has become a foundational practice, ensuring that individual components of code—specifically functions—perform as expected. For C++ developers, unit testing offers a rigorous approach to quality control, catching bugs early and enhancing code reliability. This article covers the essentials of unit testing in C++, focusing on why and how to apply it effectively in your projects. Whether you’re an experienced developer or a newcomer in C++, this guide will clarify best practices and introduce powerful frameworks to streamline your testing efforts.

Core Concepts and Challenges in Unit Testing

Understanding Unit Testing in C++
Unit testing verifies the smallest unit of code, usually a function, to confirm it works as intended. Over the past decade, it has become essential for software development projects, preventing critical bugs from reaching production and reducing the risk of project failures. While the concept is straightforward, implementing effective unit tests in C++ brings unique challenges, such as determining what to test and choosing the right framework to manage tests efficiently.

Addressing Key Challenges

  1. Framework Selection: C++ offers various testing frameworks like Catch2, which simplifies setting up unit tests and provides structured error reporting.
  2. Consistent Definitions: What qualifies as a “unit test” varies across the industry. This inconsistency can complicate efforts to standardize testing practices.
  3. Testing Complexity: Many projects require extensive, comprehensive testing to cover complex logic, edge cases, and integration points without compromising performance.

Implementing Unit Tests Effectively

Using a Framework
Frameworks like Catch2 streamline test organization, allowing developers to structure tests in isolated, repeatable units. They provide clear output, automated reporting, and enable testing of all components, highlighting each failure without halting the entire test process. The framework choice is critical in ensuring that tests are not only functional but also maintainable and understandable.

Structure and Placement of Tests
The closer tests are to the code they evaluate, the easier they are to maintain. Best practices recommend keeping test files within the same project structure, allowing for easy updates and reducing the chance of disconnects between tests and the code they assess.

Scientific Principles in Unit Testing

Effective unit testing is analogous to scientific experimentation. Each test is an “experiment” designed to verify code behavior by testing specific inputs and expected outcomes. Emphasizing falsifiability ensures that tests are objective and replicable, providing clear indications of any issues. Core scientific principles in testing include:

  1. Repeatability and Replicability: Tests should yield consistent results on repeated runs.
  2. Precision and Accuracy: Tests should be specific and unambiguous, with clear indications of success or failure.
  3. Thorough Coverage: Effective tests cover all code paths and edge cases, ensuring all possible scenarios are addressed.

Valid and Invalid Tests: Ensuring Accuracy

Accurate tests provide clear insights into code functionality. Avoid using the code’s output as its own test standard—known as circular logic—because it cannot reliably reveal bugs. Instead, source test expectations from reliable, external standards or reference calculations to ensure validity and rigor.

White Box vs. Black Box Testing Approaches

Two approaches define C++ unit testing:

  • White Box Testing: Tests directly access private code areas using workarounds like friend classes, allowing tests to examine internal states. However, this method ties tests closely to code structure, making future refactoring more challenging.
  • Black Box Testing: Tests only interact with public interfaces, testing expected behaviors from an end-user perspective. Black Box Testing is recommended for maintainability, as it allows refactoring without breaking tests by focusing on behavior rather than code internals.

Behavior-Driven Development (BDD) and Documentation

BDD guides developers to create tests focused on expected behaviors, providing intuitive documentation. Each test names and validates a specific behavior, such as "a new cup is empty," which makes understanding the code straightforward for future developers.

Designing Readable and Maintainable Tests

Readable and maintainable tests are simple and free of unnecessary complexity. Every unit test should focus on a single behavior, making tests easy to interpret and troubleshoot. This clarity is essential for enabling reviewers to understand test intentions without knowing the code intimately.

Test-Driven Development (TDD) and Its Role in Design

TDD reinforces software design by encouraging developers to write tests before code. Known as the Red-Green-Refactor cycle, TDD begins with writing a failing test (Red), creating code to make the test pass (Green), and refining the code (Refactor). This practice minimizes bugs from the outset, refines design, and builds a stable foundation of tests to verify code during refactoring.

· 36 min read

⌚ Nice watch!​

In this blog post, I'll be sharing a collection of videos with concise content digests. These summaries extract the key points, focusing on the problem discussed, its root cause, and the solution or advice offered. I find this approach helpful because it allows me to retain the core information long after watching the video. This section will serve as a dedicated space for these "good watches," presenting only the most valuable videos and their takeaways in one place.

2024-08-18 Burnout - When does work start feeling pointless? | DW Documentary - YouTube { www.youtube.com }

image-20240817174213143

High-Level Categories and Subcategories of Problems in the Transcript​

1. Workplace Dysfunction​

1.1 Bureaucracy and Sabotage

  • Problem: Office life has adopted tactics of sabotage (00:01:13) similar to a WWII manual, where inefficiency is encouraged through endless meetings, paperless offices, and waiting for decisions in larger meetings.

  • Root Cause: Bureaucratic processes have unintentionally adopted methods once used deliberately to disrupt efficiency.

  • Solution: Recognize the signs of sabotage in office routines and seek to streamline decision-making and reduce unnecessary meetings.

1.2 Administrative Bloat

  • Problem: Administrative jobs (00:03:28) have increased from 25% to 75% of the workforce. These include unnecessary supervisory, managerial, and clerical jobs.

  • Root Cause: Expansion of administrative roles rather than reducing workload with technology.

  • Solution: A shift towards more meaningful roles and reducing bureaucratic excess would help in streamlining operations.

2. Employee Burnout and Mental Health​

2.1 Physical and Emotional Exhaustion

  • Problem: Burnout (00:10:11) manifests in intense physical exhaustion, to the point of difficulty performing basic tasks, and emotional breakdowns.

  • Root Cause: Overwork, perfectionism, and the pressure to perform.

  • Solution: Recognize the early signs of burnout, reduce workloads, and address stress proactively through support and time off.

2.2 Pluralistic Ignorance

  • Problem: Employees feel isolated, believing they are the only ones struggling (00:15:19), while everyone else seems fine.

  • Root Cause: Lack of open communication about stress and burnout in the workplace.

  • Solution: Encourage honest discussions about workplace difficulties to reduce isolation and collective burnout.

3. Managerial and Leadership Failures​

3.1 Misaligned Management Expectations

  • Problem: Many managers are promoted based on tenure or individual performance (00:24:26), rather than leadership skills, leading to poor team management.

  • Root Cause: Promotions based on irrelevant criteria, such as tenure, rather than leadership capability.

  • Solution: Companies need to create pathways for individual contributors to be rewarded without forcing them into management roles.

3.2 Disconnect Between Managers and Employees

  • Problem: Managers often do not engage with employees on a personal level (00:26:32), leading to isolation and poor job satisfaction.

  • Root Cause: Lack of training for managers to build relationships with their teams.

  • Solution: Managers should be trained in emotional intelligence and encouraged to have personal conversations with employees.

4. Corporate Culture and Value Conflicts​

4.1 Corporate Reorganizations

  • Problem: Reorganizations, layoffs, and restructuring cause ongoing stress for employees (00:34:28). People live in fear of losing their jobs despite hard work.

  • Root Cause: Frequent corporate restructuring often lacks a clear purpose beyond satisfying financial analysts or stockholders.

  • Solution: Limit reorganizations to only when necessary and focus on transparent communication to reduce employee anxiety.

4.2 Cynicism Due to Unfair Treatment

  • Problem: When workplaces are seen as unfair (00:46:43), cynicism grows, leading to a toxic environment.

  • Root Cause: Lack of transparency and fairness in company policies and actions, leading to distrust.

  • Solution: Implement fair policies and involve employees in decision-making to reduce feelings of exploitation.

5. Misalignment of Work and Purpose​

5.1 Lack of Value in Work

  • Problem: Employees feel their work lacks social value (00:33:00). Despite hard work, they see no real-world impact or meaning.
  • Root Cause: The economic system rewards meaningless work more than jobs that provide immediate, tangible benefits to society.
  • Solution: Employers should align tasks with broader human values and ensure that workers understand the social impact of their contributions.

Summary of Key Problems and Solutions​

  1. Workplace Dysfunction: Bureaucratic inefficiency, administrative bloat, and unnecessary meetings create a sense of sabotage in modern offices. Solution: Streamline decision-making and reduce bureaucratic roles.
  2. Employee Burnout: Burnout is widespread due to overwork, isolation, and emotional stress. Solution: Acknowledge the signs of burnout, reduce workload, and foster open communication.
  3. Managerial Failures: Many managers lack the skills to lead effectively, causing disengagement and poor team dynamics. Solution: Train managers in leadership and emotional intelligence.
  4. Corporate Culture: Frequent reorganizations and unfair treatment create cynicism and stress among employees. Solution: Ensure fair policies and minimize unnecessary restructurings.
  5. Lack of Meaningful Work: Employees feel disconnected from the social value of their work, seeing it as pointless. Solution: Align work tasks with human values and meaningful contributions.

The most critical issues are employee burnout and the disconnect between management and workers, both of which contribute to widespread dissatisfaction and inefficiency in workplaces. Addressing these through better leadership training, reducing unnecessary work, and improving workplace communication can lead to healthier, more engaged employees.

2024-10-13 How to Spend 14 Days in JAPAN 🇯🇵 Ultimate Travel Itinerary - YouTube { www.youtube.com }

image-20241013110107937

Here’s a streamlined travel plan for visiting some of Japan’s most iconic destinations, focusing on the essential experiences in each place. Follow this itinerary for a mix of history, nature, and food.

1. Shirakawago
Start your journey in Shirakawago, a mountain village known for its traditional Gassho-zukuri farmhouses and heavy winter snowfall. The buildings are arranged facing north to south to minimize wind resistance. Stay overnight in one of the farmhouses to fully experience the town.

  • Don't miss: The House of Pudding, serving Japan’s best custard pudding (2023 winner).

2. Takayama
Head to Takayama, a town in the Central Japan Alps, filled with traditional architecture and a retro vibe. Walk through the Old Town, and visit the Takayama Showa Museum, which perfectly captures Japan in the 1950s and 60s.

  • Must-try food: Hida Wagyu beef is a local specialty, available in street food stalls or restaurants. You can enjoy a stick of wagyu for around 600 yen.

3. Kyoto
Next, visit the cultural capital, Kyoto, and stay in a Machiya townhouse in the Higashiyama district for an authentic experience. Kyoto offers endless shrines and temples to explore.

  • Fushimi Inari Shrine: Famous for its 10,000 red Torii gates leading up Mount Inari. The gates are donated by businesses for good fortune.
  • Kinkakuji (Golden Pavilion): One of Kyoto’s most iconic landmarks, glistening in the sunlight.
  • Tenryuji Temple: A 14th-century Zen temple with a garden and pond, virtually unchanged for 700 years.

4. Nara
Travel to Nara, a smaller city where you can explore the famous Nara Park, home to 1,200 friendly deer. You can bow to the deer, and they'll bow back if they see you have crackers.

  • Todaiji Temple: Visit the 49-foot-tall Buddha and try squeezing through the pillar’s hole (said to grant enlightenment).
  • Yomogi Mochi: Don’t miss this chewy rice cake treat filled with red bean paste, but eat it carefully!

5. Osaka
End your trip in Osaka, known as the nation’s kitchen. Stay near Dotonbori to experience the neon lights and vibrant nightlife.

  • Takoyaki: Grab some fried octopus balls, Osaka’s most famous street food, but be careful—they’re hot!
  • Osaka Castle: Explore this iconic castle, though the interior is a modern museum.

This travel plan covers historical landmarks, must-try local foods, and unique cultural experiences, offering a comprehensive taste of Japan.

2024-10-12 How to Delete Code - Matthew Jones - ACCU 2024 - YouTube { www.youtube.com }

image-20241012110250287

Quote from attendee:

"Code is a cost. Code is not an asset. We should have less of it, not more of it."

Other thoughts on this topic:

Martin Fowler (Agile advocate and software development thought leader) has expressed similar thoughts in his writings. In his blog post "Code as a Liability," he explains that every line of code comes with maintenance costs, and the more code you have, the more resources are needed to manage it over time:

"The more code you have, the more bugs you have. The more code you have, the harder it is to make changes."

John Ousterhout, a professor and computer scientist, has echoed this in his book "A Philosophy of Software Design." He talks about code complexity and how more code often means more complexity, which in turn leads to more problems in the future:

"The most important thing is to keep your code base as simple as possible."

(GPT Summary)

Cppcheck - A tool for static C/C++ code analysis

  1. Dead Code Identification and Removal

    • Importance of removing dead code: Dead code clutters the codebase, adds complexity, and increases maintenance costs. Action: Actively look for dead functions or features that are no longer in use. For example, if a feature has been deprecated but not fully removed, ensure its code is deleted.
    • Techniques for identifying dead code: Use tools like static analysis, manual code review, or testing. Action: Rename the suspected dead function, rebuild, and let the compiler flag errors where the function is still being used.
    • Using static analysis and compilers: These tools help identify unreachable or unused code. Action: Regularly run tools like Cppcheck or the Clang Static Analyzer in your CI pipeline to detect dead code.
    • Renaming functions to detect dead code: A simple way to identify unused code. Action: Rename a function (e.g., myFunction to myFunction_old), and see if it causes errors during the build process. If not, the function is likely dead and can be safely removed.
    • Deleting dead features and their subtle dependencies: Features often have dependencies that may be missed. Action: When removing a dead feature, check for subtle references, such as menu items, command-line flags, or other parts of the system that may still rely on it.
  2. Caution with Large Codebase Changes

    • Taking small, careful steps: Removing too much at once can lead to major issues. Action: Remove a small function or part of the code, test, and repeat. For example, instead of removing an entire module, start with one function.
    • Avoiding aggressive feature removal: Over-removal can cause unexpected failures. Action: Approach code deletion incrementally. Don’t aim to delete an entire feature at once; instead, tease out its components slowly to avoid breaking dependencies.
    • Moving code to reduce scope: If code is not needed at the global scope, move it to a more local context. Action: Move public functions from header files to .cpp files and see if any errors occur. This can help isolate the function’s scope and make it easier to remove later.
    • Risk of breaking builds: Avoid breaking the build with massive deletions. Action: Ensure you take incremental steps, test continuously, and use atomic commits to revert small changes if needed.
  3. Refactoring Approaches

    • Iterative refactoring and deletion: Refactor code in small steps to ensure stability. Action: When removing a dead function, check what other code depends on it. If a function calling it becomes unused, continue refactoring iteratively.
    • Refactoring legacy code: Legacy code can often hide dead functions. Action: Slowly reduce the scope of legacy functions by moving them to lower levels (like .cpp files) to see if their usage drops. If not used anymore, delete them.
    • Using unit tests for refactoring: Ensure that code works after refactoring. Action: Wrap legacy string classes or custom utility functions in unit tests, then replace the core logic with modern STL alternatives. If the tests pass, the old code can be removed safely.
    • Replacing custom features with third-party libraries: Many custom solutions from the past can now be replaced by modern libraries. Action: If you have a custom logger class, consider replacing it with a more standardized and robust library like spdlog.
  4. Working with Tools

    • Using plugins or IDEs: Most modern IDEs can help identify dead code. Action: Use Visual Studio or IntelliJ plugins that flag unreachable code or highlight unused functions.
    • Leveraging Compiler Explorer: Use online tools to isolate and test specific snippets of code. Action: If you can’t refactor in the main codebase, copy the function into Compiler Explorer (godbolt.org) and experiment with it there before making changes.
    • Setting compiler flags: Enable warnings for unreachable or unused code. Action: Use -Wall or -Wextra in GCC or Clang to flag potentially dead code. For example, set -Wextra in your build system to catch unused variables and unreachable code.
    • Running static analysis tools: Integrate tools like Cppcheck into your CI pipeline. Action: Add Cppcheck to Jenkins and run it with --enable=unusedFunction to detect dead functions across multiple translation units (note that this whole-program check is skipped when running in parallel with -j).
  5. Source Control Best Practices

    • Atomic commits: Always break down deletions into small, reversible changes. Action: Commit changes one at a time and with meaningful messages, such as "Deleted unused function myFunction()." This allows you to easily revert just one commit if needed.
    • Small steps and green builds: Ensure the build passes after each commit. Action: Commit your changes, wait for the CI pipeline to return a green build, and only proceed if everything passes.
    • Keeping history in the main branch: Deleting code in a branch risks losing history. Action: Perform deletions in the main branch with proper commit messages. In Git, avoid squashing commits when merging deletions, as this may obscure your work history.
  6. Communication and Collaboration

    • Educating teams about dead code: Not everyone understands the importance of cleaning up dead code. Action: When you find dead code, educate the team by documenting what you’ve removed and why.
    • Communicating when deleting shared code: Deleting code that others may rely on needs consensus. Action: Start a conversation with the team and document the code you intend to delete. Make sure the removal won’t disrupt anyone’s work.
    • Seasonal refactoring: Pick quieter periods like holidays for large-scale refactoring. Action: Plan code cleanups during slower times (e.g., Christmas or summer) when fewer developers are working. For example, take the three days between Christmas and New Year to remove unused code while avoiding merge conflicts.
  7. Handling Legacy Features

    • Addressing dead features tied to legacy systems: These can be tricky to remove without causing issues. Action: Mark features as deprecated first, communicate with stakeholders, and plan their removal after a safe period.
    • Managing end-of-life features carefully: Inform customers and stakeholders before removing any external-facing features. Action: Announce the feature’s end-of-life, allow time for feedback, and only remove the feature after this period (e.g., six months).
  8. Miscellaneous Code Cleanup

    • Removing unnecessary includes: Many includes are added but never removed. Action: Comment out all include statements at the top of a file, then add them back one by one to see which ones are actually needed.
    • Deleting repeated or needless code: Repeated code should be factored into functions or libraries. Action: If you find duplicated code, refactor it into a helper function or a shared library to reduce repetition.
  9. Comments in Code

    • Avoiding inane comments: Comments that explain obvious code operations are distracting. Action: Delete comments like “// increment i by 1” that explain simple logic you can deduce from reading the code.
    • Recognizing risks in outdated comments: Old comments can hide the fact that code has changed. Action: When refactoring, ensure comments are either updated or removed to avoid misleading information about the code’s purpose.
    • Focusing on clean code: Let the code speak for itself. Action: Favor well-written, self-explanatory code that requires minimal commenting. For instance, use descriptive function names like calculateTotal() instead of adding comments like “// This function calculates the total.”
  10. When to Delete Code

    • Timing deletions carefully: Avoid risky deletions right before a release. Action: Plan large code cleanups in advance, and avoid removing any code near a major product release when stability is crucial.
    • Refactoring during quiet periods: Use downtimes, such as post-release, for cleanup. Action: After a major release or during holidays, revisit old tasks marked for deletion.
    • Tracking deletions in the backlog: Use a backlog to schedule code deletions that can’t be done immediately. Action: Create a "technical debt" section in your backlog and record all dead code identified for future cleanup.
  11. Final Thoughts on Refactoring

    • Challenging bad habits: Sometimes teams resist deleting old code. Action: Slowly introduce refactoring practices, starting small to show the benefits.
    • Measuring and recording progress: Keep track of all dead code and document changes. Action: Use tools like Jira to track deletions and improvements in code health.
    • Deleting responsibly: Don’t delete code just for the sake of it. Action: Ensure that deleted code is truly unused and won’t cause issues down the line. For example, test thoroughly before removing any core functionality.
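Items 8 and 9 above can be combined into one minimal C++ sketch. calculateTotal is the talk's own example name; the price list is hypothetical. A duplicated summing loop moves into a single, descriptively named helper, which then needs no explanatory comment:

```cpp
#include <numeric>
#include <vector>

// One shared, descriptively named helper replaces both the duplicated
// loops (item 8) and a "// this sums the prices" comment (item 9):
// the call site calculateTotal(prices) documents itself.
double calculateTotal(const std::vector<double>& prices) {
    return std::accumulate(prices.begin(), prices.end(), 0.0);
}
```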

2024-09-29 Insights From an L7 Meta Manager: Interviews, Onboarding, and Building Trust - YouTube { www.youtube.com }

image-20241013110406428

High-Level Categories of Problems and Solutions

1. Onboarding and Adjustment in New Senior Roles

  • Problem: Senior engineers often struggle when transitioning to new companies, particularly in adjusting to different company cultures and technical structures.

    • Context: Moving between large tech companies like Amazon and Meta presents challenges due to different coding practices (e.g., service-oriented architecture vs. monorepo) and operational structures.

    • Root Cause: A mismatch between previous experiences and new company environments.

    • Solution: Avoid trying to change the new environment immediately. Instead, focus on learning and adapting to the culture. Build trust with the team over six to seven months before attempting major changes.

    • Timestamp: 00:03:30

    • Quote:

      "If you go join another company, you've got a lot to learn, you've got a lot of relationships to build, and you ultimately need to figure out how to generalize your skill set."

2. Building Trust and Relationships in Senior Roles

  • Problem: Senior engineers often fail to invest time in building relationships and trust with new teams.

    • Context: New senior engineers may rush into projects without first establishing rapport with their colleagues.

    • Root Cause: Lack of emphasis on trust-building leads to resistance from teams.

    • Solution: Dedicate the first few months to relationship-building and understanding the team’s dynamics. Don’t attempt large projects right away.

    • Timestamp: 00:05:00

    • Quote:

      "If you rush that process, you're going to be in for a hell of a lot of resistance."

3. Poor Ramp-up Periods for New Engineers

  • Problem: New hires are often not given enough time to ramp up before being evaluated in performance reviews.

    • Context: Lack of structured ramp-up time for new senior hires can lead to poor performance evaluations early on.

    • Root Cause: Managers failing to allocate sufficient time for new employees to learn and adapt.

    • Solution: Managers should provide clear onboarding timelines (6-7 months) for engineers to integrate into teams, with gradual increases in responsibility.

    • Timestamp: 00:09:00

    • Quote:

      "The main thing that we did is just basically give them a budget of some time... to build up their skill set and trust with the team."

4. Mistakes in Adapting to New Cultures

  • Problem: Senior engineers often try to change new environments too quickly, leading to friction.

    • Context: Engineers accustomed to one type of tech stack or organizational process may attempt to enforce old methods in a new setting.

    • Root Cause: Engineers feel uncomfortable in the new culture and attempt to recreate their old environment.

    • Solution: Focus on understanding the reasons behind the new company's practices before suggesting any changes.

    • Timestamp: 00:07:00

    • Quote:

      "Failure mode... is to try to change everything... and that's almost always the wrong approach."

Performance Reviews and Evaluations

5. Misunderstanding the Performance Review Process

  • Problem: Engineers sometimes misunderstand how they are evaluated in performance reviews, especially during their first year.

    • Context: There’s often confusion about how contributions during the onboarding period are assessed.

    • Root Cause: Lack of transparency or communication from managers regarding performance criteria.

    • Solution: Managers must clarify performance expectations and calibration processes, while engineers should ask for regular feedback to stay on track.

    • Timestamp: 00:10:00

    • Quote:

      "Some managers just don't do a good job of actually setting the stage for new hires."

6. Lack of Visibility in Performance Reviews

  • Problem: Senior engineers often fail to showcase their work to the broader team, limiting their visibility in performance reviews.

    • Context: In larger organizations, a single manager is not solely responsible for performance evaluations. Feedback from other team members and leadership is critical.

    • Root Cause: Not socializing work with peers or senior leadership.

    • Solution: Regularly communicate your contributions to multiple stakeholders, not just your direct manager.

    • Timestamp: 00:14:00

    • Quote:

      "Socialize the work that you're doing with those other people... it's even better if you've had a chance to actually talk with them."

7. Taking on Projects Too Early

  • Problem: Engineers may overestimate their readiness and take on large projects too soon after joining a new company.

    • Context: Jumping into big projects without adequate preparation can lead to mistakes and strained relationships.

    • Root Cause: Lack of patience and eagerness to prove oneself.

    • Solution: Focus on smaller tasks and gradually scale up responsibility after establishing trust and familiarity with the environment.

    • Timestamp: 00:06:30

    • Quote:

      "Picking up a massive project as soon as you join a company is probably not the best idea."

Behavioral and Technical Interviews

8. Lack of Depth in Behavioral Interviews

  • Problem: Engineers often struggle with behavioral interviews, particularly when it comes to self-promotion and clearly discussing their impact.

    • Context: Senior engineers may downplay their role in leading large projects, failing to convey their leadership and influence.

    • Root Cause: Engineers often feel uncomfortable talking about their own contributions.

    • Solution: Engineers need to learn how to take credit for their work and articulate the complexity of their projects in interviews.

    • Timestamp: 00:19:00

    • Quote:

      "If you simply talk about your team and you aren't framing this as you driving, it doesn't demonstrate the level that I'm looking for."

9. Over-Reliance on Rehearsed Answers in Design Interviews

  • Problem: In design interviews, engineers sometimes rely on rehearsed answers, which doesn’t showcase their real problem-solving abilities.

    • Context: Instead of improvising, engineers often recite previously learned solutions that don't apply to the specific design problem at hand.

    • Root Cause: A lack of confidence in applying their experience to new problems.

    • Solution: Approach design problems creatively by focusing on unique elements of the task and how past experience can offer novel solutions.

    • Timestamp: 00:17:00

    • Quote:

      "You're really supposed to be scribbling outside the lines."

Key Problems and Their Solutions Summary:

  1. Onboarding and Adjustment: Senior engineers often face challenges adapting to new company cultures. Solution: Focus on learning the environment, and avoid trying to change it too quickly.
  2. Trust and Relationships: Lack of relationship-building leads to resistance. Solution: Take time to build rapport and trust with the team before diving into big projects.
  3. Performance Reviews: New hires may not understand performance expectations. Solution: Ensure transparency in review processes and socialize your contributions with key stakeholders.
  4. Interviews: Engineers may struggle in behavioral and design interviews. Solution: Take ownership of your contributions and avoid relying on rehearsed answers.

These are the most critical problems discussed in the transcript, with clear, actionable advice for each.

2024-09-24 LLMs gone wild - Tess Ferrandez-Norlander - NDC Oslo 2024 - YouTube { www.youtube.com }

Tess Ferrandez-Norlander (works at Microsoft)

image-20240923230759509

image-20240923231052659

2024-09-24 Overview - Chainlit { docs.chainlit.io }

image-20240923232539521

image-20240924110712641

2024-09-24 2406.04369 RAG Does Not Work for Enterprises { arxiv.org }

image-20240924111707846

2024-09-26 Your website does not need JavaScript - Amy Kapernick - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20240926005552838

No JS (amyskapers.dev)

image-20240926235229401

2024-09-27 amykapernick/no_js { github.com }

2024-08-24 Reducing C++ Compilation Times Through Good Design - Andrew Pearcy - ACCU 2024 - YouTube { www.youtube.com }

image-20241013111059590

  1. Precompiled Headers: One of the most effective methods is using precompiled headers (PCH). This technique involves compiling the header files into an intermediate form that can be reused across different compilation units. By doing so, you significantly reduce the need to repeatedly process these files, cutting down the overall compilation time. Tools like CMake can automate this by managing dependencies and ensuring headers are correctly precompiled and reused across builds.

  2. Parallel Compilation: Another approach is parallel compilation. Tools like Make, Ninja, and distcc allow you to compile multiple files simultaneously, taking advantage of multi-core processors. For instance, using the -j flag in make or ninja enables you to specify the number of jobs (i.e., compilation tasks) to run in parallel, which can dramatically reduce the time it takes to compile large projects.

  3. Unity Builds: Unity builds are another technique where multiple source files are compiled together as a single compilation unit. This reduces the overhead caused by multiple compiler invocations and can be particularly useful for large codebases. However, unity builds can introduce some challenges, such as longer error messages and potential name collisions, so they should be used selectively.

  4. Code Optimization: Structuring your code to minimize dependencies can also be highly effective. Techniques include forward declarations, splitting projects into smaller modules with fewer interdependencies, and replacing heavyweight standard library headers with lighter alternatives when possible. By reducing the number of dependencies that need to be recompiled when a change is made, you can significantly decrease compile times.

  5. Caching Compilation Results: Tools like ccache store previous compilation results, which can be reused if the source files haven’t changed. This approach is particularly useful in development environments where small, incremental changes are frequent.

Here is the detailed digest from Andrew Pearcy's talk on "Reducing Compilation Times Through Good Design", along with the relevant project homepages and tools referenced throughout the discussion.

Video Title: Reducing Compilation Times Through Good Design

Andrew Pearcy, an engineering team lead at Bloomberg, outlines strategies for significantly reducing C++ compilation times. The talk draws from his experience of cutting build times from one hour to just six minutes, emphasizing practical techniques applicable in various C++ projects.

Motivation for Reducing Compilation Times

Pearcy starts by explaining the critical need to reduce compilation times. Long build times lead to context switching, reduced productivity, and delays in CI pipelines, affecting both local development experience and time to market. Additionally, longer compilation times make adopting static analysis tools like Clang-Tidy impractical due to the additional overhead. Reducing compilation time also optimizes resource utilization, especially in large companies where multiple machines are involved.

Overview of the C++ Compilation Model

He recaps the C++ compilation model, breaking it down into phases: pre-processing, compilation, and linking. The focus is primarily on the first two stages. Pearcy notes that large header files and unnecessary includes can significantly inflate the amount of code the compiler must process, which in turn increases build time.

Quick Wins: Build System, Linkers, and Compiler Caching

1. Build System:

  • Ninja: Pearcy recommends using Ninja instead of Make for better dependency tracking and faster incremental builds. Ninja was designed for Google's Chromium project and can often be an order of magnitude faster than Make. It utilizes all available cores by default, improving build efficiency.
  • Ninja Documentation: Ninja Build System

2. Linkers:

  • LLD and Mold: He suggests switching to LLD, a faster alternative to the default linker, LD. Mold, a modern linker written by Rui Ueyama (who also worked on LLD), is even faster but consumes more memory; it is open-source on Unix platforms, with a paid version for Mac and Windows.
  • LLD: LLVM Project - LLD
  • Mold: Mold: A Modern Linker

3. Compiler Caching:

  • Ccache: Pearcy strongly recommends Ccache for caching compilation results to speed up rebuilds by avoiding recompilation of unchanged files. This tool can be integrated into CI pipelines to share cache across users, which can drastically reduce build times.
  • Ccache: Ccache

Detailed Techniques to Reduce Build Times

1. Forward Declarations:

  • Pearcy emphasizes the use of forward declarations in headers to reduce unnecessary includes, which can prevent large headers from being included transitively across multiple translation units. This reduces the amount of code the compiler needs to process.
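A minimal sketch of the idea, with hypothetical names (Logger, Config): because the header only takes references and pointers to Config, a forward declaration replaces the include, and files that include logger.h no longer rebuild when config.h changes.

```cpp
#include <string>

// --- logger.h: a forward declaration replaces #include "config.h" ---
class Config;                       // enough for references and pointers

class Logger {
public:
    explicit Logger(const Config& cfg) : cfg_(&cfg) {}
    std::string prefix() const;     // defined where Config is complete
private:
    const Config* cfg_;             // pointer member: declaration suffices
};

// --- config.h: only the implementation file pulls in the full type ---
class Config {
public:
    std::string app_name;
};

// --- logger.cpp: the one translation unit that needs Config's body ---
std::string Logger::prefix() const { return "[" + cfg_->app_name + "] "; }
```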

2. Removing Unused Includes:

  • He discusses the challenge of identifying and removing unused includes, mentioning tools like Include What You Use and Graphviz to visualize dependencies and find unnecessary includes.
  • Include What You Use: Include What You Use
  • Graphviz: Graphviz

3. Splitting Protocol and Implementation:

  • To reduce dependency on large headers, he suggests the Pimpl (Pointer to Implementation) Idiom or creating interfaces that hide the implementation details. This technique helps in isolating the implementation in a single place, reducing the amount of code the compiler needs to process in other translation units.
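A compressed single-file sketch of the Pimpl idiom (Widget and its field are hypothetical; in a real project the two halves live in widget.h and widget.cpp): clients see only a forward-declared Impl, so the implementation's includes never leak into their builds.

```cpp
#include <memory>
#include <string>

// --- widget.h: the public header exposes no implementation details ---
class Widget {
public:
    Widget();
    ~Widget();                    // must be defined where Impl is complete
    std::string name() const;
private:
    struct Impl;                  // forward declaration only
    std::unique_ptr<Impl> impl_;  // clients rebuild only if this header changes
};

// --- widget.cpp: heavy includes and the real state live here ---
struct Widget::Impl {
    std::string name = "widget";  // stand-in for the hidden state
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
std::string Widget::name() const { return impl_->name; }
```

Defining the destructor in the .cpp file, after Impl is complete, is what lets std::unique_ptr work with the incomplete type in the header.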

4. Precompiled Headers (PCH):

  • Using precompiled headers for frequently included but rarely changed files, such as standard library headers, can significantly reduce build times. However, he warns against overusing PCHs as they can lead to diminishing returns if too many headers are precompiled.
  • CMake added support for PCH in version 3.16, allowing easy integration into the build process.
  • CMake Precompiled Headers: CMake Documentation

5. Unity Build:

  • Pearcy introduces Unity builds, where multiple translation units are combined into a single one, reducing redundant processing of headers and improving build times. This technique is particularly effective in reducing overall build times but can introduce issues like naming collisions in anonymous namespaces.
  • CMake provides built-in support for Unity builds, with options to batch files to balance parallelization and memory usage.
  • Unity Build Documentation: CMake Unity Builds

2024-07-26 Turbocharged: Writing High-Performance C# and .NET Code - Steve Gordon - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20241013111239413

Turbocharging Your .NET Code with High-Performance APIs

Steve, a Microsoft MVP and engineer at Elastic, discusses various high-performance APIs in .NET that can optimize application performance. The session covers measuring and improving performance, focusing on execution time, throughput, and memory allocations.

Performance in Application Code

Performance is measured by how quickly code executes, the throughput (how many tasks an application can handle in a given timeframe), and memory allocations. High memory allocations can lead to frequent garbage collections, impacting performance. Steve emphasizes that performance optimization is contextual, meaning not every application requires the same level of optimization.

Optimization Cycle

The optimization cycle involves measuring current performance, making small changes, and re-measuring to ensure improvements. Tools like Visual Studio profiling, PerfView, and JetBrains products are useful for profiling and measuring performance. BenchmarkDotNet is highlighted for micro-benchmarking, providing precise measurements by running benchmarks multiple times to get accurate data.

High-Performance Code Techniques

  1. Span<T>: A type that provides a read/write view over contiguous memory, allowing for efficient slicing and memory operations. It is highly efficient with constant-time operations for slicing.
  2. Array Pool: A pool for reusing arrays to avoid frequent allocations and deallocations. Using the ArrayPool<T>.Shared pool allows for efficient memory reuse, reducing short-lived allocations.
  3. System.IO.Pipelines: Optimizes reading and writing streams by managing buffers and minimizing overhead. It is particularly useful in scenarios like high-performance web servers.
  4. System.Text.Json: A high-performance JSON API introduced in .NET Core 3. It includes low-level Utf8JsonReader and Utf8JsonWriter for zero-allocation JSON parsing, as well as higher-level APIs for serialization and deserialization.
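The slicing idea behind Span&lt;T&gt; can be made concrete in C++ (the language of the other talks in this digest). This hand-rolled IntView is a hypothetical miniature of what .NET's Span&lt;int&gt; (or C++20's std::span) does: a view is just a pointer plus a length, so slicing only adjusts those two numbers, with no copy.

```cpp
#include <cstddef>

// A view is nothing but a pointer and a length; it owns no memory.
struct IntView {
    const int* data;
    std::size_t size;

    // "Slicing is really just changing the view over an existing block
    // of memory": shift the pointer, shrink the length. O(1), no copy.
    IntView slice(std::size_t offset, std::size_t count) const {
        return IntView{data + offset, count};
    }
};

int sum(IntView v) {
    int total = 0;
    for (std::size_t i = 0; i < v.size; ++i) total += v.data[i];
    return total;
}
```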

Examples and Benchmarks

Steve presents examples of using these APIs in real-world scenarios, demonstrating significant performance gains. For instance, using Span<T> and ArrayPool in a method that processes arrays and messages led to reduced execution time and memory allocations. Switching to System.IO.Pipelines and System.Text.Json resulted in similar improvements.

"Slicing is really just changing the view over an existing block of memory... it's a constant time, constant cost operation."

"Measure your code, don’t assume, don’t make assumptions with benchmarks, it’s dangerous."

Conclusion

Optimizing .NET code with high-performance APIs requires careful measurement and iterative improvements. While not all applications need such optimizations, those that do can benefit from significant performance gains. Steve concludes by recommending the book "Pro .NET Memory Management" for a deeper understanding of memory management in .NET.

2024-07-07 My Spiciest Take On Tech Hiring - Theo (t3.gg) - YouTube { www.youtube.com }

2024-07-07 Haskell for all: My spiciest take on tech hiring { www.haskellforall.com }

image-20240707101322999

High-Level Categories of Problems

  1. Tech Hiring Process Issues

    • Too Many Interviews: Problem: Candidates face multiple rounds of interviews (up to seven), causing frustration and inefficiency. Many find it counterproductive to go through so many technical interviews. Root Cause: Overly complex hiring processes that assume more interviews lead to better candidates. Advice: Implement a streamlined process with just one technical interview and one non-technical interview, each lasting no more than one hour. Long interview processes are unnecessary and may filter out good candidates.

    • Interview Redundancy: Problem: The same types of technical questions are asked repeatedly across different interviews, leading to duplication. Root Cause: Lack of coordination among interviewers and reliance on similar types of technical questions. Advice: Ensure each interviewer asks unique, relevant questions and does not rely on others to gather the same information. Interviewers should bear ultimate responsibility for gathering critical data.

    • Bias in Hiring: Problem: Interview processes are biased because hiring managers may already have preferred candidates (referrals, strong portfolios) before the process begins. Root Cause: Pre-existing relationships with candidates or prior work experience influence decisions. Advice: Avoid dragging out the process to mask biases—shorter, efficient interviews can make the bias more visible but manageable. Long processes don't necessarily filter out bias.

    • Long Interview Processes Favor Privilege: Problem: Prolonged interview panels select for candidates who can afford to take time off work, favoring those from more privileged backgrounds. Root Cause: Candidates from less privileged backgrounds cannot afford to engage in drawn-out interviews. Advice: Shorten the interview length and focus on relevant qualifications. Ensure accessibility for all candidates by keeping the process simple.

  2. Interview Process Structure

    • Diffusion of Responsibility: Problem: In group interview settings, responsibility for hiring decisions is diffused, leading to poor or delayed decision-making. Root Cause: No single person feels accountable for making the final decision. Advice: Assign ownership of decisions by giving specific interviewers responsibility for crucial aspects of the process. This reduces the likelihood of indecision and delayed outcomes.

    • Hiring Based on Team Fit vs. Technical Ability: Problem: Emphasis on technical abilities often overshadows the importance of team compatibility. Root Cause: Focus on technical skills without considering cultural and interpersonal dynamics within the team. Advice: Ensure that interviews assess not only technical competence but also how well candidates fit into the team dynamic. Incorporate group discussions or casual settings (e.g., lunch meetings) to gauge team vibe.

    • Ambiguity in Interviewer Opinions: Problem: Some interviewers avoid committing to clear opinions about candidates, preferring neutral stances. Root Cause: Lack of confidence or fear of being overruled by the majority. Advice: Use a rating system (e.g., 1–4 scale) that forces interviewers to choose a strong opinion, either in favor or against a candidate.

  3. Candidate Experience and Behavior

    • Negative Behavior in Interviews: Problem: Candidates who perform well technically but exhibit unprofessional behavior (e.g., showing up late or hungover) can still pass through the hiring process. Root Cause: Strong technical performance may overshadow concerns about professionalism and reliability. Advice: Balance technical performance with non-technical evaluations. Weigh behaviors such as punctuality and professional demeanor just as heavily as coding skills.

    • Take-Home Tests and Challenges: Problem: Some candidates view take-home challenges as extra, unnecessary work, while others see them as a chance to showcase skills. Root Cause: Different candidates have different preferences and responses to technical assessments. Advice: Offer take-home tests as an option, but don't make them mandatory. Adjust the evaluation method based on candidate preferences to ensure both parties feel comfortable.

  4. Systemic Issues in the Hiring Process

    • Healthcare Tied to Jobs: Problem: In the U.S., job-based healthcare forces candidates to accept positions they might not want or complicates transitions between jobs. Root Cause: The healthcare system is tied to employment, making job transitions risky. Advice: There's no direct solution provided here, but highlighting the need for systemic changes in healthcare could make the hiring process more equitable.

    • Lack of Feedback to Candidates: Problem: Many companies avoid giving feedback to candidates after interviews, leaving them unsure of their performance. Root Cause: Fear of legal liability or workload concerns. Advice: Provide constructive feedback to candidates, even if they aren't selected. It helps build long-term relationships and contributes to positive company reputation. Some of the best connections come from transparent feedback post-interview.

  5. Hiring for Senior Positions

    • Senior Candidates Have Low Tolerance for Long Processes: Problem: Highly qualified senior candidates are more likely to decline long and drawn-out interview processes. Root Cause: Senior candidates, due to their experience and expertise, are less willing to tolerate inefficient processes. Advice: Streamline the process for senior roles. Keep interviews short, efficient, and focused on relevant discussions. High-level candidates prefer concise assessments over lengthy ones.
  6. Hiring on Trust vs. Formal Interviews

    • Hiring Based on Relationships (00): Problem: Engineers with pre-existing relationships or referrals are more likely to be hired than those without, bypassing formal interviews. Root Cause: Prior work relationships build trust, which can overshadow the need for formal vetting. Advice: Trust-based hiring should be encouraged when there is prior working experience with the candidate. However, make efforts to balance trust with fairness by including formal evaluations where necessary.

Key Problems Summary

  • The length and complexity of the hiring process discourage many strong candidates, particularly senior-level applicants. Simplifying the process to two interviews (one technical and one non-technical) is recommended.
  • Bias in the hiring process, particularly when managers have pre-existing relationships with candidates, leads to unfair outcomes.
  • Long interview processes favor privileged candidates who can afford to take time off, disadvantaging those from less privileged backgrounds.
  • Providing feedback to candidates is crucial for building long-term relationships and ensuring a positive hiring experience, yet it's often avoided due to legal concerns.
  • Team fit is just as important as technical skills, and companies should incorporate group interactions to assess interpersonal dynamics.

Most Critical Issues and Solutions

  • Problem: Too many technical interviews create frustration and inefficiency. Solution: Use just one technical and one non-technical interview, and assign responsibility for gathering all relevant information during these sessions.

  • Problem: Bias due to pre-existing relationships. Solution: Shorten the process to expose bias more clearly and rely on trust-based hiring only when balanced with formal interviews.

  • Problem: Lack of feedback to candidates. Solution: Provide constructive feedback to help candidates improve and establish long-term professional relationships.

¡ 15 min read

Good Reads​

2024-08-27 Four Lessons from 2023 That Forever Changes My Software Engineering Career | by Yifeng Liu | Medium { medium.com }

This past year, four key lessons transformed my approach to software engineering.

First, I learned that execution is as important as the idea itself. Inspired by Steve Jobs, who highlighted the gap between a great idea and a great product, I focused on rapid prototyping to test feasibility and internal presentations to gather feedback. I kept my manager informed to ensure we were aligned and honest about challenges.

Second, I realized that trust and credibility are fragile but crucial. As a senior engineer, I'm expected to lead by solving complex issues and guiding projects. I saw firsthand how failing to execute or pushing unrealistic timelines could quickly erode trust within my team.

The third lesson was about the importance of visibility. I understood that hard work could go unnoticed if I didn’t make it visible. I began taking ownership of impactful projects and increased my public presence through presentations and updates. I also honed my critical thinking to offer valuable feedback and identify improvement opportunities.

Finally, I learned to focus on changing myself rather than others. I used to try to change my team or company, but now I realize it’s more effective to work on my growth and influence others through my actions. Understanding the company’s culture and my colleagues' aspirations helped me align my efforts with my career goals.

These lessons have reshaped my career and how I approach my role as an engineer.

2024-08-28 Just use fucking paper, man - Andy Bell { andy-bell.co.uk }

27th of August 2024

I’ve tried Notion, Obsidian, Things, Apple Reminders, Apple Notes, Jotter and endless other tools to keep me organised and sure, Notion has stuck around the most because we use it for client stuff, but for todo lists, all of the above are way too complicated.

I’ve given up this week and gone back to paper and a pencil and I feel unbelievably organised and flexible, day-to-day. It’s because it’s simple. There’s nothing fancy. No fancy pen or anything like that either. Just a notebook and a pencil.

I’m in an ultra busy period right now so for future me when you inevitably get back to this situation: just. use. fucking. paper.

2024-08-29 The slow evaporation of the free/open source surplus – Baldur Bjarnason { www.baldurbjarnason.com }

I've been thinking a lot about the state of Free and Open Source Software (FOSS) lately. My concern is that FOSS thrives on surplus—both from the software industry and the labor of developers. This surplus has been fueled by high margins in the tech industry, easy access to investment, and developers who have the time and financial freedom to contribute to FOSS projects. However, I'm worried that these resources are drying up.

High interest rates are making investments scarcer, particularly for non-AI software, which doesn't really support open-source principles. The post-COVID economic correction is leading to layoffs and higher coder unemployment, which means fewer people have the time or incentive to contribute to FOSS. OSS burnout is another issue, with fewer fresh developers stepping in to replace those who are exhausted by maintaining projects that often lack supportive communities.

Companies are also cutting costs and questioning the value of FOSS. Why invest in open-source projects when the return on investment is uncertain? The rise of LLM-generated code is further disconnecting potential contributors from FOSS projects, weakening the communities that sustain them.

My fear is that FOSS is entering a period of decline. As the industry and labor surpluses shrink, FOSS projects might suffer from neglect, security issues, or even collapse. While some of this decline might be a necessary correction, it's hard not to worry about the future of the FOSS ecosystem, especially when we don't know which parts are sustainable and which are not.

2024-08-29 Why does getting a job in tech suck right now? (Is it AI?!?) – r y x, r { ryxcommar.com }

image-20240915141710361

2024-08-31 Using Fibonacci Numbers to Convert from Miles to Kilometers and Vice Versa { catonmat.net }

Take two consecutive Fibonacci numbers, for example 5 and 8.

And you're done converting. No kidding – there are 8 kilometers in 5 miles. To convert back just read the result from the other end – there are 5 miles in 8 km!

Another example.

Let's take two consecutive Fibonacci numbers 21 and 34. What this tells us is that there are approximately 34 km in 21 miles and vice versa. (The exact answer is 33.79 km.)

Mind = blown. Completely.

2024-09-11 The Art of Finishing | ByteDrum { www.bytedrum.com }

The article explores the challenge of unfinished projects and the cycle of starting with enthusiasm but failing to complete them. The author describes this as the Hydra Effect—each task completed leads to new challenges. Unfinished projects feel full of potential, but fear of imperfection or even success prevents many developers from finishing.

"An unfinished project is full of intoxicating potential. It could be the next big thing... your magnum opus."

However, leaving projects incomplete creates mental clutter, making it hard to focus and learn key lessons like optimization and refactoring. Finishing is crucial for growth, both technically and professionally.

"By not finishing, you miss out on these valuable learning experiences."

To break this cycle, the author offers strategies: define "done" early, focus on MVP (Minimum Viable Product), time-box projects, and separate ideation from implementation. Practicing small completions and using accountability are also recommended to build the habit of finishing.

The article emphasizes that overcoming the Hydra Effect requires discipline but leads to personal and professional growth.

2024-09-11 Improving Application Availability: The Basics | by Mario Bittencourt | SSENSE-TECH | Aug, 2024 | Medium { medium.com }

In this article, I introduce the essentials of application availability and how to approach high availability. High availability is measured by uptime percentage. Achieving 99.999% availability (five nines) means accepting roughly five minutes of downtime per year, which requires automation to detect and fix issues fast.

I discuss redundancy as a key strategy to improve availability by using backups for connectivity, compute resources, and persistence. If one component fails, the system switches to a secondary option. However, redundancy adds both cost and complexity. More components require advanced tools, like load balancers, to manage failures, but these solutions introduce their own reliability concerns.

Not every part of an application needs the same availability target. In an e-commerce system, for instance, I categorize components into tiers:

  • T1 (website and payments) must stay available at all times.
  • T2 (order management) allows some downtime.
  • T3 (fulfillment) can tolerate longer outages.
  • T4 (ERP) has the least strict requirements.

"Your goal is to perform an impact analysis and classify each component in tiers according to its criticality and customer impact."

By setting different availability targets for each tier, you can reduce costs while focusing on the most important parts of your system.

"All strategies to improve availability come with trade-offs, usually involving higher costs and complexity."

This sets the stage for future discussions on graceful degradation, asynchronous processing, and disaster recovery strategies.

2024-09-12 A Bunch of Programming Advice I’d Give To Myself 15 Years Ago Marcus' Blog { mbuffett.com }

If the team is constantly tripping over a recurring issue, it's crucial to fix the root cause, rather than repeatedly patching symptoms. The author mentions, "I decided to fix it, and it took ten minutes to update our subscription layer to call subscribers on the main thread instead," thereby removing the cause of crashes, streamlining the codebase, and reducing mental overhead.

Pace versus quality must be balanced based on context. In low-risk environments, it's okay to ship faster and rely on guardrails; in high-risk environments (like handling sensitive data), quality takes precedence. "You don’t need 100% test coverage or an extensive QA process, which will slow down the pace of development," when bugs can be fixed easily.

Sharpening your tools is always worth it. Being efficient with your IDE, shortcuts, and dev tools will pay off over time. Fast typing, proficiency in the shell, and knowing browser tools matter. Although people warn against over-optimizing configurations, "I don’t think I’ve ever seen someone actually overdo this."

When something is hard to explain, it's likely incidental complexity. Often, complexity isn't inherent but arises from the way things are structured. If you can't explain why something is difficult, it’s worth simplifying. The author reflects that "most of the complexity I was explaining was incidental... I could actually address that first."

Solve bugs at a deeper level, not just by patching the immediate issue. If a React component crashes due to null user data, you could add a conditional return, but it’s better to prevent the state from becoming null in the first place. This creates more robust systems and a clearer understanding of how things work.

Investigating bugs should include reviewing code history. The author discovered a memory leak after reviewing commits, realizing the issue stemmed from recent code changes. Git history can be essential for debugging complex problems that aren't obvious through logs alone.

Write bad code when needed to get feedback. Perfect code takes too long and may not be necessary in every context. It's better to ship something that works, gather feedback, and refine it. "If you err on the side of writing perfect code, you don’t get any feedback."

Make debugging easier by building systems that streamline the process. Small conveniences like logging state diffs after every update or restricting staging environment parallelism to 1 can save huge amounts of time. The author stresses, "If it’s over 50%, you should figure out how to make it easier."

Working on a team means asking questions when needed. Especially in the first few months, it's faster to ask a coworker for a solution than spending hours figuring it out solo. Asking isn’t seen as a burden, so long as it’s not something trivial that could be self-solved in minutes.

Maintaining a fast shipping cadence is critical in startups and time-sensitive projects. Speed compounds over time, and improving systems, reusable patterns, and processes that support fast shipping is essential. "Shipping slowly should merit a post-mortem as much as breaking production does."

This article reaction and discussion on youtube:

2024-09-12 Theo Unexpected Lessons I've Learned After 15 Years Of Coding - YouTube { www.youtube.com }

2024-09-14 We need to talk about "founder mode" - YouTube { www.youtube.com }

"Stop hiring for the things you don't want to do. Hire for the things you love to do so you're forced to deal with the things you don't want to do.

This is some of the best advice I've been giving lately. Early on, I screwed up by hiring an editor because I didn't like editing. Since I didn't love editing, I couldn't be a great workplace for an editor—I couldn't relate to them, and they felt alone. My bar for a good edit was low because I just wanted the work off my plate.

But when I started editing my own stuff, I got pretty good and actually started to like it. Now, I genuinely think I'll stop recording videos before I stop editing them. By doing those things myself, I ended up falling in love with them.

Apply this to startups: If you're a founder who loves coding, hire someone to do it so you can't focus all your time on it. Focus on the other crucial parts of your business that need your attention.

Don't make the mistake of hiring to avoid work. Embrace what you love, and let it force you to grow in areas you might be neglecting."

Original post: 2024-09-14 Founder Mode { paulgraham.com }

Theo

Breaking Through Organizational Barriers: Connect with the Doers, Not Just the Boxes

In large organizations, it's common to encounter roadblocks where teams are treated as "black boxes" on the org chart. You might hear things like, "We can't proceed because the XYZ team isn't available," or "They need more headcount before tackling this."

Here's a strategy that has made a significant difference for me:

Start looking beyond the org chart and reach out directly to the individuals who are making things happen.

How to find them?

  • Dive into GitHub or project repositories: See who's contributing the most code or making significant updates.
  • Identify the most driven team members: Every team usually has someone who's more passionate and proactive.
  • Reach out and build a connection: They might appreciate a collaborative partner who shares their drive.

Why do this?

  • Accelerate Progress: Bypass bureaucratic delays and get projects moving.
  • Build Valuable Relationships: These connections can lead to future opportunities, referrals, or even partnerships.
  • Expand Your Influence: Demonstrating initiative can set you apart and open doors within the organization.

Yes, there are risks. Your manager might question why you're reaching out independently, or you might face resistance. But consider the potential rewards:

  • Best Case: You successfully collaborate to solve problems, driving innovation and making a real impact.
  • Worst Case: Even if you face pushback, you've connected with someone valuable. If either of you moves on, that relationship could lead to exciting opportunities down the line.

2024-09-15 Why Scrum is Stressing You Out - by Adam Ard { rethinkingsoftware.substack.com }

📌 Sprints never stop. Sprints in Scrum are constant, unlike the traditional Waterfall model where high-pressure periods are followed by low-pressure times. Sprints create ongoing, medium-level stress, which is more damaging long-term than short-term, intense stress. Long-term stress harms both mental and physical health. Advice: Build in deliberate breaks between sprints. Allow teams time to recover, reflect, and recalibrate before the next sprint. Introduce buffer periods for less intense work or creative activities.

🔖 Sprints are involuntary. Sprints in a Scrum environment are often imposed on developers, leaving them no control over the process or duration. Lack of autonomy leads to higher stress, similar to studies where forced activity triggers stress responses in animals. Control over work processes can reduce stress and improve job satisfaction. Advice: Involve the team in the sprint planning process and give them a say in determining task durations, sprint length, and workload. Increase autonomy to reduce stress by tailoring the Scrum process to fit the team’s needs rather than rigidly following preset rules.

😡 Sprints neglect key supporting activities. Scrum focuses on completing tasks within sprint cycles but doesn’t allocate enough time for essential preparatory activities like brainstorming and research. The lack of preparation time creates stress and leads to suboptimal work because thinking and doing cannot be entirely separated. Advice: Allocate time within sprints for essential preparation, brainstorming, and research. Set aside dedicated periods for planning, learning, or technical exploration, rather than expecting full-time execution during the sprint.

🍷 Most Scrum implementations devolve into “Scrumfall.” Scrum is often mixed with Waterfall-like big-deadline pressures, which cancel out the benefits of sprints and increase stress. When major deadlines approach, Scrum practices are suspended, leading to a high-stress environment combining the worst aspects of both methodologies. Advice: Resist combining Waterfall-style big deadlines with Scrum. Manage stakeholder expectations upfront and break larger goals into smaller deliverables aligned with sprint cycles. Stick to Agile principles and avoid falling back into the big-bang, all-at-once delivery mode.

2024-09-15 HOW TO SUCCEED IN MRBEAST PRODUCTION (leaked PDF) { simonwillison.net }

The MrBeast definition of A, B and C-team players is one I haven’t heard before:

A-Players are obsessive, learn from mistakes, coachable, intelligent, don’t make excuses, believe in Youtube, see the value of this company, and are the best in the goddamn world at their job. B-Players are new people that need to be trained into A-Players, and C-Players are just average employees. […] They arn’t obsessive and learning. C-Players are poisonous and should be transitioned to a different company IMMEDIATELY. (It’s okay we give everyone severance, they’ll be fine).

I’m always interested in finding management advice from unexpected sources. For example, I love The Eleven Laws of Showrunning as a case study in managing and successfully delegating for a large, creative project.

Newsletters​

2024-09-11 The web's clipboard { newsletter.programmingdigest.net }

2024-09-12 JavaScript Weekly Issue 704: September 12, 2024 { javascriptweekly.com }

¡ 15 min read

The Talk​

2024-09-01 Investigating Legacy Design Trends in C++ & Their Modern Replacements - Katherine Rocha C++Now 2024 - YouTube { www.youtube.com }

Katherine Rocha

image-20240901150003068

GPT generated content (close to the talk content)​

This digest is a comprehensive breakdown of the talk, which covered various advanced C++ programming techniques and concepts. Below, each point from the talk is identified and described in detail, followed by relevant C++ code examples to illustrate the discussed concepts.


1. SFINAE and Overload Resolution​

The talk begins with a discussion on the use of SFINAE (Substitution Failure Is Not An Error) and its role in overload resolution. SFINAE is a powerful C++ feature that allows template functions to be excluded from overload resolution based on specific conditions, enabling more precise control over which function templates should be used.

Key Points:

  • SFINAE is used to selectively disable template instantiation based on the properties of template arguments.
  • Overload resolution in C++ allows for multiple functions or operators with the same name to be defined, as long as their parameters differ. The compiler decides which function to call based on the arguments provided.

C++ Example:

#include <type_traits>
#include <iostream>

// Template function enabled only for arithmetic types using SFINAE
template <typename T>
typename std::enable_if<std::is_arithmetic<T>::value, T>::type
add(T a, T b) {
    return a + b;
}

// Overload for non-arithmetic types is explicitly deleted
template <typename T>
typename std::enable_if<!std::is_arithmetic<T>::value, T>::type
add(T a, T b) = delete;

int main() {
    std::cout << add(5, 3) << std::endl;  // OK: int is arithmetic
    // std::cout << add("Hello", "World"); // Error: non-arithmetic overload is deleted
    return 0;
}

2. Compile-Time Error Messages​

The talk transitions into how to improve compile-time error messages using static_assert and custom error handling in templates. By using these techniques, developers can provide clearer error messages when certain conditions are not met during template instantiation.

Key Points:

  • Use static_assert to enforce conditions at compile time, ensuring that the program fails to compile if certain criteria are not met.
  • Improve the readability of error messages by providing meaningful feedback directly in the code.

C++ Example:

#include <iostream>
#include <type_traits>

template <typename T>
void check_type() {
    static_assert(std::is_integral<T>::value, "T must be an integral type");
}

int main() {
    check_type<int>();       // OK
    // check_type<double>(); // Compile-time error: T must be an integral type
    return 0;
}

3. Concepts in C++20​

The talk explores Concepts, a feature introduced in C++20, which allows developers to specify constraints on template arguments more succinctly and expressively compared to SFINAE. Concepts help in making templates more readable and the error messages more comprehensible.

Key Points:

  • Concepts define requirements for template parameters, making templates easier to understand and use.
  • Concepts improve the clarity of both template definitions and error messages.

C++ Example:

#include <concepts>
#include <iostream>
#include <type_traits>

template <typename T>
concept Arithmetic = std::is_arithmetic_v<T>;

template <Arithmetic T>
T add(T a, T b) {
    return a + b;
}

int main() {
    std::cout << add(5, 3) << std::endl;  // OK: int is arithmetic
    // std::cout << add("Hello", "World"); // Error: concept 'Arithmetic' not satisfied
    return 0;
}

4. Polymorphism and CRTP​

The talk covers polymorphism and the Curiously Recurring Template Pattern (CRTP), a technique where a class template is derived from itself. CRTP allows for static polymorphism at compile time, which can offer performance benefits over dynamic polymorphism.

Key Points:

  • Runtime Polymorphism: Achieved using inheritance and virtual functions, but comes with runtime overhead due to the use of vtables.
  • CRTP: A pattern that enables polymorphism at compile-time, avoiding the overhead of vtables.

C++ Example:

#include <iostream>

// CRTP Base class
template <typename Derived>
class Base {
public:
    void interface() {
        static_cast<Derived*>(this)->implementation();
    }

    static void staticInterface() {
        Derived::staticImplementation();
    }
};

class Derived1 : public Base<Derived1> {
public:
    void implementation() {
        std::cout << "Derived1 implementation" << std::endl;
    }

    static void staticImplementation() {
        std::cout << "Derived1 static implementation" << std::endl;
    }
};

class Derived2 : public Base<Derived2> {
public:
    void implementation() {
        std::cout << "Derived2 implementation" << std::endl;
    }

    static void staticImplementation() {
        std::cout << "Derived2 static implementation" << std::endl;
    }
};

int main() {
    Derived1 d1;
    d1.interface();
    Derived1::staticInterface();

    Derived2 d2;
    d2.interface();
    Derived2::staticInterface();

    return 0;
}

5. Deducing this in C++23​

The discussion moves to deducing this, a feature introduced in C++23 that allows for more expressive syntax when working with member functions, particularly in the context of templates.

Key Points:

  • Deducing this enables more flexible and readable template code involving member functions.
  • This feature simplifies the syntax when this needs to be deduced as part of template metaprogramming.

C++ Example:

#include <iostream>

class MyClass {
public:
    // C++23 explicit object parameter ("deducing this"): the object
    // parameter is spelled out, and its type (including const and
    // reference qualification) is deduced at each call site, so one
    // template can replace four const/ref-qualified overloads.
    template <typename Self>
    void print(this Self&& self) {
        std::cout << "MyClass instance" << std::endl;
    }
};

int main() {
    MyClass obj;
    obj.print();    // Self deduced as MyClass&

    const MyClass cobj{};
    cobj.print();   // Self deduced as const MyClass&
    return 0;
}

6. Design Methodologies: Procedural, OOP, Functional, and Data-Oriented Design​

The final section of the talk compares various design methodologies including Procedural, Object-Oriented Programming (OOP), Functional Programming (FP), and Data-Oriented Design (DOD). Each paradigm has its strengths and use cases, and modern C++ often blends these methodologies to achieve optimal results.

Key Points:

  • Procedural Programming: Focuses on a sequence of steps or procedures to accomplish tasks.
  • Object-Oriented Programming (OOP): Organizes code around objects and data encapsulation.
  • Functional Programming (FP): Emphasizes immutability and function composition.
  • Data-Oriented Design (DOD): Focuses on data layout in memory for performance, often used in game development.

C++ Example (Object-Oriented):

#include <iostream>
#include <vector>

class Telemetry {
public:
    virtual ~Telemetry() = default;  // virtual destructor: required when deleting through a base pointer
    virtual void process() const = 0;
};

class InstantaneousEvent : public Telemetry {
public:
    void process() const override {
        std::cout << "Processing instantaneous event" << std::endl;
    }
};

class LongTermEvent : public Telemetry {
public:
    void process() const override {
        std::cout << "Processing long-term event" << std::endl;
    }
};

void processEvents(const std::vector<Telemetry*>& events) {
    for (const auto& event : events) {
        event->process();
    }
}

int main() {
    std::vector<Telemetry*> events = { new InstantaneousEvent(), new LongTermEvent() };
    processEvents(events);

    for (auto event : events) {
        delete event;
    }

    return 0;
}

C++ Example (Functional Programming):

#include <iostream>
#include <vector>
#include <algorithm>

struct Event {
    int time;
    bool isLongTerm;
};

void processEvents(const std::vector<Event>& events) {
    std::for_each(events.begin(), events.end(), [](const Event& event) {
        if (event.isLongTerm) {
            std::cout << "Processing long-term event at time " << event.time << std::endl;
        } else {
            std::cout << "Processing instantaneous event at time " << event.time << std::endl;
        }
    });
}

int main() {
    std::vector<Event> events = { {1, false}, {2, true}, {3, false} };
    processEvents(events);
    return 0;
}

C++ Example (Data-Oriented Design):

#include <iostream>
#include <vector>

struct TelemetryData {
    std::vector<int> instantaneousTimes;
    std::vector<int> longTermTimes;
};

void processInstantaneous(const std::vector<int>& times) {
    for (int time : times) {
        std::cout << "Processing instantaneous event at time " << time << std::endl;
    }
}

void processLongTerm(const std::vector<int>& times) {
    for (int time : times) {
        std::cout << "Processing long-term event at time " << time << std::endl;
    }
}

int main() {
    TelemetryData data = {
        { 1, 3, 5 },  // instantaneousTimes
        { 2, 4, 6 }   // longTermTimes
    };

    processInstantaneous(data.instantaneousTimes);
    processLongTerm(data.longTermTimes);

    return 0;
}

GPT generated content (with a bit of "hallucinations")​

Here's the expanded digest with essential text and detailed code examples for each point, focusing on modern replacements for legacy C++ practices.


Legacy Pointers vs. Smart Pointers

Legacy Practice: Use of raw pointers, manual memory management, and explicit new and delete. This can lead to memory leaks, dangling pointers, and undefined behavior.

Modern Replacement: Use smart pointers like std::unique_ptr, std::shared_ptr, and std::weak_ptr to manage dynamic memory automatically.

// Legacy code
class LegacyClass {
    int* data;
public:
    LegacyClass() { data = new int[10]; }
    ~LegacyClass() { delete[] data; }
};

// Modern code
#include <memory>

class ModernClass {
    std::unique_ptr<int[]> data;
public:
    ModernClass() : data(std::make_unique<int[]>(10)) {}
    // Destructor not needed, as std::unique_ptr handles memory automatically
};

Key Insight: Using smart pointers reduces the need for manual memory management, preventing common errors like memory leaks and dangling pointers.


C-Style Arrays vs. STL Containers

Legacy Practice: Use of C-style arrays, which require manual memory management and do not provide bounds checking.

Modern Replacement: Use std::vector for dynamic arrays or std::array for fixed-size arrays. These containers handle memory management internally and offer bounds checking.

// Legacy code
int arr[10];
for (int i = 0; i < 10; ++i) {
    arr[i] = i * 2;
}

// Modern code
#include <vector>
#include <array>

std::vector<int> vec(10);
for (int i = 0; i < 10; ++i) {
    vec[i] = i * 2;
}

std::array<int, 10> arr2;
for (int i = 0; i < 10; ++i) {
    arr2[i] = i * 2;
}

Key Insight: STL containers provide better safety and ease of use compared to traditional arrays, and should be the default choice in modern C++.


Manual Error Handling vs. Exceptions and std::expected

Legacy Practice: Return codes or error flags to indicate failures, which can be cumbersome and error-prone.

Modern Replacement: Use exceptions for error handling, which separate normal flow from error-handling code. Use std::expected (from C++23) for functions that can either return a value or an error.

// Legacy code
int divide(int a, int b, bool& success) {
    if (b == 0) {
        success = false;
        return 0;
    }
    success = true;
    return a / b;
}

// Modern code with exceptions
#include <stdexcept>

int divide(int a, int b) {
    if (b == 0) throw std::runtime_error("Division by zero");
    return a / b;
}

// Modern code with std::expected (C++23)
#include <expected>
#include <string>

std::expected<int, std::string> divide(int a, int b) {
    if (b == 0) return std::unexpected("Division by zero");
    return a / b;
}

Key Insight: Exceptions and std::expected offer more explicit and manageable error handling, improving code clarity and robustness.


Void Pointers vs. Type-Safe Programming

Legacy Practice: Use of void* for generic programming, leading to unsafe code and difficult debugging.

Modern Replacement: Use templates for type-safe generic programming, ensuring that code is checked at compile time.

// Legacy code
void process(void* data, int type) {
    if (type == 1) {
        int* intPtr = static_cast<int*>(data);
        // Process int
    } else if (type == 2) {
        double* dblPtr = static_cast<double*>(data);
        // Process double
    }
}

// Modern code
template <typename T>
void process(T data) {
    // Process data safely with type known at compile time
}

int main() {
    process(10);   // Automatically deduces int
    process(5.5);  // Automatically deduces double
}

Key Insight: Templates provide type safety, ensuring errors are caught at compile time and making code easier to maintain.


Inheritance vs. Composition and Type Erasure

Legacy Practice: Deep inheritance hierarchies, which can lead to rigid designs and hard-to-maintain code.

Modern Replacement: Favor composition over inheritance. Use type erasure (e.g., std::function, std::any) or std::variant to achieve polymorphism without inheritance.

// Legacy code
class Base {
public:
    virtual ~Base() = default; // polymorphic base needs a virtual destructor
    virtual void doSomething() = 0;
};

class Derived : public Base {
public:
    void doSomething() override {
        // Implementation
    }
};

// Modern code using composition
#include <functional>
#include <utility>

class Action {
    std::function<void()> func;
public:
    explicit Action(std::function<void()> f) : func(std::move(f)) {}
    void execute() { func(); }
};

Action a([]() { /* Implementation */ });
a.execute();

// Modern code using std::variant
#include <string>
#include <variant>

using MyVariant = std::variant<int, double, std::string>;

void process(const MyVariant& v) {
    std::visit([](auto&& arg) {
        // Implementation for each type
    }, v);
}

Key Insight: Composition and type erasure lead to more flexible and maintainable designs than traditional deep inheritance hierarchies.
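As a sketch of how a visitor might dispatch on the variant's runtime alternative, here is a hypothetical `typeName` helper (illustrative, not from the original) using `if constexpr` inside the lambda:

```cpp
#include <string>
#include <type_traits>
#include <variant>

using MyVariant = std::variant<int, double, std::string>;

// Polymorphic dispatch without inheritance: std::visit selects the
// branch for whichever alternative the variant currently holds.
std::string typeName(const MyVariant& v) {
    return std::visit([](auto&& arg) -> std::string {
        using T = std::decay_t<decltype(arg)>;
        if constexpr (std::is_same_v<T, int>)         return "int";
        else if constexpr (std::is_same_v<T, double>) return "double";
        else                                          return "string";
    }, v);
}
```

Unlike a virtual-function hierarchy, the set of types is closed and checked by the compiler: forgetting to handle an alternative is a compile error, not a runtime surprise.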


Global Variables vs. Dependency Injection

Legacy Practice: Use of global variables for shared state, which can lead to hard-to-track bugs and dependencies.

Modern Replacement: Use dependency injection to provide dependencies explicitly, improving testability and modularity.

// Legacy code
int globalCounter = 0;

void increment() {
    globalCounter++;
}

// Modern code using dependency injection
#include <iostream>

class Counter {
    int count;
public:
    Counter() : count(0) {}
    void increment() { ++count; }
    int getCount() const { return count; }
};

void useCounter(Counter& counter) {
    counter.increment();
}

int main() {
    Counter c;
    useCounter(c);
    std::cout << c.getCount();
}

Key Insight: Dependency injection enhances modularity and testability by explicitly providing dependencies rather than relying on global state.


Macros vs. constexpr and Inline Functions

Legacy Practice: Extensive use of macros for constants and inline code, which can lead to debugging challenges and obscure code.

Modern Replacement: Use constexpr for compile-time constants and inline functions for inline code, which are type-safe and easier to debug.

// Legacy code
#define SQUARE(x) ((x) * (x))

// Modern code using constexpr
constexpr int square(int x) {
    return x * x;
}

// Legacy code using macro for constant
#define MAX_SIZE 100

// Modern code using constexpr
constexpr int maxSize = 100;

Key Insight: constexpr and inline functions offer better type safety and are easier to debug compared to macros, making the code more maintainable.
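To make the macro pitfall concrete, here is a sketch with a deliberately broken `SQUARE_NOPARENS` macro (illustrative, not from the original): textual substitution ignores operator precedence, while the constexpr function evaluates its argument as a value.

```cpp
// Broken on purpose: the expansion of SQUARE_NOPARENS(1 + 2)
// is 1 + 2 * 1 + 2, which evaluates to 5 rather than 9.
#define SQUARE_NOPARENS(x) x * x

// The constexpr function takes its argument by value, so
// square(1 + 2) is square(3) == 9, with no surprises.
constexpr int square(int x) {
    return x * x;
}
```

Macros with side-effecting arguments (e.g. `SQUARE(++i)`) are even worse, since the argument is evaluated twice; a function evaluates it exactly once.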


Manual Resource Management vs. RAII (Resource Acquisition Is Initialization)

Legacy Practice: Manual resource management, requiring explicit release of resources like files, sockets, and memory.

Modern Replacement: Use RAII, where resources are tied to object lifetime and automatically released when the object goes out of scope.

// Legacy code
#include <cstdio>

FILE* file = fopen("data.txt", "r");
if (file) {
    // Use file
    fclose(file);
}

// Modern code using RAII with std::fstream
#include <fstream>

{
    std::ifstream file("data.txt");
    if (file.is_open()) {
        // Use file
    }
} // File is automatically closed when it goes out of scope

Key Insight: RAII automates resource management, reducing the risk of resource leaks and making code more reliable.
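RAII also works for C-style resources that std::fstream does not cover. A minimal sketch using std::unique_ptr with a custom deleter; the `FilePtr` alias and `openFile` helper are illustrative names, not from the original:

```cpp
#include <cstdio>
#include <memory>

// Tie a FILE* to object lifetime: the deleter runs fclose()
// on every path out of scope, including exceptions.
using FilePtr = std::unique_ptr<std::FILE, decltype(&std::fclose)>;

FilePtr openFile(const char* path, const char* mode) {
    return FilePtr(std::fopen(path, mode), &std::fclose);
}
```

Usage mirrors the raw pointer version, minus the manual cleanup: `if (FilePtr f = openFile("data.txt", "r")) { /* use f.get() */ }`.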


Explicit Loops vs. Algorithms and Ranges

Legacy Practice: Manual loops for operations like filtering, transforming, or accumulating data.

Modern Replacement: Use STL algorithms (std::transform, std::accumulate, std::copy_if) and ranges (C++20) to express intent more clearly and concisely.

// Legacy code
std::vector<int> vec = {1, 2, 3, 4, 5};
std::vector<int> result;

for (auto i : vec) {
if (i % 2 == 0) result.push_back(i * 2);
}

// Modern code using algorithms
#include <algorithm>
#include <vector>

std::vector<int> vec = {1, 2, 3, 4, 5};
std::vector<int> result;

std::transform(vec.begin(), vec.end(), std::back_inserter(result),
[](int x) { return x % 2 == 0 ? x * 2 : 0; });
result.erase(std::remove(result.begin(), result.end(), 0), result.end());

// Modern code using ranges (C++20)
#include <ranges>

auto result = vec | std::views::filter([](int x) { return x % 2 == 0; })
| std::views::transform([](int x) { return x * 2; });

Key Insight: STL algorithms and ranges make code more expressive and concise, reducing the likelihood of errors and enhancing readability.


Manual String Manipulation vs. std::string and std::string_view

Legacy Practice: Use of char* and manual string manipulation with functions like strcpy, strcat, and strcmp.

Modern Replacement: Use std::string for dynamic strings and std::string_view for non-owning string references, which offer safer and more convenient string handling.

// Legacy code
#include <cstring>

char str1[20] = "Hello, ";
char str2[] = "world!";
strcat(str1, str2);
if (strcmp(str1, "Hello, world!") == 0) {
    // Do something
}

// Modern code using std::string
#include <string>

std::string str1 = "Hello, ";
std::string str2 = "world!";
str1 += str2;
if (str1 == "Hello, world!") {
    // Do something
}

// Modern code using std::string_view (C++17)
#include <string_view>

std::string_view strView = str1;
if (strView == "Hello, world!") {
    // Do something
}

Key Insight: std::string and std::string_view simplify string handling, provide better safety, and eliminate the risks associated with manual C-style string manipulation.
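As a sketch of why std::string_view is convenient at API boundaries, a single hypothetical `startsWithHello` function (illustrative name) accepts std::string, string literals, and char* alike, without allocating or copying:

```cpp
#include <string>
#include <string_view>

// string_view is a non-owning pointer + length pair; substr() on a
// view is O(1) and never allocates.
bool startsWithHello(std::string_view sv) {
    return sv.substr(0, 5) == "Hello";
}
```

The one rule to remember: a string_view does not own its characters, so it must never outlive the string (or buffer) it refers to.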


Threading with Raw Threads vs. std::thread and Concurrency Utilities

Legacy Practice: Creating and managing threads manually using platform-specific APIs, which can be error-prone and non-portable.

Modern Replacement: Use std::thread and higher-level concurrency utilities like std::future, std::async, and std::mutex to manage threading in a portable and safe way.

// Legacy code (Windows example)
#include <windows.h>

DWORD WINAPI threadFunc(LPVOID lpParam) {
    // Thread code
    return 0;
}

HANDLE hThread = CreateThread(NULL, 0, threadFunc, NULL, 0, NULL);

// Modern code using std::thread
#include <thread>

void threadFunc() {
    // Thread code
}

std::thread t(threadFunc);
t.join(); // Wait for thread to finish

// Modern code using std::async
#include <future>

auto future = std::async(std::launch::async, threadFunc);
future.get(); // Wait for async task to finish

Key Insight: std::thread and other concurrency utilities provide a portable and higher-level interface for multithreading, reducing the complexity and potential errors associated with manual thread management.
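The std::mutex mentioned above can be sketched as follows; the `countInParallel` helper is illustrative, and std::lock_guard releases the lock via RAII even if the critical section throws (build with a threads-enabled toolchain, e.g. -pthread on Linux):

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Several threads increment a shared counter; the mutex serializes
// the increments so no update is lost.
int countInParallel(int threads, int incrementsPerThread) {
    int counter = 0;
    std::mutex m;
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t) {
        pool.emplace_back([&] {
            for (int i = 0; i < incrementsPerThread; ++i) {
                std::lock_guard<std::mutex> lock(m); // unlocked at scope exit
                ++counter;
            }
        });
    }
    for (auto& th : pool) th.join();
    return counter;
}
```

Without the lock_guard the result would be nondeterministic, since `++counter` on a plain int is not atomic.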


Function Pointers vs. std::function and Lambdas

Legacy Practice: Use of function pointers to pass functions as arguments or store them in data structures, which can be cumbersome and less flexible.

Modern Replacement: Use std::function to store callable objects, and lambdas to create inline, anonymous functions.

// Legacy code
void (*funcPtr)(int) = someFunction;
funcPtr(10);

// Modern code using std::function and lambdas
#include <functional>
#include <iostream>

std::function<void(int)> func = [](int x) { std::cout << x << std::endl; };
func(10);

Key Insight: std::function and lambdas offer a more flexible and powerful way to handle functions as first-class objects, making code more modular and expressive.
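A sketch of what std::function adds over raw function pointers: a capturing lambda carries state, which a plain function pointer cannot. The `makeAdder` factory below is an illustrative example, not from the original:

```cpp
#include <functional>

// Returns a closure that remembers `base`. A raw function pointer
// could not store this captured state.
std::function<int(int)> makeAdder(int base) {
    return [base](int x) { return base + x; };
}
```

Each call to `makeAdder` produces an independent callable, e.g. `auto add5 = makeAdder(5);` then `add5(10)` yields 15.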

¡ 15 min read

[[TOC]]

How the things work​

2024-08-31 Hypervisor From Scratch - Part 1: Basic Concepts & Configure Testing Environment | Rayanfam Blog { rayanfam.com }

Hypervisor From Scratch

The source code for Hypervisor From Scratch is available on GitHub :

[https://github.com/SinaKarvandi/Hypervisor-From-Scratch/]

2024-08-31 Reversing Windows Internals (Part 1) - Digging Into Handles, Callbacks & ObjectTypes | Rayanfam Blog { rayanfam.com }

2024-08-31 A Tour of Mount in Linux | Rayanfam Blog { rayanfam.com }

image-20240830200258339

2024-09-01 tandasat/Hypervisor-101-in-Rust: { github.com }

The materials of "Hypervisor 101 in Rust", a one-day long course, to quickly learn hardware-assisted virtualization technology and its application for high-performance fuzzing on Intel/AMD processors.

https://tandasat.github.io/Hypervisor-101-in-Rust/

image-20240901010106576

SAML​

2024-09-02 A gentle introduction to SAML | SSOReady { ssoready.com }

image-20240901234406239

2024-09-02 Visual explanation of SAML authentication { www.sheshbabu.com }

image-20240901233107815

:thinking: Tricks!​

2024-09-02 saving my git email from spam { halb.it }

Github has a cool option that replaces your private email with a noreply github email, which looks like this: 14497532+username@users.noreply.github.com. You just have to enable “keep my email address private” in the email settings. You can read the details in the github guide for setting your email privacy.

With this solution your email will remain private without losing precious green squares in the contribution graph.

CRDT​

2024-09-01 Movable tree CRDTs and Loro's implementation – Loro { loro.dev }

This article introduces the implementation difficulties and challenges of movable-tree CRDTs under collaboration, and how Loro implements them and orders child nodes.

Art and Assets​

2024-09-01 Public Work by Cosmos { public.work }

image-20240901005017480

Game Theory 101​

2024-09-01 ⭐️ Game Theory 101 (#1): Introduction - YouTube { www.youtube.com }

image-20240901010905811

2024-09-01 Finding Nash Equilibria through Simulation { coe.psu.ac.th }

image-20240901011057303

(Emacs)​

2024-09-01 A Simple Guide to Writing & Publishing Emacs Packages { spin.atomicobject.com }

image-20240901153404884

2024-09-01 Emacs starter kit { emacs-config-generator.fly.dev }

image-20240901153233791

2024-09-01 dot-files/emacs-blog.org at 1b54fe75d74670dc7bcbb6b01ea560c45528c628 ¡ howardabrams/dot-files { github.com }

image-20240901152917238

2024-08-31 ⭐️ The Organized Life - An Expert‘s Guide to Emacs Org-Mode – TheLinuxCode { thelinuxcode.com }

2024-08-31 ⭐️ Mastering Organization with Emacs Org Mode: A Complete Guide for Beginners – TheLinuxCode { thelinuxcode.com }

image-20240830193810145

2024-08-30 chrisdone-archive/elisp-guide: A quick guide to Emacs Lisp programming { github.com }

image-20240830134758680

2024-08-30 Getting Started With Emacs Lisp Hands On - A Practical Beginners Tutorial – Ben Windsor – Strat at an investment bank { benwindsorcode.github.io }

image-20240830135224690

Retro / Fun​

2024-08-30 VisiCalc - The Early History - Peter Jennings { benlo.com }

image-20240830135448117

2024-09-01 paperclips { www.decisionproblem.com }

image-20240901153052859

2024-09-02 Seiko Originals: The UC-2000, A Smartwatch from 1984 – namokiMODS { www.namokimods.com }

image-20240901235821210

Inspiration​

2024-09-02 Navigating Corporate Giants Jeffrey Snover and the Making of PowerShell - CoRecursive Podcast { corecursive.com }

image-20240902001457920

I joined Microsoft at a time when the company was struggling to break into the enterprise market. While we dominated personal computing, our tools weren’t suitable for managing large data centers. I knew we needed a command-line interface (CLI) to compete with Unix, but Microsoft’s culture was deeply rooted in graphical user interfaces (GUIs). Despite widespread skepticism, I was determined to create a tool that could empower administrators to script and automate complex tasks.

My first major realization was that traditional Unix tools wouldn’t work on Windows because Unix is file-oriented, while Windows is API-oriented. This led me to focus on Windows Management Instrumentation (WMI) as the backbone for our CLI. Despite this, I faced resistance from within. The company only approved a handful of commands when we needed thousands. To solve this, I developed a metadata-driven architecture that allowed us to efficiently create and scale commands, laying the foundation for PowerShell.

However, getting others on board was a challenge. When I encountered a team planning to port a Unix shell to Windows, I knew they were missing the bigger picture. To demonstrate my vision, I locked myself away and wrote a 10,000-line prototype of what would become PowerShell. This convinced the team to embrace my approach.

“I was able to show them and they said, ‘Well, what about this?’ And I showed them. And they said, ‘What about that?’ And I showed them. Their eyes just got big and they’re like, ‘This, this, this.’”

Pursuing this project meant taking a demotion, a decision that was financially and personally difficult. But I was convinced that PowerShell could change the world, and that belief kept me going. To align the team, I wrote the Monad Manifesto, which became the guiding document for the project. Slowly, I convinced product teams like Active Directory to support us, which helped build momentum.

The project faced another major challenge during Microsoft’s push to integrate everything with .NET. PowerShell, built on .NET, was temporarily removed from Windows due to broader integration issues. It took years of persistence to get it back in, but I eventually succeeded.

PowerShell shipped with Windows Vista, but I continued refining it through multiple versions, despite warnings that focusing on this project could harm my career. Over time, PowerShell became a critical tool for managing data centers and was instrumental in enabling Microsoft’s move to the cloud.

In the end, the key decisions—pushing for a CLI, accepting a demotion, and persisting through internal resistance—led to PowerShell's success and allowed me to make a lasting impact on how Windows is managed.

2024-09-02 Netflix/maestro: Maestro: Netflix’s Workflow Orchestrator { github.com }

image-20240901234630103

2024-09-01 The Scale of Life { www.thescaleoflife.com }

image-20240901153703324

2024-09-01 opslane/opslane: Making on-call suck less for engineers { github.com }

image-20240901152737861

2024-09-01 Azure Quantum | Learn with quantum katas { quantum.microsoft.com }

image-20240901152236367

2024-09-01 microsoft/QuantumKatas: Tutorials and programming exercises for learning Q# and quantum computing { github.com }

2024-09-01 EP122: API Gateway 101 - ByteByteGo Newsletter { blog.bytebytego.com }

2024-09-01 pladams9/hexsheets: A basic spreadsheet application with hexagonal cells inspired by: http://www.secretgeek.net/hexcel. { github.com }

image-20240901010426062

2024-09-01 Do Quests, Not Goals { www.raptitude.com }

The other problem with goals is that, outside of sports, “goal” has become an uninspiring, institutional word. Goals are things your teachers and managers have for you. Goals are made of quotas and Key Performance Indicators. As soon as I write the word “goals” on a sheet of paper I get drowsy.

image-20240901005313993

Here are some of the quests people took on:

  • Declutter the whole house
  • Record an EP
  • Prep six months’ worth of lessons for my students
  • Set up an artist’s workspace
  • Finish two short stories
  • Gain a basic knowledge of classical music
  • Fill every page in a sketchbook with drawings
  • Complete a classical guitar program
  • Make an “If I get hit by a bus” folder for my family

2024-08-30 oTranscribe { otranscribe.com }

image-20240830135922316

Security​

2024-08-31 The State of Application Security 2023 • Sebastian Brandes • GOTO 2023 - YouTube { www.youtube.com }

image-20240830192609064

Sebastian, co-founder of Hey Hack, a Danish startup focused on web application security, presented findings from a large-scale study involving the scanning of nearly 4 million hosts globally. The study uncovered widespread vulnerabilities in web applications, including file leaks, dangling DNS records, vulnerable FTP servers, and persistent cross-site scripting (XSS) issues.

Key findings include:

  • File leaks: 29% of organizations had exposed sensitive data like source code, passwords, and private keys.
  • Dangling DNS records: Risks of subdomain takeover attacks due to outdated DNS entries.
  • Vulnerable FTP servers: 7.9% of servers running ProFTPD 1.3.5 were at risk due to a file copy module vulnerability.
  • XSS vulnerabilities: 4% of companies had known XSS issues, posing significant security risks.

Sebastian stressed that web application firewalls (WAFs) are not foolproof and cannot replace fixing underlying vulnerabilities. He concluded by emphasizing the importance of early investment in application security during the development process to prevent future attacks.

"We’ve seen lots of leaks or file leaks that are sitting out there—files that you probably would not want to expose to the public internet."

"Web application firewalls can maybe do something, but they’re not going to save you. It’s much, much better to go ahead and fix the actual issues in your application."

2024-08-30 BeEF - The Browser Exploitation Framework Project { beefproject.com }

image-20240830140152625

2024-08-31 stack-auth/stack: Open-source Clerk/Auth0 alternative { github.com }

Stack Auth is a managed user authentication solution. It is developer-friendly and fully open-source (licensed under MIT and AGPL).

Stack gets you started in just five minutes, after which you'll be ready to use all of its features as you grow your project. Our managed service is completely optional and you can export your user data and self-host, for free, at any time. image-20240830194951803

Markdown​

2024-09-02 romansky/dom-to-semantic-markdown: DOM to Semantic-Markdown for use in LLMs { github.com }

image-20240901232517227

C || C++​

2024-09-02 Faster Integer Parsing { kholdstare.github.io }

image-20240901233314132

2024-09-01 c++ - What is the curiously recurring template pattern (CRTP)? - Stack Overflow { stackoverflow.com }

image-20240901144719965

image-20240901144828823

The Era of AI​

2024-09-02 txtai { neuml.github.io }

txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.

image-20240901235351463

2024-09-02 Solving the out-of-context chunk problem for RAG { d-star.ai }

Many of the problems developers face with RAG come down to this: Individual chunks don’t contain sufficient context to be properly used by the retrieval system or the LLM. This leads to the inability to answer seemingly simple questions and, more worryingly, hallucinations.

Examples of this problem

  • Chunks oftentimes refer to their subject via implicit references and pronouns. This causes them to not be retrieved when they should be, or to not be properly understood by the LLM.
  • Individual chunks oftentimes don’t contain the complete answer to a question. The answer may be scattered across a few adjacent chunks.
  • Adjacent chunks presented to the LLM out of order cause confusion and can lead to hallucinations.
  • Naive chunking can lead to text being split “mid-thought” leaving neither chunk with useful context.
  • Individual chunks oftentimes only make sense in the context of the entire section or document, and can be misleading when read on their own.

2024-08-30 MahmoudAshraf97/whisper-diarization: Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper { github.com }

2024-08-30 openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision { github.com }

2024-08-30 ggerganov/whisper.cpp: Port of OpenAI's Whisper model in C/C++ { github.com }

2024-09-01 microsoft/semantic-kernel: Integrate cutting-edge LLM technology quickly and easily into your apps { github.com }

2024-09-01 How to add genuinely useful AI to your webapp (not just chatbots) - Steve Sanderson - YouTube { www.youtube.com }

image-20240901012420483

The talk presented here dives into the integration of AI within applications, particularly focusing on how developers, especially those familiar with .NET and web technologies, can leverage AI to enhance user experiences. Here are the key takeaways and approaches from the session:

Making Applications Intelligent: The speaker discusses various interpretations of making an app "intelligent." It’s not just about adding a chatbot. While chatbots can create impressive demos quickly, they may not necessarily be useful in production. For AI to be genuinely beneficial, it must save time, improve job performance, and be accurate. The speaker challenges developers to quantify these benefits rather than rely on assumptions.

"If you try to put it into production, are people going to actually use it? Well, maybe it depends... does this thing actually save people time and enable them to do their job better than they would have otherwise?"

Patterns of AI Integration: The speaker introduces several UI-level AI enhancements such as Smart Components. These are experiments allowing developers to add AI to the UI layer without needing to rebuild the entire app. An example given is a Smart Paste feature that allows users to paste large chunks of text, which AI then parses and fills out the corresponding fields in a form. This feature improves user efficiency by reducing the need for repetitive and mundane tasks.

Another example is the Smart ComboBox, which uses semantic search to match user input with relevant categories, even when the exact terms do not appear in the list. This feature is particularly useful in scenarios where users may not know the exact terminology.

Deeper AI Integration: Moving beyond UI enhancements, the speaker explores deeper layers of AI integration within traditional web applications like e-commerce platforms. For instance, AI can be used to:

  • Semantic Search: Improve search functionality so that users don't need to know the exact phrasing.
  • Summarization: Automatically generate descriptive titles for support tickets to help staff quickly identify issues.
  • Classification: Automatically categorize support tickets to streamline workflows and save staff time.
  • Sentiment Analysis: Provide sentiment scores to help staff prioritize urgent issues.

"I think even in this very traditional web application, there's clearly lots of opportunity for AI to add a lot of genuine value that will help your staff actually be more productive."

Data and AI Integration: The talk also delves into the importance of data in AI applications. The speaker introduces the Semantic Kernel, a .NET library for working with AI, and demonstrates how to generate data using LLMs (Large Language Models) locally on the development machine using Ollama. The process involves creating categories, products, and related data (like product manuals) in a structured manner.

Data Ingestion and Semantic Search: The speaker showcases how to ingest unstructured data, such as PDFs, and convert them into a format that AI can use for semantic search. Using the PDFPig library, the speaker demonstrates extracting text from PDFs, chunking it into smaller, meaningful fragments, and then embedding these chunks into a semantic space. This allows for efficient, relevant searches within the data, enhancing the AI’s ability to provide accurate information quickly.

Implementing Inference with AI: As the talk progresses, the speaker moves on to implementing AI-based inference within a Blazor application. By integrating summarization directly into the workflow, the application can automatically generate summaries of customer interactions, helping support staff to quickly understand the context of a ticket without reading through the entire conversation history.

"I want to generate an updated summary for it... Generate a summary of the entire conversation log at that point."

Function Calling and RAG (Retrieval-Augmented Generation): The speaker discusses a more complex AI pattern—RAG—which involves the AI model retrieving specific data to answer queries. While standard RAG implementations rely on specific AI platforms, the speaker demonstrates a custom approach that works across various models, including locally run models like Ollama. This approach involves checking if the AI has enough context to answer a question and then retrieving relevant information if needed.

Job interview / Algorithms​

2024-09-01 Understanding B-Trees: The Data Structure Behind Modern Databases - YouTube { www.youtube.com }

image-20240901011314149

Editing Distance​

2024-09-02 Needleman–Wunsch algorithm - Wikipedia { en.wikipedia.org }

2024-09-02 Levenshtein distance - Wikipedia { en.wikipedia.org }

function LevenshteinDistance(char s[1..m], char t[1..n]):
    // for all i and j, d[i,j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t
    declare int d[0..m, 0..n]

    set each element in d to zero

    // source prefixes can be transformed into empty string by
    // dropping all characters
    for i from 1 to m:
        d[i, 0] := i

    // target prefixes can be reached from empty source prefix
    // by inserting every character
    for j from 1 to n:
        d[0, j] := j

    for j from 1 to n:
        for i from 1 to m:
            if s[i] = t[j]:
                substitutionCost := 0
            else:
                substitutionCost := 1

            d[i, j] := minimum(d[i-1, j] + 1,                    // deletion
                               d[i, j-1] + 1,                    // insertion
                               d[i-1, j-1] + substitutionCost)   // substitution

    return d[m, n]
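The pseudocode translates almost line-for-line into C++; a sketch (note that C++ strings are 0-indexed, so s[i-1] here corresponds to the pseudocode's s[i]):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Full-matrix Levenshtein distance: d[i][j] holds the distance
// between the first i characters of s and the first j of t.
int levenshtein(const std::string& s, const std::string& t) {
    const std::size_t m = s.size(), n = t.size();
    std::vector<std::vector<int>> d(m + 1, std::vector<int>(n + 1, 0));

    for (std::size_t i = 1; i <= m; ++i) d[i][0] = static_cast<int>(i); // delete all
    for (std::size_t j = 1; j <= n; ++j) d[0][j] = static_cast<int>(j); // insert all

    for (std::size_t j = 1; j <= n; ++j) {
        for (std::size_t i = 1; i <= m; ++i) {
            const int substitutionCost = (s[i - 1] == t[j - 1]) ? 0 : 1;
            d[i][j] = std::min({d[i - 1][j] + 1,                      // deletion
                                d[i][j - 1] + 1,                      // insertion
                                d[i - 1][j - 1] + substitutionCost}); // substitution
        }
    }
    return d[m][n];
}
```

The full matrix costs O(m·n) space; since each row only depends on the previous one, two rows suffice if only the distance (not the alignment) is needed.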