Monday, April 1, 2024

Waterfalling down the Staircase

(Reposted from Groups.io extremeprogramming group)


For those who haven't been watching 3 Body Problem on Netflix, the last episode of Season 1 includes a resounding lesson about the difference between Waterfall and Agile.   (It would be great if someone with more video savvy than me were to capture the clip and link to it here.)

The Staircase Project is the old Project Orion concept reimagined to send a probe to recon an alien enemy fleet many light years away. Without going into spoilers, the basic idea is to accelerate a probe with an EM sail to near light speed by shooting it past 300 nuclear bombs, each to be exploded at just the right moment to blast it with radiation.  This is a purely ballistic launch: the probe has no power or steering capabilities, so the explosions have to be timed perfectly and the trajectory is locked in.

Sounding familiar? 

Early on, the shock of an explosion disconnects one of the tethers connecting the probe to the sail, the probe goes off course, and the entire project is lost - a world-threatening catastrophe.

It would not have been rocket science to give the probe the minimal intelligence and power to adjust its trajectory, perhaps by "trimming" the sail with a tug on one of its many tethers.  But no: the finest minds on earth agree it's necessary to lock the trajectory in up front.

(Disclaimer - I've read only the first book in the trilogy on which the series is based, which ends before the probe project is undertaken, so I don't know if the author, Liu Cixin, is responsible for the waterfall.)



Friday, December 14, 2018

Perils of Messing with the Speed Force


This Twitter thread opened my eyes.

Wile E. faces a starvation deadline and focuses so intently on speed that he neglects the need to analyze the domain before implementing the chase story.  This applies equally to the other major Roadrunner trope: the painted tunnel on the rockface.

As of this writing, all the commenters on that thread apparently believe that because "edge" and "run" are common English words there's no need to dig any deeper into them. Wrong: "edge" and "run" are the tip of the domain iceberg: things like cliffs, gravity, inertia, cartoon physics, etc. have to be understood before we can even look at the structure of the implementation.

The increasing micromanagement and microsiloization brought on by "Dark Industrial Agile", and the vulture-capital pressure for short-term thinking and asset-stripping that has done so much damage to the economy, have had an equally destructive effect on the culture of development.  We are all coyotes one paycheck away from starvation, so if management says "Don't look back (or forward or down), just run!", we run.  It's not just testing and refactoring that get thrown away.

The Cloud is just another kind of plumbing, but so many architects and developers apparently think it's the only domain we need to organize around.

The fetishizing of so-called dynamic languages because they let you generate a lot of code really fast is another example.  Benchmarks that "prove" Node is faster than Java (like this one) succeed only by comparing current reactive JS implementations to old servlet implementations; the interoperable JVM ecosystem provides much more modern options than servlets. In spite of ES6, JavaScript is intrinsically slower, and NPM is currently suffering the torture of a thousand tiny libraries.

Slow down to speed up, look around at the domain (I'd say "master it" but that's a whole nother thing) and optionally inhale that warm smell of colitas.




Sunday, July 2, 2017

Category Theory and Corporate Culture

A mediocre poet once wrote

  
    Imagine me at
    my age looking for
    a job. Strangely exciting.

A sudden end to a contract - less than a week's notice, probably because the manager I reported to suddenly resigned.

Like a kid whose parents are divorcing, I ask myself if it's my fault.

It's not, really.

I was hired to help automate an operations support system for a communications infrastructure provider. I worked alongside a developer (DW) whose assignment was to develop "automations" using a commercial application I'll call Grit (not its real name), and it quickly became clear to me that Grit's promise of easy development (even by non-programmers) was absurd. The "development environment" lacked basic amenities like unit testing and source control. Operations support in an industry like this demands a realtime, event-driven system, but both Grit and the surrounding software environment were heavily batch- and file-oriented, depending on human involvement and slow manual workflows.

My coworker was so unhappy with the tools he was tempted to walk. I suggested we come up with a plan for a true realtime system based on microservices. We brought this to our manager (SN) who, in spite of not having software development experience, quickly understood what we were driving at.

DW had a lot of experience with operations support and with web infrastructure. He provided the expertise that allowed me to concentrate on applying what I had learned in years of object-oriented development and in reading up on functional programming and category theory. Furthermore, he supplied a perspective on system and application monitoring that my test-driven approach to development really needed. And he set up our "disjunct" development environment.

Due to some cultural issues with security, our Windows laptops were so locked down that we could not change even the most trivial settings in our browsers, and all software could only be installed from an approved list after a long drawn-out request process. There was no way we could do modern software development in that environment.

Thanks to the loan of a server from a middle manager, we were able to set up an IDE (IntelliJ IDEA Community Edition) and access it through SecureCRT and an X server.  (Since we would sometimes lose our connection to the server, IDEA was a safer choice than Eclipse because it saves files in the background by default.)

We successfully implemented the first phase of the framework, based on a fractal view of systems as consisting of total functions, dependent types and hexagonal architecture. The second phase would involve organizing the functions around models-as-types.
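
To make "total functions behind hexagonal ports" a little more concrete, here's a toy sketch in today's Java - names invented for illustration, not taken from the actual framework. The core exposes a port, and the function behind it is total: every input maps to an explicit result value, with no exceptions or nulls leaking out.

    // Toy sketch (invented names) of a total function behind a hexagonal port.
    record ProvisionRequest(String siteId, int bandwidthMbps) {}

    // Every possible outcome is an explicit value.
    sealed interface ProvisionResult permits Provisioned, Rejected {}
    record Provisioned(String circuitId) implements ProvisionResult {}
    record Rejected(String reason) implements ProvisionResult {}

    // The port: the boundary that adapters (REST, files, queues) call into.
    interface ProvisionPort {
        ProvisionResult provision(ProvisionRequest request);
    }

    // The core: a pure, total function of its input.
    final class ProvisionService implements ProvisionPort {
        @Override
        public ProvisionResult provision(ProvisionRequest request) {
            if (request.bandwidthMbps() <= 0) {
                return new Rejected("bandwidth must be positive");
            }
            return new Provisioned("CKT-" + request.siteId());
        }
    }

The "fractal" part, as I saw it, is that this same port/total-function shape repeats at every scale, from a whole service down to a single class.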

We couldn't put our code into production without official approval for the (extremely popular and well-tested) open source libraries we were using. This was another sign of the cultural chasm: there was no provision for software development because, as we were actually told at one point, "this company doesn't do software development."

In subsequent posts, I'll spell out how the framework's organization evolved, and how I was able to bring some of the mathematical power of category theory and functional programming to a practical task: an evolutionary development process that got maximum leverage from the batch/file/cmdline-based legacy systems we had to communicate with - and would perhaps eventually disintermediate, applying a more efficient variant of the Strangler pattern.
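
As a teaser, here's roughly the shape of that Strangler variant - a toy Java sketch with invented names. A facade routes each event to the new realtime service once its type has been migrated; everything else falls back to the legacy batch bridge, until there's nothing left to strangle.

    import java.util.Set;

    // Toy sketch (invented names): a strangler facade in front of the
    // legacy batch/file systems. As event types migrate, the realtime
    // path grows and the legacy path withers.
    record Event(String type, String payload) {}

    interface EventHandler {
        void handle(Event event);
    }

    final class StranglerFacade implements EventHandler {
        private final Set<String> migratedTypes;
        private final EventHandler realtimeService;
        private final EventHandler legacyBatchBridge;

        StranglerFacade(Set<String> migratedTypes,
                        EventHandler realtimeService,
                        EventHandler legacyBatchBridge) {
            this.migratedTypes = migratedTypes;
            this.realtimeService = realtimeService;
            this.legacyBatchBridge = legacyBatchBridge;
        }

        @Override
        public void handle(Event event) {
            if (migratedTypes.contains(event.type())) {
                realtimeService.handle(event);   // new realtime path
            } else {
                legacyBatchBridge.handle(event); // legacy path, shrinking
            }
        }
    }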

For now, assessing the matter of responsibility for what appears to be the end of that framework and that team, I have to blame myself because my background in anthropology and linguistics made me the only person who might have had a chance of understanding the cultural mechanisms that underlie Conway's Law.

But I don't blame myself much, because our team was doing some pretty serious development work, and DW, SN and our rookie developer JS were reasonably clear on the concepts. The cultural problems were operating at a "higher" level. Yes, if I had known at the beginning what I know now, I might have been able to do something about the cultural problem.

But hey - I'm not a $500-an-hour consultant with all the flashy creds.  Just a developer with too many years of experience.



Thursday, October 29, 2015

Drive By Dialectics - Part 2

(Not @tottinge)

Part 1 stirred up a bit of a fuckstorm,  which surprised me less than how quickly it became a reasonable conversation.

Nobody cited the post as a LackOfGodwin violation, probably because there's no such thing yet. Maybe if I continue along this historically material path, there will be one.

But to be a little less inflammatory (get the Red out!), here's an analogy that's pretty much structurally identical:

    ScrumMaster:OrchestraConductor::XPTeam:JazzComboOrChamberGroup



(Party [Über]Animal)


A small group that communicates well doesn't need a non-performing leader. The key term there is not "small" but "communicates well".  There are conductorless orchestras, some of which are not particularly small.

Like a priest, the traditional conductor is believed to channel the sacred intention of the composer.

My role model, Charles Mingus - photo by David Redfern

Mingus could be as hard on his bands as any Obermusikführer in Germany, but he was also a great bassist and composer.

Got distracted by Mingus. Where was I? Casting aspersions on Scrum? What's the point? Sure, I'm a downtrodden victim of False Scotsman Scrum, but I live in Chicago. Do I have to tell you about my mayor? Even worse than False Scotsman Scrum.

Wednesday, October 28, 2015

Drive-by Dialectics

Ron Quartel, AKA @agileAgitator, came up with a nice analogy:

    Scrum:XP::VHS:Betamax

Here's my take:

    Scrum:Agile::Communist party:SocialDemocratic parties

The Communists perfected the concept of democratic centralism: an obvious oxymoron/contradiction that functions like a "Mystery" in a classic religion: among other things, it provides cover for autocrats. This lays the basis for what post-Stalinist Soviet leaders called a "cult of personality".  Witness Stalin, North Korea's "Dear Leader" and to a lesser extent Hugo Chavez and the Castros.

(If you think this kind of authoritarianism is incompatible with free enterprise, consider the PRC and Singapore.)

Scrum has triumphed in the corporate world because it centralizes control and information chokepoints in a "Master".  This reproduces the structure of a traditional corporation and promotes Agilewashing.

Other methodologies that essentially promote acentric/holographic democracy and the autonomy of teams are a near-impossible sell in a command-and-control world.

I have worked on four teams doing Scrum (obviously not True Scotsman Scrum - a period of dictatorship of the proletariat has to precede True Communism - substitute analogous rationalization).  One of them was run by an actual manager.  The other three were run by Certified Scrum Masters (blessed from Moscow?) who by the way were all genuinely intelligent and nice guys.

There was zero encouragement for team members to think about the process or about anything other than the microsilos (stories) they worked in - solo.

I don't doubt that these situations were more efficient/productive than classic Waterfall/Cowboy regimes.  But that's not saying much.

Friday, October 23, 2015

Closed for Permission, Open for Forgiveness



I used to think that the pattern I'm about to advocate here was unforgivable. Now I'm not sure - a side effect of test-infection.

I'm assuming OO here along with SOLID and DRY, and using the word "mock" because it's shorter than "test double", but not excluding "fake".

The controversial assumption: that tests are first class citizens and are therefore not forbidden to have an impact on production code.  In mechanical and electronic engineering, products are built to run "self-tests" long after they leave the factory: these tests are an integral part of the product and must be accommodated. Geordi La Forge ran Level 3 diagnostics in the middle of critical missions.  Freakin' Level 3, for Pete's sake! No controversy there.  But in software?

Suppose you have an object A that uses an object B. You want to verify with a test that A uses B appropriately. (Actually, because you're a smart programmer, you test-drive A to do the right thing.) You inject B as a constructor argument for A (no setter, because you understand the value of immutability).

So C, which instantiates A (whether C is a factory or a POLO [Plain Old Language Object]) now has to know how to provision/acquire/instantiate a B.  Maybe C is a container or a Springy framework, in which case you figure this is just part of its job.  But even in that case there's a dependency, no matter how cheap it is to create (say, with attributes/annotations).
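
In code, the standard arrangement looks something like this (a minimal sketch; A, B and C are the placeholders from above, and B's interface is invented):

    // Plain constructor injection: C has to know how to build a B
    // solely so it can hand one to A.
    class B {
        String fetch() { return "data"; }
    }

    class A {
        private final B b;
        A(B b) { this.b = b; }               // injected, immutable
        String work() { return b.fetch().trim(); }
    }

    class C {
        A makeA() { return new A(new B()); } // C carries the dependency on B
    }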

If B is not part of a strategy pattern - if there's no "production reason" for injecting it - you're only doing it for the test.

What I've started doing is implementing a test-only constructor on A that takes a B argument so that I can inject a mock B. The production constructor instantiates B.

Frameworks like Hibernate make you create a default constructor so they can use reflection to set what should otherwise be private members.  That's nasty. Then there are the "beans" that require setters for every instance member when you really wanted an immutable value object. That's obscene.

Just sayin' - the test-only constructor is a microevil because nothing else has to know about it or use it.  Its visibility can be minimal, as long as tests can get at it.  For example, in a Java project where test source resides in a package structure parallel to but completely separate from production source, the test-only constructor can have default (≅ package) visibility. The C# analogue is internal (assembly) visibility.
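
Here's the reworked A as a minimal sketch: the production constructor instantiates its own B, and the test-only constructor sits at default visibility, so only tests in the matching package of the parallel test tree can reach it.

    public class A {
        private final B b;

        public A() {
            this.b = new B();   // production path: nothing else provisions a B
        }

        A(B b) {                // test-only; default (package) visibility
            this.b = b;
        }

        public String work() {
            return b.fetch().trim();
        }
    }

    // From a test in the same package:
    //   A a = new A(mockB);    // mockB is a mock/fake B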

I'm not losing sleep over it.

Tuesday, October 6, 2015

Mandala for Developers

From Dead C++ Scroll 4762A - #DevelopersAreNotFreakinEngineers

(Note misspelling in lower left quadrant.)
Cf. Field notes XK7-2015 and JL9-2015