Naive agent-based backends

Seeing the popularity of the elmish architecture and attempts to implement backends based on it (or on any unsupervised cooperative actor loops), I thought I’d share my thoughts.

Desirable characteristics

Most naive implementations of synchronous (HTTP/RPC) APIs are perfectly content to return some equivalent of a 500 code at the drop of a hat. My DB timed out? 500! My message broker node has crashed and I need to reconnect? 500! Etc. etc. You get the idea.

But these and other kinds of recoverable problems happen all the time in distributed computing and in scenarios with unpredictable load. Remember, the network is unreliable, as is your cloud VM and the physical machine under it. If your service/agent is on the edge, the constrained device might be under load doing whatever its primary job is.

Here’s another consideration: what if your service is in the middle of a chain of APIs, as is common with microservices? Whether you return an error or not, what is the recovery strategy for the earlier nodes in the chain? Retry? Then something like this should look familiar:

[image from reddit]

What about timeout constraints higher up the call chain? At some point you’d end up returning the error all the way up the call chain, and not only is this overhead you could have avoided, it is now your user’s problem.

Conversely, is it OK to just drop the error? This is what unsupervised actors and the elmish dispatch loop do. In elmish, at least, the API tries to prepare you for dealing with the possibility of errors, but if something does slip through, the best we can do is log it.

And logging it is an example of data loss. The context of the call is gone, so the data that was passed in and is required to resolve the problem may no longer be available, even if you find out about the failure at a later date. This might be OK in the UI, but on the backend/edge it can lead to business problems that are hard to diagnose and solve.
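
To make the failure mode concrete, here is a minimal F# sketch of such an unsupervised loop using a plain MailboxProcessor; the processOrder handler and the messages are made up for illustration:

// A minimal sketch of the unsupervised-loop failure mode: if processing throws,
// the best we can do is log, and the message with its payload is gone.
let processOrder (order: string) =
    async { if order = "" then failwith "empty order" }   // stand-in for real work

let agent =
    MailboxProcessor.Start(fun inbox ->
        let rec loop () = async {
            let! msg = inbox.Receive()
            try
                do! processOrder msg
            with ex ->
                eprintfn "failed to process %A: %s" msg ex.Message   // logged, then dropped
            return! loop ()
        }
        loop ())

agent.Post ""   // the failure is only ever visible in the log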

So, one desirable characteristic that’s different for backends and edge nodes is resilience. If we can recover from an error, we should do so without making it our client’s responsibility.

Achieving resilience

Yes, I’m going to talk about store-and-forward architecture, which may seem ironic, considering the picture above comes from reddit, one of the most prominent users of RabbitMQ; but the fact is reddit scales far better than most websites could.

The essential ingredients of resilience are asynchronous APIs (messaging) and ack/nack functionality of the messaging infrastructure.

Asynchronous messaging allows us to change the problem of timing from “it’s time-sensitive” to “it’s time-relevant” – we just have to process the messages in a certain order, not on a strict timeline.

Ack/nack allows us to guarantee that the message is processed completely, which is what makes an asynchronous API trustworthy – we’ll never lose the message or the fact that there might have been a problem processing it.
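
To make the ack/nack contract concrete, here is a minimal F# sketch of the consuming side; the broker interface is hypothetical, and real clients (RabbitMQ and other brokers) expose the same idea under their own names:

// Hypothetical broker channel: ack removes the message, nack asks for redelivery.
type Delivery = { Tag: uint64; Body: byte[] }

type IBrokerChannel =
    abstract Ack  : deliveryTag: uint64 -> unit                     // fully processed, safe to forget
    abstract Nack : deliveryTag: uint64 * requeue: bool -> unit     // not processed, deliver again later

let handle (channel: IBrokerChannel) (processMessage: byte[] -> Result<unit, exn>) (delivery: Delivery) =
    match processMessage delivery.Body with
    | Ok ()   -> channel.Ack delivery.Tag                           // done: the broker can drop it
    | Error _ -> channel.Nack(delivery.Tag, requeue = true)         // recoverable: the broker keeps it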

Ack/nack is what makes it “fire-and-forget”, an often misinterpreted analogy. Imagine if the military fired rockets thinking “whether it hits the target or not, we can forget about it once it’s fired”. No, of course we care what happens; it’s because there are recovery mechanisms in place that we can kind of forget about it, and those mechanisms are precisely what logging and unsupervised actors don’t provide.

Speaking of actors

If you read the original papers on actors, they were meant to be a reasoning tool in the face of high processing complexity. The original constraints got watered down with the ability to address any actor in the cluster, and the approach has become a tactical tool for relatively safe cooperative concurrency. However…

Your single actor node can process 300K messages/s? I’m not impressed; the number is meaningless not only because your problem doesn’t translate into mine, but also because your actors are unsupervised. Show me the supervised numbers, where an individual message can be shown to be fully processed or replayed (whether successfully or as part of a compensation workflow while handling a non-recoverable error), and while we are at it: can it throttle the sources of events so that my downstream processing is not overwhelmed?

Higher-level abstractions

That brings me to “streaming”: modern-day abstractions that define the concepts of sources, data streams and sinks. And it doesn’t stop there; if you take a look at Apache Beam, you’ll see a standard defining not only these essentials, but higher-level operators like grouping, windowing and many others. The standard is implemented by several different data processing frameworks, so if we are going to build something resilient, let’s focus on stream processing.
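
As a toy illustration of that vocabulary (and nothing more: this is plain F# over in-memory sequences, not Beam’s or any framework’s API), a source of readings can be windowed, grouped and pushed to a sink like this:

// Readings flow from a source, are grouped into fixed windows and per-device
// aggregates, and the results are handed to a sink. All names are made up.
type Reading = { DeviceId: string; Timestamp: System.DateTime; Value: float }

let source : seq<Reading> =
    seq { yield { DeviceId = "dev-1"; Timestamp = System.DateTime.UtcNow; Value = 1.5 } }

let fixedWindows (size: System.TimeSpan) (readings: seq<Reading>) =
    readings |> Seq.groupBy (fun r -> r.Timestamp.Ticks / size.Ticks)     // window index

let sumPerDevice (windowed: seq<int64 * seq<Reading>>) =
    windowed
    |> Seq.collect (fun (window, readings) ->
        readings
        |> Seq.groupBy (fun r -> r.DeviceId)
        |> Seq.map (fun (device, rs) -> window, device, rs |> Seq.sumBy (fun r -> r.Value)))

let sink results = results |> Seq.iter (printfn "%A")

source |> fixedWindows (System.TimeSpan.FromMinutes 1.0) |> sumPerDevice |> sink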

Edit: Conclusion

Elmish abstractions lack a way to implement ack/nack or define a data stream, as well as any means of facilitating update parallelism. If none of that matters, for example in a locally hosted node.js backend supporting user interaction, then you are OK. For a more robust implementation look elsewhere… maybe check out FsShelter, which I’ll be presenting at OpenFSharp in September.


Elmish now supports RemoteDev time-travelling debugger


Since day one, elmish, developed at Prolucid for our front-end applications, has supported console logging of the states and updates taking place in the application. You have to see it to appreciate how powerful a feature that is, and for a while it was enough. That is, until the ever-prolific @forki decided he wanted a full-blown Elm-like debugger 🙂

Thanks to the RemoteDev tools and their developer Mihail, we now have experimental support in Elmish that makes features like time-traveling and import/export available to apps targeting web and mobile!

For details see the README, the React sample and the React Native sample.
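
For reference, the wiring ends up looking roughly like the sketch below. The Program.withDebugger name and the Elmish.Debug namespace are assumptions on my part (the support is experimental and the API may change), so treat it as an illustration and check the README and the samples for the exact calls:

// Sketch only: a trivial program with the assumed debugger wiring.
open Elmish
open Elmish.Debug

type Model = int
type Msg = Tick

let init () = 0, Cmd.none
let update Tick model = model + 1, Cmd.none
let view model dispatch = printfn "model = %d" model   // a real app would render React elements here

Program.mkProgram init update view
#if DEBUG
|> Program.withDebugger   // states and messages become visible in the RemoteDev monitor
#endif
|> Program.run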

The Chrome and Firefox extensions are easy to install, but the setup for mobile is a little bit more complicated, because the communication is done over the network. It requires either a connection to a public cluster or running a local server, with everything that entails – making sure the routes are set up, HTTP/S traffic is possible, ports are forwarded, etc. But it’s all very well documented in the RemoteDev server repo.

Big thanks to Steffen for pushing for the feature, and to Mihail and Alfonso for their support in making it possible!


Authoring Fable bindings: inheritance and variance in action.

JavaScript cheats. If you want to pass some properties, there’s no need for a type to describe them – just put anything you want inside curly braces and you are done. Of course there’s a cost, but that’s not what this post is about.

How does one achieve this level of freedom, in order to integrate with the JavaScript libraries out there, when writing in a statically typed language like F#? Fable provides several mechanisms for transparent interop with the rest of the JS world:

  • Record Types – going from typed to untyped is not a problem;
  • Discriminated unions with a special attribute [<KeyValueList>].

Currently, a typical way to create new Fable bindings for a library starts with TypeScript definitions imported via ts2fable. The tool produces F# type definitions that closely resemble the OO roots of the TypeScript definitions, but are not particularly interesting to work with from a functional-first language. So the usual second step when authoring a binding is the creation of a “helper” DSL: conversion of the generated interfaces to KeyValueList discriminated unions and the addition of factory functions that take lists of DU instances describing various properties of the target. Let’s take a look at an example, from the React JSX intro:


const element = (
  <h1 className="greeting" key="1">
    Hello, world!
  </h1>
);


Brief detour: JSX is a template language that lets you mix React elements described in XML/HTML-like syntax with ES6; it is later transpiled into plain old JS. So in lines 2-4 we see the construction of an h1 element with an implicit child element (the “Hello…” text) and explicit className and key attributes. As far as React is concerned, key is unnecessary here and may even be invalid, but this post is not about React and the attribute is perfect for discussing the types involved.

We’ll skip the actual JS definition of the h1; just keep in mind that we are setting a couple of properties, and let’s dive into the TypeScript definitions for each:


// heavily modified version for the purpose of the article
interface Props<T> {
  key?: Key;
  // snip
}
interface DOMAttributes<T> {
  /// snip
}
interface HTMLAttributes<T> extends DOMAttributes<T> {
  /// snip …
  className?: string;
  /// … snip
}
interface HTMLProps<T> extends HTMLAttributes<T>, Props<T> {
}
// and the factory function overload
function createElement<P extends DOMAttributes<T>, T extends Element>(
  type: string,
  props?: Props<T> & P,
  ...children: ReactNode[]): DOMElement<P, T>;


We can see that TypeScript introduces an interface hierarchy to describe something that is a runtime composition in JS, where an object is just an associative array indexed by property name. Leaving aside for now the interesting use of type intersection, let’s take a look at the F# binding we get from ts2fable:


type DOMAttributes =
    // snip
and HTMLAttributes =
    inherit DOMAttributes
    // snip…
    abstract className: string option with get, set
    // …snip
and Props<'T> =
    // snip
    abstract key: U2<string, float> option with get, set
and HTMLProps<'T> =
    inherit HTMLAttributes
    inherit Props<'T>


Unsurprisingly, we get a couple of interfaces, Props and HTMLAttributes, unified in HTMLProps. Let us consider how we would use interfaces like that… we’d need an object of type HTMLProps that we’d call an interface method on, which is only useful for callbacks. Callbacks are contravariant: as long as the callback receives the “biggest” type, we are type-safe.

But how would we construct an object in a type-safe manner using these definitions? The answer is: we can’t. Constructors need a covariant list of properties.
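
To make the variance point concrete outside of React, here is a minimal F# sketch with made-up types:

// Contravariant position: a handler of the "biggest" type can stand in for a
// handler of the smaller one, because every Dog is an Animal.
type Animal() = class end
type Dog() = inherit Animal()

let handleAnimal (a: Animal) = printfn "handled %O" a
let handleDog : Dog -> unit = fun dog -> handleAnimal (dog :> Animal)   // fine

// Covariant position: a list of the smaller type is not accepted where a list
// of the bigger one is expected; every element has to be up-cast explicitly.
let dogs : Dog list = [ Dog() ]
// let animals : Animal list = dogs                      // does not compile
let animals : Animal list = dogs |> List.map (fun d -> d :> Animal)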

Let’s see what the idiomatic F# DSL offers on top of the imported definitions:


[<KeyValueList>]
type IProp =
    interface end
[<KeyValueList>]
type IHTMLProp =
    inherit IProp
[<KeyValueList>]
type DOMAttr =
    // snip
    interface IHTMLProp
[<KeyValueList>]
type Prop =
    | Key of string
    // snip
    interface IHTMLProp
[<KeyValueList>]
type HTMLAttr =
    // snip…
    | ClassName of string
    // …snip
    interface IHTMLProp
// omitting some Fable magic for clarity
let domEl (tag: string) (props: IHTMLProp list) (children: ReactElement list): ReactElement = jsNative
let h1 props children = domEl "h1" props children

Here we see the DUs representing the properties from the imported interfaces and the “factory” functions to facilitate assembly of the element objects. We also see a marker interface IProp that doesn’t exist anywhere in the previously seen hierarchy. Why do we need the marker interfaces? To answer that question, let’s see the usage:


// assuming we have the right imports
let header = h1 [ HTMLAttr.ClassName "greeting"; Props.Key "1" ]
               [ unbox "Hello, world" ]


We pass HTMLAttr.ClassName and Props.Key as the list elements and a child text element.

What is interesting is that here we have a typed list with instances coming from different types. In JS we just bundle anything we like with {}; in TypeScript we can do the same, but there are also “intersection types” that let us express the mixing of various properties together; in F# we need the artificial marker interface to relate the otherwise unrelated types, so that our list can be constructed.

What we see taking place is a transition from interface definitions usable by contravariant callbacks to a covariant list of properties that can be passed into createElement, and it took the unifying interface type to make that possible.

Unlike the fairly simple React hierarchy, which if you squint looks like a match for the original inheritance tree, ReactNative is more involved: it requires a lot of unifying interfaces, one for each valid combination of properties for a given element, and calls for careful thought and consideration. It’s a work in progress. On the other hand, what JS can only check at runtime, F# can check at compile time, while also providing a nice IntelliSense experience!


Cross-platform UIs with F# and Fable

If you are a small vendor and your primary focus hasn’t been designing UIs, entering the field today presents you with too many choices. However, if you’d like to use existing expertise and develop new skills in a way that accommodates a broad range of platforms (Web, Mobile, Windows and, ideally, OSX and Linux), the choices shrink dramatically.

We have some WPF expertise in house, but due to the premature demise of Silverlight it has become a non-transferable skill. Xamarin seemed like a way forward, but it would fragment our investment across Mobile and Desktop, and despite Xamarin’s accomplishments (and they are impressive) it remains a fragile niche; having tried it, we decided to keep looking.

React (and its Native derivative), with its “learn once, write anywhere” approach, seems like a promising direction, but it has one (big) problem: JavaScript. Having built statically verifiable code, we find that the weak and dynamic nature of JS leaves the language entirely unattractive. On the other hand, JavaScript as an ecosystem is like a lab full of petri dishes, rapidly blossoming and quickly killing off an infinite stream of ideas. It’s great of course that this is happening, but when trying to figure out the minimal viable combination of tools and libs… the fatigue sets in rather quickly.

Over the past couple of years we at Prolucid have been building up our F# skills developing backend systems, and looking at Elm, with its beautiful implementation of the “model view update” architecture, and at Fable, with its amazing capabilities, I realized we may have our way forward.

Building hardware and low-level device software, I expect we’ll be dropping into native quite a bit, but doing native in the tools that were designed for it seems like an excellent idea anyway. At the same time we’ll be able to reuse tools, core logic and models across platforms and tiers.

What was lacking was an F# implementation of Elm’s dispatch loop, similar to what Redux does, but without all the overhead seemingly designed to overcome the language’s shortcomings. There’s an implementation in fable-virtualdom from Tomas Jansson, but at the moment it’s tied to the virtual DOM management.

To that end, an early build of Elmish is now available on npm. Elmish is a small library designed following Elm’s concepts and terminology, and it should work with React, ReactNative, virtualdom and any other DOM/rendering framework.

There are samples for React (Counter, TodoMVC) and React Native.

With the exception of the F# syntax and the explicit dispatch function being passed around, one could follow Elm’s documentation when studying this approach to writing a UI, but hopefully I’ll get some time to write some docs in the next few weeks.
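
To give a feel for the shape of a program, here is a minimal counter sketch. The init/update/view structure is the essence of the approach; the wiring call at the bottom is an assumption about how the library puts it together and may change as Elmish evolves:

// A minimal "model view update" counter.
type Model = { Count: int }

type Msg =
    | Increment
    | Decrement

let init () = { Count = 0 }

let update msg model =
    match msg with
    | Increment -> { model with Count = model.Count + 1 }
    | Decrement -> { model with Count = model.Count - 1 }

// A real view would build React (or other) elements; what matters is the shape:
// the view receives the current model and a dispatch function for raising messages.
let view model (dispatch: Msg -> unit) =
    printfn "Count = %d" model.Count

// Assumed wiring, see the samples for the exact calls:
// Program.mkSimple init update view |> Program.run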

Many thanks to Fable contributors for the help with the ecosystem and for making F# a competitive language to write cross-platform UIs in!


Protoshell and Thriftshell update

As part of FsShelter development I had to implement the multilang serializers for Storm, originally building against the then-current Storm 0.10.0.

I started with Thrift, thinking that since it’s already a part of the Storm runtime it would make adoption easier compared to protobuf, and that, given similar characteristics, the downgrade in performance would not be noticeable. After some testing it turned out that Thrift performed at roughly the speed of JSON (better with some payloads, worse with others), which might require some explanation.

Unlike monolithic protobuf, Thrift has a pluggable model for pretty much everything. So when people say “Thrift” they should qualify at least two things: the Transport and the Protocol. Thrift looks comparable to protobuf only when the Compact protocol is used. Compact, however, has a caveat: it doesn’t work with streaming transports unless you implement custom framing logic to achieve something similar to protobuf’s ParseDelimitedFrom functionality. And Storm is all about streaming, which is why I’m deprecating the support for Thrift. Unless someone wants to maintain it, I’ll be removing Thrift support from future releases of FsShelter.
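
For context, the framing in question is just a length prefix in front of every message, so the reader knows where one message ends and the next begins. A minimal sketch over a plain System.IO stream (an illustration of the idea, not FsShelter’s serializer code):

// Length-prefixed framing: a 4-byte big-endian length, followed by the payload.
open System
open System.IO

let writeFrame (stream: Stream) (payload: byte[]) =
    let header = BitConverter.GetBytes payload.Length
    let header = if BitConverter.IsLittleEndian then Array.rev header else header
    stream.Write(header, 0, header.Length)
    stream.Write(payload, 0, payload.Length)

let readFrame (stream: Stream) =
    let readExactly n =
        let buffer : byte[] = Array.zeroCreate n
        let mutable offset = 0
        while offset < n do
            let read = stream.Read(buffer, offset, n - offset)
            if read = 0 then failwith "unexpected end of stream"
            offset <- offset + read
        buffer
    let header = readExactly 4
    let header = if BitConverter.IsLittleEndian then Array.rev header else header
    readExactly (BitConverter.ToInt32(header, 0))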

Protoshell, on the other hand, gets an update: Storm 1.0 has been released and some packages have been renamed. The new 1.0.1 release of Protoshell is now available and has been tested to work with the latest Storm, so FsShelter can now benefit from the massive performance improvements made in Storm.

FsShelter does not require a new build to benefit from this release. All one needs to start running FsShelter components against Storm 1.0.1 is the new server-side serializer implementation, which can be referenced directly from github as a paket dependency and included with the topology for deployment.


An easy way to try FsShelter

Thanks to Docker, trying something out without having to figure out all the dependencies and pollute your system has become really easy. This is how I started with Storm, and this is how we now help others try FsShelter as well: the fsshelter-samples container.

The container includes an installation of Storm, Mono, F# and a pre-built clone of the FsShelter repo. The original build of Mono (4.2.1) caused processes to crash now and then, and was an interesting study in how Storm deals with failures and what that means for processing guarantees. The current version (4.2.3) runs solid and may deprive you of witnessing Storm restarting all the components… you may have to crash them yourself 🙂


FsShelter: a Storm shell for F#

About a year ago Prolucid adopted Apache Storm as our platform of choice for event stream processing and F# as our language of choice for all of our “cloud” development.

FsStorm was an essential part that let us iterate, scale and deliver quickly, but even from the earliest days it was obvious that the developer experience could be improved. Unfortunately, it meant a complete rewrite of FsStorm:

  • The FsStorm DSL is a really thin layer on top of the Nimbus API model:
    • has explicit IDs when describing components in a topology
    • uses strings in all the names
    • matching of inputs/outputs is not guaranteed
  • FsStorm uses a Json AST as its public API:
    • messages, tuples, configuration
    • serialization mechanism is hard-baked into the API

We’ve worked around some of the problems, usually by writing more code.

It actually makes sense that Storm itself doesn’t care about the type of the tuples/fields. It runs on the JVM, which is very much typed, and it relies on sub-class polymorphism to make things tick. However, the public API for the tuples looks like an afterthought in every language. But we figured: there is this “compiler” that can do “type checking” for us, so let’s make it work! Maybe we can even make it faster if we replace Json with Protobuf?

Coming up with the new DSL that would allow the components to consume and emit tuples of various (static) types on multiple streams was an interesting experience and led to some strange places. A lot has been written on F# DSLs, but none of that applied directly. Can I use “just functions”? Do I need a type provider? A computation expression? A compiler as a service?


After a few false starts I found a paradigm that could be expressed succinctly in F#. As usually happens, once I gave up on certain notions (building an “any purpose” graph from a single source, in this case), the result was pretty simple. And so, after a few weeks of journey and discovery, we are releasing FsShelter: a way to program Storm with F# in a statically typed fashion.
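
To give a flavour of what “statically typed” means here (this is an illustration of the idea only, not FsShelter’s actual API), the streams of a topology can be modelled as cases of a single discriminated union, with components as plain functions over it:

// Every stream in the topology is a case of one schema type, so the compiler
// checks that what gets emitted is something the topology actually defines.
type TopologySchema =
    | Sentence of string              // raw input stream
    | Word of string                  // split words
    | WordCount of string * int       // running counts

// A "spout" is just a function that produces tuples...
let sentenceSpout (emit: TopologySchema -> unit) =
    emit (Sentence "the quick brown fox")

// ...and a "bolt" pattern-matches on the streams it consumes; emitting a tuple
// that isn't part of the schema simply doesn't compile.
let splitBolt (emit: TopologySchema -> unit) tuple =
    match tuple with
    | Sentence s -> s.Split ' ' |> Array.iter (Word >> emit)
    | _ -> ()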

Many thanks to Tomas Petricek, Scott Wlaschin, Andrew Cherry and Erik Tsarpalis; without them FsShelter wouldn’t have been possible.

FsShelter is currently in beta and any feedback is welcome and appreciated.


Real-time analytics with Apache Storm – now in F#

Over the past several months I’ve been prototyping various aspects of an IoT platform – or, more specifically, exploring the concerns of “soft” real-time handling of communications with potentially hundreds of thousands of devices.

Up to this point, being in the .NET ecosystem, I had been building distributed solutions with a most excellent lightweight ESB, MassTransit, but for IoT we wanted to be a little closer to the wire. Starting with a clean slate and having discovered Apache Storm and Nathan’s presentation, I realized that it addresses exactly the challenges we have.

It appears to be the ultimate reactive microservices platform for the lambda architecture: it is fairly simple, fault-tolerant overall, yet embraces fire-and-forget and “let it fail” at the component level.

While Storm favours the JDK for development, has extensive component support for Java developers and heavily optimizes the execution of JRE components, it also supports “shell” components via its multilang protocol, which is what, unlike Spark, makes it interesting for a .NET developer.

Looking for a .NET library to implement Storm components, there’s Microsoft’s implementation; unfortunately, components in C# end up looking rather verbose, and it happens to work exclusively with HDInsight/Azure, which is a deal breaker for us, as we want our customers to be able to run it anywhere. Fortunately, further search revealed the recently open-sourced FsStorm, announced on Faisal’s blog, and I liked it at first sight: the concise F# syntax for components and the DSL for defining topologies make authoring with it a simple and enjoyable process.

FsStorm components can be just a couple of lines of F#, mostly statically verified, with a clear lifecycle and an easy-to-grasp concurrency story. And with F# enjoying first-class support on Mono, we are able to run Storm components effectively on both dev Windows boxes and distributed Linux clusters, while capitalizing on the productivity and the wealth of the .NET ecosystem.

It is now available under the FsStorm umbrella as a NuGet package, with CI, a gitter chatroom and a bit of documentation.

While it is still in its early days, with significant changes on the horizon (something I want to tackle soon is static schema definitions for streams and pluggable serialization, with Protobuf by default), I believe it is ready for production, so go forth and “fork me on GitHub”!


A problem with resources

One classic trait you will find throughout enterprise software development is that people are literally treated as individual resources – to be shared, allocated, etc.

Of course management has a perfectly reasonable motivation: how else would you maximize gains or lower costs while solving the multitude of problems that need solving?

There are several underpinning factors that have to be in place for this mentality, but first, let’s examine what it really means:

  • part-time commitments – it’s harder to foresee when any single feature is actually going to be in the customer’s hands
  • limited window of opportunity – there’s little or no chance to address bugs or overlooked features
  • no ownership – things tend to deteriorate when there is no reason to maintain overall consistency by going beyond the immediate feature/scope

If that looks acceptable, we’re done, read no further!

 


 

On the other hand, if that causes a familiar sinking feeling and you can totally imagine the problems down the line, let’s see how an organization ends up in this situation:

Well, duh, Waterfall:

Yes, the process calls for a Requirements Phase and we can’t let anyone sit on their hands; we gotta keep everyone busy while that happens. The commitments were made and the corresponding budget was allocated last year, for 12 months ahead! Besides, we need more people, right now – over there, on that project.

Organization structure:

Centralized leadership that’s expected to maintain technical and domain expertise, project management skills and the charismatic personality necessary to solve the problems and effectively manage the manpower to implement the solutions. These leaders have to be good at everything! And oh yeah, the leaders will know best how long something is going to take.

Tradition of partitioning the responsibility: 

Sole but partitioned responsibility is a terrible setup: a BA for the requirements, an architect for the technical stack and “best practices”, a coder for the immediate feature code (with a potential separate role for a person responsible for deployment infrastructure) and a tester for the quality: “Hey, don’t look at me, my part is done!”

Lack of talent:

It’s been said many times, but you don’t hire the top 10%. Nobody does, and while you could bring up the average, see above: “we can’t let anyone sit on their hands”. Besides, they’ll just quit and use their newly acquired skills elsewhere, right? Lack of training in broad problem solving leads to skill-set silos.

If you find yourself in an organization with those traits – that’s it, prepare to reap the consequences of treating the individuals as resources, indefinitely: low morale, quality problems, slow and unreliable feature delivery, rotting code base and employee turnover.

 


 

Incidentally, what might the fairy-land of Agile have to offer to address this? (Optionally mapping to the Agile Manifesto.)

Teams:

Stable units with the broad skill set to take a problem from the original statement to a tested product in customers’ hands. They own the code and the solution as a whole. Give the team the problem, give them a lot of problems, just line the problems up and let the team finish – one thing at a time.

Figure out the requirements together with the solution (Customer Collaboration):

The requirements are never done; a solution team working with the customer (or Problem Stater, to use a flavor-independent term) will come up with a better problem statement (or a better problem!) than a BA marinating in his or her own juice. Chances are it will have to be broken down into smaller problems in order to be solvable. In the end you’ll definitely have a better solution.

Decentralized leadership (People and Interactions):

Problem Stater and Impediment Remover are the only two assigned roles (and arguably they are not even on the team); the rest are allowed to emerge within the team in whatever way the team feels is best (and what “best” means is itself left for the team to figure out). People tend to stick to, and be happier with, the commitments they take on and the estimates they make themselves. We all try to take pride in what we do, and self-commitments allow us to do better. All you have to do is position people for success and let them.

Definition of done (Working Software):

Seriously, you can’t even talk about quality unless you have defined what it means to be Done. Refactor and keep things consistent, or don’t and know you will be paying the price next iteration; but since you own it, it’s yours to pay. Pick the stack that works best for your team and the problem, but make good and deliver.

Iterative delivery (Responding to Change):

Test ideas, change tracks, fail early, develop a deeper understanding of your customer (or discover a different customer!). Deliver, make a profit, repeat.

Continuous focus on improvement:

Keep asking “what can we do differently, what can we do better?”. Improve skills, tools, processes, artifacts, etc. as a matter of course.

 

In the end…

If all you want is a cog, then all you’ll get is a cog. But make a person matter, let people take pride in what they do, let them grow and feel the benefits of their accomplishments and they will stick around to make you a handsome profit.


How to fail while implementing agile

Agile software development, the best process the industry has come up with so far, how could it fail? Easy.

Any of the following will do, even if you hired a professional trainer…

 

Dismiss the idea of training as “it’s obvious, isn’t it?!”.

About 80% of all attempts at agile result in “scrumfall”, i.e. people going through the motions without understanding where the value is supposed to come from. So, no, it’s not.

 

Allow the trainer to avoid locking into any specific flavour.

Scrum or Kanban? Leaving the choice to the uninformed, or proceeding with training not tailored to fit the organization, is not training, it’s an information session. People walk away without an understanding of the role they are supposed to play, how to play it, or even why they should bother.

 

Let people taking the training wander in and out as they please.

There is an exercise in the class showing that focusing on one thing at a time improves the team’s productivity. If someone was allowed to skip it, they are still thinking in terms of their silo and how much more productive they personally are if they don’t have to deliver in small chunks.

 

Fail to grasp the product owner and project manager/scrum master roles and their differences.

In a traditional development model your manager and your team lead play both sides: figuring out what to do and how to do it.

To benefit from agile you need to let product owners concentrate on figuring out the “what” and the “why”, and let the team come up with the “how”. Empower the scrum masters to facilitate and unblock.

 

Don’t provide additional training to product owners and project managers.

Focusing on the what and the why is not natural for many people who end up in the product owner role. It requires a different “hat”, if you will.

Project managers need to learn when to step back and when to put their foot down – in agile these two usually happen for entirely different reasons.

 

Don’t bother with the definition of done, don’t account for testing in the estimations.

Even if your teams do everything right, without the definition of done you don’t know when or what you can realistically ship. Just trying to come up with the definition will highlight potential problems with your delivery.

 

Say that testers are uninformed, unavailable or elsewhere, and don’t involve them in the planning and estimations.

Related to the definition of done. No testing? Chances are you’ve built the wrong thing, even if it works as intended.

 

Don’t write stories, keep writing tasks.

Tasks don’t tell the team what motivates you; people have to reverse-engineer the value you expect to derive from the things you tell them to do. Describe the value instead, and they will find the best way to provide it.

Coming up with the pipeline of prioritized “value adds” is not an easy job; don’t make it harder by trying to figure out the solutions as well.

 

Help the developers by solving the problems for them, instead of focusing on defining them.

The traditional approach to problem solving has produced generations of developers whose only motivation is not to screw up. Let people come up with the solution and they will “own” it: they’ll take pride in it, and they will be thinking about it late at night, reviewing their decisions.

 

Swoop in on the team now and then and ask if they are done yet; make an off-hand remark that it shouldn’t take this long.

They are their estimates, not yours. If it takes longer than expected, maybe the problem wasn’t well understood, or maybe the task was scoped too big. Unknowns will pop up; feed the observation back into the next retrospective and the next planning session.

 

When another pressing thing comes up, tell them to get on that, regardless of any previous priorities and commitments.

Sometimes you have to scrap the sprint, but it should be a red flag: something is wrong with your planning. Making a habit of it will destroy morale, as the prioritized backlog is part of the trust relationship between the “chickens” and the “pigs”.

 

Talk about refactoring as if it were a separate, optional activity, done at another time.

This is often a developer’s folly. People used to others making the decisions will try to present the choice of “cheap and fast” or “slow, but good” to the product owner, not realizing that continuous delivery and improvement is on them.

This is why having a good backlog is important: any refactoring decision has to be made by the developers, with knowledge of the roadmap, for the best results.

 

Keep pretending that you can estimate in hours, pad the estimates as your experience tells you.

By now the question “How long do you think it will take you?” should have been exposed as an open invitation to lie. The only relevant question is that of relative complexity; everything else is counterproductive.

 

Skip the retrospectives… you don’t have any power to change anything anyway.

What is the forum for constructive feedback in your team and organization? How often are you prepared to receive it and act on it? Telling someone “if I’m doing something wrong, just tell me” is not a forum, it’s an invitation for reprisal and hurt feelings.

 

Keep the cost, time and scope fixed.

The project management triangle has been known for a long time. Agile assumes that quality is a given. Pick two and let the third float; that’s the only way.

 
