Thursday, September 26, 2019

Accelerate Chapter 3 Discussion Points

Chapter 3 of Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations covered these points:

  • Culture is of huge importance, but is intangible
  • Needed to find a model of culture that:
    • Was well-defined in scientific literature
    • Could be measured effectively
    • Would have predictive power in our domain
  • It is possible to influence and improve culture by implementing DevOps practices
  • Modeling and measuring culture
    • Organizational culture can exist at three levels
      • basic assumptions
        • Formed over time as members of a group or organization make sense of relationships, events, and activities
        • Least "visible" of the levels
        • Things we just "know"
        • Hard to articulate
      • values
        • Provide a lens through which group members view and interpret the relationships, events, and activities around them
        • More "visible"
        • Can be discussed and even debated by those who are aware of them
        • Quite often the "culture" we think of when we talk about the culture of a team and organization
      • artifacts
        • Most "visible"
        • Can include written mission statements or creeds, technology, formal procedures, or even heroes and rituals
    • Westrum's organizational cultures
      • Pathological (power-oriented)
        • Characterized by large amounts of fear and threat
        • People often hoard information or withhold it for political reasons, or distort it to make themselves look better
      • Bureaucratic (rule-oriented)
        • Protect departments
        • Those in the department want to maintain "turf", insist on their own rules
      • Generative (performance-oriented)
        • Focus on the mission
        • Everything is subordinated to good performance
        • People collaborate more effectively
        • Higher level of trust
    • Organizational culture predicts the way information flows through an organization
    • Good information
      • provides answers to the questions that the receiver needs answered
      • is timely
      • is presented in such a way that it can be effectively used by the receiver
  • Measuring culture
    • Use Likert scale with strongly worded statements
    • Determine if measure is valid from a statistical point of view
    • Discriminant validity, convergent validity, and reliability
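The measurement approach above can be sketched in code. A minimal sketch (the statements are Westrum-style survey items as used in the book; collapsing responses into a simple average is my simplification, not the book's full validated scoring):

```kotlin
// Westrum-style Likert items; respondents rate agreement from
// 1 (strongly disagree) to 7 (strongly agree).
val westrumItems = listOf(
    "On my team, information is actively sought.",
    "On my team, failures are learning opportunities, and messengers of them are not punished.",
    "On my team, responsibilities are shared.",
    "On my team, cross-functional collaboration is encouraged and rewarded.",
    "On my team, new ideas are welcomed.")

// Simplified scoring: average the responses into one culture score.
// Higher scores lean generative; lower scores lean pathological.
fun cultureScore(responses: List<Int>): Double {
    require(responses.size == westrumItems.size) { "one response per item" }
    require(responses.all { it in 1..7 }) { "Likert responses must be 1..7" }
    return responses.average()
}

fun main() {
    // A team that mostly agrees with the generative statements:
    println(cultureScore(listOf(6, 7, 6, 5, 6))) // prints 6.0
}
```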
  • What does Westrum organizational culture predict?
    • Organizations with better information flow function more effectively
    • Better culture leads to better software delivery performance and organizational performance
  • Consequences of Westrum's theory for technology organizations
    • Both resilience and the ability to innovate through responding to change are essential
    • Who is on a team matters less than how the team members interact, structure their work, and view their contributions
    • In the case of failure, our goals should be
      • To discover how we could improve information flow so that people have better or more timely information, or
      • To find better tools to help prevent catastrophic failures following apparently mundane operations
  • How do we change culture?
    • The way to change culture is not to first change how people think, but instead to start by changing how people behave -- what they do
    • Lean management and continuous delivery practices impact culture
    • You can act your way to a better culture by implementing these practices in tech organizations

Wednesday, September 25, 2019

Accelerate Chapter 2 Discussion Points

Moving on to chapter 2 of Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, here's a list of bullet points for discussion:
  • We wanted to discover what works and what doesn't in a scientific way, starting with a definition of what "good" means in this context
  • Measuring performance in the domain of software is hard
  • The flaws in previous attempts to measure performance
    • Many previous measures suffer from two drawbacks:
      • They focus on outputs rather than outcomes
      • They focus on individual or local measurements rather than team or global ones
    • Three examples:
      • Lines of code
        • Rewarding developers for writing lines of code leads to:
          • Bloated software
          • higher maintenance costs
          • higher cost of change
        • Minimizing lines of code isn't ideal, either
          • Taken to the extreme, it leads to cryptic code that would be clearer if written with more lines
      • Velocity
        • Velocity is designed to be used as a capacity planning tool
        • However, some managers have also used it as a way to measure team productivity, or even compare teams, which has several flaws:
          • Velocity is a relative, team-dependent metric, so it can't be meaningfully compared across teams
          • When used as a productivity measure, teams inevitably game their velocity
          • This can lead to inflated estimates and to teams being uncooperative with one another
      • Utilization
        • High utilization is only good up to a point
        • Once utilization gets above a certain level, there is no spare capacity (or "slack") for unplanned work, changes to the work, or improvement work
        • This results in longer lead times to complete work
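The utilization point has a standard queueing-theory intuition behind it (my addition, not from the book): in an M/M/1 model the expected time in the system is 1/(μ − λ), so lead time grows without bound as utilization approaches 100%:

```kotlin
// Sketch (not from the book): M/M/1 queueing illustrates why lead
// times blow up as utilization approaches 100%.
// Expected time in system: W = 1 / (serviceRate - arrivalRate).
fun expectedLeadTimeDays(utilization: Double, serviceRatePerDay: Double = 1.0): Double {
    require(utilization >= 0.0 && utilization < 1.0) { "utilization must be in [0, 1)" }
    val arrivalRate = utilization * serviceRatePerDay
    return 1.0 / (serviceRatePerDay - arrivalRate)
}

fun main() {
    // Lead time grows from ~2 days at 50% utilization
    // to ~100 days at 99% utilization.
    for (u in listOf(0.5, 0.8, 0.9, 0.95, 0.99)) {
        println("utilization $u -> ~${expectedLeadTimeDays(u)} days")
    }
}
```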
  • Measuring software delivery performance
    • A successful measure of performance should:
      • Focus on a global outcome to ensure that teams aren't pitted against each other
      • Focus on outcomes, not output (shouldn't reward people for large amounts of busywork that doesn't achieve organizational goals)
    • Four measures of delivery performance:
      • Delivery lead time
        • Time it takes to go from a customer making a request to the request being satisfied
        • When we need to satisfy multiple customers in potentially unanticipated ways, the lead time has two parts:
          • The time it takes to design and validate a product or feature
            • high variability ("fuzzy front end")
          • The time to deliver the feature to customers
            • implemented, tested, and delivered
            • easier to measure and lower variability
        • Shorter product delivery lead times:
          • enable faster feedback
          • allow us to course correct more rapidly
          • allow better responsiveness to defects or outages
        • Measured as time to go from code committed to code successfully running in production
          • Point to consider: How does code deployed to production behind a feature toggle count? Does the toggle need to be turned on for it to count?
      • Deployment frequency
        • Closely tied to batch size, but batch size is difficult to measure, and deployment frequency is easy to measure
        • Reducing batch sizes:
          • Reduces cycle times and variability in flow
          • Accelerates feedback
          • Reduces risk and overhead
          • Improves efficiency
          • Increases motivation and urgency
          • Reduces costs and schedule growth
        • Measured as how often software is deployed to production or to an app store
      • Time to restore service
        • It is important that improved performance doesn't come at the expense of stability
        • Traditionally reliability is measured as time between failures, but with complex software systems, failure is inevitable
        • So the question then becomes: How quickly can service be restored?
      • Change fail rate
        • The percentage of changes to the primary application or service that:
          • Result in degraded service, or
          • Subsequently require remediation, i.e.:
            • Lead to service impairment or an outage
            • Require a hotfix, a rollback, a fix-forward, or a patch
    • The research shows that high performers do well on all four points, and low performers do poorly on all four points
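The four measures above can be sketched as computations over a deployment log. A minimal sketch (the data model and function names here are my own, purely illustrative):

```kotlin
import java.time.Duration
import java.time.Instant

// Hypothetical deployment log entry; fields are illustrative.
data class Deployment(
    val committedAt: Instant,
    val deployedAt: Instant,
    val causedFailure: Boolean,
    val restoredAt: Instant? = null) // set once a failed change is remediated

// Delivery lead time: code committed -> code running in production.
fun meanLeadTime(log: List<Deployment>): Duration =
    Duration.ofSeconds(
        log.map { Duration.between(it.committedAt, it.deployedAt).seconds }.average().toLong())

// Deployment frequency over an observation window.
fun deploysPerDay(log: List<Deployment>, windowDays: Long): Double =
    log.size.toDouble() / windowDays

// Change fail rate: share of deployments that degraded service or needed remediation.
fun changeFailRate(log: List<Deployment>): Double =
    log.count { it.causedFailure }.toDouble() / log.size

// Time to restore service: average remediation time for failed changes.
fun meanTimeToRestore(log: List<Deployment>): Duration? {
    val restoreSeconds = log.mapNotNull { d ->
        d.restoredAt?.let { Duration.between(d.deployedAt, it).seconds }
    }
    if (restoreSeconds.isEmpty()) return null
    return Duration.ofSeconds(restoreSeconds.average().toLong())
}
```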
  • The next question is: Does software delivery performance matter?
  • The impact of delivery performance on organizational performance
    • The research shows that high-performing organizations were twice as likely as low performers to exceed goals in profitability, market share, and productivity

Wednesday, September 18, 2019

Accelerate Chapter 1 Discussion Points

We've recently started the book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations (found here). This is a list of discussion points for chapter 1.


  • "Business as usual" is no longer enough to remain competitive
  • In order to delight customers and rapidly deliver value to organizations:
    • Use small teams
    • Work in short cycles
    • Measure feedback from users
  • DevOps movement: how to build secure, resilient, rapidly evolving distributed systems at scale
  • Focus on capabilities, not maturity
    • The key to successful change is measuring and understanding the right things with a focus on capabilities -- not on maturity
    • Maturity model:
      • Focus on "arriving" at a mature state and being done
      • "Lock-step" or linear, prescribing the same thing to all situations
      • Simply measures technical proficiency or tooling install base
      • Defines a static level to achieve
    • Capability model:
      • Focus on continual improvement in an ever changing landscape
      • Customized approach to improvement, with a focus on capabilities of most benefit
      • Focus on key outcomes and how capabilities drive improvement
      • Allow for dynamically changing environments and focus on remaining competitive
  • Evidence-based transformations focus on key capabilities
    • There are disagreements on which capabilities to focus on
    • A more guided, evidence-based solution is needed, which this book aims to show
  • The value of adopting DevOps:
    • The high performers have:
      • 46 times more frequent code deployments
      • 440 times faster lead time from commit to deploy
      • 170 times faster mean time to recover from downtime
      • 5 times lower change failure rate (1/5 as likely for a change to fail)
  • High performers understand that they don't have to trade speed for stability or vice versa, because by building quality in they get both

Thursday, September 5, 2019

Dependency Injection Sans Reflection in Kotlin

A few weeks back we implemented a very basic dependency injection container in Kotlin using reflection (see here). But here's something cool about Kotlin: it's powerful and flexible enough to allow for a pretty solid dependency injection experience without even pulling out reflection or annotation processing. Check this out:
fun main() {
  val dep4 = Injector.dep4
  println(dep4)
}

object Injector {
  val dep4 by lazy { Dep4() }
  val dep1 by lazy { Dep1() }
  val dep3 by lazy { Dep3() }
  val dep2 by lazy { Dep2() }
}

class Dep1
data class Dep2(
  val dep1: Dep1 = Injector.dep1)
data class Dep3(
  val dep1: Dep1 = Injector.dep1,
  val dep2: Dep2 = Injector.dep2)
data class Dep4(
  val dep3: Dep3 = Injector.dep3)
And we could take this one step further, and allow for mocks to be injected for integration tests:
// Here's the implementation

fun main() = run()
// Running this main method will print this to the console:
// Dep4(dep3=Dep3(dep1=Dep1@610694f1, dep2=Dep2(dep1=Dep1@610694f1)))

fun run(injectorOverride: Injector? = null) {
  injectorOverride?.let {
    injector = it
  }
  val dep4 = inject().dep4
  println(dep4)
}

open class Injector {
  open val dep4 by lazy { Dep4() }
  open val dep1 by lazy { Dep1() }
  open val dep3 by lazy { Dep3() }
  open val dep2 by lazy { Dep2() }
}

private lateinit var injector: Injector
fun inject(): Injector {
  if (!::injector.isInitialized) {
    injector = Injector()
  }
  return injector
}

class Dep1
data class Dep2(
  val dep1: Dep1 = inject().dep1)
data class Dep3(
  val dep1: Dep1 = inject().dep1,
  val dep2: Dep2 = inject().dep2)
data class Dep4(
  val dep3: Dep3 = inject().dep3)

// Here's some hypothetical test code

import io.mockk.mockk

fun main() = run(TestInjector())
// Running this main method will print this to the console:
// Dep4(dep3=Dep3(dep1=Dep1@72c28d64, dep2=Dep2(#1)))

class TestInjector : Injector() {
  // An explicit type on the override lets mockk() infer what to mock
  override val dep2: Dep2 by lazy { mockk() }
}
This could of course be further improved upon, but it shows that in not all that many lines of code, we've got a pretty solid dependency injection setup.
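One possible next step (my own sketch, not part of the original post): backing a dependency with an interface, so a test injector can substitute a hand-written fake without any mocking library at all. `Clock` here is an invented example dependency:

```kotlin
// Hypothetical extension of the pattern: an interface-backed dependency
// lets the test injector swap in a deterministic fake.
interface Clock { fun now(): Long }

class SystemClock : Clock {
    override fun now(): Long = System.currentTimeMillis()
}

class FixedClock(private val millis: Long) : Clock {
    override fun now(): Long = millis
}

open class Injector {
    open val clock: Clock by lazy { SystemClock() }
}

class TestInjector : Injector() {
    // Tests get a fixed, predictable time with no mocking framework
    override val clock: Clock by lazy { FixedClock(0L) }
}

fun main() {
    println(TestInjector().clock.now()) // prints 0
}
```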