Why find what can be fixed?
How about a tool that fixes what it finds?
Why do we all have a deep backlog of security tickets some tool found for us? Why do so many of us take an annual training on OWASP Top 10, secure coding practices, etc?
If our behavior is any indication, it seems like finding problems has become a badge of pride.
Really, the same can be said for other "-ilities" as well. That performance issue found by APM? You know, the one the ops team is mitigating with an extra $5,000 a month in compute. Those crashes the same team mitigates through a janky reboot process? Code so complex the Harvard intern dropped out to become a fry cook?
Why are problems like these so common that they have attained meme-like status in the software development and operations culture?
Want to find a memory leak? If you know how, you can run a code profiler locally, or lean on APM at runtime in production. Each of these skills is rarer than we'd care to admit, but even scarcer are the wizards who know how to fix the problems these tools find.
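To make the skill gap concrete, here is a minimal sketch of local leak hunting with Python's built-in tracemalloc profiler. The leaky cache and request handler are invented for illustration; the point is that even this "easy" detection step assumes the developer knows the profiler exists and how to read its diff.

```python
# A minimal sketch: finding a memory leak locally with tracemalloc.
# leaky_cache and handle_request are hypothetical illustration code.
import tracemalloc

leaky_cache = []  # deliberately leaky: grows forever


def handle_request(payload: bytes) -> None:
    # Bug: every request's payload is retained and never evicted.
    leaky_cache.append(payload)


tracemalloc.start()
before = tracemalloc.take_snapshot()

for _ in range(1000):
    handle_request(bytes(1024))  # ~1 KB retained per request

after = tracemalloc.take_snapshot()

# The snapshot diff points at the allocation sites that grew the most.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```

Knowing that the top line of the diff is the allocation site of the leak, and what to do about it, is exactly the expertise the funnel below describes.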
Consider this "funnel of capability" for a moment:
Millions of developers creating security, performance, stability and maintainability problems.
Hundreds of thousands who know how to use the tools to detect them.
Thousands who know how to fix them.
It is evident why the backlog of "-ility" issues continues to grow without bound.
Fixing Humans
As the number of developers creating code continues to grow, we need the ranks of those who can detect and remediate quality issues to expand rapidly.
But any psychologist or sociologist will tell you, "You can't fix people."
Developers have a "maker" personality. It's in their nature to solve problems by writing more code. In the context of the problem at hand, this only exacerbates the issue.
So the notion of "shift left" and addressing -ility issues before a pull request is approved is a fallacy.
There is a human all the way to the left of wherever we are now. The fallacy lies in expecting them to create value by writing code while simultaneously detecting and fixing security, performance, and other issues.
There is nothing wrong with the humans.
Let's Talk Approach
Why do we deploy a static scanner?
Is it to find security issues in the code?
Or is it to ensure we remove security issues from the code?
"Why?" is a philosophical question. Take a moment, contemplate it, maybe even remove that lint from your navel.
Done? If you chose the first option, stop reading, go ahead, and solve all your problems with yet another tool in that next vendor meeting.
But for those of you who found meaning in considering the issue, the next question is: Under what model could we ensure security issues are removed from the code?
We could train the humans creating the code to fix their own messes.
We could expect tools to fix messes before humans need to get involved.
The first solution, training humans, has failed. We've been doing it for a decade or more and have seen little change. So let's apply the old saw, usually attributed to Einstein: "The definition of insanity is doing the same thing over and over and expecting different results."
Before moving on from this model, let's take a closer look at human behavior in the context of passwords found in code. It's no longer a prevalent problem. Why?
I assert that because this was a very simple problem to fix, humans were eager to remedy it.
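As a toy illustration of just how simple that problem is, consider how little machinery it takes to both detect a hardcoded password and suggest the mechanical fix. The regex, variable names, and `APP_PASSWORD` environment key here are all hypothetical, not any real scanner's rules.

```python
# A toy illustration: hardcoded passwords are easy to detect (one
# regex) and easy to fix mechanically (read from the environment).
# The pattern and APP_PASSWORD name are hypothetical examples.
import re

PASSWORD_RE = re.compile(
    r'(\w*password\w*)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE
)


def suggest_fix(line: str) -> str:
    """Replace a hardcoded password with an environment lookup.

    The suggested code assumes the target file imports os.
    """
    return PASSWORD_RE.sub(r'\1 = os.environ["APP_PASSWORD"]', line)


print(suggest_fix('db_password = "hunter2"'))
# db_password = os.environ["APP_PASSWORD"]
```

When the detection and the remedy are both this obvious, developers fix the problem themselves, which is precisely the behavior we observed.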
The conclusion to draw is that given sufficient understanding, humans will do the right thing. Therefore, humans don't need to be fixed.
The problem must lie with the tools.
Tools that Fix Problems
With a new model addressing the issue of finding more problems than humans are able to fix, we need a new class of tools. Let's consider what the implementation of such a tool would look like.
If I had a tool that could see a problem and fix it, I would want to deploy it in the pull request process.
When my CI system builds and checks the software, this tool will identify a problem, create a new pull request suggesting the fix, and provide an explanation to the developers.
As a consequence, the developer can see the code, learn from an experience they actually care about, and take positive action. This implementation wins.
Furthermore, let's return to our capability funnel. Our wizards who can fix the code suddenly have much less demand on their time. They only work on problems that genuinely require their expertise.
When the wizards do interact with the developers, the developers are better equipped to understand them, because the tool has already fostered understanding in the context of the particular "-ility."
However, I want more from this implementation. When the wizard and developer agree on the solution, I want the wizard to be able to train the tool so that other developers and teams can benefit from that learning.
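That training loop could be as simple as recording each agreed-upon fix as a reusable rewrite rule. The in-memory rule store and `yaml.load` example below are hypothetical stand-ins for whatever the tool would actually persist and share across teams.

```python
# A sketch of the "wizard trains the tool" loop: once wizard and
# developer agree on a fix, it is recorded as a rule that every other
# team benefits from. The in-memory store is a hypothetical stand-in.
learned_rules: dict[str, str] = {}


def record_agreed_fix(pattern: str, replacement: str) -> None:
    """Called after the wizard and developer sign off on a fix."""
    learned_rules[pattern] = replacement


def apply_learned_fixes(line: str) -> str:
    """Apply every previously agreed rewrite to a line of code."""
    for pattern, replacement in learned_rules.items():
        line = line.replace(pattern, replacement)
    return line


# One wizard's lesson becomes everyone's automated suggestion.
record_agreed_fix("yaml.load(", "yaml.safe_load(")
print(apply_learned_fixes("cfg = yaml.load(f)"))
# cfg = yaml.safe_load(f)
```

The payoff is leverage: the wizard teaches once, and the tool repeats the lesson on every subsequent pull request.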
And Now I'll Buy That Tool
If a tool like this existed, I would replace the tools that only find problems. My teams would actively participate in training the tool during internal hackathons, conferences, and normal team rituals. My wizards would present new cases for the use of the tool and collaborate with the tool's vendor to develop more features and capabilities.
Can someone build this tool so I can buy it?


