Wednesday, December 27, 2017

Managing the Lazy Pro


One of my first jobs in the early 1980s was at IBM in Boca Raton, Florida. Our team was made up of about ten juniors and interns, mainly from nearby Florida Atlantic University (FAU), and we worked primarily in industrial automation (robotic arms, CNC machines, etc.). It was a rigid environment where sometimes it seemed the primary objective was to badge in before 8 AM and leave at 4:42 PM on the button.

Our group was named 5J3, and we took our informal name from an IBM publication from the 1970s that we came across. Its title was "What To Do About The Lazy Pro", and from then on every one of our login screens presented an ASCII-art logo with the tag line "5J3, The Lazy Pros".

The article was interesting as well. It was the cover story for that month's run and featured a photo of a young bespectacled fellow with a crew cut and pocket protector examining an oscilloscope. He was the classic geek, which was the predominant view of programmers back then. Though 'Human Resources' initially thought it would be the mathematicians and physicists who were best suited to learn this new science called 'programming', in reality it tended to be students from the arts and soft sciences (like psychology and philosophy) who migrated over.

The article took an almost algorithmic approach to the issue, gradually working its way down to the final solution: termination of the cancerous employee. It did remind the reader that this was nothing new (it was a well-understood problem in the manufacturing sector as well) and that there were already performance improvement programs in place for these sorts of things. As was often the case back then, the approach was more negative management than motivational, with protecting the company from this unproductive resource as the primary objective. Almost like dealing with a faulty robotic arm.

The problem with this approach is that programming tends to be a more creative exercise than manufacturing (though, as Kanban has shown, this approach is inferior even within the manufacturing sector), and positive reinforcement can produce better results than negative management when dealing with creative individuals. In most cases the lazy pro isn't so much lazy as bored.

I had the good fortune to manage a team of superior JavaScript programmers around 2010. It was a virtual office, with programmers scattered across the globe. As is often the case, one of the owners was not a programmer and was what I like to refer to as your classic 'Marine Corps manager': 'the beatings will continue until morale improves'. This individual would monitor the social media feeds of all employees, and whenever activity was detected during business hours I would get a call cajoling me to fix this problem.

In my mind this is where a manager earns their pay and can save the organization money. Management is not about controlling people but rather about getting the most out of the resources at your disposal. This requires the manager to get coders whatever resources they need to do their job more efficiently and to provide guidance when needed. Removing obstacles is a big part of the equation, but so is motivation. In fact, many creative types (myself included) will work for less money on projects that are more interesting, or, as I often hear, "you couldn't pay me enough to do that".

Many will counter that motivation comes from within, and there is certainly some truth to that, but de-motivation is an external force, and it is this we must be vigilant against. Putting an employee on notice is a de-motivating scare tactic that, in my opinion, is a short-term solution to a long-term problem. If you have hired correctly, the employee is probably just bored. Boredom and burnout are common concerns in the software community, and addressing them head-on is typically the best way to deal with programmers.

For the sake of brevity I will not get into the various motivational strategies available; however, it is imperative to avoid the death spiral. You are in, or entering, a death spiral when you are resource constrained and all projects seem to be behind schedule. This leaves little time for motivational strategies (like education, cross-training, etc.) and tends to de-motivate engineers. Proper planning should take into consideration not only execution estimates but also intangibles like cross coverage of skill sets and motivational exercises. Creative resources like to be challenged even when the job requirements might seem to offer no challenge at all. Put succinctly, managers of creative resources need to be creative squared.









Saturday, November 11, 2017

US <> Capitalism

The United States Does Not Equal Capitalism

In my not so humble opinion, this country was founded on the concept of freedom, not capitalism. Somewhere along the way I think we lost sight of that, began equating capitalism with the US, and started elevating capitalism above freedom, to the point where we now have our government attempting to enforce capitalism at the expense of our freedom. What's worse, any time there is government interference in capitalism it is attacked as a communist maneuver. I am not an advocate of communism in the US, or anywhere else for that matter, so please don't assume that is my point here.

Consider our monopoly laws and unions. Henry Ford and his ilk went to great lengths to squash unionization, and many supported anti-union views on the rationale that unions were fronts for communists and criminals. But at the end of the day, from the mile-high perspective, are unions not just a way for the average American worker to try to reclaim some of their life and freedom? Am I the only one who reads Steinbeck anymore?

The opposite of freedom is control, and control comes in many subtle forms, not the least of which is financial control. Monopolies are illegal mainly because they attempt to control us, thus taking away our freedoms in a particular area. It is a classic example of freedom controlling capitalism, and it is not anti-American or communistic to have and enforce monopoly laws; it is exactly the American thing to do if we cherish our freedom above capitalism.

I am anti-welfare, anti-food stamps, etc. I do not agree with wealth redistribution via taxation to enable a permanent underclass or to let the government expand to monstrous proportions, but I do believe that somewhere along the way we have allowed the corruption that comes with money to misguide us into defending the ultra-wealthy at the expense of our basic freedoms. It is not noble to support a pharmaceutical company's excessive pricing for the sake of capitalism while Americans die because they cannot afford to contribute to that company's bottom line. It is not American to allow insurance companies to take nearly 80% of every dollar spent on healthcare. It is actually anti-American to place money above life, thereby depriving an American of their most basic freedom for the sake of capitalism. America does not equal capitalism; it equals freedom. Push comes to shove, capitalism loses.

Consider the American sports industry as an example. At one point these guys were dealt with as property. The wealthy owners colluded, poor-mouthed, and alluded to communism when they fought player unionization. Now the greedy players share in the wealth with the greedy owners. There was obviously enough money to go around. The players' unions did not signal the end of the sport; rather, they were the beginning of more freedom for the players at the expense of the owners' bottom line (wealth distribution), which is kind of my point. It is better for the economy when more than one person can afford to buy a loaf of bread.


Saturday, November 4, 2017

Weak Minded Strong Typing

It never ceases to amaze me, the prejudice that exists within the coding community against weakly typed languages. It's almost as if variables have no rights. People can change. Why can't variables?

In the Python programming language I can write …

x = 2
print(type(x))   # <class 'int'>
x = str(x)
print(type(x))   # <class 'str'>

This is a thing of beauty. It's almost as if the variable can transcend its initial limitations and rise to a greater social status: "I was a lowly int, but now I'm a string".

Try the same in a strongly typed language like Java and see what you get. Many programmers I work with look down their noses at languages like PERL and refer to them as write-only languages, and while I agree PERL's obscure syntax and side-effect-laden constructs can make it difficult to maintain, what language is immune to poor programming practices and sloppy hacking?

I have worked on massive PERL code bases that were easy to maintain because they were written properly, with attention paid to coding conventions and detail. In fact if I recall correctly, when I got to Amazon in 1999 a significant portion of the code base was written in PERL. 

Lately it seems there is a push in the programming community to move towards more strongly typed languages. Even JavaScript is not immune; nothing is sacred anymore. Why don't we just leave the language alone? If you are an advocate of strong typing, then build your code base in that manner as a convention, but don't impose these limitations on others. I have seen many Python projects that were obviously the product of a displaced Java engineer, with abstract base classes, inheritance workarounds, and other rigid constructs more typical of a COBOL program than a modern programming language.
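That push has already reached Python itself in the form of optional type hints. As a rough sketch of what the strongly typed camp is asking for, the snippet below adds an annotation and runs the result through an external checker such as mypy (the exact error wording varies by tool and version). The program still runs exactly as before; it is only the static checker that objects.

x: int = 2
print(type(x))   # <class 'int'>
x = str(x)       # a checker like mypy flags this re-assignment: str is not int
print(type(x))   # at runtime this still happily prints <class 'str'>

In other words, the dynamic behaviour is unchanged; what changes is that the variable's little act of social mobility now gets reported as an error.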

Part of the beauty of coding in a dynamically typed language is the freedom from having to worry about the particular constraints of the language. Instead one can focus on the design rather than the implementation. Strongly typed languages are notoriously difficult to work with when the class structures are dynamic in nature, to the point that you often feel like you are fighting with the programming language to get the job done.
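To make the flexibility concrete, here is a small made-up sketch (the class and function names are invented for illustration): the function at the bottom cares only that each argument behaves the right way, i.e. has an area() method, not which class hierarchy it descends from, so no abstract base classes or interface declarations are required.

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    # Duck typing: anything with an area() method is welcome here.
    return sum(shape.area() for shape in shapes)

print(total_area([Circle(1), Square(2)]))   # 7.14159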

I once saw a Java stack trace that was 83 levels of inheritance deep. I can't think of anything that is that abstract in real life. 

While I can appreciate the desire to produce maintainable code, I can also appreciate the benefit of flexibility in a programming language. I sure hope the result of this recent strongly typed movement is nothing more than a set of recommended best-practice design patterns, and not the onslaught of a bunch of strongly typed interpreters.
  


Wednesday, September 20, 2017

Definition and Tense

In philosophy we have the seemingly innocent concept of reference. To what do you allude (the word 'allude' itself being a concept)? But when we talk about a reference we must first consider what a definition is, because we tend to refer to things as being defined when in reality they are anything but.

For example, I see an ant. I refer to that object as an ant. I step on that object and obliterate it to the point where its constituent components no longer exist, yet what can be said about my reference to that ant? What about any modifications that ant may have made to its environment? It did exist at some point in time, even if that point in time does nothing more than define my relationship to that ant (i.e., the ant no longer existed in its ant-type form AFTER it interfaced with me).

Consider Abraham Lincoln. Is this a hard reference to a well-defined object or a loose reference to something that no longer exists? So the concept of tense is central to the concept of a reference, and the concept of a reference is strongly tied to the definition of an object.

Consider a hard reference, something to the effect of "I saw Abraham Lincoln remove his hat", versus a soft reference like "Abraham Lincoln's mother passed away today".

Since Abraham Lincoln no longer exists, we must rely on the definition of what an Abraham Lincoln was in order to understand what we actually have a reference to. In other words, the reference may refer to something which exists in one of two tenses: currently exists, or does not currently exist. There are many other dimensions we must consider, like 'is in my present space', which we can examine later, but for now let's limit ourselves to separating references into one of two groups: currently exists and currently does not exist. We can group a bunch of these attribute-like things together later and refer to them in a context, but for now let's keep things simple.

So in the simple case where we have a hard reference to a well-defined object (I saw Abraham Lincoln remove his hat today), we can now think about the difficulty of actually defining an object and face head-on the fact that in all such cases we must ultimately limit our concept of a definition in some significant ways.

The statement 'Abraham Lincoln was a human' is pretty much accepted fact. The exact definition of a human MUST be abstract. Not all humans are identical; therefore our definition of a human must be based on physical attributes (like organs, etc.) clipped at some level, which are themselves abstract concepts predicated on other abstract concepts like atoms. Think of the concept of a hand. Is there an absolute limit on the size (small or large) of a hand? Must it be operational? Must it be attached to a human? Must it be part of a biological organism? Do monkeys have hands? If all things must ultimately be composed of some atomic entity, can we measure all things using this unit of measure? Can we say this love is stronger than that love because it contains more atoms? How many atoms does it take to make a love, and in what configuration?

Regardless of how we go about this defining business (since absolutely defining something is technically impossible), what this definition must ultimately reconcile is what we accept as historical facts (effects) about this entity, regardless of entity tense. One might argue that once a reference no longer refers to a physical object it MUST reference the set of effects (we will call these 'accepted facts') this entity caused; however, one could also argue that the set of effects this entity has accumulated over time must also be considered even when evaluating existing entities. Put another way, you (and every other entity in the universe) are simply defined as the set of things you have affected, even if the only thing you affected was yourself. References to you are either direct or indirect, and they refer to this set of facts.
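Being a programmer, I find it easiest to see this as a toy data model. The sketch below is entirely my own illustration (the class and field names are invented), but it captures the claim: an entity reduces to its accumulated set of accepted facts, and dereferencing it yields that same set whether the entity currently exists or not.

from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    currently_exists: bool
    effects: set = field(default_factory=set)   # the 'accepted facts'

lincoln = Entity("Abraham Lincoln", currently_exists=False,
                 effects={"removed his hat", "signed the Emancipation Proclamation"})

def dereference(entity):
    # Whatever the tense, the reference bottoms out in the set of effects.
    return entity.effects

print(dereference(lincoln))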

So basically all references are weak and all objects are loosely defined. Such are the concerns of true AI.






Tuesday, April 11, 2017

Percival and Such

Perhaps the reason we never hear from anyone after they leave this world (if, as Percival believes, we still exist) is that it is so insignificant in the grand scheme of things that it isn't even worth a passing thought.

Sunday, March 26, 2017

Understand the Answer? Hell, It Took Me 30 Years to Understand the Question!

My hobbies over the years have been mathematical logic, the attempt to model thought, and deterministic programming. Obviously this has brought me to many odd places, like feedback robotics, natural language parsing, symbolic logic, the ontology of time, etc., so it is not surprising that over the years I have run across the halting problem and Gödel's incompleteness theorems for Peano arithmetic, to name just a few of the philosophical conundrums confounding modern-day philosophers.

I finally had some free time this past week, so I returned to a topic that has always eluded me: namely, why is the halting problem such a big deal? I liken it to Gödel's proof because both rely on Cantor's diagonal argument for their formal construction. Cantor's diagonal argument is a way to formally introduce the concept of infinity into an equation. He was an evil man :-)

I guess one of my problems with understanding the actual question (or problem) is that, having been brought up as a computer programmer, I always had to take the resources at hand into consideration when developing algorithms, so I rarely had the luxury of assuming unlimited (or infinite) resources in any application I would develop. When I studied problems in classic mathematical logic, it became clear that many proofs that a system was inconsistent relied on introducing the notion of infinity into the equations, and this always confused me. After all, nothing is infinite in real life, so how do we allow it into our everyday equations as if it were some real quantity rather than an abstract notion? It almost makes one glad Cantor ended up institutionalized.

So this past week it finally dawned on me that the halting problem (much like Gödel's proof) is simply the result of allowing infinity into the equation. While it is derived from Hilbert's Entscheidungsproblem, it is basically the problem of recursively allowing the output of an operation to be used as its own input. It really has little to do with recursive function theory or set theory and more to do with ignoring the very real constraints we experience in everyday life (like time, clipping, etc.).
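If you squint, that self-referential feedback loop can be written down in a few lines. The sketch below is only the shape of the classic argument, built around a pretend halts() oracle (no such general-purpose function can actually be written); the point is simply a program being handed itself as input.

def halts(program, argument):
    # Pretend oracle: True if program(argument) would eventually halt.
    raise NotImplementedError("no general oracle like this can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:
            pass        # loop forever
    return              # halt immediately

# Asking whether diagonal(diagonal) halts contradicts the oracle either way:
# the output of halts() is fed straight back in as its own input.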

I do not mean to say that time, resource constraints, etc. are the only problems; as Russell's Paradox demonstrates, the issue is really more infinity-based, or the belief that any system can handle unrestrained totality. In reality systems are constrained, and I believe we are best served by constructing systems with input constraints in mind. In so doing we can reduce the unpredictable to the predictable, within reason.
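Here is a minimal sketch of what I mean by building the constraints in, under the assumption that we model 'programs' as Python generators that yield once per unit of work (all the names are my own invention). Once you impose a concrete step budget, the unanswerable 'does it halt?' becomes the perfectly answerable 'does it halt within N steps?'.

def countdown(n):
    while n > 0:
        yield           # one unit of work
        n -= 1

def loop_forever():
    while True:
        yield           # never finishes on its own

def halts_within(program, budget):
    # Decidable by construction: drive the program for at most `budget` steps.
    for steps_used, _ in enumerate(program, start=1):
        if steps_used >= budget:
            return False    # out of budget; treat it as "did not halt (for us)"
    return True             # the program finished inside the budget

print(halts_within(countdown(10), budget=100))    # True
print(halts_within(loop_forever(), budget=100))   # False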