Friday, September 29, 2023

 When I return from whence I came,

all will be different but also the same.

Lessons I've learned will be quickly absorbed,

and contribute to those who have yet to be born.

My knowledge will swell from the others around,

and go towards the answers which remain to be found.

For I am alone but a very short time, 

and belong to the flow which advances the line. 

Advances the line and continues the flow,

which all are a part of as we constantly grow.

We continue to grow towards an end that is clear,

that all should embrace and no one should fear.

For fear is a fleeting lost moment in time,

like Jah is to you as is trunk to the vine.

So spread forth your faith in the knowledge that all,

will absorb and protect you, and not let you fall.




Thursday, May 13, 2021

Erosion of Rights or Natural Balance?

When I was young my mother thought it would be a good idea if I memorized my social security number, so a campaign was launched to do just that, and eventually I did.

I recall at the time being fascinated by this concept. What was this social security number, how did I get one, did everybody have one? These and other thoughts crossed my mind. I learned everyone's was unique. I naturally assumed it would be used to uniquely identify you; however, I also learned the prevailing belief at the time was that it was an anti-American imposition for the government (or anyone, for that matter) to do so.

Americans, it seems, don't think it is a good thing that the government can uniquely identify them using a number. I have heard friends argue that it used to be illegal to use a person's social security number to identify them. I do not believe this to be true, though there are several laws that do limit when you can be required to give out your social security number.

But I recall as a youngster being perplexed because it seemed logical to simply use someone's social security number to identify them, government or otherwise, yet here were people telling me this was an intrusion on my rights.

Over time I have watched in horror as my rights have been eroded more and more, as the use of my social security number has become a statement of fact, not a questioned practice. I can't buy a car, apply for credit, or get a government ID without my social security number.

And so here we are, right back to where my natural instincts led me. The use of your social security number to uniquely identify you is considered by most (if questioned by any) to be "just the way it is". So I wonder: is this an erosion of my rights or the realization of natural balance?


 

Monday, December 21, 2020

The Fermi Paradox, Revisited

The Fermi Paradox simply asks why we see no evidence of extraterrestrial life when there appear to be so many planetary systems out there.

Perhaps the answer is not so complicated. Maybe we aren't worth contacting.

I contend that no self-respecting alien life form would want to make contact with us until we get our shit together. As far as planetary civilizations go we are pretty backward. We speak no universal language, we have no central body to negotiate with and we defile our planet. Hell, we can't even agree on a universal system of weights and measures.

Until we become approachable we will remain isolated, perhaps even marginalized, by the other life forms which obviously must be present but who want absolutely nothing to do with us. We will slowly crumble under our own weight unless we can get an infusion of intelligence from another source.

It is for this reason I propose the International Galactic Society (IGS): a society of like-minded individuals determined to make us approachable by other forms of intelligent life.

We need a universal language (I propose English), a universal system of weights and measures (fuck the metric system) and a universal global governing body (led by me of course) to get started. Once we have overcome these minor obstacles we can move on to more fundamental challenges like galactic advertising, galactic language translators and accommodations which are galactically accessible. Oh, and we will also need a galactic website and an associated attractive logo.



 


Sunday, April 26, 2020

The Reason Why

So before I continue with potential solutions to the many problems I perceive surrounding existing formal logic systems (the implication operator, issues surrounding the introduction of time into a formal system, and the concerns regarding replacing a binary logic system with a many-valued logic system), which may ultimately bring us into the realm of modal logic (but I suspect it's not that simple), I think it is best we take a step back and consider what it is we are trying to accomplish by formalizing any of this.

In other words, what is the "why" of a logical formal mathematical system, and what questions would we like to try to answer once we have created such a thing? What is it we are trying to accomplish, exactly? I suspect we are trying to address the following general areas …

1) Did something happen (past) or will something happen (future)?
(and is it the case that we can only answer for the present tense, or past or future?)

2) What is the certainty surrounding the derived answers (the probability) and is there any such thing as absolute certainty?

3) Is all of this somehow relative, either to the evaluator, the time or the place? Is it the case that everything is relative based on an evaluator's time and place and maybe even, who or what an evaluator is?

It would seem that predicting the future is harder than predicting the past, but is this true, and how do we demonstrate it? Clearly, if we could prove that something absolutely must happen given a minimum set of required inputs, we could then predict the future with 100% certainty. Is it even possible to enumerate a minimum set of required inputs? Is it any easier to predict what did happen versus what will happen? How does probability enter into the equation? Can we ever say something is absolute?

To answer some of these questions let's go back to my perceived problems which may not be problems at all since I tend to be a simple uneducated layperson with nothing but questions and little formal education. 

Probability can sometimes be expressed as 'necessity' and 'possibility'. When we look at modal systems we will see one can be expressed in terms of the other, so I don't think this is a problem we cannot overcome with already understood methods. Perhaps many-valued logic is solved using a predicate of this nature.
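For reference, this interdefinability is the standard modal duality:

```
□p ≡ ¬◇¬p    (p is necessary iff not-p is not possible)
◇p ≡ ¬□¬p    (p is possible iff not-p is not necessary)
```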

When we further investigate the concept of time, we discover that, building on A.N. Prior's work, Saul Kripke seems to have proposed a solution to this issue, and so have Rescher and Urquhart, so one might be inclined to think this is just a matter of formalism and the logical conclusions derived from using their systems. Considering Rescher was a leading researcher into many-valued logic, perhaps this solution is tied to the solution of time as well. I need to understand these systems better to be certain, but I suspect the foundational problem is not time, especially if we constrain time to the past, nor do I believe it to be probability, again if we constrain it to the past.

Is there anything which is absolute in our universe, and if there is, can there be any operations for combining absolutes to continue the absolute chain of certainty? Or is everything probabilistic, or does everything become probabilistic once we use operators to combine results? I suspect things which happened in the past are 100% certain; the future, not so much. Is this even true? It would seem so at face value. Perhaps more important is the concept of semantic relationship.

Revisiting the issues surrounding the implication operator, we see this amounts to a semantic relationship (or lack thereof) between the antecedent and the consequent. Clearly these two must hold some valid, provable relationship to each other to overcome this problem, but it is fair to ask if this is simply a problem with formalism. I suspect not. I am not sure we can absolutely construct a system of semantic relationships that is consistent and complete. We tread into the boundary between mathematics and concepts when we investigate semantics, and to quote Frege, "Concepts are areas with fuzzy boundaries." What are we left with when boundaries are ill-defined?

It is often helpful to construct a thought experiment which can elucidate some of these issues. Let's use common sense to guide us and remove some difficult issues we are aware of to gain some insight. In other words, let's start with some low hanging fruit. 

To remove the issues of perspective, knowledge and time let's take a twig. A small piece of wood. Now if I burn this piece of wood it will at some point cease to be a piece of wood and will instead become ash. This ash may ultimately blow away and we are left with nothing from something. Let us not concern ourselves at the moment with the semantic relationship between a piece of wood and ash and what ash blown away by the wind is, since I suspect this is where we are ultimately heading. Let us simply capture this event using a video recording device. 

So if I film the event over time, of a piece of wood burning until it becomes ash and blows away in the wind I can certainly say this event did happen with 100% certainty. My video recording of this event is proof and so even if only I and a handful of others saw this in person we could certainly share this video recording with others. The fact that at some point in time this piece of wood did exist at some place (let's say my patio) and it no longer does, could be considered an indisputable fact of something that happened in the past (assuming we don't take into consideration fake or doctored videos, etc) at a certain location (many recording devices can also capture location) at a certain time (again, let's rely on the recording device's reporting of time) and again, let's not concern ourselves with relativity and observational issues.

This seems like the easiest thing to describe using a formal system and something where we can begin. Again, let me stress these points; excluding the consideration of relativity and the definition of observer. 

So clearly, a formal logic system which could represent this event as an absolute certainty would be a good place to start. We wish to construct a logical formal mathematical system which represents this event absolutely. It will always prove this did happen. It could prove that it is "necessary" that this did happen in the past. I keep stressing the term "necessity" because we will soon become exposed to this basic concept when we review what a modal logic system is. You can look up the difference between "necessary" and "possibly" under any introduction to modal logic systems if this is still gnawing at you (which I hope it is and I hope you do) and by constraining our research to the past, for now, we can perhaps make some progress in constructing a system to model reality. 

I think this simple thought experiment lays bare what we are really up against here, though, and I don't know of any existing modal logic system which has completely solved this problem yet. While object-oriented computer science can aid us a bit in the understanding of predicates such as 'is a' and 'has a', I believe the central predicate we will have to come to terms with is 'is to', and what we are trying to accomplish is the ability to handle the manipulation of concepts and what their 'is to' predicates are. I will refer to this as the semantic predicate or the semantic problem, and I suspect once we can solve this we will be in a much better place in beginning to construct a logical formal mathematical system which can be used to express reality. Using our above example, "ash 'is to' wood as bla" is probably where we are heading.
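As a toy illustration of what 'is a' / 'has a' style predicates look like when mechanized (this is only a sketch of the representation, not a solution to the semantic problem; every fact and name below is made up for illustration):

```python
# Toy illustration of 'is a' / 'has a' style predicates stored as triples.
# The "comes from" predicate is a crude stand-in for the 'is to' relation,
# which is exactly the part no formal system has pinned down yet.
facts = {
    ("ash", "is a", "residue"),
    ("wood", "is a", "material"),
    ("wood", "has a", "grain"),
    ("ash", "comes from", "wood"),
}

def holds(subject, predicate, obj):
    """Check whether a triple is present in our tiny knowledge base."""
    return (subject, predicate, obj) in facts

print(holds("wood", "has a", "grain"))   # True
print(holds("ash", "is a", "material"))  # False
```

Representing the triples is trivial; saying what "ash 'is to' wood" *means*, and how that meaning combines with other meanings, is the semantic problem.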

But we have some work to do, and the current state of modal logic is a great place to start.





Modeling Reality, Introduction

Disclaimer:

As I near retirement (and because of recent employment woes) I seem to have some free time on my hands for the first time in a long time. When that happens I often get back to my hobbies, one of which has always been symbolic logic and modeling reality. I usually (as previous blog posts may show) attack this from a philosophical perspective, but sometimes try to get down to the mechanical aspects as well. This post is one of these instances. I will also try to get into Relevance Logic research in future posts but this is kind of a placeholder for me to go over the foundational needs for such research.



So when trying to model reality using logic we are currently fucked. I contend there are a variety of reasons for this, not the least of which is the inability to account for time in a formal logic system. Now it is also the case that folks like Godel have demonstrated that infinity also causes issues (damn you, Georg Cantor); however, the universal ('for all') and existential ('there exists') quantifiers aid us a bit here, hence the rationale for what is sometimes referred to as first-order predicate calculus.

Now it is probably also the case that if we had similar constructs for time we might be in better shape. Something like a temporal universal ('it is always') and existential ('sometimes') quantifier could get us out of this hole, but maybe 'before' and 'after' is where it's at. Don't know yet.

But I am still of the belief that our reliance on just two states (true and false) also leads us down a rabbit hole. I have always believed life is not binary, and having just true and false is part of the basic problem. Indeed, in the field of digital electronics we have three states (tri-state logic) where the third state (often referred to as Z, or high impedance) becomes the unknown. I subscribe to the belief that we really may have an almost infinite number of states, but I am really deficient in this area. Three or four may actually be good enough. I really don't know at this point.

So the solution long term seems to require some additional stuff we currently do not have in our tool chest.

Now A.N. Prior has done some rigorous work regarding temporal logic, and folks like Nicholas Rescher have done good work in the field of many-valued logic, so eventually we can lean on them to work out some of the issues arising from time and binary logic.

However, it is also the case that the way we do proofs in mathematical logic relies on what is often called the implication operator. Now the problems here have indeed been investigated and dealt with, so I will briefly touch upon some of this work and try to remove this obstacle before we move on to more esoteric topics like temporal and many-valued logic.

As is often the case, just stating the problem is sometimes difficult enough but always required before one can understand the solution.

For those who already understand the problem you can simply jump ahead to the topic of Relevant Logic (or relevance logic) and read the foundational works by C.I. Lewis, Ivan E. Orlov, Wilhelm Ackermann and Alonzo Church, or jump ahead to the magnum opus of the subject, Entailment: The Logic of Relevance and Necessity by Nuel Belnap and Alan Ross Anderson, which I plan on covering in future blog posts.

A decent understanding of formal logic systems, Fitch charts and Boolean operators as applied to deductive reasoning might also be helpful (but not necessary) before you continue reading this text.

First we introduce the implication operator. It is sometimes called the 'implies' operator or the 'if-then' operator, and its use is often called 'material implication'. It is used in formal systems, but it has issues. First we will describe it, then look at some typical uses, then look at where it fails, and finally look at some potential solutions to these issues. To keep things simple, we will use '->' as the implication operator. We can read p->q as "p implies q".

The truth table for this operator (note, it is a weak connective) is simply …

p  q  p->q
----------
T  T    T
T  F    F
F  T    T
F  F    T
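This table is easy to verify mechanically. Here is a minimal Python sketch, defining the operator truth-functionally as '(not p) or q':

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is defined as (not p) or q."""
    return (not p) or q

# Print all four rows of the truth table.
for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q))
# Only the row p=True, q=False comes out False, matching the table above.
```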

When using the implication operator we often call the p argument the antecedent and the q argument the consequent.

When we apply this concept to human language we get the following basic definition.

"It is not the case that p is true and q false". Also, "p implies q" is equivalent to "p is false or q is true". Let's break this down a bit further.

"It is not the case that p is true and q false" rules out exactly the second row (T F F); the other three rows are all true.
"p is false" covers the last two rows.
"q is true" covers rows one and three.
Together, "p is false or q is true" covers rows one, three and four: every row except the second, so the two phrasings agree.

For example, "if it is raining, then I will bring an umbrella" is equivalent to "it is not raining, or I will bring an umbrella, or both". This truth-functional interpretation of implication is called material implication or the material conditional.

Enumerating this we get

(1) Our original statement (which we state is true).
p = T (it is raining)
q = T (I will bring an umbrella)
p->q = True

(2) Contradicts our original true statement, so it's false.
p = T (it is raining)
q = F (I will not bring an umbrella)
p->q = False

(3) This is a bit odd. It arises from the fact that q is true, but it just seems wrong.
p = F (it is not raining)
q = T (I will bring an umbrella)
p->q = True

(4) This seems consistent.
p = F (it is not raining)
q = F (I will not bring an umbrella)
p->q = True


Let's dig a little deeper and look at "p is false or q is true"

~p or q

This means not p or q (~ is the negation operator)

basically this says that whenever q is true the whole statement is true, since anything OR'd with something true is true. Now substitute the contradiction (p and ~p) for p: '~p or q' becomes '~(p and ~p) or q', which can be rewritten as

(p or ~p) or q

The first part of this, (p or ~p), holds all on its own. In other words, the truth of the whole expression no longer depends on

q

And indeed when we look at our truth table we see that in all cases when q is true p->q is also true (rows one and three).

But (p or ~p) is also what we call a tautology, since it is always true, and as a result we really don't care about q. In other words, since (p or ~p) is always true, why bother with q? We already know the truth value of the entire statement; it's always true.

Now let's keep in mind these two basic facts

(~p or p) = True
(~p and p) = False

Which translate to "anything OR the negation of anything" is always true and "anything AND the negation of anything" is always false. These are simply the definitions of the 'AND' and 'OR' Boolean operators.

So here we hit our first problem with using the implies operator. It's just not right. Sure, when q is true the value of p->q is true, but if we substitute (~p and p) for p we get

(~p and p) -> q
F -> q

These can be found in the last two rows of our truth table for the implies operator. Put another way, looking at these two rows, False implies True is True (row three) but False implies False is also True (row four) which leaves us with a big WTF?

This is related to the problem of entailment and it arises from the principle of explosion, which stated simply means an inconsistent premise makes any argument valid. From a set theory perspective, if I can derive A and NOT A from a set, the set is said to be inconsistent, because formally I can then derive anything from it. In other words, a contradiction must never prove to be true.
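The explosion can be checked mechanically: with the truth-functional definition of implication, a contradiction in the antecedent makes the implication true no matter what the consequent is. A small sketch:

```python
def implies(p, q):
    # Material implication: (not p) or q.
    return (not p) or q

# (p and not p) is False for every p, so the implication is true for every q:
# a contradiction "implies" anything (ex falso quodlibet).
for p in (True, False):
    for q in (True, False):
        assert implies(p and not p, q)
print("a contradiction implies anything: all four cases hold")
```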

It is also problematic that if p is false it implies every q (again, the last two rows in our truth table), because in this case the implication is said to be vacuously true (true only because the antecedent cannot be satisfied; for example, "all cell phones in the room are turned off" will be true even if there are no cell phones in the room).
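Vacuous truth is something programmers bump into directly: Python's built-in all() returns True on an empty collection, which is exactly the cell phone example:

```python
# "All cell phones in the room are turned off" -- with an empty room.
cell_phones_in_room = []  # no cell phones at all
all_off = all(phone == "off" for phone in cell_phones_in_room)
print(all_off)  # True: vacuously true, since nothing can falsify it
```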

So, even though formal logic (and all the proofs which rely on it) is somewhat lacking, we can still use it to define things like relativity, but we also understand such systems are inherently flawed, and we need to look no further than Godel's incompleteness proof (which shows that any consistent formal system containing Peano's axioms leaves some truths unprovable) to know this is indeed the case.

Now that we understand the problem a bit better, we can move on to some potential solutions.

For those of you still reading this hokum, here is an interesting site. It is basically a Fitch chart/diagram helper script which can be used to try out some logic statements using many operators (not just implies) for well-formed formulas (WFFs).

http://www.harmendeweerd.nl/fitch/






Saturday, April 25, 2020

A Layperson's Brief Review of AI In My Lifetime

I was born in 1958. My high school graduating class was 1976 and I attended college from 1979 through 1985, so this sets the foundation for my educational experience.

My first job in the computer industry was as a software developer for a company named Computers 101 in Hollywood, Florida, where we sold microcomputers and I wrote software applications for various customers. I eventually worked for IBM in the early 1980s in an Industrial Automation group, Amazon from 1999-2004, and many smaller companies in between. At one point while at IBM I worked on a robotic arm to paint a car fender coming down a conveyor belt. This could be considered the extent of my professional experience with AI. Not really AI at all.

I have been interested in AI since my college days. My major was in Information Processing with a minor in computer systems. I did a directed independent study program with one of my professors, Marty Solomon, in my last year. This was a probability-first search algorithm written in LISP. Very simply put, this program would search for a result; once discovered, it would go back to each node in the tree and update its probability of success for a particular category of goal. So the next search, rather than traversing the tree in a depth-first or breadth-first manner, would use this probability-first approach. This may be considered the extent of my academic knowledge of AI. Again, not much at all.
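From memory, the idea might be sketched roughly like this; every name and number below is my own reconstruction for illustration, not the original LISP program:

```python
# Sketch of a probability-first search: visit children with the best
# success estimates first, and reinforce nodes on a successful path.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.success_prob = 0.5  # initial estimate of finding a goal below here

def search(node, goal, path):
    """Depth-first search that tries children in order of their estimates."""
    path.append(node)
    if node.name == goal:
        return True
    for child in sorted(node.children, key=lambda n: n.success_prob, reverse=True):
        if search(child, goal, path):
            return True
    path.pop()
    return False

def learn(root, goal):
    """After a successful search, bump the estimate of every node on the path."""
    path = []
    if search(root, goal, path):
        for node in path:
            node.success_prob = min(1.0, node.success_prob + 0.1)

# A tiny hypothetical tree: the goal sits under node "b".
leaf = Node("goal")
root = Node("root", [Node("a"), Node("b", [leaf])])
learn(root, "goal")
# Nodes on the successful path ("root", "b", "goal") now have higher
# estimates, so future searches will try "b" before "a".
```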

As you might expect, I have remained interested in AI ever since. My interests stemmed from the desire to model human thought and behavior more than the ability to have a machine learn for the sake of learning. They are actually two very different things.

Training a machine to be more performant than a human, or to solve a problem in a way a human would not, was never an interest. In the current time (around 2020) the concept of machine learning has become more of an attribute-weighting approach, while writing code which writes code (what I tend to consider true AI) has not gotten much traction. It is the latter which I was always more interested in. It is the former which tends to be more productive and profitable.

Now my first exposure to what was considered AI was a program called Eliza. Developed in the 1960s by Joseph Weizenbaum, it simulated a psychotherapist, so obviously it held some interest to me. It was a somewhat simplistic program, not much different than the old game program where you would ask it a question and, if it didn't know the answer, it would ask you to tell it the answer and store it. The next time it was asked the same question it would simply repeat the answer.

So for example, you might ask the program "what is a kangaroo?" It would answer "I don't know" and then turn around and ask you, "what is a kangaroo?" You might answer "a kangaroo is a mammal", and the next time you asked the program what a kangaroo is it would respond with "a kangaroo is a mammal". This is a form of knowledge retention, but hardly artificial intelligence. It actually demonstrates the difference between knowledge and intelligence.
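That game program amounts to a lookup table with a learning step; a minimal sketch:

```python
# A minimal version of the "learn the answer" game described above:
# knowledge retention without any intelligence.
knowledge = {}

def ask(question):
    """Return a stored answer, or admit ignorance."""
    return knowledge.get(question, "I don't know")

def teach(question, answer):
    """Store the answer the human supplies."""
    knowledge[question] = answer

print(ask("what is a kangaroo?"))   # I don't know
teach("what is a kangaroo?", "a kangaroo is a mammal")
print(ask("what is a kangaroo?"))   # a kangaroo is a mammal
```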

The Eliza program was not much different. It tried to do some basic reasoning but its famous out when it didn't know something would be to ask "well how does that make you feel?". Pretty much what a psychologist would charge you for, so if nothing else, it was economical. 

Actually, in the 1950s Alan Turing proposed the Turing Test, which is basically the belief that we have achieved artificial intelligence when, given a conversation between two entities (one a human, the other a computer), neither of which may be seen by a human evaluator, the evaluator cannot tell the difference between the human and the computer. The Loebner Prize actually pays out a monetary award to the winner of an annual contest along these lines (https://en.wikipedia.org/wiki/Loebner_Prize), and reading through some of these transcripts is often entertaining as well as educational. For example, it has taught me this is no longer a valid test for artificial intelligence, as I believe the goal of a Turing Test these days is to actually dumb down the computer participant.

I'll give you a concrete example of what I mean. In one exchange (in a transcript from one Loebner contest) the human tells the computer "Oh, you are located in New York. I am located in Australia". The human then asks the computer "Are you East or West of me?" to which the computer responds "both". A dead giveaway, as computers are more logical than humans. Most humans would not answer in this manner, even though it is technically the correct answer.

Back in the 1980s, the programming language Prolog was a popular approach to creating what were known at the time as expert systems. It used something called Horn clause logic, a restricted grammar for expressing logical rules. This is not much further advanced than Aristotelian syllogisms, except I believe it supported first-order predicate calculus (the universal and existential quantifiers), but it was also a somewhat mechanical deduction approach. Possibly how humans think; probably not.

Which brings me to my summary of what I believe is considered artificial intelligence these days. Keep in mind I have not been involved in AI in any capacity since my college days (about 40 years ago), nor have I done any AI-type coding, nor any deep dives into any literature on the subject for many years, so at best this may be considered a layman's perspective.

These days, it seems there are three basic approaches to AI, though it is probable each borrows some methods from the others.

I will use (1) attribute weighting, (2) the popular Amazon product 'Alexa' and (3) the IBM product 'Watson' to discuss their basic differences as I have come to believe them to be. There are obviously other variants and different products I am not aware of, and I am sure some cross-pollination has occurred; however, these will suffice to demonstrate the fundamental differences as I see them. Again, keep in mind I have no in-depth knowledge of any of these examples, and what I am about to explain is simply what I have come to believe from discussions with friends in the field. I have never interfaced with any of these three, nor have I any internal insight into how they go about their business. Again, simply a layperson's perspective.

Attribute weighting is much like my directed independent study approach mentioned above. A goal is provided, and the code adjusts the weights of the various attributes it uses until it arrives at the correct conclusion using the adjusted set of attribute values (or weights).
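One concrete form of this idea is a perceptron-style update (my own illustrative sketch, not a description of any particular product): on every mistake, each attribute's weight is nudged in proportion to its contribution.

```python
# Bare-bones attribute weighting: nudge each attribute's weight until the
# weighted sum of attributes classifies every example correctly.
def train(examples, weights, rate=0.1, epochs=100):
    """examples: list of (attributes, target) pairs with target 0 or 1."""
    for _ in range(epochs):
        for attrs, target in examples:
            prediction = 1 if sum(w * a for w, a in zip(weights, attrs)) > 0 else 0
            error = target - prediction
            # Adjust each weight in proportion to its attribute's value.
            weights = [w + rate * error * a for w, a in zip(weights, attrs)]
    return weights

# Hypothetical data: learn the AND of two attributes (first attribute is a
# constant bias input of 1).
examples = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
weights = train(examples, [0.0, 0.0, 0.0])
print([1 if sum(w * a for w, a in zip(weights, attrs)) > 0 else 0
       for attrs, _ in examples])  # [0, 0, 0, 1]
```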

Alexa is what I like to think of as a crowd-sourced version of the game program described above (using the kangaroo example). You ask Alexa a question; if it does not have the answer in its data store it will go out to a crowd of mechanical turks (see the Amazon Mechanical Turk program, https://www.mturk.com/, for more information), take the responses, figure out the most popular and add that to its data store. The next time the question is asked, the answer will come from this data store.

Watson tends to take a more syllogistic approach, where it tries to use deductive reasoning to derive new facts from its data store of known facts. Much like the canonical example ...

All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.

Watson will search through its data store of known facts and attempt to derive new facts using the existing set of facts. If these new facts are indeed shown to be true, they get added back into the data store of known facts. 
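The derive-and-store loop described here can be sketched as a tiny forward-chaining program; the facts and the single transitivity rule below are my own toy example, not how any real product works:

```python
# Toy forward chaining in the spirit of the syllogism above: derive new
# facts from the store and add them back until nothing new appears.
facts = {("Socrates", "man"), ("man", "mortal")}

def forward_chain(facts):
    """Repeatedly apply transitivity: (a is b) and (b is c) gives (a is c)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {(a, c) for (a, b1) in derived for (b2, c) in derived if b1 == b2}
        if not new.issubset(derived):
            derived |= new
            changed = True
    return derived

print(("Socrates", "mortal") in forward_chain(facts))  # True
```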







Monday, March 16, 2020

What The Hell Is Babel?

INTRODUCTION

This article is simply an expansion on the excellent article posted here

https://medium.com/@bluepnume/jsx-is-a-stellar-invention-even-with-react-out-of-the-picture-c597187134b7

The reason I created this article is that I wanted to actually try what that article shows, but it took me a few minutes to get the code set up and working (and the posted code actually breaks), so I figured this might save some other folks some time; plus I wanted a reference so I could repeat this process in the future.

JSX is a templating language which is typically used by a transpiler called 'Babel' to produce React.js code. JSX syntax is very much like Javascript syntax with inline styles, though even the styles can be further broken out into separate CSS-type files (which is what you tend to do when you use 'React Styled Components').

This article concentrates on Babel (with some minor 'node' thrown in for ease of use), but you should be aware that in most cases folks also use something called webpack to produce their final product. webpack automates what we manually do near the end of this article, when we copy files and paste them into our html file to create a browser-renderable file.

Also note this example uses babel-cli, which means Babel from the command line, though you could also simply include Babel as a '<script>' tag in your html file, and that would cause transpilation to occur at run time rather than at build time. I specifically use babel-cli because I want to see what Babel actually produces and how it converts ES6 to plain vanilla ES5. I also override the React output function and provide a vanilla Javascript output function to aid in this process.

If you are new to all of this you might struggle a bit with this post. Basically, ES5 is old-style Javascript and ES6 is a newer version of Javascript, and older browsers that only understand ES5 cannot run ES6 code. To fix this, Babel comes into the picture. You can think of Babel as a tool to convert new Javascript (ES6) to old-style Javascript (ES5), although it is capable of doing far more than this. From a high level this solves most browser compatibility issues, though as I say, this is just one benefit of using Babel.

What this article (and the original one it is based on) is trying to show is that even though Babel appears to be tightly coupled to React.js, it isn't. JSX is simply a templating language which can be transpiled by Babel into nearly anything. The fact that Babel is almost exclusively used to transpile JSX to React.js or Typescript is simply a reflection of how it is used. It does not have to produce either of these two target languages, and so in this article I (as the original author also did) will show you exactly what that means by simply using Babel to transpile ES6 to ES5. Worth understanding if you ever have a bug you suspect was due to the conversion process.

BTW, you should read the original article first, and if it makes sense to you and you can get it running from that article, then you really don't need to read any further. It's just that it took me a while to actually be able to put up a simple web page that used Babel to transpile ES6 to working Javascript following that example, so I felt I needed a working example for future reference. Note the only reason the original article doesn't work is that it uses 'null' in two places. If you simply replace these with {id:"someId"} the example will build fine. You would still need to integrate it with your html file to get it to run in your browser, and we will do this later in this article.

THE SETUP


So first, we need to have node.js installed and working on our computer. Open up a shell/command line terminal on your computer and type 

node --version

If this doesn't show you a version then install node by following the instructions shown here for Windows 

https://www.guru99.com/download-install-node-js.html

or here for Mac

https://www.webucator.com/how-to/how-install-nodejs-on-mac.cfm

Assuming all worked as expected (i.e. 'node --version' actually shows a version) we will now create our basic project. To do this, create a directory called 'jsx_pragma'. I am using a Mac and I created this directory on my desktop, so you will need to adjust accordingly. Using your GUI you could just right click on your desktop and create a new folder.

Once you have created your 'jsx_pragma' directory (some folks call it a folder), open a terminal and navigate into it ('cd ~/Desktop/jsx_pragma' should work) and then create a file called package.json. Paste this into it.

{
  "name": "jsx_pragma",
  "version": "1.0.0",
  "description": "test jsx pragma",
  "main": "RenderDom.js",
  "dependencies": {
    "babel-cli": "^6.26.0"
  },
  "devDependencies": {
    "babel-preset-env": "^1.7.0"
  },
  "scripts": {
    "build": "babel src -d dist"
  },
  "author": "",
  "license": "ISC"
}

Now simply type this on the command line 

npm install

This will download all the stuff you need to actually build this project. It figures all this out from the dependencies we put in our 'package.json' file; 'npm' stands for Node Package Manager, it is part of node, and resolving dependencies is what it does for a living. Specifically, you should now see a directory called 'node_modules' and a file named 'package-lock.json' alongside the 'package.json' file we just created.

We are not going to go too much in detail about what our 'package.json' file contains except for one key section. If you look in the 'package.json' file we created you will see this 

  "scripts": {
    "build": "babel src -d dist"
  },

This defines a script that we will ask npm to execute later; in our case the script is called 'build'. It tells npm what to run when we say 'npm run build', which in our case is Babel. We are also telling Babel to look for input files in the directory 'src' and to place output files in the directory 'dist'. This is pretty standard stuff, but first we should actually create those two directories. You can do this however you want, just make sure both exist inside the 'jsx_pragma' directory. For example, you could use your GUI (right click in the 'jsx_pragma' folder and create a new folder) or you could execute these commands from the command line in the 'jsx_pragma' folder

mkdir src
mkdir dist

Once you have done this your 'jsx_pragma' folder should look something like this 

/dist
/src
/node_modules
package-lock.json
package.json

Where dist, src and node_modules are directories and package-lock.json and package.json are files. 

We are almost ready. We simply need to create the two source files that actually make up our project and we will be done. Create a file named 'RenderDom.js' and put it in the 'src' directory 

RenderDom.js
-------------
let renderDom = (name, props, ...children) => {
  let el = document.createElement(name);
  for (let [key, val] of Object.entries(props)){
    el.setAttribute(key, val);
  }
  for (let child of children){
    if (typeof child === 'string'){
      el.appendChild(document.createTextNode(child));
    } else {
      el.appendChild(child);
    }
  }
  return el;
}

Next create a file named 'test_render.js' and also put it in the 'src' directory 

test_render.js
--------------
/* @jsx renderDom */

function renderLogin(){
  return renderDom("section", {id:"secId"}, 
    renderDom("input", {type:"email", value:""}),
    renderDom("input", {type:"password", value:""}),
    renderDom("button", {id:"butId"}, "Log In")
  );
}

Note that those calls are not really JSX but rather standard Javascript (thanks to Scott for pointing that out), but since we are really interested in Babel we won't worry about that. Now we can run the following command, and we should get two files in our 'dist' folder.

npm run build

We are almost done. The reason I say almost is that the Babel presets don't get applied automatically. Listing 'babel-preset-env' in 'package.json' only installs it; Babel will not actually use a preset unless you declare it, either on the command line or in a configuration file. You could alter the babel command line in the 'package.json' file to include it, but I prefer to simply create a .babelrc file and put it in there. So, create a new file in the 'jsx_pragma' directory, call it '.babelrc', and stick this in there (NOTE: don't forget the leading dot when you create .babelrc!)

{
  "presets" : [ "/Users/kensmith/Desktop/jsx_pragma/node_modules/babel-preset-env" ]
}

Two things to note. First, you will need to change this line to point to where YOUR 'jsx_pragma' folder is located. In the example above it is located at '/Users/kensmith/Desktop/' on my machine; I am assuming yours is not in a directory called '/Users/kensmith'. Second, you must use an absolute path here. Do not try to take a shortcut and use a relative path. Specifically, this works 

/Users/kensmith/Desktop/jsx_pragma/node_modules/babel-preset-env

This does not work

jsx_pragma/node_modules/babel-preset-env

I believe this is a Babel 6 path-resolution quirk; a bare "env" in the presets array is supposed to resolve from the local node_modules, but if it doesn't for you, the absolute path is a reliable fallback. 
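If you would rather take the command-line route mentioned above instead of creating a .babelrc, Babel 6's CLI accepts a --presets flag, so the 'scripts' entry in our 'package.json' could be changed like this (a sketch; I had better luck with the .babelrc approach):

```json
  "scripts": {
    "build": "babel src -d dist --presets=env"
  },
```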

Once you have created your .babelrc file and placed it in the proper directory and modified the absolute path in that file to point to YOUR 'jsx_pragma' directory location you should be able to run 

npm run build

Remember, our output will be located in our 'dist' directory and we will need to put it in an html file to actually see it in action in our browser. You could run it using just node, but that's no fun, so create the following file, put it on your desktop (or somewhere you can easily find), and call it test.html. 

test.html
--------
<html>
  <head>
  </head>
  <body>
    <div id="ruut" name="ruut">
    </div>
    <script>
var wtf = renderLogin();
document.getElementById("ruut").appendChild(wtf);
    </script>
  </body>
</html>

All we are doing here is creating a simple html file with a div called 'ruut' and two lines of Javascript. The first will call the 'renderLogin()' function we created in our 'test_render.js' file and assign its output to a variable named 'wtf'. The second simply appends this output to our div named 'ruut' using the standard DOM appendChild() method. 

Finally, in the 'dist' directory you will find two files. Open each and copy its contents into the html file right after the '<script>' tag (normally a tool called webpack does this bundling as part of the build process, but here we will do it manually). When you are done your html file should look something like this 

test.html
--------
<html>
  <head>
  </head>
  <body>
    <div id="ruut">
    </div>
    <script>

var _slicedToArray = function () { function sliceIterator(arr, i) { var _arr = []; var _n = true; var _d = false; var _e = undefined; try { for (var _i = arr[Symbol.iterator](), _s; !(_n = (_s = _i.next()).done); _n = true) { _arr.push(_s.value); if (i && _arr.length === i) break; } } catch (err) { _d = true; _e = err; } finally { try { if (!_n && _i["return"]) _i["return"](); } finally { if (_d) throw _e; } } return _arr; } return function (arr, i) { if (Array.isArray(arr)) { return arr; } else if (Symbol.iterator in Object(arr)) { return sliceIterator(arr, i); } else { throw new TypeError("Invalid attempt to destructure non-iterable instance"); } }; }();

var renderDom = function renderDom(name, props) {
  for (var _len = arguments.length, children = Array(_len > 2 ? _len - 2 : 0), _key = 2; _key < _len; _key++) {
    children[_key - 2] = arguments[_key];
  }

  var el = document.createElement(name);
  var _iteratorNormalCompletion = true;
  var _didIteratorError = false;
  var _iteratorError = undefined;

  try {
    for (var _iterator = Object.entries(props)[Symbol.iterator](), _step; !(_iteratorNormalCompletion = (_step = _iterator.next()).done); _iteratorNormalCompletion = true) {
      var _ref = _step.value;

      var _ref2 = _slicedToArray(_ref, 2);

      var key = _ref2[0];
      var val = _ref2[1];

      el.setAttribute(key, val);
    }
  } catch (err) {
    _didIteratorError = true;
    _iteratorError = err;
  } finally {
    try {
      if (!_iteratorNormalCompletion && _iterator.return) {
        _iterator.return();
      }
    } finally {
      if (_didIteratorError) {
        throw _iteratorError;
      }
    }
  }

  var _iteratorNormalCompletion2 = true;
  var _didIteratorError2 = false;
  var _iteratorError2 = undefined;

  try {
    for (var _iterator2 = children[Symbol.iterator](), _step2; !(_iteratorNormalCompletion2 = (_step2 = _iterator2.next()).done); _iteratorNormalCompletion2 = true) {
      var child = _step2.value;

      if (typeof child === 'string') {
        el.appendChild(document.createTextNode(child));
      } else {
        el.appendChild(child);
      }
    }
  } catch (err) {
    _didIteratorError2 = true;
    _iteratorError2 = err;
  } finally {
    try {
      if (!_iteratorNormalCompletion2 && _iterator2.return) {
        _iterator2.return();
      }
    } finally {
      if (_didIteratorError2) {
        throw _iteratorError2;
      }
    }
  }

  return el;
};

/* @jsx renderDom */

function renderLogin() {
  return renderDom("section", {id:"secId"}, renderDom("input", { type: "email", value: "" }), renderDom("input", { type: "password", value: "" }), renderDom("button", {id:"butId"}, "Log In"));
}

var wtf = renderLogin();
document.getElementById("ruut").appendChild(wtf);
    </script>
  </body>
</html>


Simply double click this file (after you save it to your desktop) and you will see your transpiled code running in the browser. Specifically, you should see two inputs and a button. Babel has transpiled our ES6 input to plain, old-style (ES4) Javascript. 


WHAT WE DID


So first we created a project for npm to build. This project uses Babel to take any files in the 'src' directory and transpile them and put the transpiled output in the 'dist' directory. This transpiling simply converted a JSX template file (which in our case is really just ES6 code) to an ES4 plain vanilla Javascript file. We did this to two files, 'RenderDom.js' and 'test_render.js'.

If you look at our 'test_render.js' file we see it does not contain very much. Specifically it contains 

/* @jsx renderDom */

function renderLogin(){
  return renderDom("section", {id:"secId"},
    renderDom("input", {type:"email", value:""}),
    renderDom("input", {type:"password", value:""}),
    renderDom("button", {id:"butId"}, "Log In")
  );
}

The first line is NOT just a comment. It is known as a 'pragma comment' or 'jsx pragma' or 'custom pragma'. I have seen it called all this and more but these are the most common terms. The technical explanation for this line may be found here

https://babeljs.io/docs/en/babel-plugin-transform-react-jsx#custom

Basically, it is telling Babel what function to use to transform (or transpile) your JSX. In our case we are telling it to use the function 'renderDom()', which we created in our other file (RenderDom.js).

The rest of this file is vanilla Javascript. In our example we are using a function called 'renderDom()' instead of the standard React function 'React.createElement()', which is what JSX is typically transpiled into (to create a React application). So rather than JSX being converted to React calls, in our example ES6 is simply being converted by Babel to plain old-style Javascript.
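To make the pragma concrete: with /* @jsx renderDom */ in effect, a real JSX expression such as <button id="butId">Log In</button> would be rewritten by Babel into a renderDom() call. The sketch below shows that call shape using a hypothetical non-DOM version of renderDom (it builds plain objects instead of DOM nodes, purely so it can run outside a browser):

```javascript
// Hypothetical renderDom that builds plain objects rather than DOM
// nodes, purely to illustrate the call shape Babel generates for JSX.
function renderDom(name, props) {
  var children = Array.prototype.slice.call(arguments, 2);
  return { name: name, props: props || {}, children: children };
}

// What Babel would generate for: <button id="butId">Log In</button>
var el = renderDom("button", { id: "butId" }, "Log In");

console.log(el.name);        // button
console.log(el.children[0]); // Log In
```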

We defined the function 'renderDom()' in our second file, 'RenderDom.js'. Now what is interesting, and why I went through this whole exercise, was to see exactly what Babel does when it transpiles a file. Looking at the output Babel placed in the 'dist' directory (which we copied into our html file) is instructive. It didn't do much to 'test_render.js' because that file was pretty much standard old-style Javascript to begin with, but it did do a number on our 'RenderDom.js' file. Specifically, this code

let renderDom = (name, props, ...children) => {
  let el = document.createElement(name);
  for (let [key, val] of Object.entries(props)){
    el.setAttribute(key, val);
  }
  for (let child of children){
    if (typeof child === 'string'){
      el.appendChild(document.createTextNode(child));
    } else {
      el.appendChild(child);
    }
  }
  return el;
}

Was converted to this code 

'use strict';

var _slicedToArray = function () { function sliceIterator(arr, i) { var _arr = []; var _n = true; var _d = false; var _e = undefined; try { for (var _i = arr[Symbol.iterator](), _s; !(_n = (_s = _i.next()).done); _n = true) { _arr.push(_s.value); if (i && _arr.length === i) break; } } catch (err) { _d = true; _e = err; } finally { try { if (!_n && _i["return"]) _i["return"](); } finally { if (_d) throw _e; } } return _arr; } return function (arr, i) { if (Array.isArray(arr)) { return arr; } else if (Symbol.iterator in Object(arr)) { return sliceIterator(arr, i); } else { throw new TypeError("Invalid attempt to destructure non-iterable instance"); } }; }();

var renderDom = function renderDom(name, props) {
  for (var _len = arguments.length, children = Array(_len > 2 ? _len - 2 : 0), _key = 2; _key < _len; _key++) {
    children[_key - 2] = arguments[_key];
  }

  var el = document.createElement(name);
  var _iteratorNormalCompletion = true;
  var _didIteratorError = false;
  var _iteratorError = undefined;

  try {
    for (var _iterator = Object.entries(props)[Symbol.iterator](), _step; !(_iteratorNormalCompletion = (_step = _iterator.next()).done); _iteratorNormalCompletion = true) {
      var _ref = _step.value;

      var _ref2 = _slicedToArray(_ref, 2);

      var key = _ref2[0];
      var val = _ref2[1];

      el.setAttribute(key, val);
    }
  } catch (err) {
    _didIteratorError = true;
    _iteratorError = err;
  } finally {
    try {
      if (!_iteratorNormalCompletion && _iterator.return) {
        _iterator.return();
      }
    } finally {
      if (_didIteratorError) {
        throw _iteratorError;
      }
    }
  }

  var _iteratorNormalCompletion2 = true;
  var _didIteratorError2 = false;
  var _iteratorError2 = undefined;

  try {
    for (var _iterator2 = children[Symbol.iterator](), _step2; !(_iteratorNormalCompletion2 = (_step2 = _iterator2.next()).done); _iteratorNormalCompletion2 = true) {
      var child = _step2.value;

      if (typeof child === 'string') {
        el.appendChild(document.createTextNode(child));
      } else {
        el.appendChild(child);
      }
    }
  } catch (err) {
    _didIteratorError2 = true;
    _iteratorError2 = err;
  } finally {
    try {
      if (!_iteratorNormalCompletion2 && _iterator2.return) {
        _iterator2.return();
      }
    } finally {
      if (_didIteratorError2) {
        throw _iteratorError2;
      }
    }
  }

  return el;
};

Wow! That's a lot of code. At a minimum, if we are getting paid for lines of code Babel is our friend :-)

In effect this is the difference between ES4 and ES6: ES6 can be written far more concisely. One caution, though: the try/catch/finally blocks Babel generated are not bonus error handling thrown in for free; they implement the iterator cleanup that the ES6 for...of loop guarantees, which old-style Javascript has to spell out by hand.
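You can see the same trade in miniature with rest parameters, one of the ES6 features Babel rewrote above. This hand-written comparison (my own illustration, not Babel output) shows what the generated arguments-copying loop is standing in for:

```javascript
// ES6: a rest parameter collects trailing arguments into a real array
function tailES6(first, ...rest) {
  return rest;
}

// Old-style equivalent: copy out of the arguments object by hand,
// which is essentially the loop Babel generated above
function tailES5(first) {
  var rest = [];
  for (var i = 1; i < arguments.length; i++) {
    rest.push(arguments[i]);
  }
  return rest;
}

console.log(tailES6(1, 2, 3)); // [ 2, 3 ]
console.log(tailES5(1, 2, 3)); // [ 2, 3 ]
```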

Now let's see what happens when we make that file IE8-safe ourselves, old-school style. You might think Babel would then have nothing left to do. Here is our new renderDom.js (note the renderDom() function itself is unchanged; what we added are the polyfills it needs):


// Note IE8 needs polyfill for Object.entries and Object.keys

// From https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys
Object.keys = (function() {
  'use strict';
  var hasOwnProperty = Object.prototype.hasOwnProperty,
      hasDontEnumBug = !({ toString: null }).propertyIsEnumerable('toString'),
      dontEnums = [
        'toString',
        'toLocaleString',
        'valueOf',
        'hasOwnProperty',
        'isPrototypeOf',
        'propertyIsEnumerable',
        'constructor'
      ],
      dontEnumsLength = dontEnums.length;

  return function(obj) {
    if (typeof obj !== 'function' && (typeof obj !== 'object' || obj === null)) {
      throw new TypeError('Object.keys called on non-object');
    }

    var result = [], prop, i;

    for (prop in obj) {
      if (hasOwnProperty.call(obj, prop)) {
        result.push(prop);
      }
    }

    if (hasDontEnumBug) {
      for (i = 0; i < dontEnumsLength; i++) {
        if (hasOwnProperty.call(obj, dontEnums[i])) {
          result.push(dontEnums[i]);
        }
      }
    }
    return result;
  };
}());

Object.entries = function( obj ){
  var ownProps = Object.keys( obj ),
      i = ownProps.length,
      resArray = new Array(i); // preallocate the Array
  while (i--)
    resArray[i] = [ownProps[i], obj[ownProps[i]]];
  
  return resArray;
};


function renderDom(name, props, ...children) {
  let el = document.createElement(name);
  for (let [key, val] of Object.entries(props)){
    el.setAttribute(key, val);
  }
  for (let child of children){
    if (typeof child === 'string'){
      el.appendChild(document.createTextNode(child));
    } else {
      el.appendChild(child);
    }
  }
  return el;
}


Well, Babel still has plenty to do; the renderDom() body itself is unchanged and still uses rest parameters and for...of, as the transpiled output below shows. But now we certainly had a lot to do as well.

Now you might be wondering what happened to our simple renderDom() method. Well, if we are worried about backward compatibility with IE8 (and isn't everybody), then even old-style Javascript has issues: IE8 (and older) supports neither Object.entries() nor Object.keys(). As a result, we have to supply a polyfill for both methods just to let our renderDom() method run there at all. Note that we could also have chosen to refactor renderDom() to not use Object.entries(), but it would have gotten ugly in a hurry, and that's kind of the point. This is exactly what Babel is good at: it handles all of this for us so we can concentrate on writing good clean Javascript without worrying about ES4 versus ES6 and browser incompatibilities. Babel handles all that for us, and more (strictly speaking, the preset handles new syntax, while missing built-ins like Object.entries need babel-polyfill alongside it, but the division of labor is the same: the tooling worries about compatibility so you don't have to).
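One side note on those polyfills: as written above they overwrite Object.keys and Object.entries unconditionally, even in browsers that already have them. The conventional polyfill pattern is to feature-detect first, so native implementations are kept; something like this sketch:

```javascript
// Guarded polyfill: only install Object.entries where it is missing,
// so modern engines keep their (faster, better-tested) native version.
if (!Object.entries) {
  Object.entries = function (obj) {
    var keys = Object.keys(obj),
        i = keys.length,
        res = new Array(i);
    while (i--) {
      res[i] = [keys[i], obj[keys[i]]];
    }
    return res;
  };
}

console.log(Object.entries({ a: 1, b: 2 })); // [ [ 'a', 1 ], [ 'b', 2 ] ]
```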

Here is what Babel transpiled our converted renderDom.js to 

'use strict';

var _slicedToArray = function () { function sliceIterator(arr, i) { var _arr = []; var _n = true; var _d = false; var _e = undefined; try { for (var _i = arr[Symbol.iterator](), _s; !(_n = (_s = _i.next()).done); _n = true) { _arr.push(_s.value); if (i && _arr.length === i) break; } } catch (err) { _d = true; _e = err; } finally { try { if (!_n && _i["return"]) _i["return"](); } finally { if (_d) throw _e; } } return _arr; } return function (arr, i) { if (Array.isArray(arr)) { return arr; } else if (Symbol.iterator in Object(arr)) { return sliceIterator(arr, i); } else { throw new TypeError("Invalid attempt to destructure non-iterable instance"); } }; }();

var _typeof = typeof Symbol === "function" && typeof Symbol.iterator === "symbol" ? function (obj) { return typeof obj; } : function (obj) { return obj && typeof Symbol === "function" && obj.constructor === Symbol && obj !== Symbol.prototype ? "symbol" : typeof obj; };

// IE8 needs polyfill for Object.entries and Object.keys

// From https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys
Object.keys = function () {
  'use strict';

  var hasOwnProperty = Object.prototype.hasOwnProperty,
      hasDontEnumBug = !{ toString: null }.propertyIsEnumerable('toString'),
      dontEnums = ['toString', 'toLocaleString', 'valueOf', 'hasOwnProperty', 'isPrototypeOf', 'propertyIsEnumerable', 'constructor'],
      dontEnumsLength = dontEnums.length;

  return function (obj) {
    if (typeof obj !== 'function' && ((typeof obj === 'undefined' ? 'undefined' : _typeof(obj)) !== 'object' || obj === null)) {
      throw new TypeError('Object.keys called on non-object');
    }

    var result = [],
        prop,
        i;

    for (prop in obj) {
      if (hasOwnProperty.call(obj, prop)) {
        result.push(prop);
      }
    }

    if (hasDontEnumBug) {
      for (i = 0; i < dontEnumsLength; i++) {
        if (hasOwnProperty.call(obj, dontEnums[i])) {
          result.push(dontEnums[i]);
        }
      }
    }
    return result;
  };
}();

Object.entries = function (obj) {
  var ownProps = Object.keys(obj),
      i = ownProps.length,
      resArray = new Array(i); // preallocate the Array
  while (i--) {
    resArray[i] = [ownProps[i], obj[ownProps[i]]];
  }return resArray;
};

function renderDom(name, props) {
  var el = document.createElement(name);
  var _iteratorNormalCompletion = true;
  var _didIteratorError = false;
  var _iteratorError = undefined;

  try {
    for (var _iterator = Object.entries(props)[Symbol.iterator](), _step; !(_iteratorNormalCompletion = (_step = _iterator.next()).done); _iteratorNormalCompletion = true) {
      var _ref = _step.value;

      var _ref2 = _slicedToArray(_ref, 2);

      var key = _ref2[0];
      var val = _ref2[1];

      el.setAttribute(key, val);
    }
  } catch (err) {
    _didIteratorError = true;
    _iteratorError = err;
  } finally {
    try {
      if (!_iteratorNormalCompletion && _iterator.return) {
        _iterator.return();
      }
    } finally {
      if (_didIteratorError) {
        throw _iteratorError;
      }
    }
  }

  for (var _len = arguments.length, children = Array(_len > 2 ? _len - 2 : 0), _key = 2; _key < _len; _key++) {
    children[_key - 2] = arguments[_key];
  }

  var _iteratorNormalCompletion2 = true;
  var _didIteratorError2 = false;
  var _iteratorError2 = undefined;

  try {
    for (var _iterator2 = children[Symbol.iterator](), _step2; !(_iteratorNormalCompletion2 = (_step2 = _iterator2.next()).done); _iteratorNormalCompletion2 = true) {
      var child = _step2.value;

      if (typeof child === 'string') {
        el.appendChild(document.createTextNode(child));
      } else {
        el.appendChild(child);
      }
    }
  } catch (err) {
    _didIteratorError2 = true;
    _iteratorError2 = err;
  } finally {
    try {
      if (!_iteratorNormalCompletion2 && _iterator2.return) {
        _iterator2.return();
      }
    } finally {
      if (_didIteratorError2) {
        throw _iteratorError2;
      }
    }
  }

  return el;
}

That's a lot of code. The original renderDom.js is a lot easier on the eyes and probably easier to debug and maintain. It is good, however, to see exactly what Babel does to your code. If you're like me and distrustful of other code wonking your code, you can use this approach to see exactly how your code was transformed (or transpiled) by Babel. Who knows, maybe it produces a bug under certain circumstances; it wouldn't be the first time a compiler (or in this case a transpiler) bit me in the ass. I don't know about you, but at least I'll now be able to sleep better at night.