Face to Face with Binto George

in Face à Face


I just finished reading “Artificial Intelligence Simplified” (ISBN 978-1944708016; published by CSTrends LLP; available on Amazon in print and Kindle editions, and also as a low-cost student edition), written by Binto George (BG) and Gail Carmichael. Binto did his PhD at IISc. He is now a professor in the School of Computer Sciences at Western Illinois University, Macomb, IL, USA. I met Binto at IISc, and a friendship was born out of the joys of interdisciplinary crosstalk. We used to spend numerous hours under many trees and over many coffees and teas, discussing both computer science and biology without the jargon of either world hindering our conversations. This was evident even during his thesis defense at the Supercomputer Education and Research Centre. The same is reflected in the simple language he has used in this book.

The book is so well written and simple that you can finish reading it in a couple of hours, even if you know nothing about computer science. That is, of course, if you manage to stop yourself from thinking too frequently about the parallels and possibilities with concepts from biology. To the biologists out there: this is a book you should read to see how concepts from biology have influenced AI. To every aspiring writer, this book is an example of how you can bring a complex subject to almost anyone who is curious and interested in understanding it. Moreover, as Binto brings out towards the end of the book, the field of AI is itself waiting for advances in our understanding of how our brains work in order to mimic them inorganically.

Below are Binto’s responses to some of the thoughts that came up after I read the book. I hope you will read the book and ponder the responses below. I want to thank Binto for taking the time to respond to these thoughts. I hope this discussion can stimulate interdisciplinary crosstalk and creativity. – Syam Anand (SA)

SA: What was your motivation for writing this book?

BG: When I was teaching AI, I saw many students struggling with basic AI concepts. A few students got it the first time, and a few never did. When I was a master’s student, my AI instructor was very knowledgeable, but I realized that many students didn’t like him. Also, many students complained about their textbook.

If AI is hard for CS students, what about the rest of the world? My feeling is that AI is not inherently hard; it is more about the way it is presented.

The current prediction by the World Economic Forum (WEF) is that five million jobs will be lost to AI/robots by 2020. Years back, we had the same fear about losing jobs to computers. Now we know that computers didn’t replace us; they merely transformed the way we worked. So, in my opinion, knowing AI in the future is likely to be like having an understanding of computers today. And this book is meant to help anyone learn AI.

Personally, it is a return to AI after a hiatus of several years.

SA: How was the idea for the book formed and how did you get your co-authors?

BG: Gail (the co-author) was very active in the “Go Code Girl” initiative at Carleton University, initially as a PhD student and then as an instructor. I was looking for someone who could work with me to make computer science more approachable and less intimidating to everyone, including girls. So we eventually planned a book on computer science, which later evolved into a two-part book: the first part would cover data structures and algorithms, and the second would have more in-depth chapters in areas such as AI, networks, etc.

Meanwhile, Gail moved on to pursue an industry career. Because of her busy schedule with the new work assignment, we thought we would go ahead and publish the AI chapter as a book without waiting for the other parts to be completed. Our hope is that we will still be able to complete the other parts in the future.

SA: How much time did it take you to write the book? I understand that simpler and shorter books take longer to write. Correct me if I am wrong.

BG: A lot of time. Gail wrote (and is currently writing) the first part on data structures and algorithms. I worked on the more advanced areas, including AI. Gail would always push for making things simpler, so no wonder she made me rewrite several times. Over the last few summers, I spent an enormous amount of time writing the AI book, adding references, designing sketches, etc. Gail’s contribution to the AI book was particularly in presenting the technical content in a simple format for a general audience. We got many good suggestions through the international review process (thanks to Susan). Andrew has been instrumental in holding us to a higher standard, overall.

Obviously, some of our reviewers demanded more “depth”. As a result, we created the Appendix to cover more advanced topics, while keeping readers’ interest by preserving the original text in as simple a form as possible.

SA: I noticed that your publisher is CSTrends LLP. Could you help other IIScians in publishing similar books?

BG: Absolutely. A successful publication has to be “organically” great. CSTrends LLP doesn’t rely much on publicity or marketing stunts. The belief is that a great book will find a way to thrive. If you want to explore a publication, you can send a detailed proposal to help@cstrends.com.

SA: What is Artificial in AI?

BG: Good question.  Nothing, or, at least, that is the hope.

We can create intelligence by natural reproduction and training. I’m talking about raising kids and educating them, etc. Our fundamental limitation is that “an apple doesn’t fall far from the tree”. Of course, there can be outliers due to mutation, but the selection process in the modern world, at least with respect to human beings, is far more permissive than in the primitive era. Plus, intelligence is not the only criterion for natural selection.

We can also try to create intelligence using computers, and again try to train it to perform at a “human level”. The Turing test is based on the indistinguishability of the former form of intelligence from the latter; this is explained in the second-to-last chapter of the book. Or we can have machines do something in a specific, narrow domain better than people. A chess-playing program is a good example.

The advantage of going the machine route is that machines can work tirelessly; they are simple enough to repair and rebuild. Machines have not yet started complaining about their unpaid labor.

As a community, we have spent less than a hundred years developing AI. The former form of intelligence has evolved over billions of years. Natural intelligence will keep evolving, but not at a pace we are patient enough for, or that would help us survive a major adverse surprise event that might happen sooner. Plus, people love the challenges inherent in creating intelligence. All of this makes a strong case for pursuing AI.

SA: Does the word “intelligence” automatically conjure up an organic image in the minds of CS researchers?

BG: I can’t speak for others, but to me, intelligence still brings to mind the brain. Since we like computers, some of us try to be precise like machines, but in my opinion, that is not the point. Do what we are good at and let machines do what they are good at.

SA: You mentioned the original objectives of AI proposed by Winston and said that the objective was later split up to achieve focused objectives. Is it because Winston did not comprehend the complexity of intelligence in terms of the different computational operations required for different objectives? Is this going to linger in the AI community?

BG: In general, what is practically achievable in the near term, to keep interest and funds flowing, has a strong impact on our research focus. I believe only government/non-profit initiatives can support strong research in fundamental areas (which are extremely challenging and of a high-risk, low-short-term-reward nature) that can have a real long-term impact. It is concerning that funding sources for basic science are slowly drying up.

SA: Fuzzy is not really fuzzy, is it?

BG: Not a bit – fuzzy logic is a way of modeling fuzzy thinking. There is nothing imprecise about it. Being imprecise is not a great quality. However, it can capture much human reasoning, since we are naturally not that precise.
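To make this concrete (a minimal sketch of my own, not an example from the book): a fuzzy membership function assigns a vague word like “warm” a precise degree of truth between 0 and 1. The word is fuzzy; the function computing the degree is exact.

```python
# A triangular membership function for "warm": the concept is fuzzy,
# but the computation of its degree of truth is perfectly precise.

def warm_degree(temp_c):
    """Fully 'warm' at 25 °C; not warm at all at or below 15 °C
    or at or above 35 °C; degrees in between are interpolated."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10
    return (35 - temp_c) / 10

for t in (10, 20, 25, 30):
    print(f"{t} °C is warm to degree {warm_degree(t):.1f}")
```

The thresholds (15, 25, 35 °C) are arbitrary choices for illustration; the point is that “degrees of truth” are computed by ordinary, deterministic arithmetic.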

SA: Has AI depended more on mathematics and logic than on biology in developing as a discipline?

BG: I think so. We have not understood biology well enough to really depend on it for creating intelligence. Also, when we try to create something like that using biology, there are ethical questions. Even if we manage to create a superintelligence (either biologically or otherwise), there is a probability of it getting out of hand, as we explained in our last chapter.

SA: In human intelligence, there is noise. Is AI devoid of noise?

BG: If by noise you mean entropy, there is. Again, the nature of non-productive activities in AI would be different from those in human intelligence: e.g., a person gets tired; a computer gets overheated.

SA: Degrees of truth is a concept that I think is most valuable in terms of similarity to organic intelligence, next to maybe genetic algorithms. Your take?

BG: Interesting. In science, researchers like a hypothesis to be either proved or disproved. Probability theory can help us capture some of the uncertainty and incompleteness of our knowledge, and form a hypothesis that is likely to be true with a certain probability.

Many real-life situations require us to react without having complete knowledge or without finding the perfect (optimal) solution. So, in my opinion, it is no surprise that organic intelligence developed this way: solving problems under uncertainty or by adopting a trial-and-error strategy. I really encourage reader comments on this one.

SA: It looks like chatbots (for example, Siri) have advanced with respect to the simple example you gave in the book (Alice, which I tried and found fun: when I asked Alice “what is beyond noise?”, it answered “God perhaps?”). Do advancements in natural language processing, natural language parsing, etc., have a role in the future of AI?

BG: Yes and no. First, the reason for the “no”: although a machine could not pass the Turing test without NLP, it could still be as intelligent as we are while being unable to speak any natural language on earth. (E.g., someone who speaks little English may appear unintelligent to an American, but she could be really smart.) Now, the reason for the “yes”: NLP enables applications of AI that can attract commercial interest, which can turbo-charge the development of AI. Besides, a machine with good NLP can seamlessly interact with and learn from people.

SA: Is there a real demand for adult intelligence? In other words, will the current AI efforts to solve focused problems in a faster, cheaper better way keep investments in the development of adult intelligence away from it?

BG: Yes, the industry is focused on investments that yield near-term returns, and that could hurt the prospects of general intelligence (GI). I think GI would help us develop superintelligence much faster (rather than depending on the natural reproductive cycle), which would help save us from a major disaster or surprise event that could wipe us all out.

SA: Is India invested enough in AI or, in other words, is there enough funding in the AI basic research in India that can be translated?

BG: I don’t know much about the scenario there now. Can anyone currently in India help?

SA: Do you think AI is taught in schools and colleges (undergrad level) well?

BG: No, I personally don’t. Students at IISc or MIT may be able to learn by reading a “standard” AI textbook. However, simplicity can save everyone’s time and help everyone quickly grasp the area (especially whole-part learners). I’ve read about two great people talking about simplicity: Gandhi and Einstein. To me, both make sense.

SA: It looks like processing speed and memory size are not the bottlenecks in the advancement of AI. Can pure mathematics, statistical methods, and modeling come to the rescue of AI and advance it further, instead of waiting for breakthroughs in biology to understand the human brain?

BG: The same problem can have different solutions. I believe that a combined method that draws on the strengths of all techniques has a better chance of realizing General Intelligence (GI).

SA: Does the AI community think that, even in theory, there could be such a thing as a perfect brain that can do any task assigned to it with perfection? In other words, does specialization to carry out a particular task with perfection automatically rule out using the same system for another task, or is it all about access control?

BG: Not necessarily. With a high-speed communication infrastructure, AI nodes can collaborate at lightning speed to create, essentially, a super mass intelligence. Assume that an AI node is a specialist in one area because of the problems it has to solve (or because of physical limitations such as memory). Other AI nodes can tap into and learn from that node (assuming they are given access to that knowledge). AI should be able to disseminate knowledge and expertise much more efficiently than we do by publishing papers.

SA: Can AI systems be subject to “natural selection”? How much computing power do we need to mimic natural selection at the simplest level and are lower forms of intelligence being studied and tested to achieve these objectives?

BG: We can write a simple program that performs natural selection, or a program to solve a problem can itself be evolved using natural selection. Yes, lower forms of intelligence are being studied and reproduced using genetic algorithms.
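As an illustrative sketch of how little computing power the simplest form of “natural selection” needs (my own toy example, not one from the book): a few lines of code can evolve bit strings toward a target using selection, crossover, and mutation. The fitness function, population size, and mutation rate below are arbitrary choices.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def fitness(genome):
    """Toy fitness: count the 1-bits; the 'fittest' genome is all ones."""
    return sum(genome)

def evolve(length=20, pop_size=30, generations=60, mutation_rate=0.02):
    # Start from a random population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: only the fitter half of the population survives.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Reproduction: crossover of two random parents, plus rare mutation.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of 20 bits set in the best genome")
```

Even this crude truncation-selection scheme converges on a near-optimal genome within a few dozen generations, which is why genetic algorithms are a practical way to “evolve” solutions to problems whose structure we cannot easily write down.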

SA: Gerald Edelman, a Nobel Prize winner for his contributions to immunology, wrote a book, “Wider than the Sky”, that talked about how the development of the brain could use a method similar to clonal selection, which is used by the immune system to select high-affinity antibodies against immunogens. Any thoughts?

BG: From what little I know, the book explores consciousness. I don’t know much about it. Maybe some of our readers can pitch in.

SA: Last question. Does the AI community also divide intelligence into brain, mind, and consciousness?

BG: My guess is that when we create General Intelligence (GI), we will have a brain, mind, and consciousness. That’s assuming we can really define and understand consciousness. What about feelings? A feeling, in my opinion, is a real-time feedback mechanism for our own survival. What about empathy? Does it come from our own vulnerability? And if so, if an AI robot has no comparable vulnerability (e.g., it doesn’t need to breathe), will it be empathetic to a human being who is gasping for air? Will it be able to connect that to its own situation of running on low battery? (That’s assuming its goal is to stay out of hibernation.) These are some of the interesting questions. It turns out that AI is an exciting field with more questions than answers.


The book authored by BG is available here: http://www.amazon.com/Dr-Binto-George/e/B01AGE6NA0

Binto’s lab link: http://www.wiu.edu/cbt/computer_science/faculty/george.php




Interview conducted by Dr Syam Anand, PhD (Indian Institute of Science, IISc; post-doctoral research, University of Pittsburgh School of Medicine; faculty, University of Pittsburgh School of Medicine; founder and US Patent Agent, Mainline Intellectual Property LLC, Ardmore, Philadelphia, USA). Syam has over 20 years of experience in diverse areas of science, with domain knowledge in life sciences and intellectual property. Dr. Anand is also an inventor and budding entrepreneur. A rationalist, Dr. Anand enjoys science at all levels and advocates the use of scientific methods for answering questions, solving problems, and making common people curious about and interested in understanding their worlds.


Creative Commons License
This work by ClubSciWri is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
