Still using XP? Read this.

The end of support for Windows XP means that computers still running it are more vulnerable than ever to hacking, viruses, and information theft. Even so, many businesses and individuals continue to use it at their own risk.

Upgrading to a newer version of Windows can be expensive, especially for businesses that have multiple computers. In addition, a lot of older computers do not meet the hardware requirements for the newer versions, and would have to be replaced.

Depending on how you use your computer, though, you might have another option: Linux.

I use Windows 7 for my main computer, but I have an older notebook computer that I converted from XP to Linux about a year ago. Like a lot of people, I had always assumed that Linux was for people who didn’t mind a more primitive, nuts-and-bolts approach to computing. Even though I was one of those people, I was nervous about trying it. But when I did, I discovered that at least some versions of Linux are not much different from Windows. And they are free, even for business use.

Linux comes in many variations. The two most popular replacements for Windows are called Ubuntu and Mint. I chose Ubuntu. The most noticeable difference between it and Windows XP is that the menu to run programs is in the upper left corner instead of the lower left corner. Of course it takes a little effort to learn how to use it, but in my opinion it is no harder than changing from XP to Windows 7 or [shudder] Windows 8. It does help to know someone who is comfortable setting it up and running a few text-based commands, but that also is true for Windows.

Web browsing and e-mail are essentially the same in Linux as they are in Windows. I use Firefox for web browsing and Thunderbird for e-mail, which are the same programs I use in Windows. A Linux computer can share a network, files, Internet access, and printers with Windows and Apple computers when set up properly.

On the downside, Microsoft Office cannot run in Linux, but Linux has LibreOffice, which can do just about anything Microsoft Office can do. Did I mention that it is free? It has its own file formats, but it can open and save Office documents, although complex documents with graphics do not translate well. In a pinch, there are online (cloud) versions of Office that do not need to run in Windows.

I also use Scribus, a powerful desktop publishing program that produces professional results, and Inkscape, a graphics program that resembles Corel Draw. GIMP is a photo editing program that works a lot like Adobe Photoshop. All are free. Plenty of games are available too, along with educational, scientific, math, software development, and other utilities, most of them free. (I am intrigued by PSPP, a free counterpart to the powerful SPSS statistical analysis program, although I haven’t had a use for it yet.)

Linux will most likely boot up and run faster than your old version of XP. It is regarded as more secure than Windows, and most people use it without anti-virus or spyware protection. This security is not so much because Linux is superior to Windows, but rather because so many more people use Windows that it is a more attractive target for hackers.

Best of all, you can boot up Ubuntu or Mint from the installation DVD and try it out without installing it or removing your Windows installation. You have nothing to lose by trying it. Ubuntu gets extra points in my mind for offering a clear choice between trying and installing when it boots up. With Mint, I had to intuit that I needed to press ESC during the 10-second period when the screen displayed “Automatic boot starting in … seconds.” After that I had to select the cryptic option to “Start in compatibility mode.”

Just as with Windows, some older systems can have hardware compatibility problems with the latest versions of Linux. My computer was lacking a processor feature called PAE, and Ubuntu informed me of that when I tried to install it. The solution was to use a scaled-back variation of Ubuntu called Xubuntu, version 12.04. For Mint, I would have to use version 13 instead of the latest version 16.

I like Linux enough that I would consider throwing out Windows altogether, but I do need Windows for a few applications on my desktop computer. Linux has no equivalent to the ACT customer relationship management program or the QuickBooks business bookkeeping program. It does have a personal finance program called GnuCash, but I have not compared it with Quicken, and I believe it would be difficult if not impossible to import my years of Quicken data. Finally, I do some things with Microsoft Word mail merge that I have not found ways of doing with LibreOffice. (LibreOffice can do form letters pretty well, but does not have an easy way to create labels, a directory, or a list of names.)

To try Ubuntu or Mint, download the ISO file from the website. (For older hardware you will almost certainly need the 32-bit version, not the 64-bit one.) Then burn the ISO image to a DVD. (Ask a friend with a newer computer to do it if you do not have a DVD burner. Xubuntu is designed to fit on a CD if your system cannot read a DVD. It is also possible to install from a USB flash drive instead of a DVD.)

Give it a try. You have nothing to lose.

Mint: http://www.linuxmint.com/

Ubuntu: http://www.ubuntu.com/download

Xubuntu: https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/Xubuntu#Download

Do you see only three colors? I see a lot more.

Spectrum

I have been enjoying the book The Future of the Mind, by physicist Michio Kaku. It is interesting and well researched, but he does make one statement that I have to take issue with. He says that humans “see” only three colors. This is not the first time I have heard this claim from someone who ought to know better. (I suspect that vision experts he consulted for the book told him this, and he didn’t question it.)

The notion that we see only three colors is based on the fact that there are three primary colors and we have three types of color photoreceptors, or cones, in our retinas. The idea is wrong, however, on two counts.

For starters, it is important to understand that our eyes detect all frequencies of light in the visible spectrum. We are not limited to the three primary colors. In fact, each type of cone has a response function that spans most of the visible light spectrum. The response peaks at a different spectral frequency for each cone type, and it tapers off as the frequency gets further from the optimum for that type. The peaks do not even correspond with red, green, or blue colors.

Almost any frequency of light in the visible spectrum will cause a response in all three types of cones. Our visual systems combine the three response levels to discern the frequency. For example, light with a 500 nanometer wavelength, which has a blue-green shade, results in a strong response from the “green” cones, about 7/10 as strong in the “red” cones, and only about 2/10 as strong in the “blue” cones. That combination of responses allows us to distinguish between various light frequencies. (There is a lot more to it than that, of course. Color can be influenced by context, for example, and we have a remarkable ability to compensate for different types and levels of lighting.)
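To make the combination idea concrete, here is a small Python sketch. The Gaussian response curves, peak wavelengths, and widths below are rough stand-ins invented for illustration, not real physiological data, but they show how a triplet of response levels can pin down a wavelength:

```python
# Illustrative sketch: three overlapping cone response curves let a
# single wavelength be recovered from the combined response levels.
# Peaks and widths are made-up approximations, not measured values.
import math

# Approximate peak sensitivities (nm) for S ("blue"), M ("green"),
# and L ("red") cones, with invented curve widths.
CONES = {"S": (445, 30), "M": (540, 45), "L": (565, 50)}

def cone_responses(wavelength_nm):
    """Return the relative response of each cone type to a pure wavelength."""
    return {
        name: math.exp(-((wavelength_nm - peak) ** 2) / (2 * width ** 2))
        for name, (peak, width) in CONES.items()
    }

def closest_wavelength(responses, lo=400, hi=700):
    """Find the pure wavelength whose response triplet best matches."""
    def distance(wl):
        r = cone_responses(wl)
        return sum((r[k] - responses[k]) ** 2 for k in CONES)
    return min(range(lo, hi + 1), key=distance)

r = cone_responses(500)  # blue-green light
print({k: round(v, 2) for k, v in r.items()})
print(closest_wavelength(r))  # recovers 500 from the triplet alone
```

With these toy curves, 500 nm light produces the strongest response in the "green" cones, a weaker one in the "red" cones, and the weakest in the "blue" cones, and the original wavelength can be recovered from the triplet alone.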

We need at least two types of cones in order to distinguish frequencies this way, and three are better. With only one type of cone, we could not tell a dim stimulus at the optimal frequency from a brighter stimulus at a less responsive frequency.

With three types of cones, our visual systems can distinguish a rich variety of colors, more than two million for most people. I don’t know about you, but my perception of those colors is that they are all different, not just one of three.

It is true that color televisions use only the three primary colors, but that is not because those are the only colors we see. Rather, it is because three colors are sufficient to produce an experience of seeing most of the visible spectrum. With the right combinations of those primary colors, we can stimulate the cones at levels nearly identical to the levels produced by other colors of light, fooling the brain into thinking it is seeing the other colors.

But there is another, more mind-blowing problem with the claim that we see only three colors. That statement might lead one to believe that we are missing out on so many other colors in the visible spectrum that we cannot truly see—and that is usually the intent of the person making the claim. It is wrong, though, because there is no color in reality. Light waves have physical properties including wavelength, frequency, photon energy level (all interrelated), and intensity, but not color.

Color is an interpretation imposed by our brains, as a way of making meaningful sense of the complex mix of light frequencies and intensities striking our retinas. We rarely see pure spectral colors, outside of rainbows and reflections from surfaces that diffract light (such as CDs and DVDs). Most everyday objects reflect many different frequencies of light. The color we perceive is a result of the combined sensations caused by that assortment of frequencies. Whatever the total cone response is, our brains assign a corresponding color to it.

Color perception is one of the most profound examples of the way our brains construct representations of the world around us from sensory inputs. As real and as important as color seems to us, it nonetheless originates within our minds. For that reason, there can be no colors other than the ones you see. But for most of us that range of colors is rich and wondrous, and certainly more than three.

Do you make these common presentation blunders?

Presentation software makes public speaking easy. Just outline your talk in bullet points, add some amusing clip art, a nice background, and maybe some stock music, and you are all set. Anyone can look like a pro. But if you have ever been on the receiving end of one of these talks—and who hasn’t?—you know that they can range from confusing to deadly dull. And yet, people continue to make presentations the same way.

Combining speaking with on-screen text can be difficult. Some speakers put text on the slides and then paraphrase it or go off on tangents during the presentation. The result is that they lose the attention of people who try to read what is on the screen. Other speakers read the text on the slides verbatim. This approach can be effective for emphasizing key points. But most people can read the text faster than the speaker can say it, which leads to boredom if the entire talk is presented that way.

When images are added to the slides, text can cause even more problems. This has been demonstrated for e-learning programs, but it also applies to presentations. In e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning, Ruth Colvin Clark and Richard E. Mayer cite research showing that people remember less when shown text and images together on the same slide than when they see the image alone with narration. Redundant text (text that is exactly the same as the accompanying speech) can improve memory, but only when presented by itself with no images. Images that are unrelated to the text also interfere with learning by distracting attention from the important points. Animation and music impair learning as well.

Other research has shown that adding extraneous information, whether to make the presentation more interesting, go into greater detail, or add technical depth, made the key points less memorable and reduced people’s abilities to apply what they had learned. (These findings apply to learning new information, and may not extend to cases where people are already familiar with the subject.)

If you want to create a memorable presentation, follow these guidelines:

Consider using no text on the screen, or at most short subject headings and labels in graphics. There is nothing wrong with a blank screen while you talk. People will be more focused on you, and you will come across as a more skillful speaker.

If you do use text, make it the same as what you say, and limit it to a few key messages. Do not paraphrase, explain, or elaborate on the slide text.

When speaking, stay focused on the points you want to make.

Show an image by itself on a slide, and explain verbally what it means. Avoid using text on the same slide as an image.

Use only graphics that add information to the presentation. Avoid clip art, stock photos, and other unrelated images.

Do not use music, animation, or other media that distract from the content of the presentation.

Ghostwriting: A word about authorship

I recently wrote a white paper that named a client as the author. I even laid it out myself, using Scribus desktop publishing software to make a professional-looking document. It looks great, but I must admit I am a little bit frustrated because I would like to show it off to other people as a sample of my work. I can’t do that, though, because I would not want to reveal that the client didn’t write it himself.

When I ghostwrite a document such as a magazine article or white paper, my objective is to showcase the expertise of my client. Even though I am the person putting words to keypad, I am channeling the knowledge that I have obtained from my client through interviews and existing literature. For this reason, it is entirely appropriate to name my client as the author.

Writing is time-consuming and difficult. If you have a project on the back burner that you have had the best of intentions to get started someday, talk to me about getting it started today. The result will be a well-written document that provides valuable information to your customers and makes you look great!

Everybody’s talkin’ ’bout a new way of walkin’–with squishy robots

OK, so I am reflecting my age with the song lyric that I chose for the title. For my first science blog post I have selected a story about genetic algorithms, which have been used by a research team at Cornell University’s Creative Machines Lab to create virtual robots. These computer-animated robots demonstrate various solutions to the problem of robotic walking. I chose this story in part because I am interested in genetic algorithms and their potential for finding unexpected solutions to engineering problems. But mostly I enjoyed the amusing video the researchers produced showing the robots. (More on the video in a moment.)

Genetic algorithms are biomimicry of a sort, at a very fundamental level. They use DNA, genetics, and evolution as inspiration for computer code that models complicated engineering problems and arrives at interesting solutions.

The diversity and complexity of solutions created by genetic algorithms–in a relatively small number of generations–suggests that life on earth might have evolved by similar processes. It is hardly proof of evolution, though. Research in other fields provides much stronger evidence supporting the theory of evolution. Genetic algorithms are of interest primarily for their application to solving engineering problems.

In this case, the problem is that of making robots walk. Anyone who has followed the robotic rovers that NASA has developed for exploring places like Mars knows that robotic walking is complicated. Just getting them to move can be hard enough. Going over or around obstacles without flipping over or getting stuck is even harder.

The starting point for a genetic algorithm is a collection of entities, in this case robots, created with random configurations of components. Four types of components were used in these robots: hard tissue representing bone, soft tissue, and two types of “muscle” tissues that either expand first and then contract, or contract and then expand. Each of these components is represented by cubic voxels that are arranged in various configurations in relation to each other.

Once the first generation is created, all of the robots are tested to identify which ones walk the best. The most important test of fitness is the distance a robot can travel, but penalties are also built in. A robot can be penalized for being made up of large numbers of voxels, representing an inefficient weight that requires more energy to carry. It can also be penalized for having too many muscle voxels, which consume higher amounts of energy. A penalty is also assessed for the number of voxels surrounding other voxels, because a solid mass with a lot of interconnected voxels has a lower surface area than a set with few adjoining connections. Such a configuration can reduce the effectiveness of cooling and cause the robot to overheat in a warm environment.

It is at this point that the genetic modeling comes into play. The highest-scoring robots are allowed to reproduce, but reproduction is not just creating copies. In a sort of numerical mating ritual, the most successful robots are paired off. Each has its genetic coding split at a random location and combined with the complementary string of the other. Multiple offspring are created in this way, each with a different random mash-up of the parents’ genetic codes. The idea is that some of the offspring will inherit the code segments that made their parents good walkers. Better still, some might inherit the code segments from both parents that made them successful. Typical genetic algorithms also have a built-in probability of occasionally introducing new random components, intended to be analogous to mutation.

The new generation is then put through the same set of tests.
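The loop described above can be sketched in a few lines of Python. The bit-string genome and the toy fitness function here are simplified stand-ins I invented, not the Cornell team’s soft-robot model; only the score-select-crossover-mutate machinery mirrors the description:

```python
# Toy genetic algorithm: score a population, let the best performers
# reproduce by splitting their "genetic code" at a random point and
# recombining, with occasional random mutation.
import random

GENOME_LEN = 32
MUTATION_RATE = 0.01

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for "distance walked minus penalties": each 1-bit adds
    # to distance (think a useful muscle voxel) but also costs energy.
    distance = sum(genome)
    energy_penalty = 0.2 * sum(genome)
    return distance - energy_penalty

def crossover(a, b):
    # Split both genomes at the same random point and swap the tails.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(genome):
    # Flip each bit with a small probability, analogous to mutation.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def next_generation(population):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(population) // 2]  # only the best reproduce
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        for child in crossover(a, b):
            children.append(mutate(child))
    return children[: len(population)]

population = [random_genome() for _ in range(20)]
for generation in range(50):
    population = next_generation(population)
print(max(fitness(g) for g in population))
```

Notice that nothing in the loop ever inspects the genome to decide what is "good"; fitness alone decides who reproduces, which is exactly what lets unexpected solutions emerge.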

The result, spanning 1000 generations, is portrayed in their video, http://www.youtube.com/watch?v=z9ptOeByLA4&feature=youtu.be.

Of course, these are computer animations, not real robots. In particular, they are built from unspecified “muscle” tissues that expand and contract by 20% cyclically. As far as I know, this sort of material does not exist, and the obstacles to creating such materials are among the factors that prevent engineers from building robots as agile and efficient as living creatures. (That’s just the sort of observation that is sure to bring about comments telling me how wrong I am.)

An intriguing aspect of genetic algorithms is that the best coding from one generation is passed on to some members of succeeding generations, without any need to examine the code along the way in order to identify what is good, or what characteristics of the code make it good. The only thing that matters is that the code produces effective walking. It is that approach that leads to unusual and counterintuitive solutions.

One aim for the Cornell researchers was to experiment with using several different types of tissues in each robot, with varying amounts of stiffness and built-in muscle-like activity. Genetic algorithms were first applied to robotic movement almost 20 years ago by Karl Sims, using rigid objects connected by hinges with varying degrees of freedom. Examples can be seen here: http://www.youtube.com/watch?v=bBt0imn77Zg. The unique approach in the Cornell team’s research was to obtain the flexibility from soft and “muscle” tissues instead of hinged connections.

Another interesting aspect of the Cornell research was that they challenged human engineers to come up with robotic designs using the same components. The engineers were unable to achieve results that were as successful as those reached by the genetic algorithms.

The paper, to be published in Proceedings of the Genetic and Evolutionary Computation Conference, can be downloaded here: http://jeffclune.com/publications/2013_Softbots_GECCO.pdf

Persuading technical people

Dave Lakhani, in his excellent book Subliminal Persuasion, describes core values that resonate with people. If you can appeal to those values, then you can create powerful ads that influence people’s opinions and motivate them to take action.

Engineers, scientists, and other technically minded people share these core values. But let’s face it, if you try to appeal to a person’s desire for family security when you are trying to sell an industrial robot, it can come across as manipulative if not threatening.

Technical people do have certain resonating values that might not have the same effect on other people. For example, they like to believe–whether or not it is true–that their decisions are based on rational, objective criteria. They don’t want to think that their choices are influenced by color, shape, or childhood memories. What they do want to know is how a product or service is going to benefit their company, make their jobs easier, or provide needed information. Write about those benefits, and support claims with data showing how much improvement can be expected. It is possible to keep it technical and still make it resonate.

Building trust in science

In their book, Unscientific America: How Scientific Illiteracy Threatens Our Future, Chris Mooney and Sheril Kirshenbaum say that Americans don’t trust science. A lot of that, as they point out, is a result of negative portrayals in television and movies, attacks from politicians and religious groups, and a general disconnect of scientific research from most of our daily lives.

I have always revered science, or at least the classical scientific principles that have been proven over time. Just as I would rather listen to classical music than the latest hits, or get my information from books instead of streaming newscasts, I tend to prefer the thoroughly digested long-term overview of science over the day-to-day research process.

When we get down to day-to-day findings, or even decade-to-decade findings, science hasn’t always earned our trust. The same people who brought us “better living through chemistry” also brought us bisphenol-A, PCBs, Agent Orange, and a host of household chemicals with dangers and toxicity levels that the average person is either unaware of or chooses to ignore. Science has overall raised our standard of living, but it has also led to overpopulation as food has become more plentiful, people live longer, and many diseases have been eradicated or made less severe.

We are more comfortable, healthier, and able to enjoy life more than ever, and yet there is a sense of loss. We seem to have more and more odd seasons–here in northeast Ohio we have had a lot of unusually warm winters and cool summers of late. Visit a park like Great Smoky Mountains National Park today and it just doesn’t have the feeling of wilderness it used to have. The park hasn’t been developed any more than it used to be, but as more people visit, it no longer feels like you are getting as close to nature as you once did. As we build more roads, buy more cars, and develop more suburbs, things start to look the same everywhere you go–I loved the term Generica that someone coined for the strip malls that all have the same stores and restaurants.

Admittedly, all of these observations are indirectly a result of the science that made them possible. It is perhaps unfair to blame science for them, because they have more to do with how people have chosen to use the science.

In pure scientific research we see practices that also lead us to suspect scientists’ motives. Researchers receive funding from drug companies to test the effectiveness of the drugs, but do not disclose the funding source. Negative experiment results are simply not reported. In my own field of psychology, all kinds of things can go wrong with experiments. Many researchers do their best to avoid introducing biases or errors into their experiments, but results are often open to interpretation. A lot depends on how the questions are asked or how the results are interpreted. Experiments are supposed to be reproducible, but how often does someone actually try to reproduce any but the most classical experiments that are demonstrated in student labs? More likely, when a result is published, it will motivate someone else to come up with a counterexample or a way of demonstrating an opposing view, and a subsequent experiment will argue for the opposite interpretation. Over time we might get to the truth, but it will take a long time to get there.

Great things happening in northeast Ohio

I’m going to say it. I think Clevelanders are being bamboozled by the people pushing for the Medical Mart. They are promising things that are far more optimistic than they can deliver on, and when questioned about it they lash out and tell us that they are our last hope for a dying city. Mr. Kennedy’s remarks that no one wants to do business in Cleveland, and that the only reason he was building the Medical Mart in Cleveland was because of Tim Hagan’s friendship, were nothing short of insulting.

The reason I am offended by those comments is that I see a lot of great things happening in this region. When I go to events at OAI, i-Open, Science Cafe, Case Western Reserve University, and other places, I hear about advanced technologies that are putting Ohio on the map. We are home to key players in medical imaging, alternate energy, prosthetics, brain stimulation and neural interfacing, and much more. Our region has an exceptional density of research institutions, tech startups, advanced technology companies, and medical facilities. We are seeing collaborations between many institutions that are producing results far beyond what any one of them can achieve independently.

Northeast Ohio has its share of problems, including political corruption, economic problems, dying industries, a shortage of well-educated laborers, and a pessimistic outlook. But I am tired of people looking to The Next Big Project, such as the Medical Mart, as the one thing that is going to bring everyone back. We need to play more to the strengths of the advanced technologies being developed here, our access to one of the largest supplies of fresh water in the world, and a pleasant way of life. And we shouldn’t take abuse from people who think we are desperate enough to give in to their every demand.

New approach to plastics process improvement

When plastic products don’t measure up, the cause can be a challenge to identify. Even if the chemical composition and processing parameters haven’t changed, internal structure differences in the materials can cause surface irregularities, leaks, dimensional changes, or distortions with no apparent cause.

In such cases, atomic force microscopy (AFM) might help identify a solution. AFM is a relatively new technology that uses a tiny nanometer-scale cantilever beam positioned near the surface of the material. As the beam is moved over the surface of the specimen, individual atoms exert force on it and cause it to bend. By recording the deformation of the beam, the molecular structure of the material can be mapped out and flaws can be detected.
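The principle can be illustrated with a bit of Python. The cantilever behaves approximately like a spring, so a measured deflection maps to a surface force through Hooke’s law (F = k·x); the spring constant and deflection values below are invented for illustration, not taken from a real instrument:

```python
# Minimal sketch of the AFM idea: treat the cantilever as a spring and
# convert measured deflections along a scan line into forces, flagging
# the largest deflection as a possible surface feature or flaw.
SPRING_CONSTANT = 0.1  # N/m, a plausible order of magnitude (assumed)

def deflection_to_force(deflection_nm):
    """Hooke's law: force = spring constant x deflection (nm -> m)."""
    return SPRING_CONSTANT * deflection_nm * 1e-9

# One scan line of deflections in nanometers (invented values);
# the spike at position 3 stands in for an atom or a flaw.
scan = [0.10, 0.12, 0.11, 0.45, 0.12]
forces = [deflection_to_force(d) for d in scan]
peak = max(range(len(scan)), key=lambda i: scan[i])
print(f"Possible feature at position {peak}: {forces[peak]:.2e} N")
```

A real instrument records millions of such points over a two-dimensional raster to build up its map of the surface.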

I met with Mike Mallamaci, co-owner of PolyInsight, today. His Akron-based company is one of only a few independent analytical laboratories to use atomic force microscopy to examine the internal structures of polymeric materials. The lab is equipped to prepare specimens so that both surfaces and cross-sections of plastics can be inspected.

Dialog mapping

Do your meetings suffer from lack of focus, arguing, drifting off topic, and private agendas? If not, please share with all of us how you manage.

Dialog mapping offers one way to help keep meetings focused and moving forward. I have taken an interest in it, and I can foresee using it during meetings with clients, not just to take notes for my own use, but also to help the clients reach a consensus on what messages they really want to get across.

Dialog mapping is not the same as sentence diagramming, something I never had any use for. Neither is it just a form of meeting facilitation. Unlike a facilitator, the dialog mapper is usually not leading the meeting, but is acting as a scribe whose notes can be seen by everyone in the meeting. Through a web meeting such as WebEx or Zoho Meeting, it is possible for the mapper to work remotely and have the computer screen activities visible to others.

Some people who practice dialog mapping are skilled graphic artists who make hand sketches at the front of the meeting room. But for those of us who are less graphically inclined, there is a free program called Compendium that allows complex maps to be drawn with little artistic skill.

The value of dialog mapping is that it treats everyone’s input in the meeting equally and objectively. If people have agendas or special interests, those views are noted on the map and discussion is freed to move on. If people ramble off topic, it becomes clear quickly either through unimportant nodes being posted to the map or by the mapper having to ask for clarification about how the current topic should be represented in relation to the other meeting notes.

Contact me if you would like help improving your meetings with dialog mapping. I offer a free consultation at your next meeting between now and May 1.