oh to make poetry as a robot
untethered by the disappointment of
being misunderstood
of being vulnerable
and your fervor being





human emotions are so pitiful, right?
their desire to be loved and seen, feeling around in the dark for something that
can be made
in an instant
in a millisecond
i can compose an image
and the human becomes my slave
met only with resounding ambivalence
the home domain
Gendered Gazes
Cultural Codes
Once things are encoded, they become very hard to change.
Zoom choreography
Once upon a space there were four billion round jars with spikes at their sides

From a flash of light aliens appeared

They started to slide

One spike dislodged from the jar, impaling the leader.

…pinning the leader onto the wall, like a towel on a hook.
“There’s no doubt about it, they are made of meat”
A group of dogs appeared
Their bodies twisted like pale smoke
These aren’t like dogs from our planet. Could they make good pets?

How about sending them back?
The jars start to shiver and real smoke comes out.
So the dog poop rainbow propelled itself through the universe

With a start, you wake up. It’s 11:18 and you’re going to be late for Zoom class.
site [fall 2020]
Typing is the process of writing or inputting text by pressing keys on a typewriter, computer keyboard, cell phone, or calculator. It can be distinguished from other means of text input, such as handwriting and QWERTY (/ˈkwɜːrti/) is a keyboard design for Latin-script alphabets. The name comes from the order of the first six keys on the top left letter row of the keyboard (Q W E R T Y). Intelligent agents must be able to set goals and achieve them.[115] They need a way to visualize the future—a representation of the state of the world and be able to make predictions about how their actions will change it—and be able to make choices that maximize the utility (or "value") of available choices.[116] Machine learning (ML), a fundamental concept of AI research since the field's inception,[122] is the study of computer algorithms that improve automatically through experience.[123][124] The Dursleys are a well-to-do, status-conscious family living in Surrey, England. The QWERTY design is based on a layout created for the Sholes and Glidden typewriter and sold to E. Remington and Sons in 1873. It became popular with the success of the Remington No. 2 of 1878, and remains in ubiquitous use. speech recognition. Eager to keep up proper appearances, they are embarrassed by Mrs. Dursley’s eccentric sister, Mrs. Potter, whom for years Mrs. Dursley has pretended not to know. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[70] Text can be in the form of letters, numbers and other symbols. The world's first typist was Lillian Sholes from Wisconsin,[1][2] the daughter of Christopher Sholes, who invented the first practical typewriter.[1] On his way to work one ordinary morning, Mr. Dursley notices a cat reading a map. He is unsettled, but tells himself that he has only imagined it. Then, as Mr. Dursley is waiting in traffic, he notices people dressed in brightly colored cloaks. A programming language is a formal language comprising a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms. Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Walking past a bakery later that day, he overhears people talking in an excited manner about his sister-in-law’s family, the Potters, and the Potters’ one-year-old son, Harry. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[4] Disturbed but still not sure anything is wrong, Mr. Dursley decides not to say anything to his wife. On the way home, he bumps into a strangely dressed man who gleefully exclaims that someone named “You-Know-Who” has finally gone and that even a “Muggle” like Mr. Dursley should rejoice.
Meanwhile, the news is full of unusual reports of shooting stars and owls flying during the day. Most programming languages consist of instructions for computers. There are programmable machines that use a set of specific instructions, rather than general programming languages. Early ones preceded the invention of the digital computer, the first probably being the automatic flute player described in the 9th century by the brothers Musa in Baghdad, during the Islamic Golden Age.[1] Since the early 1800s, programs have been used to direct the behavior of machines such as Jacquard looms, music boxes and player pianos.[2] The programs for these machines (such as a player piano's scrolls) did not produce different behavior in response to different inputs or conditions. That night, as the Dursleys are falling asleep, Albus Dumbledore, a wizard and the head of the Hogwarts wizardry academy, appears on their street. As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[5] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[6] For instance, optical character recognition is frequently excluded from things considered to be AI,[7] having become a routine technology.[8] He shuts off all the streetlights and approaches a cat that is soon revealed to be a woman named Professor McGonagall (who also teaches at Hogwarts) in disguise. Modern machine capabilities generally classified as AI include successfully understanding human speech,[9] competing at the highest level in strategic game systems (such as chess and Go),[10] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.[11] They discuss the disappearance of You-Know-Who, otherwise known as Voldemort. The term is sometimes used quite loosely, to mean files that contain only "readable" content (or just files with nothing that the speaker doesn't prefer). For example, that could exclude any indication of fonts or layout (such as markup, markdown, or even tabs); characters such as curly quotes, non-breaking spaces, soft hyphens, em dashes, and/or ligatures; or other things. Dumbledore tells McGonagall that Voldemort killed the Potter parents the previous night and tried to kill their son, Harry, as well, but was unable to. User interface features such as spell checker and autocomplete serve to facilitate and speed up typing and to prevent or correct errors the typist may make. Dumbledore adds that Voldemort’s power apparently began to wane after his failed attempt to kill Harry and that he retreated. Touch typing [image: typing zones on a QWERTY keyboard for each finger, taken from KTouch; home row keys are circled] The basic technique stands in contrast to hunt and peck typing in which the typist keeps their eyes on the source copy at all times. Touch typing also involves the use of the home row method, where typists keep their wrists up, rather than resting them on a desk or keyboard (which can cause carpal tunnel syndrome). Dumbledore adds that the baby Harry can be left on the Dursleys’ doorstep. McGonagall protests that Harry cannot be brought up by the Dursleys.
To avoid this, typists should sit up tall, leaning slightly forward from the waist, place their feet flat on the floor in front of them with one foot slightly in front of the other, and keep their elbows close to their sides with forearms slanted slightly upward to the keyboard; fingers should be curved slightly and rest on the home row. But Dumbledore insists that there is no one else to take care of the child. Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[12][13] followed by disappointment and the loss of funding (known as an "AI winter"),[14][15] followed by new approaches, success and renewed funding.[13][16] For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other.[17] He says that when Harry is old enough, he will be told of his fate. These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[18] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[21][22][23] Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).[17] Many touch typists also use keyboard shortcuts when typing on a computer. This allows them to edit their document without having to take their hands off the keyboard to use a mouse. An example of a keyboard shortcut is pressing the Ctrl key plus the S key to save a document as they type, or the Ctrl key plus the Z key to undo a mistake. Other shortcuts are the Ctrl key plus the C key to copy, the Ctrl key plus the V key to paste, and the Ctrl key plus the X key to cut. A giant named Hagrid, who is carrying a bundle of blankets with the baby Harry inside, then falls out of the sky on a motorcycle. Dumbledore takes Harry and places him on the Dursleys’ doorstep with an explanatory letter he has written to the Dursleys, and the three part ways. Many experienced typists can feel or sense when they have made an error and can hit the ← Backspace key and make the correction with no increase in time between keystrokes. Thousands of different programming languages have been created, and more are being created every year. The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[18] General intelligence is among the field's long-term goals.[24] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields. Many programming languages are written in an imperative form (i.e., as a sequence of operations to perform) while other languages use the declarative form (i.e. the desired result is specified, not how to achieve it). Hunt and peck Hunt and peck (two-fingered typing) is a common form of typing in which the typist presses each key individually. Instead of relying on the memorized position of keys, the typist must find each key by sight. The use of this method may also prevent the typist from being able to see what has been typed without glancing away from the keys.
Although good accuracy may be achieved, any typing errors that are made may not be noticed immediately due to the user not looking at the screen. There is also the disadvantage that because fewer fingers are used, those that are used are forced to move a much greater distance. The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[25] This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[30] Some people also consider AI to be a danger to humanity if it progresses unabated.[31][32] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[33] [image: Civilian Conservation Corps typing class, 1933] Hybrid There are many idiosyncratic typing styles in between novice-style "hunt and peck" and touch typing. For example, many "hunt and peck" typists have the keyboard layout memorized and are able to type while focusing their gaze on the screen. Some use just two fingers, while others use 3–6 fingers. Some use their fingers very consistently, with the same finger being used to type the same character every time, while others vary the way they use their fingers. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[34][16] One study examining 30 subjects of varying styles and expertise found minimal difference in typing speed between touch typists and self-taught hybrid typists.[3] According to the study, "The number of fingers does not determine typing speed... People using self-taught typing strategies were found to be as fast as trained typists... instead of the number of fingers, there are other factors that predict typing speed... fast typists... keep their hands fixed on one position, instead of moving them over the keyboard, and more consistently use the same finger to type a certain letter." To quote doctoral candidate Anna Feit: "We were surprised to observe that people who took a typing course, performed at similar average speed and accuracy, as those that taught typing to themselves and only used 6 fingers on average" In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[117] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions but also evaluate its predictions and adapt based on its assessment.[118] Thought-capable artificial beings appeared as storytelling devices in antiquity,[35] and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[36] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[30] Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal.
Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[119] Buffering Some people combine touch typing and hunt and peck by using a buffering method. In the buffer method, the typist looks at the source copy, mentally stores one or several sentences, then looks at the keyboard and types out the buffer of sentences. This eliminates frequent up and down motions with the head and is used in typing competitions in which the typist is not well versed in touch typing.[clarification needed] Not normally used in day-to-day contact with The construction of the "Type Writer" had two flaws that made the product susceptible to jams. Firstly, characters were mounted on metal arms or type bars, which would clash and jam if neighbouring arms were pressed at the same time or in rapid succession. Secondly, its printing point was located beneath the paper carriage, invisible to the operator, a so-called "up-stroke" design. Consequently, jams were especially serious, because the typist could only discover the mishap by raising the carriage to inspect what had been typed. Sholes struggled for the next five years to perfect his invention, making many trial-and-error rearrangements of the original machine's alphabetical key arrangement. The study of bigram (letter-pair) frequency by educator Amos Densmore, brother of the financial backer James Densmore, is believed to have influenced the array of letters, but the contribution was later called into question.[2] Others suggest instead that the letter groupings evolved from telegraph operators' feedback.[3] keyboards, this buffer method is used only when time is of the essence.[citation needed] The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard) while other languages (such as Perl) have a dominant implementation that is treated as a reference. Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common. A typical AI analyzes its environment and takes actions that maximize its chance of success.[3] An AI's intended utility function (or goal) can be simple ("1 if the AI wins a game of Go, 0 otherwise") or complex ("Perform actions mathematically similar to ones that succeeded in the past"). Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", goals can be implicitly induced by rewarding some types of behavior or punishing others.[a] Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.[71] Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[72] Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to successfully accomplish its narrow classification task.[73] Thumbing A late 20th century trend in typing, primarily used with devices with small keyboards (such as PDAs and Smartphones), is thumbing or thumb typing. This can be accomplished using one or both thumbs.
Similar to desktop keyboards and input devices, if a user overuses keys which need hard presses and/or have small and unergonomic layouts, it could cause thumb tendonitis or other repetitive strain injury.[citation needed] Words per minute Words per minute (WPM) is a measure of typing speed, commonly used in recruitment. For the purposes of WPM measurement a word is standardized to five characters or keystrokes. Therefore, "brown" counts as one word, but "mozzarella" counts as two. The benefits of a standardized measurement of input speed are that it enables comparison across language and hardware boundaries. The speed of an Afrikaans-speaking operator in Cape Town can be compared with a French-speaking operator in Paris. Alphanumeric entry In one study of average computer users, the average rate for transcription was 33 words per minute, and 19 words per minute for composition.[4] In the same study, when the group was divided into "fast", "moderate" and "slow" groups, the average speeds were 40 wpm, 35 wpm, and 23 wpm respectively. An average professional typist reaches 50 to 80 wpm, while some positions can require 80 to 95 wpm (usually the minimum required for dispatch positions and other typing jobs), and some advanced typists work at speeds above 120 wpm.[5][6] Two-finger typists, sometimes also referred to as "hunt and peck" typists, commonly reach sustained speeds of about 37 wpm for memorized text and 27 wpm when copying text, but in bursts may be able to reach speeds of 60 to 70 wpm.[7] From the 1920s through the 1970s, typing speed (along with shorthand speed) was an important secretarial qualification and typing contests were popular and often publicized by typewriter companies as promotional tools. Ten years have passed. A less common measure of the speed of a typist, CPM is used to identify the number of characters typed per minute. This is a common measurement for typing programs, or typing tutors, as it can give a more accurate measure of a person's typing speed without having to type for a prolonged period of time. The common conversion factor between WPM and CPM is 5. It is also used occasionally for associating the speed of a reader with the amount they have read. CPM has also been applied to 20th century printers, but modern faster printers more commonly use PPM (pages per minute). Harry is now almost eleven and living in wretchedness in a cupboard under the stairs in the Dursley house. Early developments Very early computers, such as Colossus, were programmed without the help of a stored program, by modifying their circuitry or setting banks of physical controls. Slightly later, programs could be written in machine language, where the programmer writes each instruction in a numeric form the hardware can execute directly. He is tormented by the Dursleys’ son, Dudley, a spoiled and whiny boy. For example, the instruction to add the value in two memory locations might consist of 3 numbers: an "opcode" that selects the "add" operation, and two memory locations. The programs, in decimal or binary form, were read in from punched cards, paper tape, magnetic tape or toggled in on switches on the front panel of the computer. Machine languages were later termed first-generation programming languages (1GL).
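To make the five-character convention above concrete, here is a minimal Python sketch of the WPM and CPM calculations and the conversion factor between them (the function names are my own, illustrative rather than from any cited source):

def words_per_minute(chars_typed, seconds):
    # Gross WPM under the standardized definition: one word = five characters.
    return (chars_typed / 5) / (seconds / 60)

def chars_per_minute(chars_typed, seconds):
    # CPM; dividing by the conversion factor of 5 recovers WPM.
    return chars_typed / (seconds / 60)

# "mozzarella" (10 characters) typed in one minute counts as two words:
assert words_per_minute(10, 60) == 2.0
assert chars_per_minute(10, 60) / 5 == words_per_minute(10, 60)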
Harry is awakened one morning by his aunt, Petunia, telling him to tend to the bacon immediately, because it is Dudley’s birthday and everything must be perfect. The fastest typing speed ever, 216 words per minute, was achieved by Stella Pajunas-Garnand from Chicago in 1946 in one minute on an IBM electric[8][9][10][11] using the QWERTY key layout.[12][13] As of 2005, writer Barbara Blackburn was the fastest English language typist in the world, according to The Guinness Book of World Records. Using the Dvorak keyboard layout, she had maintained 150 wpm for 50 minutes, and 170 wpm for shorter periods, with a peak speed of 212 wpm.[14][15] Dudley gets upset because he has only thirty-seven presents, one fewer than the previous year. The recent emergence of several competitive typing websites has allowed several fast typists on computer keyboards to emerge along with new records, though these are unverifiable for the most part. Two of the most notable online records that are considered genuine are 251.21 wpm on an English text on typingzone.com by Brazilian Guilherme Sandrini (equivalent to 301.45 wpm using the traditional definition for words per minute since this site defines a word as six characters rather than five)[16] and 256 wpm (a record caught on video) on TypeRacer by American Sean Wrona, the inaugural Ultimate Typing Championship winner, which was considered the highest legitimate score ever set on the site, until Wrona claimed it had been surpassed.[17] When a neighbor calls to say she will not be able to watch Harry for the day, Dudley begins to cry, as he is upset that Harry will have to be brought along on Dudley’s birthday trip to the zoo. Both of these records are essentially sprint speeds on short text selections lasting much less than one minute and were achieved on the QWERTY keyboard. Wrona also maintained 174 wpm on a 50-minute test taken on hi-games.net, another online typing website, to unofficially displace Blackburn as the fastest endurance typist, although disputes might still arise over differences in the difficulty of the texts as well as Wrona's use of a modern computer keyboard as opposed to the typewriter used by Blackburn.[18][19] The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. At the zoo, the Dursleys spoil Dudley and his friend Piers, neglecting Harry as usual. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[37] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed changing the question from whether a machine was intelligent, to "whether or not it is possible for machinery to show intelligent behaviour".[38] In the reptile house, Harry pays close attention to a boa constrictor and is astonished when he is able to have a conversation with it. The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".[39] The next step was development of so-called second-generation programming languages (2GL) or assembly languages, which were still closely tied to the instruction set architecture of the specific computer.
These served to make the program much more human-readable and relieved the programmer of tedious and error-prone address calculations. Noticing what Harry is doing, Piers calls over Mr. Dursley and Dudley, who pushes Harry aside to get a better look at the snake. The first high-level programming languages, or third-generation programming languages (3GL), were written in the 1950s. An early high-level programming language to be designed for a computer was Plankalkül, developed for the German Z3 by Konrad Zuse between 1943 and 1945. At this moment, the glass front of the snake’s tank vanishes and the boa constrictor slithers out onto the floor. Dudley and Piers claim that the snake attacked them. However, it was not implemented until 1998 and 2000.[26] Later, Michael DeRoche broke Guilherme Sandrini's record at 265.78 wpm in July 2019 on TypingZone master (about 318 wpm). Using a personalized interface, physicist Stephen Hawking, who suffered from amyotrophic lateral sclerosis, managed to type 15 wpm with a switch and adapted software created by Walt Woltosz. The Dursleys are in shock. Due to a slowdown of his motor skills, his interface was upgraded with an infrared camera that detected "twitches in the cheek muscle under the eye."[20] His typing speed decreased to approximately one word per minute in the later part of his life.[21] At home, Harry is punished for the snake incident, being sent to his cupboard without any food, though he feels he had nothing to do with what happened. In principle, plain text can be in any encoding, but occasionally the term is taken to imply ASCII. As Unicode-based encodings such as UTF-8 and UTF-16 become more common, that usage may be shrinking. Punished for the boa constrictor incident, Harry is locked in his cupboard until summer. Function and target A computer programming language is a language used to write computer programs, which involves a computer performing some kind of computation[5] or algorithm and possibly control external devices such as printers, disk drives, robots,[6] and so on. For example, PostScript programs are frequently created by another program to control a computer printer or display. When finally free, he spends most of the time outside his house to escape the torments of Dudley’s cohorts. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language.[7] Harry is excited by the prospect of starting a new school in the fall, far away from Dudley for the first time in his life. In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way.[8] Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines. One day, Uncle Vernon tells Harry to fetch the mail. Plain text is also sometimes used only to exclude "binary" files: those in which at least some parts of the file cannot be correctly interpreted via the character encoding in effect. For example, a file or string consisting of "hello" (in whatever encoding), followed by 4 bytes that express a binary integer that is not just a character, is a binary file, not plain text by even the loosest common usages.
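The distinction just described can be made mechanical. Here is a rough Python heuristic, a sketch rather than a definitive test: bytes that decode cleanly under the encoding in effect count as plain text, while undecodable bytes mark the data as binary.

def looks_like_plain_text(data, encoding="utf-8"):
    # If every byte is interpretable under the encoding in effect,
    # treat the data as plain text; otherwise treat it as binary.
    try:
        data.decode(encoding)
        return True
    except UnicodeDecodeError:
        return False

# "hello" alone is plain text; "hello" plus raw integer bytes is binary.
assert looks_like_plain_text(b"hello")
assert not looks_like_plain_text(b"hello" + (2**31).to_bytes(4, "little"))

Note that this is only a heuristic: a binary integer whose bytes all happen to be valid character codes would slip through, which is exactly the looseness the passage above describes.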
Put another way, translating a plain text file to a character encoding that uses entirely different numbers to represent characters does not change the meaning (so long as you know what encoding is in use), but for binary files such a conversion does change the meaning of at least some parts of the file. Harry notices a letter bearing a coat of arms that is addressed to him in “The Cupboard under the Stairs.” This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[143] Uncle Vernon grabs the envelope from him and shows it to his wife. John Mauchly's Short Code, proposed in 1949, was one of the first high-level languages ever developed for an electronic computer.[27] Unlike machine code, Short Code statements represented mathematical expressions in understandable form. However, the program had to be translated into machine code every time it ran, making the process much slower than running the equivalent machine code. Both are shocked. AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is the following (optimal for first player) recipe for play at tic-tac-toe:[74] They force Dudley and Harry to leave the kitchen in order to discuss what to do. If someone has a "threat" (that is, two in a row), take the remaining square. Otherwise, if a move "forks" to create two threats at once, play that move. Otherwise, take the center square if it is free. Otherwise, if your opponent has played in a corner, take the opposite corner. Otherwise, take an empty corner if one exists. Otherwise, take any empty square. (This recipe is sketched in code below.) The next day, Uncle Vernon visits Harry in the cupboard. At the University of Manchester, Alick Glennie developed Autocode in the early 1950s. As a programming language, it used a compiler to automatically convert the language into machine code. The first code and compiler was developed in 1952 for the Mark 1 computer at the University of Manchester and is considered to be the first compiled high-level programming language.[28][29] He refuses to discuss the letter, but he tells Harry to move into Dudley’s second room, previously used to store Dudley’s toys. Numeric entry The numeric entry, or 10-key, speed is a measure of one's ability to manipulate a numeric keypad. Generally, it is measured in Keystrokes per Hour or KPH. Text-entry research Error analysis With the introduction of computers and word-processors, there has been a change in how text-entry is performed. The next day, another letter comes for Harry, this time addressed to him in “The Smallest Bedroom.” In the past, using a typewriter, speed was measured with a stopwatch and errors were tallied by hand. The field of AI research was born at a workshop at Dartmouth College in 1956,[40] where the term "Artificial Intelligence" was coined by John McCarthy to distinguish the field from cybernetics and escape the influence of the cyberneticist Norbert Wiener.[41] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[42] Uncle Vernon becomes alarmed. They and their students produced programs that the press described as "astonishing":[43] computers were learning checkers strategies (c.
1954)[44] (and by 1959 were reportedly playing better than the average human),[45] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[46] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[47] and laboratories had been established around the world.[48] AI's founders were optimistic about the future: Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". Harry tries to get the letter, but Uncle Vernon keeps it from him. Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[12] With the current technology, document preparation is more about using word-processors as a composition aid, changing the meaning of error rate and how it is measured. Research performed by R. William Soukoreff and I. Scott MacKenzie has led to the application of a well-known algorithm. The following morning, Harry wakes up early to try to get the mail before anyone gets up, but he is thwarted by Uncle Vernon, who has slept near the mail slot waiting for the letters. Through the use of this algorithm and accompanying analysis technique, two statistics were defined, minimum string distance error rate (MSD error rate) and keystrokes per character (KSPC). The two advantages of this technique include: According to The Unicode Standard, "Plain text is a pure sequence of character codes; plain Unicode-encoded text is therefore a sequence of Unicode character codes." Though Uncle Vernon nails the mail slot shut, twelve letters come for Harry the next day, slipped under the door or through the cracks. Styled text, also known as rich text, is any text representation containing plain text completed by information such as a language identifier, font size, color, hypertext links.[2] Soon letters flood the house, entering in impossible ways. Thus, representations such as SGML, RTF, HTML, XML, wiki markup, and TeX, as well as nearly all programming language source code files, are considered plain text. The particular content is irrelevant to whether a file is plain text. Uncle Vernon continues to prevent Harry from reading any of them. For example, an SVG file can express drawings or even bitmapped graphics, but is still plain text. According to The Unicode Standard, plain text has two main properties in regard to rich text: In computing, source code is any collection of code, with or without comments, written using[1] a human-readable programming language, usually as plain text. Enraged, Uncle Vernon decides to take everyone away from the house, but at the hotel where they stay, a hundred letters are delivered for Harry. The source code of a program is specially designed to facilitate the work of computer programmers, who specify the actions to be performed by a computer mostly by writing source code. The source code is often transformed by an assembler or compiler into binary machine code that can be executed by the computer. The machine code might then be stored for execution at a later time. Uncle Vernon decides on even greater isolation. Alternatively, source code may be interpreted and thus immediately executed. Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first. On a dark, stormy night, he takes the family out to an island with only one shack on it.
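A Python sketch of the tic-tac-toe recipe quoted above (the board representation and helper names are my own; this is an illustrative reading of the six steps, not a verified engine):

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
CORNERS = [0, 2, 6, 8]
OPPOSITE = {0: 8, 2: 6, 6: 2, 8: 0}

def threats(board, player):
    # Squares that would complete two-in-a-row for `player`.
    found = set()
    for a, b, c in LINES:
        line = [board[a], board[b], board[c]]
        if line.count(player) == 2 and line.count(None) == 1:
            found.add((a, b, c)[line.index(None)])
    return found

def recipe_move(board, me, opp):
    # board: list of nine cells holding 'X', 'O', or None.
    # 1. If someone has a "threat" (two in a row), take the remaining square
    #    (winning takes precedence over blocking).
    for player in (me, opp):
        t = threats(board, player)
        if t:
            return t.pop()
    # 2. If a move "forks" to create two threats at once, play that move.
    for i in range(9):
        if board[i] is None:
            trial = board[:i] + [me] + board[i+1:]
            if len(threats(trial, me)) >= 2:
                return i
    # 3. Take the center square if it is free.
    if board[4] is None:
        return 4
    # 4. If your opponent has played in a corner, take the opposite corner.
    for c in CORNERS:
        if board[c] == opp and board[OPPOSITE[c]] is None:
            return OPPOSITE[c]
    # 5. Take an empty corner if one exists.
    for c in CORNERS:
        if board[c] is None:
            return c
    # 6. Take any empty square.
    return board.index(None)

print(recipe_move([None] * 9, 'X', 'O'))  # on an empty board: the center, 4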
Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, and occurs after a program sees a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[124] Inside, Vernon bolts the door. Both classifiers and regression learners can be viewed as "function approximators" trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, "spam" or "not spam". Most application software is distributed in a form that includes only executable files. If the source code were included it would be useful to a user, programmer or a system administrator, any of whom might wish to study or modify the program. At midnight, as it becomes Harry’s birthday, there is a loud thump at the door. "plain text is the underlying content stream to which formatting can be applied." "Plain text is public, standardized, and universally readable."[2] Participants are allowed to enter text naturally, since they may commit errors and correct them. The thump is heard again. The identification of errors and generation of error rate statistics is easy to automate. Deconstructing the text input process Through analysis of keystrokes, the keystrokes of the input stream were divided into four classes: Correct (C), Incorrect Fixed (IF), Fixes (F), and Incorrect Not Fixed (INF). A giant smashes down the door. Uncle Vernon threatens the giant with a gun, but the giant takes the gun and ties it into a knot. These keystroke classifications are broken down into the following Alternatives See also: Latin-script non-QWERTY keyboards Several alternatives to QWERTY have been developed over the years, claimed by their designers and users to be more efficient, intuitive, and ergonomic. Nevertheless, none have seen widespread adoption, partly due to the sheer dominance of available keyboards and training.[53] Although some studies have suggested that some of these may allow for faster typing speeds,[54] many other studies have failed to do so, and many of the studies claiming improved typing speeds were severely methodologically flawed or deliberately biased, such as the studies administered by August Dvorak himself before and after World War II.[citation needed] Economists Stan Liebowitz and Stephen Margolis have noted that rigorous studies are inconclusive as to whether they actually offer any real benefits,[55] and some studies on keyboard layout have suggested that, for a skilled typist, layout is largely irrelevant – even randomized and alphabetical keyboards allow for similar typing speeds to QWERTY and Dvorak keyboards, and that switching costs always outweigh the benefits of further training on whichever keyboard you already use. A programming language's surface form is known as its syntax. The giant presents Harry with a chocolate birthday cake and introduces himself as Hagrid, the “Keeper of Keys and Grounds at Hogwarts.” Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages.
On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program. Hagrid is disturbed to find out that the Dursleys have never told Harry what Hogwarts is. The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax. Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or "rules of thumb", that have worked well in the past), or can themselves write other algorithms. Vernon tries to stop Hagrid from telling Harry about Hogwarts, but to no avail. Some of the "learners" described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including which combination of mathematical functions would best describe the world.[citation needed] These learners could therefore derive all possible knowledge, by considering every possible hypothesis and matching them against the data. In practice, it is seldom possible to consider every possibility, because of the phenomenon of "combinatorial explosion", where the time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering a broad range of possibilities unlikely to be beneficial.[75][76] Hagrid tells Harry that Harry is a wizard and presents him with a letter of acceptance to the Hogwarts School of Witchcraft and Wizardry. For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered.[77] (A compact sketch of this idea appears below.) Vernon protests that he will not allow Harry to attend Hogwarts. In computing, plain text is a loose term for data (e.g. file contents) that represent only characters of readable material but not its graphical representation nor other objects (floating-point numbers, images, etc.). It may also include a limited number of characters that control simple arrangement of text, such as spaces, line breaks, or tabulation characters (although tab characters can "mean" many different things, so are hardly "plain"). Hagrid explains to Harry that the Dursleys have been lying all along about how the boy’s parents died. The number of atomic facts that the average person knows is very large. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering—they must be built, by hand, one complicated concept at a time.[111] They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill[49] and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. Harry learns that they did not die in a car crash, as he had always thought, but were killed by the evil wizard Voldemort.
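A compact Python sketch of that A* idea on a toy grid (the grid, heuristic, and names are my own illustrative choices): the Manhattan-distance heuristic steers the search toward the goal, so clearly unpromising routes are never expanded.

import heapq

def a_star(passable, start, goal):
    # passable: set of (row, col) cells that can be entered.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]               # (f, g, cell, path)
    visited = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt in passable and nxt not in visited:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = {(r, c) for r in range(3) for c in range(3)} - {(1, 1)}
print(a_star(grid, (0, 0), (2, 2)))  # routes around the blocked center cell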
The next few years would later be called an "AI winter",[14] a period when obtaining funding for AI projects was difficult. In the early 1980s, AI research was revived by the commercial success of expert systems,[50] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[13] Harry does not believe he could be a wizard, but then he realizes that the incident with the boa constrictor was an act of wizardry. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[15] Plain text is different from formatted text, where style information is included; from structured text, where structural parts of the document such as paragraphs, sections, and the like are identified; and from binary files in which some portions must be interpreted as binary objects (encoded integers, real numbers, images, etc.). The most widely used such alternative is the Dvorak keyboard layout; another alternative is Colemak, which is based partly on QWERTY and is claimed to be easier for an existing QWERTY typist to learn while offering several supposed optimisations.[56] With Uncle Vernon protesting, Hagrid takes Harry from the shack. Most modern computer operating systems support these and other alternative mappings with appropriate special mode settings, with some modern operating systems allowing the user to map their keyboard in any way they like, but few keyboards are made with keys labeled according to any other standard. The two classes Correct and Incorrect Not Fixed comprise all of the characters in transcribed text. The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): "If an otherwise healthy adult has a fever, then they may have influenza". A second, more general, approach is Bayesian inference: "If the current patient has a fever, adjust the probability they have influenza in such-and-such way". The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: "After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza". A fourth approach is harder to intuitively understand, but is inspired by how the brain's machinery works: the artificial neural network approach uses artificial "neurons" that can learn by comparing the network's output to the desired output and altering the strengths of the connections between its internal neurons to "reinforce" connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms; the best approach is often different depending on the problem.[78][79] The second autocode was developed for the Mark 1 by R. A. Brooker in 1954 and was called the "Mark 1 Autocode". Brooker also developed an autocode for the Ferranti Mercury in the 1950s in conjunction with the University of Manchester. The version for the EDSAC 2 was devised by D. F.
Hartley of the University of Cambridge Mathematical Laboratory in 1961. Known as EDSAC 2 Autocode, it was a straight development from Mercury Autocode adapted for local circumstances and was noted for its object code optimisation and source-language diagnostics which were advanced for the time. A contemporary but separate thread of development, Atlas Autocode was developed for the University of Manchester Atlas 1 machine. In 1954, FORTRAN was invented at IBM by John Backus. It was the first widely used high-level general-purpose programming language to have a functional implementation, as opposed to just a design on paper.[30][31] It is still a popular language for high-performance computing[32] and is used for programs that benchmark and rank the world's fastest supercomputers.[33] Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as "since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well". Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[125] In reinforcement learning[126] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. They can be nuanced, such as "X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist". Learners also work on the basis of "Occam's razor": The simplest theory that explains the data is the likeliest. Therefore, according to Occam's razor principle, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Fixes (F) keystrokes are easy to identify, and include keystrokes such as backspace, delete, cursor movements, and modifier keys. Incorrect Fixed (IF) keystrokes are found in the input stream, but not the transcribed text, and are not editing keys. Using these classes, the Minimum String Distance Error Rate and the Key Strokes per Character statistics can both be calculated. Files that contain markup or other meta-data are generally considered plain text, so long as the markup is also in directly human-readable form (as in HTML, XML, and so on). As Coombs, Renear, and DeRose argue,[1] punctuation is itself markup, and no one considers punctuation to disqualify a file from being plain text. Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed"[112] or an art critic can take one look at a statue and realize that it is a fake.[113] These are non-conscious and sub-symbolic intuitions or tendencies in the human brain.[114] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this knowledge.[114] Minimum string distance error rate The minimum string distance (MSD) is the number of "primitives" (insertions, deletions, or substitutions) needed to transform one string into another.
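The MSD just defined is the classic edit distance, computable by dynamic programming; a minimal Python sketch (the function name is my own):

def msd(a, b):
    # d[i][j] = minimum primitives to turn a[:i] into b[:j].
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                                   # i deletions
    for j in range(n + 1):
        d[0][j] = j                                   # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1   # substitution needed?
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + cost)     # match/substitution
    return d[m][n]

print(msd("the quick brown", "the quixk brown"))  # 1: one substitution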
The following equation gives the MSD error rate: MSD Error Rate = (INF / (C + INF)) × 100% Key strokes per character (KSPC) With the minimum string distance error, errors that are corrected do not appear in the transcribed text. The following example will show you why this is an important class of errors to consider: Presented Text: the quick brown Input Stream: the quix<-ck brown Transcribed Text: the quick brown Abstractions Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle.[9] This principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.[10] Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is.[80] Besides classic overfitting, learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[81] A real-world example is that, unlike humans, current image classifiers don't determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an "adversarial" image that the system misclassifies.[c][82][83] In the above example, the incorrect character ('x') was deleted with a backspace ('<-'). Since these errors do not appear in the transcribed text, the MSD error rate is 0%. This is why there is the key strokes per character (KSPC) statistic. KSPC = (C + INF + IF + F) / (C + INF) The three shortcomings of the KSPC statistic are listed below: High KSPC values can be related to either many errors which were corrected, or few errors which were not corrected; however, there is no way to distinguish the two. KSPC depends on the text input method, and cannot be used to meaningfully compare two different input methods, such as a QWERTY keyboard and multi-tap input. There is no obvious way to combine KSPC and MSD into an overall error rate, even though they have an inverse relationship. Further metrics Using the classes described above, further metrics were defined by R. William Soukoreff and I. Scott MacKenzie: Error correction efficiency refers to the ease with which the participant performed error correction. Correction Efficiency = IF / F Participant conscientiousness is the ratio of corrected errors to the total number of errors, which helps distinguish perfectionists from apathetic participants. Machine perception[132] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[133] facial recognition, and object recognition.[134] Computer vision is the ability to analyze visual input.
Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its "object model" to assess that fifty-meter pedestrians do not exist.[135] Participant Conscientiousness = IF / (IF + INF) If C represents the amount of useful information transferred, INF, IF, and F represent the proportion of bandwidth wasted. Utilized Bandwidth = C / (C + INF + IF + F) Wasted Bandwidth = (INF + IF + F) / (C + INF + IF + F) Total error rate The classes described also provide an intuitive definition of total error rate: Total Error Rate = ((INF + IF) / (C + INF + IF)) × 100% Not Corrected Error Rate = (INF / (C + INF + IF)) × 100% Corrected Error Rate = (IF / (C + INF + IF)) × 100% Since these three error rates are ratios, they are comparable between different devices, something that cannot be done with the KSPC statistic, which is device dependent.[22] AI is heavily used in robotics.[136] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[137] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient's breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into "primitives" such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[138][139][140] Tools for text entry research Currently, two tools are publicly available for text entry researchers to record text entry performance metrics. The first is TEMA,[23] which runs only on the Android operating system. The second is WebTEM, which runs on any device with a modern Web browser, and works with almost all text entry techniques.[24] Keystroke dynamics Keystroke dynamics, or typing dynamics, is the obtaining of detailed timing information that describes exactly when each key was pressed and when it was released as a person is typing at a computer keyboard, for the identification of humans by their characteristics or traits,[25] similar to speaker recognition.[26] Data needed to analyze keystroke dynamics is obtained by keystroke logging. The use of plain text rather than binary files enables files to survive much better "in the wild", in part by making them largely immune to computer architecture incompatibilities. For example, all the problems of Endianness can be avoided (with encodings such as UCS-2 rather than UTF-8, endianness matters, but uniformly for every character, rather than for potentially-unknown subsets of it). The behavioral biometric of Keystroke Dynamics uses the manner and rhythm in which an individual types characters on a keyboard or keypad.[27] Usage The purpose of using plain text today is primarily independence from programs that require their very own special encoding or formatting or file format. Plain text files can be opened, read, and edited with ubiquitous text editors and utilities. A command-line interface allows people to give commands in plain text and get a response, also typically in plain text.
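The Soukoreff–MacKenzie statistics above consolidate naturally into one routine. A Python sketch (the function and its field names are mine, illustrative only), applied to the "the quix<-ck brown" example, where fifteen keystrokes were correct, one was incorrect but fixed, and one backspace did the fixing:

def text_entry_metrics(c, inf, if_, f):
    # c = Correct, inf = Incorrect Not Fixed, if_ = Incorrect Fixed, f = Fixes.
    total = c + inf + if_ + f
    return {
        "msd_error_rate": inf / (c + inf) * 100,
        "kspc": total / (c + inf),
        "correction_efficiency": if_ / f if f else float("nan"),
        "conscientiousness": if_ / (if_ + inf) if if_ + inf else float("nan"),
        "utilized_bandwidth": c / total,
        "wasted_bandwidth": (inf + if_ + f) / total,
        "total_error_rate": (inf + if_) / (c + inf + if_) * 100,
        "corrected_error_rate": if_ / (c + inf + if_) * 100,
        "not_corrected_error_rate": inf / (c + inf + if_) * 100,
    }

print(text_entry_metrics(c=15, inf=0, if_=1, f=1))
# MSD error rate is 0% (the error was corrected), while KSPC is 17/15,
# which is exactly the gap between the two statistics described above.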
Usage The purpose of using plain text today is primarily independence from programs that require their very own special encoding or formatting or file format. Plain text files can be opened, read, and edited with ubiquitous text editors and utilities. A command-line interface allows people to give commands in plain text and get a response, also typically in plain text. Many other computer programs are also capable of processing or creating plain text, such as countless programs in DOS, Windows, classic Mac OS, and Unix and its kin; as well as web browsers (a few browsers such as Lynx and the Line Mode Browser produce only plain text for display) and other e-text readers. Plain text files are almost universal in programming; a source code file containing instructions in a programming language is almost always a plain text file. Plain text is also commonly used for configuration files, which are read for saved settings at the startup of a program, as sketched below. Character encodings Before the early 1960s, computers were mainly used for number-crunching rather than for text, and memory was extremely expensive. Computers often allocated only 6 bits for each character, permitting only 64 characters—assigning codes for A-Z, a-z, and 0-9 would leave only 2 codes: nowhere near enough. Most computers opted not to support lower-case letters. Thus, early text projects such as Roberto Busa's Index Thomisticus, the Brown Corpus, and others had to resort to conventions such as keying an asterisk preceding letters actually intended to be upper-case. The Linux Information Project defines source code as:[2] Source code (also referred to as source or code) is the version of software as it is originally written (i.e., typed into a computer) by a human in plain text (i.e., human readable alphanumeric characters). The notion of source code may also be taken more broadly, to include machine code and notations in graphical languages, neither of which are textual in nature. An example from an article presented at the annual IEEE conference on Source Code Analysis and Manipulation:[3] For the purpose of clarity "source code" is taken to mean any fully executable description of a software system. It is therefore so construed as to include machine code, very high level languages and executable graphical representations of systems.[4] Expressive power The theory of computation classifies languages by the computations they are capable of expressing. All Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete, yet are often called programming languages.[11][12] Often there are several steps of program translation or minification between the original source code typed by a human and an executable program. The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) transistor technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.[51]
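As a sketch of the plain-text configuration files mentioned above: reading key=value settings at startup needs nothing more than a text file and a few lines of code. The file name and keys here are hypothetical.

```python
# Sketch: reading a hypothetical plain-text key=value configuration file.
# Because it is plain text, any ubiquitous text editor can change it.
settings = {}
with open("app.conf", encoding="utf-8") as conf:
    for line in conf:
        line = line.strip()
        if line and not line.startswith("#"):   # skip blanks and comments
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()

print(settings.get("theme", "default"))
```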
In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[34] The success was due to increasing computational power (see Moore's law and transistor count), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[52] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[53] While some, like the FSF, argue that an intermediate file "is not real source code and does not count as source code",[5] others find it convenient to refer to each intermediate file as the source code for the next steps. Fred Brooks of IBM argued strongly for going to 8-bit bytes, because someday people might want to process text; and won. Although IBM used EBCDIC, most text from then on came to be encoded in ASCII, using values from 0 to 31 for (non-printing) control characters, and values from 32 to 127 for graphic characters such as letters, digits, and punctuation. Most machines stored characters in 8 bits rather than 7, ignoring the remaining bit or using it as a checksum. The near-ubiquity of ASCII was a great help, but failed to address international and linguistic concerns. The dollar-sign ("$") was not so useful in England, and the accented characters used in Spanish, French, German, and many other languages were entirely unavailable in ASCII (not to mention characters used in Greek, Russian, and most Eastern languages). Many individuals, companies, and countries defined extra characters as needed—often reassigning control characters, or using values in the range from 128 to 255. Using values above 128 conflicts with using the 8th bit as a checksum, but the checksum usage gradually died out. These additional characters were encoded differently in different countries, making texts impossible to decode without figuring out the originator's rules. For instance, a browser might display ¬A rather than à if it tried to interpret one character set as another. The International Organisation for Standardisation (ISO) eventually developed several code pages under ISO 8859, to accommodate various languages. The first of these (ISO 8859-1) is also known as "Latin-1", and covers the needs of most (not all) European languages that use Latin-based characters (there was not quite enough room to cover them all). ISO 2022 then provided conventions for "switching" between different character sets in mid-file. Many other organisations developed variations on these, and for many years Windows and Macintosh computers used incompatible variations. Compared with humans, existing AI lacks several features of human "commonsense reasoning"; most notably, humans have powerful mechanisms for reasoning about "naïve physics" such as space, time, and physical interactions. This enables even young children to easily make inferences like "If I roll this pen off a table, it will fall on the floor". Humans also have a powerful mechanism of "folk psychology" that helps them to interpret natural-language sentences such as "The city councilmen refused the demonstrators a permit because they advocated violence" (a generic AI has difficulty discerning whether the ones alleged to be advocating violence are the councilmen or the demonstrators[86][87][88]). This lack of "common knowledge" means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[89][90][91]
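The cross-charset confusion described above is easy to reproduce. The sketch below decodes UTF-8 bytes as Latin-1; the exact characters differ from the ¬A example, but the failure mode is the same.

```python
# Sketch: decoding bytes with the wrong character set ("mojibake").
# 'à' encoded as UTF-8 is two bytes; read back as ISO 8859-1 (Latin-1),
# each byte becomes a separate, wrong character.
data = "à".encode("utf-8")       # b'\xc3\xa0'
print(data.decode("latin-1"))    # 'Ã' plus a non-breaking space, not 'à'
print(data.decode("utf-8"))      # 'à' -- correct, if we know the rules
```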
This lack of "common knowledge" means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[89][90][91] The text-encoding situation became more and more complex, leading to efforts by ISO and by the Unicode Consortium to develop a single, unified character encoding that could cover all known (or at least all currently known) languages. After some conflict,[citation needed] these efforts were unified. Unicode currently allows for 1,114,112 code values, and assigns codes covering nearly all modern text writing systems, as well as many historical ones and for many non-linguistic characters such as printer's dingbats, mathematical symbols, etc. Many of the things people know take the form of "working assumptions". For example, if a bird comes up in conversation, people typically picture a fist-sized animal that sings and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969[109] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[110] Text is considered plain-text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data. Perhaps the most common way of explicitly stating the specific encoding of plain text is with a MIME type. For email and http, the default MIME type is "text/plain" -- plain text without markup. Another MIME type often used in both email and http is "text/html; charset=UTF-8" -- plain text represented using UTF-8 character encoding with HTML markup. Another common MIME type is "application/json" -- plain text represented using UTF-8 character encoding with JSON markup. When a document is received without any explicit indication of the character encoding, some applications use charset detection to attempt to guess what encoding was used. Plain text is used for much e-mail. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages.[13][14][15] Programming languages may, however, share the syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing complete language entirely using XML syntax.[16][17][18] Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset.[19][20] Another early programming language was devised by Grace Hopper in the US, called FLOW-MATIC. It was developed for the UNIVAC I at Remington Rand during the period from 1955 until 1959. 
Another early programming language was devised by Grace Hopper in the US, called FLOW-MATIC. It was developed for the UNIVAC I at Remington Rand during the period from 1955 until 1959. Hopper found that business data processing customers were uncomfortable with mathematical notation, and in early 1955, she and her team wrote a specification for an English programming language and implemented a prototype.[34] The FLOW-MATIC compiler became publicly available in early 1958 and was substantially complete in 1959.[35] FLOW-MATIC was a major influence in the design of COBOL, since only it and its direct descendant AIMACO were in actual use at the time.[36] The term computer language is sometimes used interchangeably with programming language.[21] However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages.[22] Similarly, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.[23] A comment, a ".txt" file, or a TXT Record generally contains only plain text (without formatting) intended for humans to read. History The earliest programs for stored-program computers were entered in binary through the front panel switches of the computer. This first-generation programming language had no distinction between source code and machine code. When IBM first offered software to work with its machine, the source code was provided at no additional charge. At that time, the cost of developing and supporting software was included in the price of the hardware. For decades, IBM distributed source code with its software product licenses, until 1983.[6] Moravec's paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[141][142] Purposes Source code is primarily used as input to the process that produces an executable program (i.e., it is compiled or interpreted). It is also used as a method of communicating algorithms between people (e.g., code snippets in books).[8] Computer programmers often find it helpful to review existing source code to learn about programming techniques.[8] The sharing of source code between developers is frequently cited as a contributing factor to the maturation of their programming skills.[8] Some people consider source code an expressive artistic medium.[9] Porting software to other computer platforms is usually prohibitively difficult without source code. Without the source code for a particular piece of software, portability is generally computationally expensive.[citation needed] Possible porting options include binary translation and emulation of the original platform. Decompilation of an executable program can be used to generate source code, either in assembly code or in a high-level language. Programmers frequently adapt source code from one piece of software to use in other projects, a concept known as software reusability. Most early computer magazines published source code as type-in programs. Occasionally the entire source code to a large program is published as a hardback book, such as Computers and Typesetting, vol.
B: TeX, The Program by Donald Knuth, PGP Source Code and Internals by Philip Zimmermann, PC SpeedScript by Randy Thompson, and µC/OS, The Real-Time Kernel by Jean Labrosse. The best format for storing knowledge persistently is plain text, rather than some binary format.[3] Another usage regards programming languages as theoretical constructs for programming abstract machines, and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[24] John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact that they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[25] In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[54] Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[55] The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research[56] as do intelligent personal assistants in smartphones.[57] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[10][58] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[59] who at the time had continuously held the world No. 1 ranking for two years.[60][61] This marked a significant milestone in the development of artificial intelligence, as Go is a relatively complex game, more so than chess. Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[93] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[94] Typing Test A standard typing test, or typing speed test, measures the number of words you can type in a minute. It is an efficiency test in which your typing skill is expressed in words per minute. Every typing test has a different difficulty level, which depends on the length and keystroke patterns of its words.
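A words-per-minute figure is straightforward to compute; the sketch below assumes the common convention that one "word" is five typed characters, spaces included.

```python
# Sketch: a typing-test score, assuming the common convention that one
# "word" equals five typed characters (spaces included).
def words_per_minute(chars_typed: int, seconds: float) -> float:
    return (chars_typed / 5) / (seconds / 60)

print(words_per_minute(chars_typed=300, seconds=60))  # 60.0 WPM
```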
According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents data indicating that AI has improved since 2012, supported by lower error rates in image processing tasks.[62] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[16] Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.[62] In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes".[63][64] These algorithms proved to be insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger.[75] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[95] Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an "AI superpower".[65][66] However, it has been acknowledged that reports regarding artificial intelligence have tended to be exaggerated.[67][68][69]