Who Invented C Programming?

Of the 24K bytes of memory on the machine, the earliest PDP-11 Unix system used 12K bytes for the operating system, a tiny space for user programs, and the remainder as a RAM disk.

This version was only for testing, not for real work; the machine marked time by enumerating closed knight's tours on chess boards of various sizes. Once its disk appeared, we quickly migrated to it after transliterating assembly-language commands to the PDP-11 dialect, and porting those already in B.

By 1971, our miniature computer center was beginning to have users. We all wanted to create interesting software more easily. Using assembler was dreary enough that B, despite its performance problems, had been supplemented by a small library of useful service routines and was being used for more and more new programs. Among the more notable results of this period was Steve Johnson's first version of the yacc parser-generator [Johnson 79a]. The advent of the PDP-11 exposed several inadequacies of B's semantic model.

First, its character-handling mechanisms, inherited with few changes from BCPL, were clumsy: using library procedures to spread packed strings into individual cells and then repack, or to access and replace individual characters, began to feel awkward, even silly, on a byte-oriented machine. Second, although the original PDP-11 did not provide for floating-point arithmetic, the manufacturer promised that it would soon be available. Floating-point operations had been added to BCPL in our Multics and GCOS compilers by defining special operators, but the mechanism was possible only because on the relevant machines, a single word was large enough to contain a floating-point number; this was not true on the 16-bit PDP-11. Finally, the B and BCPL model implied overhead in dealing with pointers: the language rules, by defining a pointer as an index in an array of words, forced pointers to be represented as word indices.

Each pointer reference generated a run-time scale conversion from the pointer to the byte address expected by the hardware. For all these reasons, it seemed that a typing scheme was necessary to cope with characters and byte addressing, and to prepare for the coming floating-point hardware. Other issues, particularly type safety and interface checking, did not seem as important then as they became later. Aside from the problems with the language itself, the B compiler's threaded-code technique yielded programs so much slower than their assembly-language counterparts that we discounted the possibility of recoding the operating system or its central utilities in B.

In 1971 I began to extend the B language by adding a character type and also rewrote its compiler to generate PDP-11 machine instructions instead of threaded code.

Thus the transition from B to C was contemporaneous with the creation of a compiler capable of producing programs fast and small enough to compete with assembly language. I called the slightly extended language NB, for "new B." NB existed so briefly that no full description of it was written. It supplied the types int and char, arrays of them, and pointers to them, declared in a style typified by

    int i, j;
    char c, d;
    int iarray[10];
    int ipointer[];
    char carray[10];
    char cpointer[];

The semantics of arrays remained exactly as in B and BCPL: the declarations of iarray and carray create cells dynamically initialized with a value pointing to the first of a sequence of 10 integers and characters respectively.

The declarations for ipointer and cpointer omit the size, to assert that no storage should be allocated automatically. Within procedures, the language's interpretation of the pointers was identical to that of the array variables: a pointer declaration created a cell differing from an array declaration only in that the programmer was expected to assign a referent, instead of letting the compiler allocate the space and initialize the cell.

Values stored in the cells bound to array and pointer names were the machine addresses, measured in bytes, of the corresponding storage area.

Therefore, indirection through a pointer implied no run-time overhead to scale the pointer from word to byte offset. These semantics represented an easy transition from B, and I experimented with them for some months. Problems became evident when I tried to extend the type notation, especially to add structured record types. Structures, it seemed, should map in an intuitive way onto memory in the machine, but in a structure containing an array, there was no good place to stash the pointer containing the base of the array, nor any convenient way to arrange that it be initialized.
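For instance, consider a structure intended to describe an early Unix directory entry, roughly as follows (a sketch; the tag and the variable are added here only so the fragment stands alone):

    struct dirent_sketch {
        int  inumber;     /* inode number of the file                  */
        char name[14];    /* file name, held directly in the structure */
    } entry;

Under the B and NB array semantics just described, the member name would need an associated cell holding a pointer to its 14 bytes of storage, and there is no natural place inside the structure to put that cell, nor an obvious moment to fill it in.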

Where could the compiler hide the pointer to name that the semantics demanded? Even if structures were thought of more abstractly, and the space for pointers could be hidden somehow, how could I handle the technical problem of properly initializing these pointers when allocating a complicated object, perhaps one that specified structures containing arrays containing structures to arbitrary depth?

The solution constituted the crucial jump in the evolutionary chain between typeless BCPL and typed C. It eliminated the materialization of the pointer in storage, and instead caused the creation of the pointer when the array name is mentioned in an expression.

The rule, which survives in today's C, is that values of array type are converted, when they appear in expressions, into pointers to the first of the objects making up the array. This invention enabled most existing B code to continue to work, despite the underlying shift in the language's semantics. More important, the new language retained a coherent and workable if unusual explanation of the semantics of arrays, while opening the way to a more comprehensive type structure.
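In today's terms, the rule can be seen in a short sketch (illustrative, not from the original text):

    #include <stdio.h>

    int main(void)
    {
        int a[10];
        int *p = a;      /* "a" in an expression is converted to a pointer to a[0] */

        a[3] = 7;        /* a[3] is defined in terms of *(a + 3), so indexing      */
                         /* works through that converted pointer                   */
        printf("%d %d\n", *(a + 3), p[3]);   /* prints "7 7" */
        return 0;
    }

No pointer is stored anywhere for a; one is manufactured each time the array name appears in an expression.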

The second innovation that most clearly distinguishes C from its predecessors is this fuller type structure and especially its expression in the syntax of declarations. NB offered the basic types int and char, together with arrays of them, and pointers to them, but no further ways of composition. Generalization was required: given an object of any type, it should be possible to describe a new object that gathers several into an array, yields it from a function, or is a pointer to it.

For each object of such a composed type, there was already a way to mention the underlying object: index the array, call the function, use the indirection operator on the pointer. Analogical reasoning led to a declaration syntax for names mirroring that of the expression syntax in which the names typically appear, as the examples below illustrate. In all these cases the declaration of a variable resembles its usage in an expression whose type is the one named at the head of the declaration.
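A sketch of the style of declaration in question, in notation that is still valid C:

    int i, *pi, **ppi;      /* i, *pi, and **ppi all evaluate to an int, so i is  */
                            /* an int, pi a pointer to int, and ppi a pointer to  */
                            /* a pointer to int                                   */

    int f(), *g(), (*h)();  /* f(), *g(), and (*h)() are ints: f is a function    */
                            /* returning int, g a function returning a pointer    */
                            /* to int, and h a pointer to a function returning    */
                            /* int                                                */

    int a[10], *ap[10];     /* a[i] is an int; ap[i] is a pointer to int          */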

The scheme of type composition adopted by C owes considerable debt to Algol 68, although it did not, perhaps, emerge in a form that Algol's adherents would approve of.

The central notion I captured from Algol was a type structure based on atomic types (including structures), composed into arrays, pointers (references), and functions (procedures).

Algol 68's concept of unions and casts also had an influence that appeared later. After creating the type system, the associated syntax, and the compiler for the new language, I felt that it deserved a new name; NB seemed insufficiently distinctive. I decided to follow the single-letter style and called it C, leaving open the question whether the name represented a progression through the alphabet or through the letters in BCPL.

The && and || operators were a comparatively late addition; their tardy introduction explains an infelicity of C's precedence rules (the surprisingly low precedence of & and | relative to the comparison operators). Another important change of this period was the introduction of the preprocessor. Its original version was exceedingly simple, and provided only included files and simple string replacements: #include and #define of parameterless macros. Soon thereafter, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation. The preprocessor was originally considered an optional adjunct to the language itself. Indeed, for some years, it was not even invoked unless the source program contained a special signal at its beginning.
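A minimal sketch of those facilities in later C (the macro names and the DEBUG flag are invented for illustration):

    #include <stdio.h>                          /* file inclusion                     */

    #define BUFSIZE 512                         /* parameterless macro: simple string */
                                                /* replacement, all the original      */
                                                /* preprocessor offered               */

    #define max(a, b) ((a) > (b) ? (a) : (b))   /* macro with arguments, a later      */
                                                /* extension                          */

    #ifdef DEBUG                                /* conditional compilation, also a    */
    int trace = 1;                              /* later extension                    */
    #endif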

This view of the preprocessor as an optional adjunct persisted, and explains both the incomplete integration of its syntax with the rest of the language and the imprecision of its description in early reference manuals. By early 1973, the essentials of modern C were complete. The language and compiler were strong enough to permit us to rewrite the Unix kernel for the PDP-11 in C during the summer of that year. In 1978 Brian Kernighan and I published The C Programming Language [Kernighan 78]. Although it did not describe some additions that soon became common, this book served as the language reference until a formal standard was adopted more than ten years later.

Although we worked closely together on this book, there was a clear division of labor: Kernighan wrote almost all the expository material, while I was responsible for the appendix containing the reference manual and the chapter on interfacing with the Unix system. During 1973-1980, the language grew a bit: the type structure gained unsigned, long, union, and enumeration types, and structures became nearly first-class objects (lacking only a notation for literals).
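A sketch of those additions in later C notation (the particular declarations are invented for illustration):

    unsigned int mask;                  /* unsigned types                  */
    long offset;                        /* long integers                   */

    enum color { RED, GREEN, BLUE };    /* enumeration types               */

    union value {                       /* unions                          */
        long  number;
        char *text;
    };

    struct point { int x, y; };

    struct point nudge(struct point p)  /* structures became nearly first-class: */
    {                                   /* they can be assigned and passed to    */
        struct point q;                 /* and returned from functions, though   */
        q = p;                          /* there is still no notation for        */
        q.x = q.x + 1;                  /* structure literals in expressions     */
        return q;
    }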

Equally important developments appeared in its environment and the accompanying technology. Writing the Unix kernel in C had given us enough confidence in the language's usefulness and efficiency that we began to recode the system's utilities and tools as well, and then to move the most interesting among them to the other platforms.

As described in [Johnson 78a], we discovered that the hardest problems in propagating Unix tools lay not in the interaction of the C language with new hardware, but in adapting to the existing software of other operating systems. The language changes during this period, especially around 1977, were largely focused on considerations of portability and type safety, in an effort to cope with the problems we foresaw and observed in moving a considerable body of code to the new Interdata 8/32 platform.

C at that time still manifested strong signs of its typeless origins. Pointers, for example, were barely distinguished from integral memory indices in early language manuals or extant code; the similarity of the arithmetic properties of character pointers and unsigned integers made it hard to resist the temptation to identify them. The unsigned types were added to make unsigned arithmetic available without confusing it with pointer manipulation. To encourage people to pay more attention to the official language rules, to detect legal but suspicious constructions, and to help find interface mismatches undetectable with simple mechanisms for separate compilation, Steve Johnson adapted his pcc compiler to produce lint [Johnson 79b], which scanned a set of files and remarked on dubious constructions.
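A sketch (with invented names) of the kind of legal but suspicious code lint was designed to question:

    /* An old-style definition: the parameter types are not part of the
       function's external signature, so a caller in another file that
       passes the wrong types cannot be caught by separate compilation
       alone -- precisely the interface mismatch lint checks across files. */
    int copy(dst, src)
    char *dst, *src;
    {
        int n = 0;
        while (*dst++ = *src++)   /* assignment used as a condition: legal, */
            n++;                  /* but lint asks whether it is intended   */
        return n;
    }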

Although by the middle 1970s Unix was in use by a variety of projects within the Bell System as well as a small group of research-oriented industrial, academic, and government organizations outside our company, its real growth began only after portability had been achieved.

During the 1980s the use of the C language spread widely, and compilers became available on nearly every machine architecture and operating system; in particular it became popular as a programming tool for personal computers, both for manufacturers of commercial software for these machines, and for end users interested in programming.

At the start of the decade, nearly every compiler was based on Johnson's pcc; by the middle of the decade there were many independently produced compiler products. By the early 1980s it was clear that C needed formal standardization.

By then the first edition of The C Programming Language no longer described the language as it was actually used. While it foreshadowed the newer approach to structures, only after it was published did the language support assigning them, passing them to and from functions, and associating the names of members firmly with the structure or union containing them.

Finally, the incipient use of C in projects subject to commercial and government contract meant that the imprimatur of an official standard was important. Thus ANSI established the X3J11 committee in 1983, with the goal of producing a C standard. From the beginning, the X3J11 committee took a cautious, conservative view of language extensions. Its most important change to the language itself was to incorporate the types of formal arguments into the type signature of a function, using syntax borrowed from C++. In the old style, external functions were declared like this:

    double sin();

which says only that sin is a function returning a double (that is, double-precision floating-point) value.

In the new style, this is better rendered

    double sin(double);

to make the argument type explicit and thus encourage better type checking and appropriate conversion.
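A short sketch of why the prototype matters at the point of call (illustrative):

    #include <stdio.h>

    double sin(double);     /* prototype: the argument type is checked and converted */

    int main(void)
    {
        /* With only the old-style declaration double sin(); the call below
           would pass the int 1 unconverted, producing nonsense at run time.
           With the prototype, 1 is converted to 1.0 before the call. */
        printf("%f\n", sin(1));
        return 0;
    }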

C is the basis of many system kernels. Other programming languages, like Python and Perl, use compilers or interpreters that are written in C. C has changed over the years and is still a common language for lower-level programs, like kernels.
