Princeton University |
Computer Science 441 |
Absolutely!!!
Otherwise, why would the Java folks have rejected C++ for their programming language?
Explore in this course all aspects of programming languages, including their features, type systems, programming styles supported, and implementations.
Programming languages are the main interface between computers and programmers, allowing us to express or understand algorithms to be executed by the computer.
By providing abstractions (or mechanisms to create abstractions) they influence the way we think about problems.
Data abstractions:
Control abstractions:
For all constructs need to have clearly specified syntax and semantics:
Syntax always given formally (as well as informally).
Semantics usually given informally (in English), but increasingly given formally as well.
Both necessary in order to ensure programs give predictable results.
Phases in the development process are:
Need to evaluate languages with respect to the overall picture. Not good if a language just supports one aspect. Important to evaluate a language based on what its goals are.
Languages which are good for quick hacking together of programs may not be suitable for large-scale software development.
Better choices for large-scale software include:
Most languages today designed to support specific software development methodology or philosophy. E.g., top-down design, object-based design, encapsulation, information-hiding.
Languages influence the way people think about programming process.
Important to be aware of different programming language paradigms, which allow one to think about problems in different ways.
Partially driven by new architectures (or at least not constrained by old). Imperative is closest to machine architecture.
Other paradigms:
Machine language -> Assembly language -> High-level languages
Programmers: Single highly trained programmer -> large teams which must cooperate
abstraction, encapsulation, information hiding, polymorphism, higher-order operators.
Start out by learning ML so can explore some new ideas & rapidly program interesting applications.
Write our own interpreters for simple languages so can see impact of various design decisions.
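For instance, a minimal sketch (the datatype and function names here are made up) of the kind of interpreter we will write, an evaluator for arithmetic expressions in ML:

(* A tiny interpreter for arithmetic expressions; illustrative only. *)
datatype exp = Num of int
             | Plus of exp * exp
             | Times of exp * exp;

fun eval (Num n)          = n
  | eval (Plus (e1, e2))  = eval e1 + eval e2
  | eval (Times (e1, e2)) = eval e1 * eval e2;

(* eval (Plus (Num 2, Times (Num 3, Num 4))) evaluates to 14. *)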
In his 1977 Turing Award lecture (granted in recognition of his role in the development of FORTRAN, ALGOL 60, and BNF grammars; the lecture was published in 1978), John Backus attacked the pernicious influence of imperative programming languages and their dependence on the von Neumann architecture.
What is problem with imperative languages?
Designed around architectures available in 1950's.
Components:
To execute an instruction, go through fetch, decode, execute cycle.
Ex. To execute statement stored in location 97 (ADD 162):
Simple statement like A:=B+C results in several accesses to memory through "Von Neumann bottleneck."
Imperative program can be seen as control statements guiding execution of a series of assignment statements (accesses and stores to memory).
Variable in programming language refers to location whose contents may vary with time.
Hard to reason about variables whose values are always changing, even within same procedure or function.
Math notation not like that. Static. If want to add time, add new parameter. Gives static reasoning about dynamic processes.
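A hedged ML sketch of the contrast (the names balance and balanceAt are invented):

(* Imperative view: a cell whose contents vary with time. *)
val balance = ref 100;
val _ = balance := !balance + 50;   (* what "balance" means depends on when we look *)

(* Mathematical view: time is an explicit parameter (assume t >= 0). *)
fun balanceAt 0 = 100
  | balanceAt t = balanceAt (t - 1) + 50;
(* balanceAt 1 is always 150, no matter when or how often it is evaluated. *)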
Important notion called referential transparency. Can replace an expression anywhere that it occurs by its value.
Very important for parallelism, since compute once and then reuse.
Not true of imperative languages. Can't compute x+1 once and replace all occurrences by its value.
Order of execution in imperative programs very important - inhibits parallel execution.
We will see several advantages of functional programming.
Other important reasons to study functional languages:
Imperative languages are organized around notion of statements.
Meaning of a statement is an operation which, based on the current contents of memory and any explicit values supplied to it, modifies the current contents of memory.
How are results of one command communicated to the next? Via changes to values in memory.
Too low level and architecture dependent.
Examples:
Expressions (at least in math) better behaved than commands.
Meaning of a (pure) expression is an operation which, based on the current contents of memory and any explicit values supplied to it, returns a value.
Independent of the surrounding expression.
Therefore once have evaluated an expression in a particular context, never have to evaluate it again in that context since value won't change.
Math. expressions are referentially transparent.
Ex. To evaluate "(2ax + b)(2ax + c)" in a context in which a = 3, b = 4, c = 7, and x = 2, it is sufficient to evaluate "2ax" only once.
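In ML this sharing is just a let-binding (a sketch using the same made-up values):

val a = 3 and b = 4 and c = 7 and x = 2;

val result =
  let
    val t = 2 * a * x          (* referentially transparent, so safe to share *)
  in
    (t + b) * (t + c)          (* = (12 + 4) * (12 + 7) = 304 *)
  end;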
Can determine meaning of f(g(x)) by only knowing the value of f, g, and x (independently).
Moreover if meaning of g' is same as g, then f(g(x)) = f(g'(x)).
(Note importance of replacing construct by equivalent one in compiler optimizations)
Lose referential transparency if allow functions with side effects.
I.e. suppose call to f(x) results in incrementing x by 1.
Then f(x) + f(x) != 2 * f(x).
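A sketch of this in ML, using a reference cell to stand in for the side effect (the names x and f are illustrative):

val x = ref 0;
fun f () = (x := !x + 1; !x);

(* Starting from x = 0:  f () + f ()  gives 1 + 2 = 3.  *)
(* Starting from x = 0:  2 * f ()     gives 2 * 1 = 2.  *)
(* So f () + f () <> 2 * f (): referential transparency is lost. *)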
Program supporting referential transparency much easier to prove correct since only need be concerned about meaning of components and then put them together.
With imperative languages, lose referential transparency.
x := x + y; y := 2 * x;
versus y := 2 * x; x := x + y;
Since each command changes underlying state of computation and evaluation depends on state, ordering is critical.
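The two orderings, sketched with ML reference cells (the function and cell names are made up); each starts from x = 1, y = 2:

fun order1 () =
  let val x = ref 1 and y = ref 2
  in  x := !x + !y;  y := 2 * !x;  (!x, !y)  end;   (* = (3, 6) *)

fun order2 () =
  let val x = ref 1 and y = ref 2
  in  y := 2 * !x;  x := !x + !y;  (!x, !y)  end;   (* = (3, 2) *)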
Also correctness of program depends on contents of all memory cells.
Even when try to isolate portions of computations into procedures, can have non-local effects because of use of non-local variables and reference parameters.
If i > 0 and A[i] <> 99 then ....
What happens if A : ARRAY [1..100] OF INTEGER and i = 0?
Pascal vs. Modula-2 conventions: Pascal does not guarantee short-circuit evaluation (both operands may be evaluated), while Modula-2's AND evaluates its second operand only when the first is TRUE.
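For comparison, ML's andalso is guaranteed to short-circuit, so a guarded subscript like the following sketch (the array a is made up) never raises Subscript:

val a = Array.array (100, 0);                      (* valid indices 0 .. 99 *)
fun ok i = i < 100 andalso Array.sub (a, i) <> 99;
(* ok 100 returns false; Array.sub (a, 100) is never evaluated. *)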
Some languages conflate (identify) expressions and commands (e.g., ALGOL 68 and C).
Often artificial and results in loss of advantages of expressions (e.g., referential transparency).
Ex: x = (y = x+1) + y + (x++)
Compare 2*(x++) and (x++) + (x++)
We will restrict our attention (for the most part) to functional languages with pure expressions.
Try to eliminate problems of commands and take advantage of referential transparency.
Promote reasoning about programs & implementation on parallel computers.
Idea - Program is simply application of a function to data.
No notion of memory or assignment - like a mathematical function - No side effects.
Very rich expressions - virtually all values are first-class (unlike in most imperative languages); in particular, functions are first-class objects.
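A minimal ML sketch of functions as first-class values (all names made up):

fun twice f x = f (f x);                  (* takes a function, returns a function *)
fun addOne n = n + 1;

val addTwo  = twice addOne;               (* built by combining functions *)
val five    = addTwo 3;                   (* = 5 *)
val squares = map (fn n => n * n) [1, 2, 3, 4];   (* = [1, 4, 9, 16] *)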
Gödel's general recursive functions (developed further by Kleene) (§10.6) and Church and Kleene's lambda calculus (§10.7) used as foundations for computable functions (before Turing machines). All found to be equivalent, leading to Church's thesis.
John McCarthy (then at MIT) introduced a functional language (LISP) in 1958-60, originally in a study of symbolic differentiation using linked lists. A key article published in 1960 showed how important programs could be expressed as pure functions operating on lists. (LISP has since been revised into competing dialects - Common LISP and Scheme.)
Functional languages (and functional notation) have been used in describing the denotational semantics of programming languages since the 1960's.
Most stunning event was Backus' Turing Award lecture (1977, published 1978).
Proposed language FP (since replaced by FL) supporting "functional" style of programming.
First ML compiler was put out in 1977 (originally in support of an interactive theorem-proving system, described in the text Edinburgh LCF by Gordon, Milner, and Wadsworth). (Milner just won the Turing Award.) Standardized in about 1986.
Other important languages include SASL, KRC, and Miranda (all by David Turner). Haskell is successor. All support lazy evaluation.
Currently 3 main schools of functional languages:
First two classes of languages support imperative features (though much more controlled in ML).
First uses dynamic typing, other two support static typing w/ polymorphic functions and type inference.
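For instance, a sketch of polymorphic functions whose most general types ML infers with no declarations (comments show typical inferred types):

fun id x = x;                            (* 'a -> 'a *)
fun pair x y = (x, y);                   (* 'a -> 'b -> 'a * 'b *)
fun length []        = 0
  | length (_ :: xs) = 1 + length xs;    (* 'a list -> int *)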
We choose ML for somewhat arbitrary reasons: it is heavily used to develop real software and supports modern programming constructs.
The point of this part of the course is NOT to teach you ML; it is to teach familiarity with thinking in the functional paradigm, with ML as the example language (though we will talk about others as well). I expect you to mainly learn ML on your own in the lab while I lecture on related material.
First 10 or so pages of Backus' Turing award lecture
(for WHY of functional programming):
J.W. Backus, "Can programming be liberated from the von Neumann style? A functional style and its algebra of programs," Communications of the ACM 21(8) (August 1978), 613-641.
Developed in Edinburgh in the late 1970's as a Meta-Language for an automated theorem-proving system.
Designed by Robin Milner (last year's Turing Award winner), Mike Gordon, and Chris Wadsworth.
Success led to adoption and strengthening as programming language.
Important attributes:
How to use the run-time system.
Before launching sml, you must add its directory to your path.
Add /usr/local/sml/bin to your path.
For most of you, this will mean adding the following to your .cshrc file:
setenv PATH ${PATH}:/usr/local/sml/bin
If you use the CS Dept's version of .cshrc, you will see the obvious place to uncomment a similar line and make minor changes.
To launch ML type:
sml
System responds with a message saying you are in ML, and then a "-" prompt.
Can load definitions from UNIX file by typing:
use "myfile.sml";
where myfile.sml
is the name of your file. It should be in the same directory you were in when you typed sml.
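For example, a hypothetical myfile.sml might contain:

(* Contents of myfile.sml; names are illustrative. *)
fun square n = n * n;
val answer = square 7;

After use "myfile.sml"; the system echoes the bindings it created (e.g., val square = fn : int -> int and val answer = 49 : int) and returns to the "-" prompt.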
Terminate session by typing control-D.