
Re: ::scr Touchy Feely?



> A lot of compsci is about finding fundamental abstractions.

A lot of the concerns in the compsci versus programming debate seem to be
around abstractions. In programming - and, actually, in ordinary life - we
make use of abstractions all the time. Indeed, you can argue that a tendency
towards increasing abstraction is one of the major trends of the last 500
years of history.

The thing about abstraction is, though, that when you create an abstraction
you must, by definition, remove from it some of the features of the original
objects it is an abstraction for. So the problem in designing abstractions
is choosing which bits to miss out and which bits to leave in.

Abstraction is absolutely essential in programming. Anyone who says it isn't
has never really thought about it.  Even machine code is an abstraction from
the physical reality of the machine. However, it does have to be the right
abstraction for the purpose at hand. A lot of the early evangelistic writing
about OO now comes off as incredibly naive, because its authors talk about
being able to sell an "Aircraft" class that could then be used in any system
having anything to do with aircraft. Anyone who has ever written a real
system knows that the "Aircraft" class in an air traffic control system is
totally different from that in an airline's parts management system. The
same is true, to a slightly lesser extent, for the "Aircraft" class in two
different air traffic control systems, or even in two different components
of the same system.
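To make the point concrete, here's a minimal sketch (all class and field
names hypothetical): each system would call its class "Aircraft", and the
two share a name but almost none of their substance.

```python
from dataclasses import dataclass, field

# What an air traffic control system means by "Aircraft": where it is
# and where it's going. (Named ATCAircraft here only so both sketches
# can live in one file.)
@dataclass
class ATCAircraft:
    callsign: str
    position: tuple        # (lat, lon, altitude_ft)
    heading_deg: float
    speed_knots: float

# What a parts management system means by "Aircraft": which airframe
# it is and what's bolted to it. Position and heading are irrelevant.
@dataclass
class PartsAircraft:
    tail_number: str
    model: str
    fitted_parts: list = field(default_factory=list)  # part serial numbers
    hours_since_overhaul: float = 0.0
```

Neither abstraction is wrong; each has simply left out the features the
other system cannot live without.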

This problem comes up a lot in academic systems, and I think this is part of
the explanation for the failure of more abstract systems to penetrate the
commercial world. I particularly remember once working on a system for
hardware-software codesign that modelled all communication in a globally
asynchronous, locally synchronous manner. Now, that is a nice model in some
ways. For example, it makes it relatively easy to fit disparate components
together into a functioning whole. It also makes it very easy to synthesise
hardware and software providing you completely control the hardware
architecture. The difficulty is that a lot of software systems, and some
hardware systems, are fundamentally synchronous. A read or write addressed
to a memory-mapped peripheral is a synchronous operation, at least from the
perspective of the program's current thread of control. If you model
everything as asynchronous, you've fundamentally lost this bit of
information, and thereby destroyed any hope of mapping the system onto
many - even most - possible architectures. The project failed for that
reason, and the academics who started it could never move beyond the GALS
abstraction they started out with.
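A tiny sketch of what gets lost (addresses and names hypothetical): a
synchronous read hands the value straight back to the current thread,
while the GALS-style model splits the same read into a request message
and a later response message, and nothing in the model itself records
that the requester must stall in between.

```python
import queue
import threading

# Synchronous model: a memory-mapped read blocks the current thread and
# returns the value before the next statement runs. That ordering is
# explicit in the control flow.
class SyncBus:
    def __init__(self):
        self.mem = {0x40000000: 0xCAFE}  # hypothetical peripheral register

    def read(self, addr):
        return self.mem[addr]  # completes in-order, immediately

# Asynchronous (GALS-style) model: the read becomes two messages. The
# model no longer says the requester waits -- that information has been
# abstracted away, and a synthesis tool can't get it back.
class AsyncBus:
    def __init__(self):
        self.mem = {0x40000000: 0xCAFE}
        self.requests = queue.Queue()
        self.responses = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            addr = self.requests.get()
            self.responses.put(self.mem[addr])

    def read_request(self, addr):
        self.requests.put(addr)

    def read_response(self):
        # The caller can choose to block here to recover synchronous
        # behaviour, but the model neither requires nor records it.
        return self.responses.get()
```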

The other problem with abstraction, coming from the other side, is that most
people are not very good at it. And that includes most programmers. We now
live in a world where most of the professional community consists of
"journeyman programmers": people who can write "business logic" or "web
applications" (or, in a bygone age, "client-server software") with
reasonable competence, but who don't understand systems programming or
much beyond secondary school maths.
Creating or understanding new abstractions is really quite hard,
especially when they're as ephemeral as software is. It requires a certain
way of thinking that not everyone has. Most of the time, we try to pull more
and more information into thinking about something. To think at an abstract
level, you need to be very selective about what you pull into the thinking
process.

Going back to the original article (and this represents the point where I
gave up reading): this is also why "Cow" objects are best thought about as
abstractions and not signs (which is what semiotics is a pomo neologism
for). Although the words used as class, method and variable names are signs,
the classes and methods themselves are not. They're parts of a machine, and
the goal of programming is to make the behaviour of that machine correspond
to the behaviour of the real world thing. Which is why you need abstraction:
a "Cow" class can't chew the cud or fart.
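In code, the point looks something like this (a hypothetical "Cow" for,
say, a dairy-herd management system): the class keeps only the behaviour
the machine needs to mirror, and cud-chewing simply isn't in it.

```python
# Hypothetical Cow abstraction: the machine only needs to track what the
# real cow's behaviour means for the herd records, so that's all there is.
class Cow:
    def __init__(self, ear_tag, breed):
        self.ear_tag = ear_tag
        self.breed = breed
        self.daily_yield_litres = []

    def record_milking(self, litres):
        self.daily_yield_litres.append(litres)

    def average_yield(self):
        return sum(self.daily_yield_litres) / len(self.daily_yield_litres)
```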

Simon