Design Decisions Are a Set, Not a Series

Design decisions are taken in the context of the design as a whole. That whole is subject to change throughout the design process. Therefore, logically speaking, all design decisions remain up for debate, and themselves subject to change, for as long as the design process lasts.

This is highly impractical, of course. In practice, therefore, important decisions are ‘frozen’ at some point during a design project. The reason for some decisions then becomes something like ‘because that follows from what we decided earlier’ or ‘because it would cost too much to change’.

The Format of Design Project Reports Leads Students to Develop Misconceptions

This is an unstructured, thinking-out-loud exploration of pretty much the same point as I made in Design Reports vs. Design Papers.

I think there may be a fundamental problem in the way we ask students to produce design reports, one that makes it unclear what, exactly, the lessons are.

Design reports in education serve two separate functions: presenting and justifying the design proposal, and showing that the student(s) did the work. This combination, I believe, leads students to develop misconceptions. The reason we want to see some things in their reports (evidence of their process) is that we want to check whether they applied, and learned from, the methods we ask them to practice. The reason for wanting other things or properties in their reports (a consistent and coherent argument containing only the evidence relevant to that argument) is a different one: that is what a convincing outcome requires.

We judge things like a morphological chart (especially in earlier projects) on criteria relevant to how the student is developing their approach and process. But those criteria are not quite the same as the ones relevant to judging how convincing the overall result and final claims are.

Another way to phrase this difference might be the difference between efficiency and effectiveness. We want students to develop an efficient and effective process, but the value of this is instrumental. In the end, only the effectiveness counts when we’re judging design proposals.

This tension or difference also becomes apparent when we compare student design reports with published papers reporting the results of design work. In a paper or presentation to critical peers, it is not a relevant question whether you wasted time or not. The only thing that counts is the final design, what claims you make about it, and what evidence you have for those claims. Much is left out that we do ask students to show in their reports. And this is a difference in kind, not just a difference in level, depth, detail, or quality.

This difference also highlights the contrast between design as an academic discipline and design as professional practice. In industry, efficiency, risk management, and effective use of time and resources are important, and satisficing strategies are often appropriate. Academic values are different: there, understanding, logical consistency, accuracy, and similar goals weigh more heavily. Aiming at a ‘complete’ exploration and mapping of options matters more in this context. And the tolerance for leaving certain practical matters in the design for later, and focussing first on a core, novel working principle, is far higher.

Three Ways of Justifying Design Features

Yesterday, in a discussion with a student on how to structure their design report, I found myself constructing a little typology of three types of justification for design decisions, each with their own rhetorical structure and form of presentation.

First, a particular feature of a design can be selected from alternatives developed in parallel. We do this at the overall level with concepts, usually three of them. These alternatives do not follow from one another but are developed independently of each other; they are explorations of different approaches, and each represents a different set of trade-offs. Sometimes they are developed in a sequence, one after the other, but they are sufficiently independent of each other that they could have been developed in parallel, as three alternative answers to the same design problem, and so each option can be evaluated using the same set of criteria. You can also do this at the level of details: alternative ways to construct the frame, for instance, or different options for a hub assembly. In a report, you’d present these options side by side, with an argument for why one of them is the better choice.
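One common way to make such a side-by-side comparison explicit is a weighted scoring of every alternative against the same list of criteria. As a minimal sketch (the weights and scores here are purely hypothetical), the total score of concept $j$ is the weighted sum of its scores $s_{ij}$ on the criteria $i$:

\[ S_j = \sum_i w_i\, s_{ij}, \qquad \text{e.g. } S_A = 0.5 \cdot 4 + 0.3 \cdot 2 + 0.2 \cdot 5 = 3.6 . \]

The numbers themselves matter less than the structure: every concept is scored against the identical criteria, so the comparison directly supports the argument that one of them is the better choice.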

Second, design features or geometries can be the endpoint of a single-track, iterative exploration or evolution. In this case you also have a number of alternatives that were considered, but they are not equivalent and could not have been developed independently, in parallel. Instead, they form a sequence, in which an evaluation of the strengths and weaknesses of each iteration forms the argument for the next one. The criteria used to get from one step to the next might differ from the considerations that led to the step after that. In a report, you can present the main stages of such an evolution, arranged chronologically, together with an explanation of the dimensions, features, or phenomena that turned out to be the most relevant, and how they shaped (and justify) the final form and properties of the part or construction.

Third, design features can also be the outcome of calculations that determine their correct or optimal value. Such design decisions may also have gone through iterations, or have been considered next to alternatives, but that history is no longer relevant for arguing the final outcome. Such decisions (a gear ratio, the length of a lever, the thickness of a beam) are best and most clearly justified by presenting a mathematical model or formula, incorporating particular assumptions, constraints, and safety margins, leading to a single correct or optimal value.
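As an illustration of what such a justification can look like (the loading case is deliberately elementary and the numbers are made up), consider sizing the height $h$ of a rectangular beam of width $b$ that must carry a bending moment $M$, given a yield stress $\sigma_y$ and a safety factor $n$:

\[ \sigma_{\max} = \frac{6M}{b\,h^2} \le \frac{\sigma_y}{n} \quad\Rightarrow\quad h \ge \sqrt{\frac{6Mn}{b\,\sigma_y}} = \sqrt{\frac{6 \cdot 200\,\mathrm{Nm} \cdot 2}{0.04\,\mathrm{m} \cdot 250\,\mathrm{MPa}}} \approx 15.5\,\mathrm{mm}. \]

Every element of the argument is visible: the model (elementary beam bending), the assumptions (a single static load, a rectangular cross-section), the constraint (no yielding), and the margin (the factor of two), and together they lead to one defensible value.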

Design Reports vs. Design Papers

One of the things I find difficult in design education is the difference between teaching our students the skill of doing design – coming up with and developing products, machines, and other plans – and teaching them the logic of how to argue for the results of that work – presenting, justifying, and giving reasons for their proposals.

We teach our students (some version of) the design process, and then we ask them to write a report that presents that process and their design. There is a tension in that combination. In this set-up it seems logical to show how your process ‘led to’ your design. Showing your (cleaned up, idealized) process is treated as the justification or support for the final design. But the quality of your process is not necessarily evidence for the quality of your design. Conversely, with this approach it doesn’t make sense to present all your discarded ideas and other dead ends, or to show all seven and a half earlier versions of what became the final design. That would create a report that’s just as messy and chaotic as the average design process.

A ‘design report’ in this fashion tries to serve two functions: to provide evidence of learning activities, and to provide evidence for the final design’s quality. Those two sometimes conflict. At the very least, they’re not the same, and trying to do both in one document compromises the effect of both.

Perhaps, therefore, it would be good to make an explicit distinction between a ‘report’ and a ‘paper’? A report reports – it tells your teachers what happened. A paper presents – it describes a problem, shows evidence, and argues a proposal to an audience of peers.

If you want to see whether undergraduate students are learning the right skills and methods, ask them for a report. If you want graduate students to produce something similar to an academic paper, leave the reporting out of it.

Diminishing Validity of Concept Selection as an Argument down the Line

The detailed development, implementation, and operation of a design usually represents a significant investment. This makes it a good idea to first explore a number of possible approaches before committing to a single concept.

But concept selection is a strategic choice. The decision comes down to a judgement of which concept looks most promising, not to a determination of which one is certain to have the best possible performance. And at the end of a completed design project, you can never be certain that a choice to go with a different concept would not, in fact, have led to a better outcome. It is just that, at the time, this concept looked best, and was therefore the one selected for further investment of development resources. Who knows what would have happened if those same resources had been invested in a different concept?

Soft Spots in Design Arguments

A design is always presented as a means to achieve a goal of some kind, in a certain situation or context. To argue that the proposed design will actually do this requires a bit of a detour, however. First of all, goals are usually complex, ambiguous, and ill-defined. They need to be made operational in a set of objectively testable criteria (functional requirements, performance criteria, and constraints). Secondly, it is not obvious from the plans for an artefact how that thing will do its work, precisely. Its behavior needs to be predicted. Predicted behavior can be evaluated in terms of the operational criteria. This is the claim that designers can actually establish. It serves as a proxy for the actual motivation behind the design, the expectation that the design will actually achieve its goals in the real world.

The translation of a complex goal into an unambiguous, operational set of criteria is not straightforward. Different people can legitimately interpret the same goal differently. The argument for a design proposal needs to establish, therefore, that this translation is a good one. Does it capture all the relevant aspects? Is anything lost in the definitions and quantifications employed? Is it possible to formally meet these criteria, while clearly failing to achieve the actual goal?

Predicting the behavior and performance of the proposed system can look like the straightforward, rational, objective part of a design project. But this is not straightforward either. To predict something’s behavior, we need to model it. Models are always simplified, partial, and idealized representations. Abstract models can be validated through controlled tests with a prototype, but tests also pick out only parts of the actual operation of a system, and prototypes are, like abstract models, partial, idealized representations. In fact, they often introduce properties that the design as actually proposed would not have. Here as well, the argument relies heavily on judgements of definition, translation, and interpretation.

Discovery and Justification in Design Proposals

What is the logic of design proposals? What argument is or needs to be made when you present a design? What is it that a design proposal does and what criteria must it meet to perform this function?

Engineering can be contrasted with science in that it is not only descriptive, but also prescriptive. The goal of a scientific paper is to describe and explain the world as it is. An engineer’s design prescribes or at least proposes what should be done or changed in the world: ‘if you have a certain goal, then here is a plan to achieve it’.

This makes a design proposal, in rhetorical terms, an argument about policy. Much of it may be concerned with facts and causation, but in the end it is a question of means, ends, and value. Such an argument is always relative. The proposal can be compared to existing options, to alternative proposals, and to leaving the situation unchanged. And while scientific claims aim at universality, designs are always context-dependent, appropriate to a specific time and place.

If this is the argument we need to make, how do we argue it?


What About the Logic of Design Proposals?

Besides a description of an artefact, plans for its production, and plans for its use, the product of a design project must also always be a design proposal. There is the rare case where a design “speaks for itself”, but even in that instance, what that design says amounts to an argument that proposes the design’s actualization. And arguing, to a boss, client, or teacher, that it should be made like this automatically means not making it like that, or leaving the world as it is and continuing to make the same thing as before, or making nothing new at all.

In practice, the goal and measure of success of such a proposal is that it persuades. In academic circles, we should instead be interested in whether the argument is any good in terms of its logic and evidentiary weight. In practice, too, however, those on the receiving end of a design proposal will want to judge how successfully the arguments offered actually justify a belief in the value of the design under consideration, and to see through any rhetorical flourishes and sales talk that may be involved. In fact, I would argue that engineers – as opposed to those with sales and business titles – are under a moral obligation to strive for the same: an honest presentation of the merits of a design, accurate rather than merely giving the impression of accuracy. If “trust me, I’m an engineer” is to remain a valid request, we should strive to be trustworthy.

What is the logic of design proposals? What, exactly, are the claims that are made when designers present the results of their efforts? And how are and can these be justified?

Is the result of design always a proposal? Do designs published in academic journals fit this description?

At first glance, they don’t. Their message is more “Here is what we made. It’s really good/interesting/valuable/impressive.” But isn’t this the same as saying, “This is how we should make these kinds of things for these kinds of situations”? Or, “This is how we should solve this problem, or reach this goal”?

Law and Order in the Studio, Part II

If a design presentation and critique is compared to a courtroom setting, then what roles do the design, designer, and design judges play?

Perhaps the design critique is a reverse trial. The design is ‘on trial’, but as something to be accepted rather than judged. The designer is the prosecutor, with the burden of proof to establish beyond a reasonable doubt that the design has merit, or even that it is the best possible answer to the design brief. The critic (boss, client, teacher) plays the roles of defense counsel, judge, and jury. While the designer tries to make their case, they look for holes, weaknesses, and alternative explanations. When the case of the prosecutor (the designer) stands up, the design is judged as ‘good’, or ‘acceptable’.

It might be fun to separate these three functions in an educational setting. The teacher can be judge (only making sure everybody follows the rules and plays fair), an external critic can be the defense lawyer (doing their best to point out weaknesses and poke holes in the designer’s work), and other students can be the jury (rendering their verdict after hearing all the evidence).

A Valid Form of Justification by Process

The history of a design proposal is, in fact, sometimes a relevant argument for or against it. It can say something about the uncertainties (“unknown unknowns”) around the concept.

When an idea or design concept has been put through criticism, when it has been poked, prodded, changed back and forth, there has been a process of discovery. Were there any unexpected behaviours, properties, interactions, etcetera?

When we are presented with two proposals that seem comparable in terms of predicted performance, but one was drawn up only yesterday and the other has been under development for significantly longer, the ‘younger’ idea can legitimately be opposed, even when the final level of detail is similar, simply on the grounds that there might very well be something wrong with it, while the other can be accepted with less risk.