Monday 28 July 2014

Tim Tyler: Optimization

Transcript:

Hi. I'm Tim Tyler and this is a video about optimization. It will mostly be about the significance of optimization.

Firstly, what is optimization? Optimization involves problem solving. In particular, it involves solving problems where you have to find a good solution within a space of possible alternatives. Since practically any problem can be framed in those terms, optimization is a very general concept.

A classical example of optimization is where you want to make a cylindrical tin can which encloses the most volume using the smallest quantity of metal. There's a ratio of diameter to height that is most cost-effective, and it is independent of the size of the can. This is the optimal solution to the problem - and the process of finding it is known as "optimization".
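
As a quick numerical check - a sketch in plain Python, with the volume chosen arbitrarily - the metal-minimizing shape turns out to have its height equal to its diameter:

```python
import math

V = 330.0  # fixed volume in cubic centimetres (an arbitrary choice)

def area(r):
    """Metal used for a can of radius r: two circular ends plus the side."""
    h = V / (math.pi * r * r)  # the height is forced by the volume constraint
    return 2 * math.pi * r * r + 2 * math.pi * r * h

# Crude grid search over candidate radii from 0.001 to 19.999.
best_r = min((r / 1000 for r in range(1, 20000)), key=area)
best_h = V / (math.pi * best_r ** 2)
print(best_h / (2 * best_r))  # ~1.0: the optimal height equals the diameter
```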

Optimization problems are common. Whenever you try to get from one place to another as quickly as you can, you are solving an optimization problem. Trying to get proper nutrition while minimizing your calorie intake is an optimization problem. Trying to acquire money without breaking the law is another optimization problem.

Optimization techniques were originally developed as part of mathematics. The field rapidly split into discrete optimization and continuous optimization - which often involved calculus. For some problems, the best possible solution is needed, while in other cases you just want a solution that meets some criteria. Different approaches are often needed for these different classes of problem. All optimization problems feature a utility function that says how good solutions are. Some have an additional function that specifies when to stop the search. This can involve time limits, resource limits or a specification of what solution qualifies as being satisfactory.
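
In code, those two functions might look something like this - a minimal sketch, where the names `utility` and `should_stop` and all the numbers are illustrative choices rather than anything standard:

```python
import random
import time

def utility(x):
    """How good a candidate solution is - higher is better."""
    return -abs(x - 10)

def should_stop(best_score, start, budget=1.0, good_enough=-0.01):
    """Stop on a satisfactory score or when the time budget runs out."""
    return best_score >= good_enough or time.time() - start > budget

start = time.time()
best = random.uniform(-100, 100)
while not should_stop(utility(best), start):
    candidate = random.uniform(-100, 100)  # blind random search, for simplicity
    if utility(candidate) > utility(best):
        best = candidate
print(best)  # near 10, or the best found within the time budget
```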

Some optimization problems are linear - and special techniques were developed for solving those. However, most optimization problems are not linear. Some optimization problems are best described in terms of constraints. The effects of adding and removing constraints have been studied.
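
Here is a sketch of a small linear problem, assuming SciPy is available - maximize 3x + 2y subject to x + y <= 4, x <= 3 and x, y >= 0:

```python
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients to maximize.
res = linprog(
    c=[-3, -2],                     # maximize 3x + 2y
    A_ub=[[1, 1], [1, 0]],          # x + y <= 4 and x <= 3
    b_ub=[4, 3],
    bounds=[(0, None), (0, None)],  # x, y >= 0
)
print(res.x)  # optimum at the vertex x = 3, y = 1
```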

Optimization can involve either maximization or minimization. It makes no difference mathematically: maximizing a function is the same as minimizing its negation.
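
That equivalence is a one-liner in practice - a sketch, again assuming SciPy:

```python
from scipy.optimize import minimize_scalar

f = lambda x: -(x - 3) ** 2 + 5         # utility to maximize; its peak is at x = 3
res = minimize_scalar(lambda x: -f(x))  # minimize the negation instead
print(res.x, f(res.x))                  # ~3.0 and ~5.0
```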

The goals of agents can be represented using what are generally known as "utility functions". The agents then behave so that they maximize their utility function. This idea has been used extensively in economics. It has also been used to formalize some ethical systems.
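
As a toy sketch - the actions and payoffs here are invented for illustration - an agent with a utility function just picks whatever scores highest:

```python
def utility(action):
    """The agent's goals, encoded as a score for each available action."""
    payoffs = {"work": 2.0, "rest": 1.0, "gamble": 0.5}
    return payoffs[action]

def choose(actions):
    return max(actions, key=utility)  # behave so as to maximize utility

print(choose(["work", "rest", "gamble"]))  # -> "work"
```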

As the concept of utility maximization is very general, it provides a framework for modelling and comparing the goals of arbitrary computable agents.

Conflicting goals can be modelled using utility functions and this provides a useful framework for examining conflict. In particular the idea of "Pareto optimality" can be applied to conflicts. A solution to a conflict is "Pareto optimal" if no agent can get more of what they want without some other agent getting less of what they want. There's often a set of such solutions - known as the "Pareto set".
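
A minimal sketch of that idea - each outcome below is a tuple of utilities, one per agent, and the numbers are made up:

```python
def pareto_set(outcomes):
    """Keep the outcomes that no other outcome dominates."""
    def dominates(a, b):
        # a dominates b if every agent does at least as well and some agent does better
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [o for o in outcomes if not any(dominates(p, o) for p in outcomes)]

outcomes = [(3, 1), (2, 2), (1, 3), (1, 1), (2, 1)]
print(pareto_set(outcomes))  # [(3, 1), (2, 2), (1, 3)]
```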

Some optimization problems have a temporal dimension, raising the issue of how long-term gains need to be balanced against short-term ones. The solutions to some of these problems change over time, and optimizing agents sometimes need to track optima whose locations shift. Such problems require a dynamical system to track them - and then the stability of optima can become a significant factor. Oscillating or orbiting around optima becomes possible, and optima themselves may decline or completely collapse. The addition of a temporal dimension can result in a much harder optimization problem.
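
A toy sketch of tracking a moving optimum - the drift rate and step sizes are arbitrary choices:

```python
import random

def utility(x, t):
    peak = 0.1 * t           # the optimum drifts steadily to the right
    return -(x - peak) ** 2

x = 0.0
for t in range(50):
    candidate = x + random.uniform(-0.5, 0.5)  # try a local variation
    if utility(candidate, t) > utility(x, t):  # keep it if it is an improvement
        x = candidate
print(x)  # typically close to the final peak at 0.1 * 49 = 4.9
```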

Though optimization techniques can be applied to specific problems, optimization is a general-purpose skill that can be applied to a broad range of problems. Competence at optimization is closely related to intelligence - and the idea that optimization capability is a general-purpose skill is related to the empirical observation that, among humans, high intelligence results in increased competence across a broad spectrum of tasks.

It's often possible to describe properties of an ideal optimizer in a specified problem domain. For example, for the game tic-tac-toe, optimal strategies for both sides are known. In this case, if both players play optimally, the result is always a draw. In economics, this idea is known as economic efficiency. Deviations from maximum efficiency can arise as a result of stupidity or internal conflict. Conflicts can arise as a result of battles with peers or parasites, for example. Stupidity and internal competition are not always easy to distinguish from one another as causes of inefficiency. Internal competition looks a lot like stupidity from the perspective of an external observer.
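
For tic-tac-toe, the claim is small enough to check directly. A brute-force minimax sketch (no pruning, so it takes a few seconds to run) scores the game as 0 - a draw - when both sides play optimally:

```python
def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
             (0, 4, 8), (2, 4, 6)]             # diagonals
    for i, j, k in lines:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Game value with optimal play: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if " " not in board:
        return 0
    scores = []
    for m in [i for i in range(9) if board[i] == " "]:
        board[m] = player
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[m] = " "
    return max(scores) if player == "X" else min(scores)

print(minimax([" "] * 9, "X"))  # 0: optimal play by both sides is a draw
```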

We have a grand unified theory about how optimization techniques work. The basic idea is due to Charles Darwin and is widely known as Darwinian evolutionary theory. Optimization involves trial and error. It has the classic form of a Darwinian evolutionary process, where variations on existing solutions are generated and then tested - with the more successful solutions being retained and forming the parents of the next generation.
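
In its simplest form, that loop looks something like this - a toy sketch, with the population size, selection pressure and mutation scale all picked arbitrarily:

```python
import random

def fitness(x):
    return -(x - 7) ** 2  # a toy utility function with its peak at x = 7

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    # Test: rank the current solutions and keep the most successful ones.
    parents = sorted(population, key=fitness, reverse=True)[:5]
    # Vary: each retained parent spawns mutated offspring for the next generation.
    population = [p + random.gauss(0, 0.3) for p in parents for _ in range(4)]
print(max(population, key=fitness))  # converges close to 7
```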

It is true that there are some optimization techniques that do not closely resemble this model. For example, random search is an optimization technique - but it has no concept of memory or inheritance. Exhaustive search is another optimization technique that doesn't look very much like Darwinian evolution. It uses memory, but it only has a single simple lineage instead of a tree of variants. However, these techniques are trivial - and are only useful on tiny problems.
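
For contrast, here are both trivial techniques on a toy problem - exhaustive search tries every candidate, while random search samples with no memory of what it has tried:

```python
import random

def utility(x):
    return -(x - 42) ** 2

space = range(100)
best_exhaustive = max(space, key=utility)  # visits all 100 candidates
best_random = max((random.choice(space) for _ in range(30)), key=utility)
print(best_exhaustive, best_random)  # 42, and whatever the samples happened to hit
```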

Techniques for solving more complex problems tend to look much more like Darwinian evolution. With more complex problems, trying solutions at random - without paying attention to what has already been tried - is not a very attractive option. You are forced to perform local searches. These more complex cases represent the vast majority of real-world problems and the process of solving them more closely resembles Darwinian evolution.

The Darwinism involved is of a very general kind. It permits the use of interpolation and extrapolation during recombination. In addition to retaining successful variants to show where to search, failed variants can be retained to show where not to search. It is a form of Darwinism which incorporates the principles of intelligent design. It more closely resembles Darwinism plus genetic engineering - or the kind of Darwinism that is involved in cultural evolution. Some think that the term "Darwinism" is a misnomer - although it is hard to deny that Darwin originally came up with the basic idea.
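
A sketch of that generalised scheme - recombination by interpolation or extrapolation between parents, plus a memory of failures marking where not to search; all the constants here are arbitrary:

```python
import random

def fitness(x):
    return -(x - 5) ** 2  # toy utility with its peak at x = 5

population = [random.uniform(-10, 10) for _ in range(10)]
failures = []  # remembered bad spots, showing where not to search
for _ in range(200):
    a, b = random.sample(population, 2)
    w = random.uniform(-0.5, 1.5)  # w outside [0, 1] extrapolates past a parent
    child = w * a + (1 - w) * b    # recombination by interpolation/extrapolation
    if any(abs(child - f) < 0.1 for f in failures):
        continue                   # skip regions already known to fail
    worst = min(population, key=fitness)
    if fitness(child) > fitness(worst):
        population[population.index(worst)] = child  # retain the successful variant
    else:
        failures.append(child)     # retain the failure as a warning marker
print(max(population, key=fitness))  # converges close to 5
```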

Paradigmatic optimization techniques include genetic and memetic algorithms - which are explicitly modelled on gene-based and meme-based evolution respectively.

Optimization techniques are very useful tools for solving problems. However, optimization has also turned out to have some interesting scientific applications. It is possible to model the behaviour of organisms using optimization models in which the organisms behave as though they are maximizing the number of their distant descendants.

Ecosystems can also be modelled as maximizing a function - they behave as though they maximize entropy. Entropy maximization is a different idea from the second law of thermodynamics. The second law just says that entropy tends to increase; entropy maximization says that if there is a range of possible entropy increases, larger ones are more likely than smaller ones, on average. There is more than one reason why entropy is maximised, but the easiest one to understand involves the basic statistical fact that high entropy states are more numerous than low entropy ones - and so undirected changes are likely to lead away from low-entropy states.
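
That counting argument is easy to illustrate - for a hundred two-state particles, the number of microstates per macrostate is a binomial coefficient, and the counts near the high-entropy middle dwarf everything else:

```python
from math import comb

n = 100  # a hundred two-state particles (think of coin flips)
for heads in (0, 25, 50):
    # Number of microstates that produce this macrostate:
    print(heads, comb(n, heads))  # 1, then ~2.4e23, then ~1.0e29
```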

The concept of entropy maximization turns out to have a broad domain of applicability. In addition to organisms and ecosystems, electrical discharges, drainage basins, propagating cracks and stars all behave as though they are maximizing entropy. It turns out that maximization is important in physics and chemistry - as well as in biology.

Lastly, the concept of 'optimization' is significant at the moment partly because it is a foundational concept for those interested in building intelligent machines. A synthetic intelligence would be a powerful optimization process - probably the most powerful optimization process produced by evolution to date, as far as we know. Understanding how to build intelligent machines is essentially the same project as learning how to optimize effectively. It is a challenging project. We know that intelligent humans can fall prey to addictions and religions. We want to design an optimization process that isn't vulnerable to such things.

Although it started out as a relatively small area of mathematics, it has become clear that optimization is a subject of enormous technical and scientific importance with a correspondingly large social impact.

Enjoy,
