8 Technology Forecasting Methods

Technology forecasting is a systematic process used to predict the future development, adoption, and impact of technologies. Various methods and techniques are employed to anticipate technological trends, thereby helping organizations make informed decisions and plan for the future.

Overview of technology forecasting methods

Extrapolation of Trends: This method involves extending current trends into the future. It relies on the assumption that historical patterns will continue, offering a direct way to project future developments. While simple and easy to apply, especially when historical data is reliable, this method assumes linear progression, which may fail to capture sudden disruptions or paradigm shifts.

Expert Opinion and Delphi Method: This approach involves gathering expert opinions in the field. The Delphi method iteratively collects and synthesizes these expert opinions to converge on a consensus forecast. It leverages expert insights, which is particularly useful for emerging technologies with limited historical data. However, it remains susceptible to bias, and the quality of the forecasts depends on the expertise and diversity of the experts involved.

Technology Roadmap: This strategic planning tool provides a visual representation of how technologies evolve over time. It identifies key milestones, dependencies, and potential future developments. It helps align technological development with organizational objectives, thus facilitating strategic planning. However, it may struggle to account for unpredictable external factors and disruptive innovations.

Cross Impact Analysis: This approach examines the relationships and interactions between different factors to identify the potential impact of changes in one area on others. Often used to assess the implications of multiple technological developments, it provides a more nuanced understanding of the interdependencies between factors. Its complexity increases with the number of factors considered, and its accuracy depends on the quality of the data.

Bass Diffusion Model: Primarily used to predict the adoption of new technologies in a market, this model considers the influence of innovators and imitators in the diffusion process. It is useful for estimating adoption patterns and market saturation, but relies on assumptions about the influence of different user categories.

Technology S-Curve Analysis: This method examines a technology's lifecycle, depicting its growth, maturity, and eventual decline. It helps understand the adoption rate and potential of disruptive technologies. It provides insights into the technology's trajectory and saturation point, but assumes a predictable lifecycle, which may not hold true for rapidly evolving technologies.

Agent-Based Modeling: This involves simulating the interactions of individual agents or entities within a system to observe emergent patterns. Used to understand complex systems and predict their behavior, this method allows for the modeling of complex interactions and nonlinear dynamics. However, it requires detailed data and computing resources, and accuracy depends on how closely the model reflects real-world dynamics.

Extrapolation of trends

Trend extrapolation is a widely used method for predicting future developments. By leveraging historical data, it assumes that existing trends and growth rates will continue in a predictable manner. Essentially, trend extrapolation involves projecting the current trend into the future. 

Although this method is simple and offers valuable information for short-term forecasting, it also attracts its share of criticism, especially when applied to long-term forecasting. This analysis delves deeper into the intricacies of trend extrapolation, examining its methodologies, applications, benefits and limitations.

At its core, trend extrapolation is about extending current data trends into the future. It is based on the idea that patterns persisting in historical data can provide insight into future behavior. This method is typically applied using statistical tools and techniques to derive future values based on historical patterns.

Linear extrapolation assumes a constant growth rate, making it suitable for forecasting near-future events where substantial changes in the growth rate are unlikely. Nonlinear extrapolation, on the other hand, recognizes that growth rates can change and is often used in scenarios involving cyclical trends or known factors that could affect growth.

Many trends, particularly in sectors like retail or tourism, have seasonal components. Adjusting for seasonality involves identifying and accounting for these recurring fluctuations, ensuring that the extrapolated trend is not distorted by seasonal variations.

Modern trend extrapolation often involves software solutions capable of processing large amounts of data and using advanced algorithms to predict future values. Time series analysis, regression models, and machine learning can help refine extrapolated forecasts.
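
As a concrete illustration, the sketch below performs a simple linear extrapolation with NumPy; the shipment series and forecast horizon are invented for illustration. A nonlinear variant might fit a higher-degree polynomial or fit a line to the logarithm of the data, and a seasonal series should be deseasonalized before fitting.

```python
import numpy as np

# Hypothetical historical data: annual unit shipments (illustrative values only).
years = np.array([2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022])
shipments = np.array([120, 135, 149, 166, 181, 197, 214, 230])

# Linear extrapolation: fit a first-degree polynomial (constant growth per year).
slope, intercept = np.polyfit(years, shipments, 1)

# Project the fitted trend three years beyond the observed data.
future_years = np.array([2023, 2024, 2025])
forecast = slope * future_years + intercept
for year, value in zip(future_years, forecast):
    print(f"{year}: ~{value:.0f} units (extrapolated)")
```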

Trend extrapolation is a fundamental tool in the world of forecasting, providing a simple approach to predicting future developments based on historical patterns. Its simplicity, cost-effectiveness and reliability for short-term forecasting make it a preferred choice for many professionals in various sectors.

However, its inherent assumptions and potential pitfalls require a balanced approach, often combining it with other forecasting methods to obtain a more complete picture. In an age where data drives decisions, trend extrapolation remains a crucial analytical tool. As with any method, though, its effective use requires understanding its strengths, its limitations and the context in which it is applied. In the ever-changing landscape of the modern world, a blend of historical data, advanced analytical tools and expert knowledge ensures that we approach the future with caution and foresight.

Expert Opinion and Delphi Method

A first questionnaire, containing a very general question, is sent to all the experts. They then return it completed to the research team, which constructs the following questionnaire based on the answers obtained in the first. The process continues until the team reaches significant consensus on the answers. Three questionnaires are generally sent.

A Delphi survey then generally proceeds through the following stages:

1. Form a team (steering group) to undertake and monitor the Delphi survey. The steering group also delimits the study area and identifies and formulates the problem under study

2. Experts (or key informants) are selected; the group of experts can range from 5 to 15 people

3. The 1st round questionnaire is developed and, after having been tested (to remove ambiguities, inaccuracies, etc.), sent to the interested parties

4. The completed questionnaires are processed and the responses analyzed: statistics are developed or content organized

5. A 2nd questionnaire is created with new questions that explore the theme in greater depth or verify earlier responses. The second questionnaire is used to clarify, modify or even justify the experts' initial judgment, and is then sent out in turn

6. Completed questionnaires are processed. Statistics or thematic categories are developed, based on the responses

7. When a consensus is reached or saturation is achieved, the process stops. Generally, this consensus is reached after three questionnaires.

8. Once consensus is found or saturation is reached, a final results report is prepared by the steering group and sent to the applicant.

The first questionnaire includes a rather general question, which allows experts to express themselves broadly. The purpose of the second questionnaire is to review the priorities identified in the first, to clarify them and to rank them in order of importance. These responses are then summarized and returned to the experts in a third questionnaire. They must then take a position on the reformulated answers. The process usually stops at this point, once a consensus around the questions to be developed has been found.
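
One common way to make "consensus" operational is to track the dispersion of the panel's answers between rounds. The sketch below uses the interquartile range (IQR) of the ratings; the ratings, the 1-to-9 scale and the 1.0 cutoff are all invented for illustration, not a fixed rule of the Delphi method.

```python
import statistics

def consensus_reached(ratings, iqr_threshold=1.0):
    """Check whether expert ratings have converged.

    Uses the interquartile range (IQR) as a simple dispersion measure:
    a small IQR suggests the panel is close to consensus. The threshold
    is an illustrative choice, not a fixed rule of the Delphi method.
    """
    quartiles = statistics.quantiles(ratings, n=4)
    iqr = quartiles[2] - quartiles[0]
    return iqr <= iqr_threshold, statistics.median(ratings), iqr

# Hypothetical ratings (1-9 scale) from three successive Delphi rounds.
rounds = [
    [2, 5, 9, 3, 7, 6, 8],   # round 1: widely dispersed opinions
    [4, 5, 7, 5, 6, 6, 7],   # round 2: opinions tightening
    [5, 6, 6, 5, 6, 6, 6],   # round 3: near consensus
]
for i, ratings in enumerate(rounds, start=1):
    reached, median, iqr = consensus_reached(ratings)
    print(f"Round {i}: median={median}, IQR={iqr:.1f}, consensus={reached}")
```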

One of the advantages of this method is being able to work with geographically dispersed experts without them influencing one another. It avoids the influence effects that can arise during face-to-face meetings of experts and guarantees anonymity.

Technology roadmap

A technology roadmap helps teams document the rationale for when, why, and how particular technology solutions can help the business move forward.

From a practical point of view, the roadmap should also describe which types of tools are worth spending money on and the most efficient way to introduce new systems and processes.

At a strategic level, the roadmap can also help you connect the technology the business needs with its long- and short-term business objectives.

A technology roadmap typically includes:

  • The objectives of the company or team

  • New system capabilities

  • Release plans for each tool

  • The steps to achieve them

  • The necessary resources

  • The necessary training

  • Potential risk factors or barriers to consider

  • Reviews of status reports

Several teams and stakeholders are typically involved in the technology roadmap, from IT to general managers, including product, project, operations, engineering, finance, sales, marketing and legal teams.

The roadmap allows everyone to align and understand how the different tasks and responsibilities related to implementation impact their productivity.
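
For teams that track their roadmap in code or export it to planning tools, the elements listed above map naturally onto a small record type. The sketch below is one possible representation; the class name, fields and sample values are all illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class RoadmapItem:
    """One entry on a technology roadmap (fields mirror the list above)."""
    objective: str                 # the company or team objective it supports
    capability: str                # the new system capability being introduced
    release: str                   # planned release or milestone
    resources: list[str] = field(default_factory=list)
    training: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)

# Illustrative roadmap with a single entry.
roadmap = [
    RoadmapItem(
        objective="Reduce order processing time",
        capability="Automated invoice matching",
        release="Q3",
        resources=["2 backend engineers"],
        training=["Finance team onboarding session"],
        risks=["Integration with the legacy ERP"],
    ),
]
for item in roadmap:
    print(f"{item.release}: {item.capability} -> {item.objective}")
```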

Cross Impact Analysis

Cross-impact analysis is the general name given to a family of techniques designed to assess changes in the probability of occurrence of a given set of events following the actual occurrence of one of them. The cross-impact model was introduced as a way to account for interactions between a set of forecasts, when these interactions may not have been considered when developing individual forecasts.

Cross-impact analysis is primarily used in foresight and technology forecasting studies. In the past, this tool was used as a simulation method and in combination with the Delphi method.

The steps described in this section are valid for different variants of cross-impact analysis; in particular, they follow the SMIC method (developed in France in 1974 by Duperrin and Godet), which relies on dedicated software.

The list of events to be studied can be established with the support of experts on the selected issue, or can come from other methods used to gather opinions, such as the Delphi method.

Design of the probability scale and definition of the time horizon: Defining a probability scale is necessary to translate experts' qualitative assessments of likelihood (e.g., very unlikely, likely, almost certain) into probabilities. The probability scale for cross-impact methods typically ranges from 0 (impossible event) to 1 (almost certain event).

Estimation of probabilities: In this step, the initial probability of occurrence of each event is estimated. Then, conditional probabilities in a cross-impact matrix are estimated in response to the following question: "If event i occurs, what is the new probability that event j will occur?" The entire cross-impact matrix is completed by asking this question for each pair of an event that occurred and an event that is impacted. The SMIC method aims to verify the consistency of expert estimates. It invites experts to answer the following questions on a grid:

• the probability of occurrence of each single event over a given time horizon
• the conditional probabilities of the events taken in pairs over the same time horizon:
• P(i|j): the probability of i if j occurs
• P(i|not j): the probability of i if j does not occur

Generation of scenarios: The result of applying a cross-impact model is a set of scenarios.
Regardless of how the probability assignment issue is resolved in specific cross-impact models, the usual procedure is to perform a Monte Carlo simulation (Martino and Chen, 1978). Each run of the model produces a synthetic future history, or scenario, which includes the occurrence of some events and the non-occurrence of others.

The model is therefore executed enough times (approximately 100 in the SMIC version) so that the set of output scenarios represents a statistically valid sample of the possible scenarios that the model could produce.

In a model with n events, 2^n possible scenarios are generated, each differing from all the others in the occurrence of at least one event. For example, if there are 10 events to consider, there are 2^10 = 1024 possible scenarios to estimate.
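
To make the Monte Carlo procedure concrete, here is a minimal sketch in Python. It is not the SMIC software: the three events, their initial probabilities and the conditional matrix are invented, and for brevity the update uses only P(j|i) for events that have occurred, omitting the P(i|not j) estimates that SMIC also collects.

```python
import random
from collections import Counter

# Three hypothetical events with illustrative initial probabilities.
initial_p = [0.5, 0.4, 0.7]

# cond[i][j]: probability of event j given that event i has occurred.
# Values are invented for illustration; the diagonal is unused.
cond = [
    [None, 0.6, 0.8],
    [0.7, None, 0.5],
    [0.4, 0.6, None],
]

def run_once(rng):
    """Draw one scenario: decide events one at a time in random order,
    updating the still-undecided probabilities from the conditional matrix."""
    p = list(initial_p)
    outcome = [None] * len(p)
    order = list(range(len(p)))
    rng.shuffle(order)
    for i in order:
        occurred = rng.random() < p[i]
        outcome[i] = occurred
        if occurred:  # simplification: replace priors with conditionals
            for j in range(len(p)):
                if outcome[j] is None:
                    p[j] = cond[i][j]
    return tuple(outcome)

rng = random.Random(42)
runs = 1000
counts = Counter(run_once(rng) for _ in range(runs))
for scenario, n in counts.most_common(4):
    label = "".join("1" if occurred else "0" for occurred in scenario)
    print(f"scenario {label}: {n / runs:.1%}")
```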

Once the cross-impact matrices have been calculated, a sensitivity analysis can be performed. This involves selecting an initial probability estimate or a conditional probability estimate that is subject to uncertainty, modifying that judgment, and re-running the matrix. If significant differences emerge between this analysis and the original one, the modified judgment evidently plays an important role, and it may be worthwhile to reconsider it.
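
Reusing the names from the sketch above, a sensitivity check of this kind might look as follows; the perturbed value and the 5% flagging threshold are illustrative choices.

```python
# Perturb one uncertain conditional probability and re-run the simulation.
cond[0][2] = 0.5   # was 0.8; the perturbed value is illustrative
counts_perturbed = Counter(run_once(rng) for _ in range(runs))

# Flag scenarios whose frequency shifted noticeably.
for scenario in counts:
    before = counts[scenario] / runs
    after = counts_perturbed[scenario] / runs
    if abs(before - after) > 0.05:
        label = "".join("1" if occurred else "0" for occurred in scenario)
        print(f"scenario {label}: {before:.1%} -> {after:.1%}")
```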

Bass Diffusion Model

The Bass model, or Bass diffusion model, was developed by Frank Bass. It is a simple differential equation that describes the process by which new products are adopted by a population. The model provides an explanation of how current and potential adopters of a new product interact. The basic principle of the model is that adopters can be classified as innovators or imitators, and that the speed and timing of adoption depend on their degree of innovation and the degree of imitation among adopters. 

The Bass model has been widely used in forecasting, especially in new product sales forecasting and technology forecasting. Mathematically, the basic Bass diffusion is a Riccati equation with constant coefficients equivalent to the Verhulst-Pearl Logistic growth.
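
In its hazard form, the model states that the adoption rate among remaining non-adopters at time t equals p + qF(t), where F(t) is the cumulative fraction of adopters, p the coefficient of innovation and q the coefficient of imitation. The sketch below evaluates the closed-form solution; the parameter values and market size are illustrative (estimates around p ≈ 0.03 and q ≈ 0.38 are often cited, but real values must be fitted to data).

```python
import math

def bass_cumulative(t, p, q):
    """Cumulative fraction of eventual adopters at time t (closed-form Bass model).

    p: coefficient of innovation (external influence)
    q: coefficient of imitation (internal influence)
    """
    decay = math.exp(-(p + q) * t)
    return (1 - decay) / (1 + (q / p) * decay)

# Illustrative parameters; real values must be estimated from sales data.
p, q, market_size = 0.03, 0.38, 1_000_000
for t in range(0, 16, 3):
    adopters = market_size * bass_cumulative(t, p, q)
    print(f"year {t:2d}: ~{adopters:,.0f} cumulative adopters")
```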

In 1969, Frank Bass published his paper on a new model for the growth of consumer durables. Previously, Everett Rogers had published *Diffusion of Innovations*, a highly influential work describing the various stages of product adoption. Bass contributed mathematical form to the concept. While the Rogers model describes the four stages of the product life cycle (Introduction, Growth, Maturity, Decline), the Bass model focuses on the first two (Introduction and Growth). Some extensions of the Bass model provide mathematical models for the last two (Maturity and Decline).

S-curve

If you look at all the technological revolutions of the last few decades, you will find that they all tend to follow a similar pattern, called the S-curve (a curve-fitting sketch follows the list below).

  • At the beginning, the technology proves expensive, bulky and poorly adopted. Improvements seem slow while the fundamental concepts are being worked out (think of early automobiles, for example: development began in 1672, and it wasn't until 1769 that the first steam-powered automobile capable of carrying humans was created).
  • Then there is usually a period of rapid innovation and feature expansion, driving mass adoption (this is when engines and cars became better and cheaper, more attractive than a horse and cart, and almost everyone bought one).
  • Then, as the market matures, significant improvements tend to slow down and there aren't many new customers left to sell to. We have reached the top of the S-curve, where the market is saturated.
  • Now a whole new technology, with its own S-curve, can take over. This can be seen in cars (the emergence of electric vehicles), planes and other inventions.
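
In practice, S-curve analysis often amounts to fitting a logistic curve to adoption data in order to estimate the saturation level and the inflection point. The sketch below uses SciPy's curve_fit; the adoption series and the starting guesses are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Logistic S-curve: L = saturation level, k = growth rate, t0 = midpoint."""
    return L / (1 + np.exp(-k * (t - t0)))

# Hypothetical adoption data (% of market) over 12 periods, illustrative only.
t = np.arange(12)
adoption = np.array([1, 2, 4, 8, 15, 27, 42, 58, 71, 80, 86, 89])

# Fit the three parameters; p0 supplies rough starting guesses.
(L, k, t0), _ = curve_fit(logistic, t, adoption, p0=[100, 1.0, 6])
print(f"estimated saturation level: {L:.0f}% (midpoint at t = {t0:.1f})")
```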

Agent-Based Modeling

Creating digital twins using multi-agent modeling is very complex. For an in-depth treatment, see the Multi-Agent course on our site complex-system-ai.com.
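
To give a flavor of the approach, here is a toy agent-based diffusion model (not taken from the course): each agent may adopt a technology spontaneously or after contact with existing adopters, and all parameter values are invented. Aggregate adoption in such a model typically traces the S-curve discussed in the previous section, a simple example of macro-level patterns emerging from micro-level agent rules.

```python
import random

def simulate(n_agents=1000, p_spontaneous=0.01, p_contact=0.05,
             n_contacts=10, steps=30, seed=1):
    """Toy agent-based diffusion: at each step, a non-adopter may adopt
    spontaneously or through contact with randomly met adopters.
    All parameter values are illustrative."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    history = []
    for _ in range(steps):
        snapshot = adopted[:]  # decisions are based on the previous step
        for i in range(n_agents):
            if snapshot[i]:
                continue
            if rng.random() < p_spontaneous:
                adopted[i] = True
                continue
            # Meet a few random agents; each adopter met may convince us.
            met = rng.sample(range(n_agents), n_contacts)
            if any(snapshot[j] and rng.random() < p_contact for j in met):
                adopted[i] = True
        history.append(sum(adopted))
    return history

adoption_curve = simulate()
print(adoption_curve[-1], "adopters after 30 steps")
```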