
Uncertainties in climate models

The main purpose of a model is to represent a complex real system with simple mathematical equations and/or computer simulations that are useful for (a) understanding the current state of the system and establishing causality, and (b) predicting the evolution of important variables into the future. Models thus make studying an environmental system easier, but not simpler. Direct observations are costly, when they are possible at all, and prediction based purely on statistical techniques may not exploit advances in the basic sciences. Consider the study of weather and climate. One method is to go out into the field, observe, photograph and film, and draw conclusions from direct observation; remote sensing is part of this observational approach, and the resulting data can feed further statistical and/or laboratory analyses. Even though observation is regarded as the best method for obtaining accurate information about the environment, it is not without problems. Setting quantum physics aside, measurements are affected by uncertainties attributed to human and instrument errors. At the microscopic level, on the other hand, the accuracy of measurements is limited by quantum uncertainty, and the act of measurement itself can have a profound impact on the outcome of the observation. While validation against observations is commonly used to express the accuracy of a model (e.g., a regional or global climate model), it should be treated with great caution, because both the models and the observations are far from reality. As the saying goes: all models are wrong, but some are useful.
Most studies on model quality control suggest comparing model results with observations as a first step (empirical accuracy) and with other models (robustness) to get an overview of a model's performance. The closer the models are to each other and to observations, the better their performance in representing reality. In cases where the models are used for adaptation, mitigation and resilience analyses, however, empirical accuracy and robustness may not be enough. A model can be developed that shows good empirical accuracy and robustness yet rests on an assumption that violates scientific laws and principles. For example, a model that violates the laws of thermodynamics at the macroscopic level may still produce data that fit observations; that is, one can get empirically accurate results for the wrong reasons. The predictive capacity of a model built on the wrong reasons is like a house built on sand: when the winds blow and the rains and floods come, the house falls down. A model built on the right reasons, supported by basic scientific principles, is like a house built on rock: when the extremes hit, it stays intact. So grounding in basic scientific principles is something one must look for at the foundation of a model, in addition to its empirical validation and robustness.
The scientific method for studying an environmental system is based on the conservation equations of energy, momentum and mass, that is, purely on the physics of energy, momentum and mass transfer. Classical physics based on Newtonian mechanics is mostly sufficient, since the speeds at which the interactions occur are much less than the speed of light. However, there are situations in which quantum and relativistic physics become necessary, for instance in the absorption, emission and scattering of energy and other interactions between matter and light. The result is a set of coupled partial differential equations for which analytical solutions are almost impossible to find. Numerical and statistical techniques, together with simplifying theories that reduce the differential equations, are the only ways of extracting information from what would otherwise be a meaningless (at least to the non-scientific community) series of equations. The uncertainty of the models starts with these numerical and simplifying techniques. Each method imparts errors that propagate at every step and eventually grow large enough to cause big uncertainty in the model. Depending on the numerical methods and the assumptions made to simplify the equations, the errors can grow exponentially and make the model's performance nose-dive after a few iterations. This is not a question of carelessness: errors and uncertainties are an inherent part of mathematical computation. The problem is that some scientists deny this fact, for various reasons: some out of inexperience in modelling, a few out of ignorance, and others citing the pressure to secure permanent academic positions through a high number of publications. Leaving that pressure aside for the moment, let us focus on the mathematical and computational uncertainties (the latter due to finite computer precision). Assume that we have an n-step model with a logical flow through units N1, N2, N3, ..., Nn, i.e.,

      N1->N2->N3->...->Nn steps 

Assume that each unit has a probability p of being correct and q = 1 - p of being wrong. Then the total probability of the whole model being correct is P = p**n, and of being wrong is Q = 1 - p**n. Assume further that the model has 100 steps and that the probability of being accurate at each step is p = 0.993. That is ~99.3% accuracy per step, which is what mostly misleads us into thinking the model is working fine. But that is not the case: the joint probability of the model being correct is P = p**n = (0.993)**100 ≈ 1/2. At the end of its steps, the model is as likely to be wrong as it is to be correct. No policy-maker or business wants models with such big uncertainty. So the question is how to minimise the uncertainties, if possible, or at least make the errors and scientific assumptions (however wrong they are) transparent. Transparency is subjective and is problematic in modern modelling culture because of competition among modellers under the pressure to "publish or perish". Scientific journals themselves claim to be under pressure because of their ratings, so they accept only positive results and reject negative ones. "Positive" and "negative" here mean that the outcome of a study is in favour of or against its hypotheses. I recently saw a top-level "scientific" journal stating that it accepts manuscripts only if they report statistically significant results. What kind of science is that? While statistical significance testing is good, it should not be a barrier to publication: a negative result simply means the proposed method does not lead to better results, and publishing it spares other scientists from repeating the work, so that science moves forward. Let us set aside the many problems of publication and focus on methods relevant to reducing the uncertainties in climate models.
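The compounding described above is easy to verify numerically. A minimal sketch, using the illustrative per-step accuracy and step count from the text (these are not properties of any real climate model):

```python
# Probability that an n-step model is correct end to end,
# assuming each step is independently correct with probability p.
def model_correct_probability(p: float, n: int) -> float:
    return p ** n

p = 0.993   # per-step accuracy (illustrative value from the text)
n = 100     # number of model steps

P = model_correct_probability(p, n)
Q = 1 - P   # probability the model is wrong somewhere along the chain

print(f"P(correct) = {P:.3f}, P(wrong) = {Q:.3f}")  # roughly a coin flip
```

Even a seemingly excellent 99.3% per-step accuracy compounds to about 0.495 over 100 steps, which is the "as accurate as it is wrong" situation described above.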
To minimise the uncertainties in climate models, scientists have developed methods that incorporate physical processes into model grids through parameterization schemes. These schemes evolve and are adjusted, becoming better every day. Among the parameterizations of concern in climate models are radiation (both longwave and shortwave), the planetary boundary layer (PBL), convection, cloud microphysics, soil, vegetation, urban and other processes. While models are getting better thanks to these methods, the focus on improving the basic sciences is, in my view, minimal. For example, what steps are being taken to solve the Navier-Stokes equations analytically, and how many groups are working on turbulence problems despite the mathematical difficulty? What alternative physical ideas do we have? While it is good to work on minimising the uncertainties in climate models through parameterizations, downscaling and accounting for other processes, it is equally necessary to focus on improving the basic sciences, leading to better solutions of the flow equations.
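The exponential error growth mentioned earlier can also be seen in a toy numerical experiment: forward Euler applied to the simple decay equation dy/dt = -k·y with too large a time step. The scheme, rate constant and step size below are illustrative choices, not taken from any climate model; they show how a poor numerical choice makes a quantity that should decay to zero blow up instead.

```python
import math

# Forward Euler for dy/dt = -k*y gives y_{n+1} = (1 - k*h) * y_n.
# The scheme is stable only if |1 - k*h| <= 1, i.e. h <= 2/k.
k = 10.0   # decay rate (illustrative)
h = 0.3    # time step, deliberately too large: 1 - k*h = -2
n = 20     # number of steps

y = 1.0
for _ in range(n):
    y = (1.0 - k * h) * y  # each step multiplies the error by -2

exact = math.exp(-k * h * n)  # true solution decays toward zero

print(f"Euler after {n} steps: {y:.1f}")     # grows like (-2)**20, about a million
print(f"Exact solution:       {exact:.2e}")  # essentially zero
```

Halving the time step to below 2/k restores stability; the point is that the uncertainty here comes entirely from the numerical method, not from the physics being modelled.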
