
Forecasting with the Adanic Method

Abstract

In the present study, we introduce a new quantitative forecasting technique called the Adanic method. It is the first of its kind in that it positions itself as a newcomer among the classical prediction methods known to date, which broadly divide into two families: statistical methods (an endogenous approach) and econometric methods (a purely exogenous approach). The new, so-called Adanic forecasting technique uses this double principle to allow a new forecasting approach. The Adanic approach and its demonstration on observed values are set out in the sections below.

Keywords: Forecasting, Econometrics, Statistics, Adanic Method


Conclusion

The Adanic forecasting method rests on the double principle of statistical and econometric forecasting. Estimating the future values of a series indirectly, through explanatory variables, is the core of the econometric forecast; statistics, by contrast, is content with the historical values of the series alone. Combining indirect and direct prediction is therefore the principle of Adanic forecasting. Experience has shown a positive result, in that the method closely tracked the historical observed values of the series over the years. On the other hand, it should be recalled that the forecasting indicator did not prove very favorable to Adanic forecasting. One must not imagine that any measure is absolutely reliable, because the future behavior of a series is primarily random, not deterministic. Could the average of two or three methods also be an option for predicting a given magnitude?
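
The paper gives no formulas here; as a minimal sketch of the combination the conclusion hints at, the Python code below averages an endogenous forecast (an AR(1) fitted on the series' own past) with an exogenous one (a regression on an explanatory variable). All data, and the next-period value of x, are assumptions for illustration.

import numpy as np

# Toy data: y is the series to forecast, x an assumed explanatory variable.
rng = np.random.default_rng(0)
x = rng.normal(size=40)
y = 2.0 + 0.8 * x + rng.normal(scale=0.3, size=40)

# Endogenous (statistical) forecast: AR(1) fitted by least squares on y's past.
a, b = np.polyfit(y[:-1], y[1:], 1)      # y_t ~ a*y_{t-1} + b
f_endo = a * y[-1] + b

# Exogenous (econometric) forecast: linear regression of y on x.
c, d = np.polyfit(x, y, 1)               # y ~ c*x + d
x_next = x[-1]                           # assumed next-period value of x
f_exo = c * x_next + d

# Combination as the conclusion suggests: a simple average of the two.
f_combined = 0.5 * (f_endo + f_exo)
print(f_endo, f_exo, f_combined)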


https://www.researchgate.net


Abdi-Basid ADAN


The Abdi-Basid Courses Institute

Error-Correction Model with Simultaneous Equations: Standard and Spatial Cases for Panel Data

Abstract:

For want of a leading, high-performance model in econometrics, the scientific world seems forever divided over several economic theories, among them the Environmental Kuznets Curve and the existence of conditional convergence of economies. This crisis afflicting the scientific world, particularly in economics, led me to think about a new, more complex econometric model that works around several shortcomings of the other models: the error-correction simultaneous-equation model for standard and spatial panel data.

Keywords: error-correction simultaneous-equation model for standard and spatial panel data, Environmental Kuznets Curve.


Conclusion 

Reducing a simultaneous-equation model to a single reduced equation would require cointegration in order to derive an error-correction simultaneous-equation model. This approach will certainly yield even more efficient estimators, making it possible to settle several contradictory articles. Moreover, introducing a spatial weights matrix into such a model will only strengthen its ability to account for spatial effects. Here, the error correction comes before the integration of the weights matrix. Although still under development, a spatial error-correction equation and an error-correction spatial equation would not be identical. Furthermore, accounting for more complex aspects in a high-performance model that overcomes the shortcomings of the other models will make it possible to propose a consensus in a perplexed scientific world. The model par excellence would be the error-correction simultaneous-equation model, spatial or standard, in panel data.
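
As a hedged illustration of the error-correction building block this model rests on, here is a single-equation Engle-Granger two-step ECM in Python with statsmodels, on simulated cointegrated series; the full simultaneous-equation and spatial extensions are beyond this sketch.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = np.cumsum(rng.normal(size=n))             # I(1) regressor
y = 0.5 * x + rng.normal(scale=0.5, size=n)   # cointegrated with x by construction

# Step 1: long-run (cointegrating) regression; keep the residual.
longrun = sm.OLS(y, sm.add_constant(x)).fit()
ect = longrun.resid                           # error-correction term

# Step 2: regress the difference of y on the difference of x
# and on the lagged error-correction term.
dy, dx = np.diff(y), np.diff(x)
X = sm.add_constant(np.column_stack([dx, ect[:-1]]))
ecm = sm.OLS(dy, X).fit()
print(ecm.params)   # the ect coefficient should be negative (adjustment back to equilibrium)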


https://www.researchgate.net/publication/324978703_Modele_a_Correction_d'Erreur_a_Equations_Simultanees_Cas_Standard_et_Spatial_en_Donnees_de_Panel

Qualitative Data Analysis (QDA)

From the econometric point of view:

At the econometric level, the analysis of a qualitative (categorical or nominal) variable proceeds in two ways: either it is treated as the endogenous variable of a qualitative-response model, or it is retained as an exogenous variable in the model. Added to this is the possibility of analyzing the link between two qualitative variables through the chi-square test of independence (not to be confused with the chi-square goodness-of-fit test). Qualitative econometric models are diverse, and some are far more complex than others. To cite a few, from the simplest to the most complex: the binomial Probit, the binomial Logit, the Gompit, the simple Tobit, the generalized Tobit of types I to V, the double-censored Tobit, the multiply censored (truncated or limited) Tobit, the Heckit, etc.
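
A minimal sketch of the two tools just named, assuming scipy and statsmodels are available (counts and data are illustrative): the chi-square test of independence on a contingency table, then a binomial Logit with the qualitative variable as endogenous.

import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Chi-square test of independence between two qualitative variables,
# computed from their contingency table (illustrative counts).
table = np.array([[30, 10],
                  [20, 40]])
chi2, pval, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p-value={pval:.4f}")

# Binomial Logit: a 0/1 endogenous variable explained by one regressor.
rng = np.random.default_rng(2)
x = rng.normal(size=300)
p = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))       # true probabilities (simulation only)
y = (rng.uniform(size=300) < p).astype(int)
logit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(logit.summary())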

The data structure, although dynamic in its temporal component, may also vary across observations, in which case it takes the form of panel data. Some principles of these models are explained below. The nature of the model at hand can be determined from the domain of definition of the function, and also from the modalities taken by the qualitative endogenous variable. These variables often come from surveys on appreciation, opinion, or satisfaction; others can be computed rather than observed as such. Studying them is as indispensable as studying quantitative variables in order to direct significant efforts in a specific direction.

A qualitative endogenous variable can certainly be explained by another, qualitative exogenous variable. In such a case, one modality of the explanatory variable must be set aside to serve as the reference when interpreting the results obtained. The qualitative endogenous variable, for its part, is expressed in terms of probability. To achieve this, a continuous latent variable is introduced to facilitate computing the probability of each modality of the endogenous variable; a higher probability is synonymous with greater odds. It is worth noting, moreover, that the solution converges after iterations in both the concave and convex cases. Beyond the significance of each predictor, the overall significance, or adequacy, of the model measures the quality of its design. Thus the coefficient-of-determination indicators of McFadden and others, called pseudo R-squared, judge the quality of the model's fit, in other words its explanatory power, the share of the fluctuation explained by the variables retained in the model.
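
As a check on this fit measure, a minimal sketch (assuming statsmodels; data simulated for illustration) computing McFadden's pseudo R-squared both by hand and from the fitted model:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=500)
p = 1 / (1 + np.exp(-(0.2 + 1.0 * x)))
y = (rng.uniform(size=500) < p).astype(int)

res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
null = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0)   # intercept-only model

# McFadden pseudo R^2 = 1 - llf / ll_null
print(1 - res.llf / null.llf)   # computed by hand
print(res.prsquared)            # same value, reported by statsmodels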

The remaining percentage, usually less than 50%, corresponds to relevant variables not taken into account. The Hosmer-Lemeshow test points in the same direction on goodness of fit. It is through the marginal effects, however, that we learn more about the impact of each variable introduced into the model: the estimated coefficients give an idea of the nature of the influence of each exogenous variable on the endogenous one. In principle, the modalities of an explanatory variable are recognized as difficult to interpret directly; it is then better to set one modality as the reference and to interpret the others by comparison with it. In most cases, the choice among the Logit, Gompit (extreme value) and Probit specifications is made on the predictive power of the model, and the best of them is retained for the final modeling.
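
A hedged sketch of the two practices just described, assuming statsmodels: average marginal effects via get_margeff, and a choice between Logit and Probit by in-sample predictive accuracy (the extreme-value Gompit is left out of this sketch; data are simulated).

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.normal(size=400)
X = sm.add_constant(x)
p = 1 / (1 + np.exp(-(0.3 + 1.2 * x)))
y = (rng.uniform(size=400) < p).astype(int)

for name, model in [("Logit", sm.Logit), ("Probit", sm.Probit)]:
    res = model(y, X).fit(disp=0)
    accuracy = np.mean((res.predict(X) > 0.5) == y)   # in-sample predictive power
    print(name, "accuracy:", round(accuracy, 3))
    print(res.get_margeff().summary())                # average marginal effects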


From a statistical point of view:

In statistics, there is a variety of univariate and multivariate analysis procedures, including the family of factorial methods: correspondence analysis, multiple correspondence analysis, multiple factor analysis, and factor analysis of mixed data. Some of these methods can handle both qualitative and quantitative variables, in which case we speak of a mixed analysis of the variables. In addition, other techniques study the link between nominal and mixed variables: for example, the chi-square test, Cramér's V coefficient, the correlation ratio, and the analysis of variance (ANOVA). Analyzing qualitative variables statistically amounts to the same operation seen in econometrics, namely turning each modality into a new variable; this is why some studies speak of a complete disjunctive table or a Burt table. The name "correspondence" refers to the link between nominal variables. The search for axes that express more of the meaning of the data is the common denominator of all factorial methods; it is in multiple correspondence analysis, in particular, that the loss of information is massive, so certain results must be taken with care. Correspondence analysis, as its name indicates, highlights the correspondences between two qualitative variables, in other words the link between their modalities, and especially identifies whether that link is attractive, repulsive, or independent. In this sense, it is an exploratory, descriptive method, established by Benzécri in the 1970s.
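
A minimal sketch of Cramér's V, the chi-square-based association measure mentioned above, computed from an illustrative contingency table with scipy:

import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for a contingency table of two nominal variables."""
    chi2 = chi2_contingency(table)[0]
    n = table.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))

table = np.array([[25, 15, 10],
                  [10, 30, 10]])
print(cramers_v(table))   # 0 = independence, 1 = perfect association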

The idea is to translate the proximity of modalities as a link between the variables, and especially to read it as an identical profile for the individuals they describe. Multiple correspondence analysis is a generalization of correspondence analysis, which is itself a double principal component analysis: on the row profiles on the one hand, and on the column profiles of a contingency table on the other. Another distinctive aspect is that in simple correspondence analysis the raw table is not studied directly, since that could be read as mere differences between rows and columns. When interpreting, it is also important to treat low-frequency marginal modalities with caution, lest they distort the contributions of the others. Discriminant analysis, for its part, requires one qualitative variable together with several quantitative variables.

The principle is to build a linear combination of the quantitative variables that best separates the population under study. The discriminant function can be obtained using multiple linear regression. Given a threshold and the modalities of the qualitative variable, one identifies the misclassified individuals; the ideal is to keep their number to a minimum. To that end, one should consider including other variables in the regression and repeating the procedure over several iterations. The particularity of discriminant analysis is that, beyond its exploratory function, it is a decision-making method.
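
A minimal discriminant-analysis sketch, assuming scikit-learn is available (data simulated): one qualitative variable, the class, against several quantitative variables, with a count of the misclassified individuals as described.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
# Two groups of individuals described by 3 quantitative variables.
X = np.vstack([rng.normal(0.0, 1.0, size=(60, 3)),
               rng.normal(1.5, 1.0, size=(60, 3))])
y = np.repeat([0, 1], 60)                 # the qualitative variable (2 modalities)

lda = LinearDiscriminantAnalysis().fit(X, y)
misplaced = np.sum(lda.predict(X) != y)   # the "misplaced" individuals
print("misclassified:", misplaced, "of", len(y))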


From the point of view of hierarchical classification:

From the point of view of ascending or descending hierarchical classification, the implementation relies on a multitude of distance and linkage algorithms, among them the Manhattan distance, weighted distances, Ward's method, etc. The idea is to reduce the number of classes iteratively by merging, at each step, the classes that are most similar, i.e. whose dissimilarity is minimal according to the aggregation index; in other words, we try to minimize the intra-class variance. This partitioning derives from the distance matrix in a space of dimension equal to the number of variables. Identical profiles reveal individuals with the same preference for a given choice, or the same profile on the characteristic sought.
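
As a minimal sketch of this agglomeration, assuming scipy (data simulated): two of the cited options, Manhattan distance with average linkage, and Ward's method, which minimizes the growth of intra-class variance at each merge.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 4))              # 30 individuals in a 4-variable space

# Ward minimizes the increase in intra-class variance at each merge
# (it is defined on Euclidean distances).
Z_ward = linkage(X, method="ward")
# Manhattan (city-block) distance with average-linkage aggregation.
Z_manh = linkage(X, method="average", metric="cityblock")

print(fcluster(Z_ward, t=3, criterion="maxclust"))   # cut into a 3-class partition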



Abdi-Basid ADAN

The concept of Darwin, a partial and biased theory

 



Charles Darwin, English naturalist and palaeontologist, published "On the Origin of Species" in 1859. Although an independent thinker, he had to borrow the hypothesis put forward by Jean-Baptiste de Lamarck some 50 years earlier. In fact, almost all of his research rests on three major points: the unity of species, their diversity, and natural selection. In truth, the idea that Darwin puts forward in his argument is simply illusory, because in his formulation he suggested that excavations for remains of species be pursued constantly in order to support his postulates.

It is very damaging to see the birth of a Darwinist current that persists with determination, educating new generations in values that do not conform to the principles of scientificity and presenting its claims as established facts. In spite of this, I will demonstrate in this article the unfoundedness of Darwin's theories on the human lineage in general.

From the point of view of the unity of living beings, nature does show a relative similarity between many living beings, more at the anatomical level than at the morphological, chromosomal, or molecular levels, for species of the same environment as well as for those of distinct environments. This should not, under any circumstances, lead us to the idea of a common ancestor. The evolution of one "origin" being, or of several, as Darwinism presupposes, cannot explain the peopling of the Earth by billions upon billions of living species. That reading is a misinterpretation of nature and its evolution. In principle, what species have in common is irrefutable, but it must be seen from the angle of belonging to the same Supreme Being, not to the same common ancestor. It is astonishing to explain an infinite multiplicity of beings, knowing that some remain undiscovered as of 2018, as the drift of one or more common ancestors. On the principle of science, phylogeny is not in line with the evolution of species in a much more global dimension; it is partial and biased in its concept. Time would not be the mastermind of the "magic" differentiation of these species. In the excavation of skeletal remains, discoveries constantly call into question even a relative acceptance of Darwin's principles. In 2004, an Aramco exploration team is said to have discovered in Saudi Arabia a human skeleton of outsized proportions. The finding of remains of giant men would be incontestable proof of the baselessness of Darwinist thought.

With regard to the diversity of living things, it is quite clear that nature is balanced by this principle: diversity is necessary and unconditional to allow the longevity of life on Earth. In this sense, all species intertwine with one another. This diversity cannot be denounced; it is welcomed without exception. Again, it must be remembered that plants are living beings, and so is the Earth. I cannot understand how Darwin could have thought otherwise of these beings. Descartes, in his explanation of the origin of error, observes that haste pushes people into mistakes; Darwin's attitude is the perfect example. This diversity, in fact, derives from the ingenuity of a Supreme Being, unequivocal and unambiguous. Finally, natural selection, although defined as the equilibrium that the environment imposes on species over time, is quite complex to understand in its full dimension. The reproduction of some species works in favor of others; the disappearance of others is advantageous for future new species, which can derive from minority strains. The complex environment, predator and prey, governs this complex principle par excellence.

In conclusion, to think that unity gives way to the diversity of species is illusory. Although natural selection is the driving force behind the evolution of species over time, it is not what determines the passage from the unity of species to their diversity. The mystery of the unity of living beings remains; in my opinion, it is the manifestation of a Supreme Being that defines it. The diversity of beings is an essential requirement for the stability and continuity of life on Earth. The role of natural selection should be understood as the interaction between environment and species. From these points of view, there is no common ancestor linking the human lineage with the apes; similarity alone cannot justify it. That is the utopian drift of the Darwinist currents.

The perception of the notion of time


A prodigious question: does time really exist? Might the reality before us hide a truth as intimate as it is concealed, for the sake of a revolutionary discovery for all time? Let us go back to the 17th century. One evening in 1666, under a tree in a clear sky, Newton watched an apple fall and asked: why should the Moon not fall in the same way? Thus classical physics was born. It is remarkable and amazing to see that nature, on that evening of 1666, passed its secret to Newton. Further on, in 1905, Einstein extended classical physics to the infinitely small, to the very heart of matter, with special relativity (valid in a frame of reference not subject to fictitious accelerations, the Galilean case) and, on the cosmic scale, with his general relativity (the existence of the curvature of space-time). Here we are still within the trend of explaining nature, in two different dimensions. The most troubling question for Einstein was light, and it is from this interrogation that the door to his new physics, quantum physics, opened. In truth, he was the only one to understand that time could slow down from one point to another. His explanation of the photoelectric effect, the absorption of photons (quanta of light) by matter when it interacts with light, earned him the Nobel Prize in Physics in 1921. An extraordinary physics, no longer superficial as it was with Newton, but reaching this time deep into nature to explain the wonderful laws that govern it. From this point of view, it is clear and unequivocal that science in general is man's means to better understand the nature that conditions every element of life. This is undoubtedly the key to better understanding the main condition of life; in this sense, there is no science without nature.

What does time depend on? The paradox one might raise, I think, is this: from the Earth's revolution and its rotation, the count of 24 hours and of the 365 days of the year is concrete, but each derives separately. If the 24 hours and the 365 days are not products of one and the same movement, we have a plain discordance between the short term and the long term. In this sense, the frame of reference, position, and mobility play an important role in determining time. Time is therefore very relative, not absolute.


To discover the secret of the principle of time, we will first have to pay close attention to nature, because nature can open for us a third door onto the mystery of physics. Time, by definition, is nothing more than the interaction between life and nature through the environment; it requires an environment conducive to life under the order of nature. Knowing perfectly well that life is not continuous, all the stakeholders of life are recycled and do not evolve along an infinite trajectory; each is merely renewed in a new dimension, exactly as the Laplace-Gauss bell curve suggests. Some species live longer than others. Time is not only unevenly distributed across every frame of reference; there are also regions where time is even more anomalous. Why this inequality, or super-inequality, of time in space? One could entertain the idea that nature, although founded on the principle of balance, adapts time in order to maintain its foundations (stability, the cycle of diversity, balance, etc.).

Obviously, time does not exist for certain entities. There is a primary component to which time does not apply, because the element in question is related neither to nature nor to life: it is, no doubt, the vital breath, in other words the soul. To survive, it has no need to drink or eat; it is the body that determines such needs, not the soul. The soul therefore stands outside the principle of life; nature does not apply to it, and from this point of view, neither does time. Many physicists ask themselves whether time exists at all. To tell the truth, time does not exist for certain entities, while for others it exists in a relative way. More surprising still: the grandfather of a family and his grandson are, spiritually, the same age; what differs is the age of their organisms.

In a word, time is the hold of nature over life. It is relative to the frame of reference chosen and applies only to the elements that contribute to life: the solar system and its various forms, species with their different life expectancies, the cycle of the environment, and time and space themselves are all concerned.

Spatial Autocorrelation Patterns and Inequality Analysis

Objective Description

This analysis examines spatial autocorrelation patterns in synthetic raster data using the Getis-Ord Gi* statistic to identify hotspots and coldspots, and measures spatial inequality with the Gini index. It involves generating a synthetic raster (representing NDVI), visualizing it as an NDVI map, creating a shapefile, extracting raster values, computing spatial neighbors, calculating Gi* statistics, classifying hotspots, visualizing hotspots with a thematic map, and exporting outputs as shapefiles and rasters. The analysis highlights spatial clustering and inequality in a simulated dataset.
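
The Gi* workflow itself lives in the linked notebook; as a minimal sketch of the inequality half of the analysis, here is a Gini index over synthetic raster values in plain numpy (the raster and its distribution are illustrative, not the study's data).

import numpy as np

def gini(values):
    """Gini index of a 1-D array of non-negative values (0 = perfect equality)."""
    v = np.sort(np.asarray(values, dtype=float).ravel())
    n = v.size
    i = np.arange(1, n + 1)
    return (2 * np.sum(i * v) / (n * np.sum(v))) - (n + 1) / n

rng = np.random.default_rng(7)
raster = rng.gamma(shape=2.0, scale=0.2, size=(50, 50))   # synthetic NDVI-like values
print(gini(raster))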


The detailed methodology and results can be accessed through this link: https://rpubs.com/abdibasidadan/spatialautocorrelation


Abdi-Basid ADAN, 2025

The Abdi-Basid Courses Institute

Multi-Objective Optimization Analysis with NSGA-II

Objective Description

This analysis employs the Non-Dominated Sorting Genetic Algorithm II (NSGA-II), a robust multi-objective evolutionary algorithm, to address complex optimization problems. The study includes two scientifically relevant case studies:
1. Car Example: Optimizes fuel consumption and maximum speed based on vehicle weight and power. This is critical in automotive engineering for designing sustainable vehicles, balancing environmental impact (fuel efficiency) with performance (speed), which influences market competitiveness and regulatory compliance.
2. DRASTIC Index Example: Optimizes weights of the DRASTIC parameters (Depth to water, net Recharge, Aquifer media, Soil media, Topography, Impact of vadose zone, and Conductivity) to maximize correlation with nitrate concentration (NO3) while minimizing Root Mean Square Error (RMSE). This enhances groundwater vulnerability assessment, providing valuable insights for environmental management and policy-making in regions prone to contamination.
The analysis generates Pareto fronts to visualize trade-offs, computes optimal solutions, and exports results as CSV files for further scientific evaluation.
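
The analysis itself uses NSGA-II; as a minimal sketch of the concept at its core, the code below extracts the first non-dominated (Pareto) front from random candidate car designs, with toy fuel and speed models that are purely illustrative.

import numpy as np

def non_dominated(F):
    """Indices of the first Pareto front (all columns to be minimized)."""
    return [i for i in range(len(F))
            if not np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))]

rng = np.random.default_rng(8)
weight = rng.uniform(800, 2000, size=200)     # kg (toy decision variable)
power  = rng.uniform(50, 300, size=200)       # kW (toy decision variable)
fuel   = 2 + 0.004 * weight + 0.01 * power    # L/100 km, to minimize (toy model)
speed  = 80 + 0.6 * power - 0.02 * weight     # km/h, to maximize (toy model)

# Maximizing speed is recast as minimizing its negative.
F = np.column_stack([fuel, -speed])
front = non_dominated(F)
print(len(front), "non-dominated designs out of", len(F))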





The Abdi-Basid Courses Institute

Hierarchical Cluster Analysis with Dendrogram

Objective Description

This analysis applies hierarchical clustering to a simulated dataset of 50 observations with 5 variables using Ward’s method and Euclidean distance, visualizes the results with a dendrogram, and extracts cluster assignments.
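
A minimal sketch matching this description, assuming scipy and matplotlib are available: 50 simulated observations on 5 variables, Ward linkage on Euclidean distances, the dendrogram, and extracted cluster assignments.

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

rng = np.random.default_rng(9)
X = rng.normal(size=(50, 5))                 # 50 observations, 5 variables

Z = linkage(X, method="ward")                # Ward's method on Euclidean distances
dendrogram(Z)                                # hierarchical tree of the merges
plt.title("Ward dendrogram")
plt.show()

labels = fcluster(Z, t=3, criterion="maxclust")   # cluster assignments (3 classes)
print(labels)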


https://rpubs.com/abdibasidadan/Dendrogram


The Abdi-Basid Courses Institute

Monte Carlo Simulation and Bootstrap for Estimating the Cost of Wind Energy

Objective Description

This analysis estimates the Levelized Cost of Wind Energy (LCOE) using Monte Carlo simulation to account for uncertainties in key parameters (investment cost, operation and maintenance cost, interest rate, capacity factor, and lifetime) and bootstrap methods to compute confidence intervals for the mean LCOE. It includes wind speed modeling, vertical extrapolation, power density calculation, Weibull distribution fitting, and correlation analysis.
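
A minimal sketch of the two estimation steps named above (the wind-speed and Weibull parts are omitted), with illustrative distributions for the uncertain inputs and a standard annualized LCOE formula; none of the numbers come from the study.

import numpy as np

rng = np.random.default_rng(10)
N = 10_000

# Uncertain inputs (illustrative distributions, per kW of installed capacity).
invest = rng.normal(1500, 150, N)        # investment cost, $/kW
om     = rng.normal(40, 5, N)            # O&M cost, $/kW/year
rate   = rng.uniform(0.04, 0.08, N)      # interest rate
cf     = rng.uniform(0.25, 0.40, N)      # capacity factor
life   = rng.integers(18, 26, N)         # lifetime, years

# Capital recovery factor annualizes the investment over the lifetime.
crf  = rate * (1 + rate) ** life / ((1 + rate) ** life - 1)
lcoe = (crf * invest + om) / (cf * 8760)  # $/kWh (Monte Carlo sample)

# Bootstrap 95% confidence interval for the mean LCOE.
means = np.array([rng.choice(lcoe, size=N, replace=True).mean()
                  for _ in range(2000)])
lo, hi = np.percentile(means, [2.5, 97.5])
print(f"mean LCOE = {lcoe.mean():.4f} $/kWh, 95% CI [{lo:.4f}, {hi:.4f}]")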


https://rpubs.com/abdibasidadan



The Abdi-Basid Courses Institute
