-
ANDERSON MORAIS DE SOUZA
-
BOOLEAN CHAOS: IN SEARCH OF SYNCHRONIZATION
-
Date: 16 Dec 2021
-
Time: 09:00
-
The study of dynamical systems makes it possible to describe an enormous variety of phenomena. Nonlinear dynamical systems in particular have stood out in mathematical modeling for their rich dynamical behavior and many peculiarities. It is in this context that this work employs mathematical and computational tools based on concepts from the analysis of nonlinear systems, applied to the study of the Lorenz system. Building on the analysis of phase planes, phase spaces, numerical simulations and the behaviors this system presents, we analyze the dynamics of Autonomous Boolean Networks, in which the future state of the network is determined by the history of switching events, previous interactions and delays along the links. We also study the non-ideal behavior of logic gates: the rejection of short pulses and the memory effect known as "degradation" play a fundamental role in understanding the dynamics and the chaos presented by the network under discussion. Knowledge of the factors that can generate chaotic behavior in Autonomous Boolean Networks with time delays will then allow the study of synchronization between such networks.
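The kind of dynamics discussed above can be illustrated with a small discrete-time sketch of an autonomous Boolean network with heterogeneous integer delays on the links. The three-node XOR ring, the delay values and the initial state below are illustrative choices, not the networks studied in this work, and the short-pulse rejection and degradation effects are omitted for brevity.

```python
from collections import deque

def simulate_abn(funcs, delays, init, steps):
    """Discrete-time sketch of an autonomous Boolean network.

    funcs[i]  - Boolean function applied to the delayed inputs of node i
    delays[i] - list of (source_node, delay) pairs feeding node i
    init      - initial Boolean state of each node (held for all past times)
    """
    n = len(init)
    max_d = max(d for dl in delays for _, d in dl)
    # history[i] keeps the last max_d+1 states of node i (most recent last)
    history = [deque([init[i]] * (max_d + 1), maxlen=max_d + 1) for i in range(n)]
    trajectory = [tuple(init)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # state of source j a delay of d steps ago
            inputs = [history[j][-1 - d] for j, d in delays[i]]
            new.append(funcs[i](*inputs))
        for i in range(n):
            history[i].append(new[i])
        trajectory.append(tuple(new))
    return trajectory

# Three-node XOR ring with incommensurate delays (illustrative choice);
# real gates also reject short pulses, which is not modeled here.
xor = lambda a, b: a ^ b
funcs = [xor, xor, xor]
delays = [[(1, 1), (2, 2)], [(2, 1), (0, 3)], [(0, 1), (1, 2)]]
traj = simulate_abn(funcs, delays, [1, 0, 0], 100)
```

Richer delay structures and the pulse-rejection filter are what push such networks toward the chaotic regimes mentioned above.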
|
-
RAFAELA SOUZA MORAIS
-
Exploring the parameter space for the RM1 semiempirical method by using nonlinear optimization
-
Date: 15 Dec 2021
-
Time: 15:00
-
Molecular modeling allows us to calculate properties of molecular compounds and is used mainly in the discovery of new drugs or in the improvement of existing prototypes. There are several ways to generate these models, ab initio methods being the most accurate but also extremely slow computationally. As an alternative, semiempirical methods were proposed, which use approximations to obtain results far more efficiently, but with an accuracy that varies considerably depending on the approach and on the chosen or adjusted parameters. One such method is RM1 (Recife Model 1), created in 2006 as a reparameterization of AM1 (Austin Model 1), which was created in 1985 and was very successful. RM1 achieved good results, but it is important to assess whether the chosen parameterization was the best possible. In this work, the parameter space of the RM1 method was explored using a variation of the nonlinear optimization algorithm DFP started from different points, evaluating whether a substantial improvement in accuracy can be obtained by reparameterization alone, or whether the structure of the method must be modified to achieve this objective. The starting points were parameterizations found by a previous work using genetic algorithms, which offered slightly better results than RM1. The optimization in this work did not find better points than the genetic algorithm, perhaps because the cost function used in the minimization was not adequate. To improve the results it would be necessary to adapt the cost function, which is possible with a procedure presented as a suggestion for future work.
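The DFP quasi-Newton update at the core of the optimization can be sketched in pure Python. The quadratic test function, the Armijo backtracking line search and all numeric settings below are illustrative; this is neither the modified DFP variant nor the RM1 cost function used in the work.

```python
def dfp_minimize(f, grad, x0, iters=200):
    """DFP quasi-Newton minimization with Armijo backtracking (sketch)."""
    n = len(x0)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # inverse-Hessian estimate
    x, g = list(x0), grad(x0)
    for _ in range(iters):
        d = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        fx, gd, t = f(x), sum(g[i] * d[i] for i in range(n)), 1.0
        # backtrack until the Armijo sufficient-decrease condition holds
        while f([x[i] + t * d[i] for i in range(n)]) > fx + 1e-4 * t * gd and t > 1e-12:
            t *= 0.5
        x_new = [x[i] + t * d[i] for i in range(n)]
        g_new = grad(x_new)
        s = [x_new[i] - x[i] for i in range(n)]
        yv = [g_new[i] - g[i] for i in range(n)]
        sy = sum(s[i] * yv[i] for i in range(n))
        Hy = [sum(H[i][j] * yv[j] for j in range(n)) for i in range(n)]
        yHy = sum(yv[i] * Hy[i] for i in range(n))
        if sy > 1e-12 and yHy > 1e-12:  # DFP rank-two update of H
            for i in range(n):
                for j in range(n):
                    H[i][j] += s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
        x, g = x_new, g_new
    return x

# Illustrative run on a simple convex quadratic
x_min = dfp_minimize(lambda x: x[0] ** 2 + 10 * x[1] ** 2,
                     lambda x: [2 * x[0], 20 * x[1]], [3.0, 1.0])
```

Exploring a parameter space then amounts to restarting such a routine from many starting points, as the abstract describes.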
|
-
EMERSON CHARLES DO NASCIMENTO MARREIROS
-
COMPUTATIONAL CALCULATION OF THE BOUNDARY OF THE VORONOI DIAGRAM IN THE PLANE WITH TWO SITES AND ONE CIRCULAR OBSTACLE
-
Date: 10 Dec 2021
-
Time: 09:00
-
The objective of this work is to numerically compute the boundary of the Voronoi diagram for two generating sites on the y-axis, with the center of a circular obstacle on the x-axis. The task is to build a graph structure to solve the point-membership problem, where each set is the region of points in the plane that are closest to one of the sites.
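The distance involved can be sketched numerically: when the two points "see" each other the distance is Euclidean, and when the connecting segment crosses the circular obstacle the shortest path follows two tangents and an arc. The helper below assumes both points lie outside the disk and takes the shorter wrap direction; it is a sketch of the geometry, not the graph structure built in the work.

```python
import math

def geodesic(p, q, c, r):
    """Shortest-path length from p to q in the plane avoiding the open disk (c, r).

    Assumes p and q lie outside the disk; uses a tangent-arc-tangent path
    (shorter wrap side) when the straight segment is blocked.
    """
    px, py = p[0] - c[0], p[1] - c[1]
    qx, qy = q[0] - c[0], q[1] - c[1]
    dx, dy = qx - px, qy - py
    # distance from the circle center to the segment pq
    t = max(0.0, min(1.0, -(px * dx + py * dy) / (dx * dx + dy * dy)))
    ex, ey = px + t * dx, py + t * dy
    if math.hypot(ex, ey) >= r:          # segment misses the disk
        return math.dist(p, q)
    d1, d2 = math.hypot(px, py), math.hypot(qx, qy)
    t1 = math.sqrt(d1 * d1 - r * r)      # tangent length from p
    t2 = math.sqrt(d2 * d2 - r * r)      # tangent length from q
    ang = math.acos((px * qx + py * qy) / (d1 * d2))
    arc = r * (ang - math.acos(r / d1) - math.acos(r / d2))
    return t1 + t2 + arc

def closer_site(p, s1, s2, c, r):
    """Which of the two sites is geodesically closer to p (boundary where equal)."""
    return 0 if geodesic(p, s1, c, r) <= geodesic(p, s2, c, r) else 1
```

Sampling `closer_site` on a grid and tracing where the two distances coincide gives a numerical approximation of the Voronoi boundary described in the abstract.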
|
-
JHONATAN BRUNNO FERREIRA DA SILVA LINO
-
Methods for optimizing water distribution networks
-
Date: 29 Oct 2021
-
Time: 10:00
-
This work aims to present a computational model for the optimization of water distribution networks that minimizes their cost. The model will be developed in Python using the minimize routine of the scipy.optimize package, and the network will be created with the Epanet program. In addition, the results produced by Lenhsnet (the network-optimization module of Epanet) will be obtained for comparison. Simulations will be carried out with the model, varying the optimization method used, to verify which one gives better results with respect to convergence and processing time.
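The core trade-off can be seen in a toy version of the problem: for a single pipe, choose the cheapest diameter whose Hazen-Williams head loss stays within the available head. The cost table, flow and pipe data below are invented for illustration; the model described above optimizes an entire Epanet network via scipy.optimize.

```python
def hazen_williams_headloss(L, Q, C, D):
    """Head loss (m) in a single pipe by the Hazen-Williams formula (SI units)."""
    return 10.67 * L * Q ** 1.852 / (C ** 1.852 * D ** 4.87)

def cheapest_diameter(L, Q, C, h_max, options):
    """Pick the cheapest (diameter_m, cost_per_m) option whose head loss fits h_max."""
    feasible = [(cost, D) for D, cost in options
                if hazen_williams_headloss(L, Q, C, D) <= h_max]
    if not feasible:
        raise ValueError("no diameter meets the head-loss limit")
    cost, D = min(feasible)
    return D, cost * L

# Illustrative data: 1000 m pipe, 50 L/s, C = 130, 20 m of head available
options = [(0.10, 20.0), (0.15, 35.0), (0.20, 55.0), (0.25, 80.0)]
D, total_cost = cheapest_diameter(1000.0, 0.05, 130.0, 20.0, options)
```

A full network couples many such constraints through the hydraulic solver, which is why a general-purpose optimizer is used instead of enumeration.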
|
-
MELQUISEDEC ANSELMO DA COSTA AZEVÊDO
-
FORECAST AND ANALYSIS OF THE ICMS IN PARAÍBA
-
Date: 29 Jul 2021
-
Time: 13:00
-
The desire to anticipate events is common throughout the ages, treating something as likely based on clues, be they scientific or popular beliefs. In the economic context, forecasts are necessary to plan actions in advance and to decide on the main interventions and their likely consequences: if the budget is overestimated, over-spending follows, which may lead to a deficit or to contingency measures (the temporary reduction of expenditure to reach the fiscal target); if resources are underestimated, urgent and/or extremely important actions may be hindered. This dissertation therefore presents a methodology for modeling, forecasting and analyzing the collection of the Tax on the Circulation of Goods and on the Provision of Interstate and Intermunicipal Transport and Communication Services of the State of Paraíba (ICMS-PB), which represents more than 80% of the State's tax revenue. Data were collected from January 1997 to April 2021 and truncated at distinct dates, generating four series, in order to verify whether the dynamics of the series vary. For the four series we use the Holt-Winters exponential smoothing algorithms with additive and multiplicative seasonality, and the Box-Jenkins seasonal autoregressive integrated moving average models (SARIMA), as well as SARIMAX with a dummy variable referring to the COVID-19 pandemic and with trend and seasonality as regressors, comparing the models among themselves and against the actual ICMS values of Paraíba. Finally, considering the mean squared error and the total error obtained from the relationship between collections and forecasts, the models that generated the best forecasts for each series were selected, displaying graphs with the actual values, the forecasts and the 95% confidence interval, and verifying the circumstances in which the models best fit to predict the ICMS of Paraíba.
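The additive Holt-Winters recursion mentioned above can be sketched in a few lines of pure Python with a naive two-season initialization; the smoothing constants are illustrative, and the SARIMA/SARIMAX models are not reproduced here.

```python
def holt_winters_additive(y, m, alpha, beta, gamma, horizon):
    """Additive Holt-Winters smoothing with a naive two-season initialization.

    y: observed series, m: season length, alpha/beta/gamma: smoothing constants.
    Returns forecasts for the next `horizon` periods.
    """
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    seasonal = [v - level for v in y[:m]]
    for t in range(m, len(y)):
        s = seasonal[t % m]
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        seasonal[t % m] = gamma * (y[t] - new_level) + (1 - gamma) * s
        level = new_level
    return [level + (h + 1) * trend + seasonal[(len(y) + h) % m]
            for h in range(horizon)]

# Degenerate illustration: a constant series is forecast as that constant
forecasts = holt_winters_additive([5.0] * 12, 4, 0.3, 0.1, 0.2, 6)
```

In practice the smoothing constants are chosen by minimizing an in-sample error measure, in the spirit of the model comparison described above.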
|
-
NIVALDO ANTÔNIO DE SOUZA SILVA
-
FORECAST OF WATER CONSUMPTION AT THE GRAVATÁ-PB WATER TREATMENT PLANT
-
Date: 27 Jul 2021
-
Time: 13:00
-
The present thesis aims to predict water consumption at the Water Treatment Plant of Gravatá-PB. In this research we review the water supply system and its components, the water treatment plant, and the water supply system of Campina Grande. To predict water consumption we rely on the main concepts of time series analysis, namely the exponential smoothing algorithm and the Box-Jenkins models, in particular the autoregressive integrated moving average models ARIMA and ARIMAX. In ARIMAX we use independent variables: an estimate of the trend, a dummy variable representing water consumption during the COVID-19 pandemic, and the temperature of the Campina Grande region. Finally, we compared the results obtained from the predictions and verified that the ARIMAX(1,1,4) model with the independent variables obtained the best predictive results.
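The exogenous-regressor idea behind ARIMAX can be isolated in a small least-squares sketch that fits y[t] = phi*y[t-1] + b*x[t] by solving the 2x2 normal equations. Differencing, MA terms and the actual consumption data are omitted; the series and dummy variable below are invented for illustration.

```python
def fit_arx(y, x):
    """Least-squares fit of y[t] = phi*y[t-1] + b*x[t] via 2x2 normal equations."""
    T = range(1, len(y))
    a11 = sum(y[t - 1] * y[t - 1] for t in T)
    a12 = sum(y[t - 1] * x[t] for t in T)
    a22 = sum(x[t] * x[t] for t in T)
    c1 = sum(y[t - 1] * y[t] for t in T)
    c2 = sum(x[t] * y[t] for t in T)
    det = a11 * a22 - a12 * a12
    return (c1 * a22 - c2 * a12) / det, (a11 * c2 - a12 * c1) / det

# Illustrative series that exactly follows y[t] = 0.5*y[t-1] + 2.0*x[t],
# with x acting like a pandemic-style dummy switching on halfway through
x = [0.0] * 5 + [1.0] * 5
y = [1.0]
for t in range(1, 10):
    y.append(0.5 * y[t - 1] + 2.0 * x[t])
phi, b = fit_arx(y, x)
```

On this noiseless series the fit recovers the generating coefficients exactly, which is the sense in which the dummy variable "explains" a level shift in consumption.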
|
-
ADRIANA RIBEIRO MOURA
-
MODEL SELECTION CRITERIA: A COMPARATIVE STUDY
-
Date: 23 Jul 2021
-
Time: 16:00
-
In classical inference, families of probability distributions are proposed as candidates for modeling the phenomenon of interest, and one wishes to decide, among the distributions belonging to these families, which one best fits the data through a goodness-of-fit criterion.
The idea, in general, comes from a composite hypothesis testing problem, in which a random sample $X_1, X_2, \ldots, X_n$ is taken from a population with continuous cumulative distribution function $F_X$. The objective is to test the null hypothesis $H_0 : F_X = G$ against the alternative hypothesis $H_1 : F_X \neq G$, where $G$ is an imposed, and thus known, cumulative distribution function, and $F_X$ is the actual distribution of the data, which is generally unknown.
Considering the families of generalized distributions denoted by $G_c^{\sup}$ and $G_c^{\inf}$ proposed by Tablada (2017), with parameter $c > 0$, where $G$ (the baseline) is any probability distribution, in this work we propose a goodness-of-fit criterion based on the properties of these distributions. Accepting the null hypothesis under the criterion thus implies that $G_c^{\sup}$ and $G_c^{\inf}$ are equivalent for modeling the data at hand; in turn, the equivalence of these two distributions implies that the data can be modeled by $G$.
To compare the performance of the new criterion against the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), the Hannan-Quinn information criterion (HQIC) and the modified Cramér-von Mises ($W^*$) and Anderson-Darling ($A^*$) criteria, we ran different simulation scenarios. Also for comparison purposes, we illustrate its applicability using real datasets.
As a complement to the investigation carried out throughout this work, we present the most important results on multiple linear regression and perform simulations comparing the performance of the proposed criterion with the AIC, AICc, BIC and HQIC criteria in multiple linear regression models. Finally, we use the new criterion along with AIC, BIC and the adjusted coefficient of determination ($R^2$) and check its performance on a real dataset.
|
-
GEDEÃO DO NASCIMENTO CORPES
-
ZERO-INFLATED BETA PRIME DISTRIBUTION
-
Date: 14 Jul 2021
-
Time: 14:00
-
Probability distributions, whether discrete or continuous, can run into difficulties with data that contain an excess of zeros. So that such data can be modeled, mixture distributions known as inflated distributions are constructed; how this inflation takes place depends on the support of the distribution. In this research we propose the construction of the beta prime distribution inflated at zero (BPIZ) from the reparametrization presented in BOURGUIGNON et al. (2018). We also derive maximum likelihood estimators and confidence intervals for the BPIZ model, and evaluate the estimators and confidence intervals numerically to check their efficiency. Finally, we carry out an application to verify their performance on real data.
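The mixture construction can be sketched directly: with probability p the observation is zero, otherwise it comes from a beta prime variable, generated here as B/(1-B) with B ~ Beta(a, b). The standard (a, b) parametrization is used for illustration, not the reparametrization of BOURGUIGNON et al. (2018), and the parameter values are arbitrary.

```python
import math
import random

def rbpiz(p, a, b, rng=random):
    """Draw from a zero-inflated beta prime: 0 w.p. p, else X = B/(1-B), B~Beta(a,b)."""
    if rng.random() < p:
        return 0.0
    B = rng.betavariate(a, b)
    return B / (1.0 - B)

def loglik_bpiz(data, p, a, b):
    """Log-likelihood of the zero-inflated beta prime (standard parametrization).

    Beta prime density: x^(a-1) (1+x)^(-a-b) / B(a, b) for x > 0.
    """
    lbeta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    ll = 0.0
    for x in data:
        if x == 0.0:
            ll += math.log(p)
        else:
            ll += (math.log(1 - p) + (a - 1) * math.log(x)
                   - (a + b) * math.log(1 + x) - lbeta)
    return ll
```

Maximizing `loglik_bpiz` over (p, a, b) is the kind of estimation step the abstract refers to; here p separates cleanly as the fraction of zeros.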
|
-
EMILIA GONÇALVES DE LIMA NETA
-
Improvement of the likelihood ratio test based on the profiled likelihood function
-
Date: 12 Feb 2021
-
Time: 15:00
-
In Statistics, hypothesis tests are used to make inferences about the parameters of a given probabilistic model. However, it is often convenient to perform the inferential study only for a subset of the parameters, called the parameters of interest, the others being nuisance parameters. Inferences about the parameters of interest can be based on the profiled likelihood function. However, inferences based on this function can be inaccurate when the number of nuisance parameters is large compared to the sample size. Moreover, the profiled likelihood is not a genuine likelihood function, so some basic properties of the likelihood function may fail to hold. To mitigate these problems, Barndorff-Nielsen (1983) and Severini (1998) proposed adjusted versions of the profiled likelihood function. It is known from the literature that the likelihood ratio statistic has, under the null hypothesis, an asymptotic chi-square distribution; for small or moderate samples, however, the asymptotic distribution is not a good approximation to the exact null distribution. To improve inferences, Sousa (2020) proposed a method for improving the likelihood ratio test, which consists of correcting the tail of the asymptotic null distribution through the chi-square inf distribution. The main aim of this work is to compare, in finite samples, the performance of tests based on the likelihood ratio statistic (considering the profile likelihood function and the modified versions) with the improvement method proposed by Sousa (2020). Tests corrected by the bootstrap resampling technique will also be included in the comparison. Specifically, this comparison will be made by applying the different approaches to the Weighted Lindley and Exponentiated Weibull distributions, performing Monte Carlo simulations under different scenarios. Finally, we present four numerical examples based on real data sets.
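The uncorrected test being improved upon can be sketched for a one-parameter example: the likelihood ratio statistic for an exponential rate, compared against the asymptotic chi-square critical value. The exponential model and the data are illustrative; the profile-likelihood adjustments and Sousa's (2020) tail correction are not reproduced.

```python
import math

def lrt_exponential_rate(x, lam0):
    """Likelihood ratio statistic for H0: rate = lam0 in an exponential model.

    Log-likelihood: l(lam) = n log(lam) - lam * sum(x); unrestricted MLE n/sum(x).
    """
    n, s = len(x), sum(x)
    lam_hat = n / s
    ll = lambda lam: n * math.log(lam) - lam * s
    return 2.0 * (ll(lam_hat) - ll(lam0))

# Asymptotically the statistic is chi-square(1) under H0; 5% critical value:
CHI2_1_95 = 3.841
stat = lrt_exponential_rate([0.5, 1.5, 1.0, 1.0], 1.0)
reject = stat > CHI2_1_95
```

With so few observations the chi-square approximation is exactly the kind of reference distribution whose tail the corrections discussed above aim to fix.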
|
-
ALEXANDRE GOMES SOUZA
-
Volatility of returns from renewable energy indices and uncertainty shocks in the USA and Europe
-
Date: 29 Jan 2021
-
Time: 14:00
-
The goal of this work is to estimate the volatility of returns and the uncertainty shocks on indices related to the performance of the renewable energy market in the USA and Europe. In this sense, an analysis of the risks associated with the European Renewable Energy and Renewable Energy Generation indices can reveal how they affect the sector's performance. First, structural break tests will be applied to the return trajectories to check whether the analysis should be split between regimes. To estimate volatility, conditional heteroskedastic models proposed in the literature will be used, particularly the dynamic conditional correlation multivariate GARCH (DCC-MGARCH). The data were chosen based on the Standard \& Poor's 500, WilderHill, Arca Tech 100, West Texas Intermediate, Morgan Stanley Capital International, Thomson Reuters/CoreCommodity and U.S. Dollar indices. In addition to the estimates of all parameters, quasi-correlation and quasi-covariance matrices will be obtained, and uncertainty shocks on the studied indices will be traced through impulse response functions from a VAR model.
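The univariate building block of the DCC-MGARCH specification is the GARCH(1,1) conditional-variance recursion, sketched below; the parameter values are illustrative, and the dynamic-correlation layer, which couples several such recursions across indices, is omitted.

```python
def garch11_variances(returns, omega, alpha, beta, sigma2_0):
    """Conditional-variance recursion of a univariate GARCH(1,1) model.

    sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]
    """
    sig2 = [sigma2_0]
    for r in returns[:-1]:
        sig2.append(omega + alpha * r * r + beta * sig2[-1])
    return sig2

# Degenerate illustration: with zero returns the recursion contracts
# geometrically to its fixed point omega / (1 - beta) = 0.5
sig2 = garch11_variances([0.0] * 300, 0.1, 0.1, 0.8, 1.0)
```

With real return data the alpha term makes variance spike after large shocks, which is the persistence the volatility analysis above measures.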
|
-
JEFFERSON BEZERRA DOS SANTOS
-
HYBRID ENERGY BALANCE MODEL BASED ON STOCHASTIC DUAL DYNAMIC PROGRAMMING
-
Date: 29 Jan 2021
-
Time: 14:00
-
With the increase in demand for electricity, major technological advances are indispensable for efficient growth. In Brazil, hydroelectric power is the main source of energy generation. However, due to the disproportionate increase in demand and the scarcity of rainfall, it has been necessary to activate thermoelectric plants to meet demand, whose main disadvantage is environmental damage greater than that of sources such as hydroelectric plants. Given the need for adequate management of energy dispatch, in order to minimize generation costs and reduce environmental impact, this work proposes a study based on Stochastic Dual Dynamic Programming for hydrothermal systems with a complementary wind generation system. To simulate the random behavior of the wind, Brownian motion is used, assuming that the wind speed over time is a continuous Gaussian process. The construction and analysis of the hydro-thermal-wind model was performed using variations in productivity across various planning scenarios. For the numerical analysis of the results, real load-curve data and a sample wind curve were used. In the end, the research results identified two dispatch configurations showing that complementary wind generation and variation in the productivity index brought benefits to the system's generation and reduced the expected cost.
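The Brownian-motion wind model can be sketched as a discretized random walk; truncating at zero is a simplification added here so speeds stay physical, and all numeric settings are illustrative rather than the scenarios used in the work.

```python
import random

def wind_speed_path(v0, sigma, dt, steps, rng=random):
    """Wind speed as arithmetic Brownian motion, truncated at zero (sketch).

    Each increment is sigma * sqrt(dt) * N(0, 1), per the Gaussian assumption.
    """
    path = [v0]
    for _ in range(steps):
        path.append(max(0.0, path[-1] + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)))
    return path

# Illustrative 10-day hourly path starting at 8 m/s
random.seed(7)
path = wind_speed_path(8.0, 1.5, 1.0 / 24.0, 240)
```

In an SDDP setting, many such sampled paths would feed the scenario tree over which the expected dispatch cost is minimized.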
|
-
WILTER DA SILVA DIAS
-
Clusterwise Segmentation Model with Hybrid Prototypes
-
Date: 28 Jan 2021
-
Time: 14:30
-
This dissertation presents a methodology combining prediction and clustering techniques, the Clusterwise Segmentation Model with Hybrid Prototypes (MoSCH), which aims to segment the data into clusters so that each cluster is represented by a predictive model (the prototype), such as a regression model or a machine learning algorithm, chosen from a list of predefined methods. The choice of the best prototype for each cluster is made so as to minimize an objective function. Besides implementing the MoSCH estimation algorithm, we consider different techniques for allocating new observations in order to assess the predictive power of the algorithm. A proof of convergence is presented, as well as applications of the proposed method to synthetic data and real databases. A new allocation method based on KNN, called allocation with KNN of the combined clusters, is proposed and shows interesting results. In the experiment with synthetic data, the MoSCH algorithm is compared with another algorithm in 6 different scenarios, with excellent performance. In the validation with real data, the proposed method performs well when compared with 3 other algorithms, and 5 different allocation methods are evaluated.
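The alternating estimate-and-reassign loop at the heart of clusterwise methods can be sketched with a single prototype family (ordinary least-squares lines) and two clusters; MoSCH's hybrid prototype list, objective function and KNN-based allocation are not reproduced, and the data and initialization below are illustrative.

```python
def fit_line(pts):
    """OLS fit of y = a + b*x for a list of (x, y) points (needs >= 2 distinct x)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b

def clusterwise(pts, labels, iters=20):
    """Alternate per-cluster OLS fits and residual-based reassignment (sketch).

    Assumes neither cluster empties during the iterations.
    """
    for _ in range(iters):
        models = [fit_line([p for p, l in zip(pts, labels) if l == k]) for k in (0, 1)]
        new = [min((abs(y - (models[k][0] + models[k][1] * x)), k)
                   for k in (0, 1))[1] for x, y in pts]
        if new == labels:
            break
        labels = new
    return models, labels

# Illustrative data: two noiseless lines, y = 2x and y = -2x + 10
pts = [(float(x), 2.0 * x) for x in range(5)] + \
      [(float(x), -2.0 * x + 10.0) for x in range(5)]
models, labels = clusterwise(pts, [0] * 5 + [1] * 5)
```

Swapping `fit_line` for other prototypes per cluster is the hybrid idea the methodology generalizes.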
|
-
CARLOS AUGUSTO DOS SANTOS
-
Measles Dynamics in Partially Immunized Populations
-
Date: 22 Jan 2021
-
Time: 09:00
-
Basic concepts related to measles, and mainly its form of contagion, will be introduced, together with mathematical models that govern its dynamics when vaccination effects are considered. Concepts from Optimal Control Theory will then be presented and used to formulate and solve a problem that aims to minimize the density of infected individuals.
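A minimal sketch of the type of model described: an SIR system with a constant vaccination rate nu moving susceptibles directly to the removed class, integrated with the forward Euler method. The parameter values are illustrative, and the optimal-control formulation itself (where nu would become a time-varying control) is not reproduced.

```python
def sir_vaccination(beta, gamma, nu, S, I, R, dt, steps):
    """Forward-Euler integration of an SIR model with constant vaccination rate nu."""
    hist = [(S, I, R)]
    for _ in range(steps):
        dS = -beta * S * I - nu * S      # susceptibles: infection + vaccination
        dI = beta * S * I - gamma * I    # infected: new cases - recoveries
        dR = gamma * I + nu * S          # removed: recovered + vaccinated
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        hist.append((S, I, R))
    return hist

# Illustrative run with normalized population densities (S + I + R = 1)
hist = sir_vaccination(0.5, 0.2, 0.05, 0.99, 0.01, 0.0, 0.1, 500)
```

In the optimal-control version, the vaccination rate is chosen over time to minimize a cost functional involving the infected density, subject to these dynamics.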
|