Duration: 2 hours
Start: September 03, 15:00

Location: Auditorium "José Adem"




Abstract:
The rise of artificial intelligence is undeniable today: systems such as ChatGPT, DALL-E, and Copilot, among others, have captured the interest of the general public. This has encouraged the use of these technologies by all kinds of people, from experts to users with no technical knowledge of what lies behind such systems. In this workshop, we will explain the mathematical foundations of neural networks, focusing on the simplest model, known as the perceptron. We will then implement a dense neural network and concentrate on understanding and modifying its most relevant parameters; to this end, we will study the most important activation functions, as well as the gradient descent method and some of its variants, used to adjust the network's weights.
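To make the workshop contents concrete, the following is a minimal sketch (not the official workshop material) of a dense network with one hidden layer trained by hand-coded gradient descent in NumPy; the XOR data set, layer sizes, learning rate, and number of epochs are illustrative assumptions:

    # Minimal dense network (one hidden layer) trained with plain gradient descent.
    # The XOR data, layer sizes, learning rate, and epoch count are illustrative choices.
    import numpy as np

    def sigmoid(z):
        # One of the classic activation functions discussed in the workshop.
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        s = sigmoid(z)
        return s * (1.0 - s)

    rng = np.random.default_rng(0)

    # Toy data: learn the XOR mapping (2 inputs -> 1 output).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Weights of a dense network with one hidden layer of 8 units.
    W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

    lr = 0.5  # learning rate (step size of gradient descent)
    for epoch in range(10000):
        # Forward pass.
        z1 = X @ W1 + b1; a1 = sigmoid(z1)
        z2 = a1 @ W2 + b2; a2 = sigmoid(z2)

        # Backward pass: gradients of the mean squared error.
        delta2 = (a2 - y) * sigmoid_prime(z2)
        delta1 = (delta2 @ W2.T) * sigmoid_prime(z1)

        # Gradient-descent update of the weights.
        W2 -= lr * a1.T @ delta2 / len(X); b2 -= lr * delta2.mean(axis=0, keepdims=True)
        W1 -= lr * X.T @ delta1 / len(X); b1 -= lr * delta1.mean(axis=0, keepdims=True)

    print(np.round(a2, 2))  # should approach [[0], [1], [1], [0]]

The workshop will cover variants of gradient descent beyond this plain version; the sketch only shows the basic update rule.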

 

Requirements:
This workshop is introductory in nature, but prior knowledge of the Python programming language and of multivariable calculus is desirable, as is bringing a laptop if attending in person.

Contact:

Dr. Oliver Fernando Cuate González



Short Bio: Ph.D. and M.Sc. degrees in Computer Science from the Centro de Investigación y Estudios Avanzados (CINVESTAV-IPN); B.Sc. degree in Mathematical Engineering from the National Polytechnic Institute (IPN). He is currently an adjunct professor with the Mathematics Department of the Physics and Mathematics School (ESFM) at IPN in Mexico City. His research interests include multi- and many-objective optimization, continuation methods, and decision making. He achieved second place in the Springer Best Paper Award at the 10th International Conference on Evolutionary Multi-Criterion Optimization in 2019. He is a member (level 1) of the Sistema Nacional de Investigadores (SNI).

 

 

 

As part of the special session “Applications of Machine Learning” within the activities of the NEO 2024, we are pleased to invite you to participate in the Workshop on Generative Artificial Intelligence using Amazon Bedrock and LangChain, a hands-on session designed for AI enthusiasts, researchers, developers, and professionals interested in exploring the potential of generative models in real-world applications.

 

Workshop Details:

  • Date: September 05, 2024
  • Time: 12:00 – 15:00 (tentatively)
  • Location: Auditorio Jose Adem, Cinvestav Zacatenco (venue of the NEO 2024)
  • Duration: 2.5 hours (plus a coffee break)

 

Workshop Description:

Generative artificial intelligence is revolutionizing how we approach problem-solving across various domains, from content creation to complex decision-making. This introductory workshop will explore AWS tools for building and applying generative AI models.

In this workshop, participants will engage in activities to:

  • Build and deploy generative AI models using Amazon Bedrock.
  • Integrate generative AI models into applications with LangChain (see the short sketch after this list).
  • Explore use cases in areas such as natural language processing, content generation, and more.
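
As a taste of the kind of exercise planned, here is a minimal, hedged sketch of invoking a foundation model hosted on Amazon Bedrock through LangChain. It assumes the langchain-aws package and AWS credentials with Bedrock model access; the model ID, region, and prompt are illustrative choices, not the workshop's official examples:

    # Illustrative only: query a Bedrock-hosted model through LangChain.
    # Assumes `pip install langchain-aws boto3` and AWS credentials with Bedrock access;
    # the model ID and region below are example choices, not workshop requirements.
    from langchain_aws import ChatBedrock

    llm = ChatBedrock(
        model_id="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
        region_name="us-east-1",                            # assumed region
        model_kwargs={"temperature": 0.2, "max_tokens": 256},
    )

    response = llm.invoke("Summarize, in one sentence, what generative AI is.")
    print(response.content)

A plain boto3 client for the bedrock-runtime service is a lower-level alternative when LangChain's abstractions are not needed.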

 

Who Should Attend?

This workshop is ideal for all NEO 2024 participants interested in learning about generative AI technologies.

 

How to Participate:

To apply for participation, please fill out the following form:
https://forms.gle/qViX34McgRbdv1V77.

 

Requirements:

  • Bring your laptop.

 

Note: Spaces are limited, so we recommend early registration.

 

This workshop is organized in collaboration with Amazon and has no additional cost beyond the NEO 2024 registration.

 

 

Duration: 2 hours
Start: September 05, 16:30

Location: Aula A


Abstract:

Grouping problems are a family of combinatorial optimization problems that seek an efficient distribution of a set of elements into groups. Grouping problems appear in a wide range of everyday situations, e.g., in industry, transportation, health, education, economics, and telecommunications. The Grouping Genetic Algorithm (GGA) is a variant of the traditional Genetic Algorithm, developed especially to address grouping problems, that uses a representation scheme based on groups and genetic variation operators that work at the group level. Many grouping problems belong to the NP-hard class, i.e., no known method solves all possible instances optimally in reasonable time, so this remains an open research area. The specialized literature includes different GGAs that incorporate different genetic variation operators.

In this tutorial, we will start with a general introduction to grouping combinatorial optimization and GGAs. In the second part, we will focus on grouping variation operators: we will present different crossover and mutation operators and analyze their procedures and algorithmic behavior when solving the R||Cmax grouping problem. We close the tutorial by discussing possible future research paths in this direction.
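
As a preview of the kind of encoding and operator the tutorial discusses, the following is a small illustrative sketch (our own simplification, not the authors' GGA) of a group-based representation for R||Cmax, where each group is the set of jobs assigned to one machine and a mutation operator moves a job out of the critical group; the processing times and problem size are toy assumptions:

    # Toy group-based encoding and group-level mutation for R||Cmax (illustrative only).
    import random

    random.seed(1)
    n_jobs, n_machines = 10, 3
    # p[i][j]: processing time of job j on machine i (unrelated machines, toy values).
    p = [[random.randint(1, 9) for _ in range(n_jobs)] for _ in range(n_machines)]

    def makespan(groups):
        # Cmax: load of the most loaded machine under this grouping.
        return max(sum(p[i][j] for j in jobs) for i, jobs in enumerate(groups))

    def random_grouping():
        # Group-based encoding: groups[i] is the set of jobs assigned to machine i.
        groups = [set() for _ in range(n_machines)]
        for j in range(n_jobs):
            groups[random.randrange(n_machines)].add(j)
        return groups

    def group_mutation(groups):
        # Group-level move: extract a job from the critical (most loaded) group and
        # reinsert it into the group whose load increases the least.
        loads = [sum(p[i][j] for j in jobs) for i, jobs in enumerate(groups)]
        src = max(range(n_machines), key=lambda i: loads[i])
        if not groups[src]:
            return groups
        job = random.choice(sorted(groups[src]))
        groups[src].remove(job)
        dst = min((i for i in range(n_machines) if i != src),
                  key=lambda i: loads[i] + p[i][job])
        groups[dst].add(job)
        return groups

    solution = random_grouping()
    for _ in range(200):                     # simple improvement loop, not a full GGA
        candidate = group_mutation([set(g) for g in solution])
        if makespan(candidate) <= makespan(solution):
            solution = candidate
    print("Cmax:", makespan(solution))
    print("assignment:", [sorted(g) for g in solution])

A full GGA would evolve a population of such groupings and apply group-level crossover as well; the sketch keeps only a mutation step to stay short.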

 


Literature:

Ramos-Figueroa, O., Quiroz-Castellanos, M., Mezura-Montes, E., & Kharel, R. (2021). Variation operators for grouping genetic algorithms: A review. Swarm and Evolutionary Computation, 60, 100796.



Contact:

Dr. Marcela Quiroz-Castellanos

Dr. Octavio Ramos Figueroa



Marcela Quiroz is a full-time researcher with the Artificial Intelligence Research Institute at the Universidad Veracruzana in Xalapa, Mexico. Her research interests include combinatorial optimization, metaheuristics, experimental algorithms, characterization, and data mining. She received her Ph.D. in Computer Science from the Instituto Tecnológico de Tijuana, Mexico; she studied Computer Systems Engineering and received a master's degree in Computer Science at the Instituto Tecnológico de Ciudad Madero, Mexico. She is a member of the Mexican National Researchers System (SNI), as well as of the directive committees of the Mexican Computing Academy (AMexComp) and the Mexican Robotics Federation (FMR).

Octavio Ramos-Figueroa holds a postdoctoral position at the Artificial Intelligence Research Institute of the Universidad Veracruzana (IIIA-UV) in Xalapa, Mexico. His research interests include continuous and combinatorial optimization, the experimental study of metaheuristic and hyper-heuristic algorithms, characterization, data mining, and data science pipelines. He received his Ph.D. and master's degree in Artificial Intelligence from the Artificial Intelligence Research Institute at the Universidad Veracruzana, Mexico, and studied Information and Communications Technology Engineering at the Instituto Tecnológico de Tepic, Mexico. He is a member of the Mexican National Researchers System (SNI).

 

 

 

Duration: 2 hours
Start: September 03, 15:00

Location: Aula A


Abstract:

Often, we face problems where we need to optimize multiple conflicting objectives; further, we may have to make a series of decisions rather than just one. This kind of problem can be modelled as a multi-objective Markov decision process (MOMDP), and one approach to solving it is so-called multi-objective reinforcement learning (MORL).
As expected, the solution to this kind of problem is not a single policy but a set of policies (mappings from state-action pairs to probabilities). In the last few years, attention to solving MOMDPs has increased, given their significance in real-world problems. From the multi-objective optimization perspective, MOMDPs pose interesting challenges since they are typically high-dimensional, dynamic, and subject to different kinds of uncertainty.
In this tutorial, we will first introduce MOMDPs and some of their properties and challenges. Next, we will relate the problem to more common problems from the multi-objective optimization literature. Then, we will present some methods, both from MORL and evolutionary algorithms, along with snippets of code, to address MOMDPs. Finally, we will show some possible research directions for applying NEO methods to MOMDPs.
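
To illustrate the connection between MOMDPs and standard reinforcement learning, here is a small hedged sketch (a toy problem of our own, not the tutorial's code) of tabular Q-learning on a two-objective chain MDP using a linear scalarization of the vector reward; different weight vectors lead to different greedy policies, which hints at why the solution of a MOMDP is a set of policies:

    # Toy MORL sketch: scalarized Q-learning on a 2-objective chain MDP (illustrative only).
    import random
    random.seed(0)

    # States 0..4; action 0 = "stop" (collect the treasure here), action 1 = "go right".
    # Vector reward = (treasure value, time penalty): deeper treasures are worth more
    # but cost more steps, so the two objectives conflict (toy numbers).
    N = 5
    treasure = [1, 2, 4, 8, 16]

    def step(s, a):
        if a == 0 or s == N - 1:
            return None, (treasure[s], -1)   # terminal: collect treasure at this state
        return s + 1, (0, -1)                # move right, pay one time step

    def scalarize(r, w):
        # Linear scalarization of the vector reward.
        return w[0] * r[0] + w[1] * r[1]

    def q_learning(w, episodes=2000, alpha=0.1, gamma=1.0, eps=0.1):
        Q = [[0.0, 0.0] for _ in range(N)]
        for _ in range(episodes):
            s = 0
            while s is not None:
                a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
                s2, r = step(s, a)
                target = scalarize(r, w) + (gamma * max(Q[s2]) if s2 is not None else 0.0)
                Q[s][a] += alpha * (target - Q[s][a])
                s = s2
        # Greedy policy: at which state does the agent stop?
        s = 0
        while max((0, 1), key=lambda x: Q[s][x]) == 1 and s < N - 1:
            s += 1
        return s

    for w in [(1.0, 0.0), (0.5, 0.5), (0.2, 0.8)]:
        print(f"weights {w}: greedy policy stops at state {q_learning(w)}")

More elaborate MORL methods replace the fixed weight vector with multi-policy or Pareto-based strategies, which is closer to the set-of-policies view described above.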


Contact:

Dr. Carlos Hernández
https://cihdezc.github.io/

 



Carlos Hernández received his Ph.D. from CINVESTAV-IPN in 2017. In 2018, he did a postdoctoral fellowship at the University of Oxford in the United Kingdom, focusing on driving strategies for autonomous vehicles. He is currently an associate researcher at the National Autonomous University of Mexico.
For his research work, he has received various awards, notably the Arturo Rosenblueth Award for his doctoral thesis in 2018. He is a Level 1 member of the National System of Researchers. He has published more than 25 scientific articles and is the author of two books on specialized topics; Google Scholar reports over 500 citations to his work. His research interests include multi-objective optimization under uncertainty, evolutionary algorithms, set-oriented numerics, and multi-objective reinforcement learning.

 

 

 

Regrettably, we will not be able to stream the poster sessions of the RED and the NEO. As an alternative, we present here some videos:

  •  poster presentations from the CinvesComp 2024

  •  the presentation of the "Artificial Intelligence" area of the Computer Science Department of Cinvestav, Campus Zacatenco, and

  •  one visual abstract for a work submitted to the IEEE Latin American Transactions.