Projects

I have experience in the areas of computer vision, data modelling and management, generative AI, machine learning, natural language processing, and automation.

3D point clouds model creation with OpenSfM

As part of a trench profile measurement project, I created 3D point cloud models of trenches. First, I captured a video of a trench. Next, I calibrated the camera and extracted images from the video using the OpenCV Python library. Then, I manually selected the key frames. After that, I undistorted the key frames with OpenCV and saved the resulting images. In the OpenSfM config file, I set the depthmap_min_consistent_views parameter to 6. Lastly, I loaded the undistorted key frames into OpenSfM and created a 3D point cloud model of the trench.

"The depthmap_min_consistent_views parameter specifies the minimum number of images in which a particular point should be visible in order to consider it as a consistent point and estimate its depth. If the number of consistent views for a point is below this threshold, the point may be considered unreliable or not visible enough to estimate its depth accurately."
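For reference, this is how that setting appears in the project's OpenSfM config.yaml (only depthmap_min_consistent_views is taken from the project; a real config file typically contains many other options):

```yaml
# config.yaml in the OpenSfM project directory
depthmap_min_consistent_views: 6  # require each point to be seen in at least 6 images
```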

Project Link: 3D point clouds model creation with OpenSfM

3D Trench's Profile Measurement

In this project, I measured a trench's profile. To capture the necessary data, I built two different 3D point cloud models of the same trench using two techniques: 3D point cloud modeling with OpenSfM and data acquisition with the Intel RealSense D455 depth camera. First, I implemented a measurement algorithm in Python. Then, I analyzed and measured the 3D trench profile with two different measurement techniques: my measurement algorithm and the MeshLab software. Based on the findings, the performance of my algorithm is on par with that of MeshLab, a widely recognized tool: the measurements obtained through my algorithm are comparable in accuracy and reliability to those achieved with MeshLab, highlighting its effectiveness and its potential as a practical alternative for this kind of measurement analysis.

The steps of the measurement algorithm are as follows.

  • Load the 3D point cloud models.
  • Remove outliers with statistical outlier removal.
  • Create a bounding box for the 3D trench.
  • Rotate the 3D trench to fit within the bounding box.
  • Crop a section of the 3D trench to measure.
  • Save the cropped 3D trench for later use.
  • Draw a bounding box around the cropped 3D trench.
  • Measure the trench profile from the bounding box dimensions.

The steps of the MeshLab measurement are as follows.

  • Import the cropped 3D trench into MeshLab.
  • Measure the trench profile by manually selecting two points on the 3D trench.

Project Link: 3D Trench's Profile Measurement

Road Scene Semantic Segmentation

In this project, I trained a UNet model for road scene understanding. ResNet-101 is used as the encoder, initialized with ImageNet pretrained weights. I combined 3 different datasets: the Mapillary Vistas dataset, a dataset collected by the AICenter at the Asian Institute of Technology, and a dataset I collected in the CARLA simulator. The combined dataset has a total of 25,200 training images and 4,850 validation images. During training, the images are augmented with a probability of 0.5 on each occurrence; the augmentation methods include horizontal or vertical flipping, random cropping, RGB shifting, Gaussian noise, random rotation, and random brightness/contrast. The model is trained on 256 x 192 images with a batch size of 32, using the Adam optimizer with weight decay. The best validation IoU score achieved is 0.772.
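The validation metric can be illustrated with a minimal per-class IoU computation (NumPy sketch; the class IDs and tiny masks are toy values, not the actual dataset):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both masks: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1], [1, 2, 2]])
target = np.array([[0, 1, 1], [1, 2, 2]])
print(mean_iou(pred, target, num_classes=3))  # (0.5 + 2/3 + 1.0) / 3 ≈ 0.722
```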

Recorded videos from real-time testing on the real road: Road Scene Semantic Segmentation

Simple Augmented Reality

This project served as a valuable learning experience in augmented reality (AR) using ArUco cards. I employed Mediapipe, a powerful tool for hand and joint detection, along with OpenCV to access my webcam. By attaching ArUco cards to the joints detected by Mediapipe, I was able to overlay AR content onto the cards. Besides aligning AR objects with the ArUco cards, homography was also employed for the image transformations that reshape the appearance of the AR objects. The combination of these techniques enabled me to create an immersive AR experience with precise object placement and realistic visual effects.
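The homography step can be sketched in plain NumPy: a direct linear transform estimated from four point correspondences (in the project this role is played by OpenCV's homography functions; the corner coordinates here are illustrative):

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from 4 correspondences (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)       # null vector of A, reshaped
    return H / H[2, 2]

def apply_homography(H, pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Warp the unit square onto a quadrilateral (e.g. a detected ArUco card)
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 10), (30, 12), (28, 35), (9, 33)]
H = find_homography(src, dst)
print(apply_homography(H, (0, 0)))  # ≈ [10, 10]
```

Once H is known, the AR image is warped by the same transform so it appears glued to the card.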

Recorded video from real-time testing: Simple Augmented Reality

Servo Motor Control by hand rotation

This project was specifically designed as a library to be utilized in an IoT course at KiddeeLab Training School. Its main focus was the development of an algorithm that harnesses computer vision to control the angles of two servo motors. OpenCV is used to access the webcam and Mediapipe is used to detect the hands. Motor 1 is controlled by the movements of the right hand, while Motor 2 responds to the gestures of the left hand. This intuitive control mechanism adds a dynamic and interactive element to the project. Additionally, I created a user-friendly web interface using FastAPI, a high-performance Python web framework, along with HTML and CSS. The web interface allows for real-time data updates, eliminating the need for image display in certain scenarios and enhancing the overall efficiency and responsiveness of the system.
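The core mapping from hand rotation to a servo command can be sketched in plain Python (the two landmark points and the angle range are illustrative; in the project the landmarks come from Mediapipe's hand detector):

```python
import math

def hand_roll_deg(wrist, index_mcp):
    """Roll of the hand in image coordinates, from two landmark (x, y) points."""
    dx, dy = index_mcp[0] - wrist[0], index_mcp[1] - wrist[1]
    return math.degrees(math.atan2(dy, dx))

def to_servo_angle(roll_deg, lo=-90.0, hi=90.0):
    """Clamp the roll to [lo, hi] and map it linearly onto the servo's 0-180 range."""
    roll_deg = max(lo, min(hi, roll_deg))
    return round((roll_deg - lo) / (hi - lo) * 180)

print(to_servo_angle(hand_roll_deg((0.5, 0.5), (0.8, 0.5))))  # level hand -> 90
```

The resulting angle is what gets written to the motor driver (or pushed to the FastAPI endpoint) on each frame.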

The versatility of this project extends beyond its immediate application, as it can be seamlessly integrated into various IoT projects. For instance, it can serve as a foundation for robotic applications that incorporate IoT devices like ESP32, ESP8266, and others. This flexibility opens up numerous possibilities for expanding the functionality and practicality of the project in diverse IoT-based applications.

Recorded video from real-time testing: Servo Motor Control by hand rotation

Relational Database Management System (SQL)

As part of the Data Modelling and Management class in our Master's study, our team undertook a project to develop a comprehensive database management system for a COVID healthcare service. The aim was to create a system that could be utilized in hospital projects to streamline various aspects of healthcare services. To begin, we established the business rules, defining the roles and permissions for different user types such as admins, doctors, and patients. Our system allowed patients to check the availability and schedule of specific doctors, make appointments, and view payment receipts. Doctors, on the other hand, could generate consultation summary reports, modify their availability, and manage their schedules. We carefully designed the database to incorporate the necessary data constraints and ensure data integrity. The database encompassed user information, patient details, doctor profiles, medicine information, doctor work schedules, appointment information, consultation details, and payment records for both consultations and medicines.

To showcase our expertise in data modeling, we created the conceptual design, logical design, and established data dictionaries. These artifacts served as blueprints for the database structure and relationships. To demonstrate our proficiency in data management, we created mock data online. Once the mock data was generated, we proceeded to load it into the MySQL database. We extracted meaningful insights by formulating various SQL queries. Additionally, we performed operations such as user registration, appointment scheduling, and updating work schedules, among others, to simulate real-world scenarios. Throughout the project, we conducted 27 data inquiries, recording and analyzing the results to showcase the functionality and performance of our system during project presentations.
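One of the inquiry patterns (a patient checking a doctor's weekly schedule) can be sketched with Python's built-in sqlite3; the table and column names here are illustrative, not our actual MySQL schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE doctor   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE schedule (doctor_id INTEGER REFERENCES doctor(id),
                       day TEXT, start_time TEXT, end_time TEXT);
INSERT INTO doctor VALUES (1, 'Dr. Smith');
INSERT INTO schedule VALUES (1, 'Mon', '09:00', '12:00'),
                            (1, 'Wed', '13:00', '17:00');
""")

# Inquiry: when is a given doctor available?
rows = con.execute("""
    SELECT d.name, s.day, s.start_time, s.end_time
    FROM doctor d JOIN schedule s ON s.doctor_id = d.id
    WHERE d.name = ?
    ORDER BY s.day
""", ("Dr. Smith",)).fetchall()
for row in rows:
    print(row)
```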

This project demonstrated our proficiency in applying data modeling and management principles to healthcare services using SQL databases. We effectively managed and stored diverse healthcare data, enabling streamlined operations and data-driven decision-making. Our focus on data privacy and security ensured the confidentiality of sensitive healthcare information. Overall, this project laid a solid foundation for improving healthcare operations through efficient data management.

Project Presentation Slides: Relational Database Management System (SQL) Slides

Project Presentation Record Video: Relational Database Management System (SQL) Video

Database Management System (NoSQL)

As part of the Data Modelling and Management class in our Master's study, our team undertook a project to develop a comprehensive database management system for a COVID healthcare service. The aim was to create a system that could be utilized in hospital projects to streamline various aspects of healthcare services. To begin, we established the business rules, defining the roles and permissions for different user types such as admins, doctors, and patients. Our system allowed patients to check the availability and schedule of specific doctors, make appointments, and view payment receipts. Doctors, on the other hand, could generate consultation summary reports, modify their availability, and manage their schedules. We carefully designed the database to incorporate the necessary data constraints and ensure data integrity. The database encompassed user information, patient details, doctor profiles, medicine information, doctor work schedules, appointment information, consultation details, and payment records for both consultations and medicines.

To showcase our expertise in data modeling, we created the document model design, graph model design. These artifacts served as blueprints for the database structure and relationships. To demonstrate our proficiency in data management, we created mock data online. Once the mock data was generated, we proceeded to load it into the MongoDB for document model and Neo4j for graph model. We extracted meaningful insights by formulating various MongoDB and Cypher queries. Additionally, we performed operations such as user registration, appointment scheduling, and updating work schedules, among others, to simulate real-world scenarios. Throughout the project, we conducted 25 data inquiries for document model and 24 data inquiries for graph model, recording and analyzing the results to showcase the functionality and performance of our system during project presentations.

This project demonstrated our proficiency in applying data modeling and management principles to healthcare services using NoSQL databases. We effectively managed and stored diverse healthcare data, enabling streamlined operations and data-driven decision-making. Our focus on data privacy and security ensured the confidentiality of sensitive healthcare information. Overall, this project laid a solid foundation for improving healthcare operations through efficient data management.

Project Presentation Slides: Database Management System (NoSQL) Slides

Project Presentation Record Video: Database Management System (NoSQL) Video

Yoga pose classification

My team and I undertook this project as part of our Machine Learning class at the Asian Institute of Technology. The primary objective was to develop a robust system for recognizing and classifying five different yoga poses: "Cobra," "DownDog," "Standing Forward Bend," "Tree," and "Warrior 1." To create a comprehensive dataset, we collected videos of these poses from two experienced yoga practitioners and several individuals from our AIT community. In total, we obtained 30 videos for each pose, ensuring diverse and representative samples.

To preprocess the videos and extract the necessary information for classification, we utilized OpenCV and Mediapipe. The Mediapipe library proved invaluable in accurately detecting the key joints, or keypoints, of the human poses in each frame of the videos. By saving the detected keypoints for sequences of five consecutive frames, we established a solid foundation for further analysis and training.

For the classification task, we implemented a Long Short-Term Memory (LSTM) model, a type of recurrent neural network (RNN) known for its ability to effectively capture temporal dependencies in sequential data. Through extensive training and validation, we achieved an impressive validation score of 94 percent, indicating the model's high accuracy and proficiency in distinguishing between the different yoga poses.
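The classifier can be sketched in PyTorch (a minimal sequence model, not our exact training configuration; the feature size assumes Mediapipe's 33 pose landmarks with 3 coordinates each, the 5-frame sequences and 5 classes come from the project):

```python
import torch
import torch.nn as nn

class PoseLSTM(nn.Module):
    """LSTM over per-frame keypoint vectors, classifying the whole sequence."""
    def __init__(self, n_features=33 * 3, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, frames, features)
        _, (h, _) = self.lstm(x)          # h: (num_layers, batch, hidden)
        return self.head(h[-1])           # logits: (batch, n_classes)

model = PoseLSTM()
logits = model(torch.randn(8, 5, 33 * 3))  # 8 sequences of 5 frames each
print(logits.shape)                        # torch.Size([8, 5])
```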

Overall, this project demonstrates our team's successful application of machine learning techniques to create a robust system for yoga pose classification. The utilization of OpenCV, Mediapipe, and LSTM models enabled us to preprocess the videos, extract meaningful features, and train a powerful classifier, ultimately providing accurate recognition and classification of yoga poses.

Project Presentation Slides: Yoga pose classification Slides

Project Report Paper: Yoga pose classification Paper

Text to Car Image Generation

This project was undertaken as part of the Recent Trend in Machine Learning course at the Asian Institute of Technology. Our team explored the application of Semantic-Spatial Aware GAN, a novel architecture that combines an Inception v3 image encoder with a bi-directional LSTM text encoder. To accommodate our server's GPU memory limitations, we made modifications to the architecture by reducing the number of layers. This adjustment allowed us to effectively train the model within the available resources. For the dataset, we acquired a car image dataset from Kaggle, which served as the foundation for our experiments. In order to create a comprehensive and targeted training dataset, we manually selected specific car companies and their corresponding car models. We curated a text dataset that included descriptions and attributes for these cars, providing the necessary textual context for the GAN model. Additionally, we conducted experiments using BERT as a text encoder to compare its performance with the LSTM text encoder.

Throughout the training process, we fine-tuned hyperparameters and carefully monitored the model's progress. This iterative approach allowed us to optimize the performance and enhance the quality of the generated outputs.

Even with the reduced architecture, our project was able to achieve commendable results; the chosen architecture evidently possessed inherent capabilities that aligned well with our task. However, in retrospect, we realize that further improvements could have been made. A key realization is the significance of focusing more on training and adopting a data-centric approach. At the time of the project, we may not have been fully aware of the extent to which training affects model performance. With this newfound understanding, we recognize the importance of dedicating more attention to the training phase.

Enhancing the training process involves several strategies that can positively impact the models. These include employing advanced data augmentation techniques, fine-tuning hyperparameters, and optimizing the model's architecture. By investing additional effort into improving and diversifying our training dataset, we have the potential to unlock higher levels of performance and accuracy in our models. While it is natural to gain insights and refine our understanding after completing a project, it is crucial to acknowledge the achievements made at the time. By recognizing the areas where further improvements could be made, particularly through a data-centric approach to training, we have identified valuable opportunities for future research and development in the field of machine learning.

Project Report Paper: Text to Car Image Generation

Text to Image iOS App

For the image generation component of this project, I used Python. To generate images from text input, I utilized the Stable Diffusion model developed by Stability AI, which is available on the Hugging Face model repository. After downloading the pre-trained model, I used it to generate images based on the provided text input. To make the image generation functionality widely accessible, I hosted the Python API code on the Google Cloud Platform, so users can reach the service from anywhere. Hosting on the cloud also brings scalability and reliability: the platform's resources handle requests and process image generation efficiently, providing a robust and responsive experience.

To develop the iOS app, I employed Swift. Within the iOS app, I integrated three user inputs: text prompt, image style, and image size. This allowed users to customize their generated images according to their preferences. To optimize the user experience, I implemented a functionality that only enables the "generate" button when there is text input. If no text input is provided, the button appears grayed out and remains unclickable. To provide visual feedback to the user during the image generation process, I incorporated an icon that indicates the ongoing progress. This feature allows users to easily track the status of the image generation. Furthermore, I included a "download" button within the app to enable users to save and download the generated images. Unfortunately, deploying the app to iOS requires a paid developer account. As a result, I shared my test record on a virtual macOS environment. For your convenience, I have provided testing videos for your reference. You can access them through the link provided below.

Recorded videos from real-time testing: Text to Image iOS App

Customer intention detection system

This project, conducted as part of the Natural Language Understanding class in my Master's study, focused on exploring intent detection using transformers. Specifically, our team utilized the RoBERTa and XLNet transformers from the Hugging Face library to perform the task. To train the models, we employed the Banking77 dataset and compared the results obtained from each transformer. In addition, we investigated the impact of different loss functions on intent detection performance: we trained the models using both cross entropy loss alone and cross entropy loss combined with a supervised contrastive learning loss, which allowed us to assess the usefulness of these loss functions for the intent detection task. Throughout the training process, we employed the Adam optimizer with weight decay to optimize the models. Due to time constraints, we could only train the models for a few epochs. Although the obtained results may not be applicable in real-world applications, they were sufficient for comparison purposes.
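The supervised contrastive term can be sketched in NumPy (a simplified batch-level implementation of the SupCon loss; the temperature and the toy embeddings are illustrative):

```python
import numpy as np

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalised embeddings z of shape (n, d)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    loss = 0.0
    for i in range(n):
        others = [a for a in range(n) if a != i]
        positives = [p for p in others if labels[p] == labels[i]]
        if not positives:
            continue
        # log of the denominator: all other samples, positives included
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        loss += -np.mean([sim[i, p] - log_denom for p in positives])
    return loss / n

# Embeddings where same-class pairs align give a lower loss than mixed-up ones
aligned = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
mixed   = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)
labels  = [0, 0, 1, 1]
print(supcon_loss(aligned, labels) < supcon_loss(mixed, labels))  # True
```

In training, this term is added to the usual cross entropy on the classifier logits, pulling same-intent embeddings together.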

Our findings revealed that RoBERTa outperformed XLNet in intent detection performance. Moreover, we observed that combining the cross entropy loss with the supervised contrastive learning loss led to higher validation scores than using cross entropy loss alone, indicating that the supervised contrastive term was beneficial for improving the models' intent detection capabilities. While the results may have limitations due to the limited training duration, they provided valuable insights into the performance of the RoBERTa and XLNet transformers on intent detection tasks, and our exploration of different loss functions shed light on their impact on model performance. Overall, this project contributed to our understanding of intent detection using transformers and provided a foundation for further research and improvements in this area.

Project Presentation Slides: Customer intention detection system

Question Answering System

During the Natural Language Understanding class in my Master's study, our team undertook a project centered around developing a Question Answering (QA) system using transformers. We leveraged the RoBERTa and DistilBERT transformers from the Hugging Face library for this task. Our focus was on implementing closed-domain and extractive QA approaches. In closed-domain QA, the system is designed to answer questions within a specific predefined domain or topic; extractive QA focuses on extracting the answer directly from a given text or set of documents. To train and evaluate our models, we employed a COVID QA dataset relevant to the domain we were working with. Throughout the training process, we used the Adam optimizer with weight decay to optimize the models. Due to time constraints, we could only train the models for a few epochs. Although the obtained results may not be applicable in real-world applications, they were sufficient for comparison purposes.
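The extractive step can be sketched as follows: given the model's start and end logits over the context tokens, pick the highest-scoring valid span (NumPy sketch with toy logits; a real system would take these from the transformer's outputs):

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=15):
    """Return (start, end) maximising start_logits[s] + end_logits[e] with s <= e."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

tokens = ["the", "virus", "spreads", "via", "respiratory", "droplets"]
start = np.array([0.1, 0.2, 0.1, 0.3, 2.5, 0.4])
end   = np.array([0.1, 0.1, 0.2, 0.2, 0.3, 2.8])
s, e = best_span(start, end)
print(" ".join(tokens[s:e + 1]))  # "respiratory droplets"
```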

After conducting experiments and evaluating the performance of the models, we observed that DistilBERT outperformed RoBERTa in the context of our QA system. DistilBERT demonstrated superior capabilities in accurately answering questions from the given dataset. These findings indicate the effectiveness of DistilBERT as a transformer model for question answering tasks. However, it's important to note that the performance comparison may vary depending on the specific dataset, domain, and evaluation metrics used. By successfully implementing and comparing these transformers in the context of QA, our team gained valuable insights into their strengths and suitability for this task. This project contributed to our understanding of how transformers can be utilized in Natural Language Understanding applications, specifically in the domain of Question Answering.

Project Presentation Slides: Question Answering System

Brain channel region classification

In this project, our team focused on the implementation of various machine learning algorithms for the classification of P300-based Brain-Computer Interface (BCI) datasets. We aimed to compare their classification results and evaluate their performance in different scenarios. For the classification task, we considered specific scenarios such as training on one subject and testing on another subject, subject-specific classification, and regional channel classification. The dataset was divided into different regions, including the frontal, central, parietal, occipital, and temporal regions.

To address this classification task, we implemented three different algorithms: BN3, BN3+LSTM, and CNN Conv2D. These algorithms were selected due to their effectiveness in handling temporal and spatial features present in the BCI data. BN3 (BrainNetCNN) is a convolutional neural network architecture specifically designed for BCI applications. It utilizes deep learning techniques to capture complex patterns and spatial dependencies in the data. BN3+LSTM combines the BN3 architecture with a Long Short-Term Memory (LSTM) layer. The LSTM layer is capable of capturing temporal dependencies, making it suitable for analyzing sequential data. CNN Conv2D (Convolutional Neural Network with 2D convolutions) is a well-known deep learning architecture that has proven effective in various image classification tasks. By applying 2D convolutions to the BCI data, this algorithm can extract relevant spatial features.
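A minimal Conv2D classifier of the kind described above can be sketched in PyTorch (the channel counts, kernel sizes, and the 64-channel x 240-sample input shape are illustrative, not the exact dataset dimensions):

```python
import torch
import torch.nn as nn

class P300Conv2D(nn.Module):
    """Treat an EEG trial as a 1 x channels x time 'image' and classify it."""
    def __init__(self, n_channels=64, n_samples=240, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),  # spatial filter across electrodes
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(1, 11)),         # temporal filter along samples
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, channels, time)
        return self.head(self.features(x).flatten(1))

model = P300Conv2D()
logits = model(torch.randn(4, 1, 64, 240))
print(logits.shape)                            # torch.Size([4, 2])
```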

In the subject-specific task, we found that the CNN (Conv2D) algorithm achieved the highest accuracy of 92.46%. This indicates that the Conv2D architecture effectively captured the spatial features and patterns in the BCI data, resulting in accurate subject-specific classification. For the train on one subject and test on another subject task, the CNN (Conv2D) algorithm also performed exceptionally well, achieving an accuracy of 91.35%. This result demonstrates the generalizability of the Conv2D model across different subjects, indicating its robustness in capturing relevant patterns in the BCI data. In the channel region task, the BN3 model exhibited the highest accuracy among the three implemented models. This suggests that the BN3 architecture, specifically designed for BCI applications, successfully captured the unique characteristics and patterns present in the different channel regions of the BCI data.

These findings highlight the effectiveness of the CNN (Conv2D) algorithm for subject-specific and cross-subject classification tasks, while the BN3 model excelled in the channel region classification. By comparing and evaluating the performance of these algorithms, our team gained valuable insights into their strengths and suitability for different classification scenarios within the P300-based BCI dataset. These results contribute to the advancement of BCI research and provide a foundation for further exploration and development of machine learning techniques in the field of brain-computer interfaces. This project showcases our team's efforts to explore machine learning algorithms and their application in the analysis of P300-based BCI datasets. Through our work, we aimed to contribute to the field of brain-computer interfaces and advance the understanding and utilization of such technologies.

Project Presentation Slides: Brain channel region classification Slides

Business intelligence system for Music Streaming Platform

In this project, we created a business intelligence system for a music streaming platform. We used two datasets of KKBox music streaming service from Kaggle: WSDM - KKBox's Music Recommendation Challenge and WSDM - KKBox's Churn Prediction Challenge. In our project, we employed Tableau to create insightful dashboards for data analysis. To accomplish this, we utilized five CSV files: user information, transaction information, user and song relation information, song information, and song extra information. We imported these datasets into Tableau and established connections between them. Firstly, we linked the user information, transaction information, and user and song relation information based on their corresponding user IDs. This integration allowed us to gain a comprehensive understanding of user behavior and transaction history. Next, we linked the user and song relation information, song information, and song extra information using song IDs. By establishing these connections, we were able to explore the relationship between users, songs, and additional details related to the songs.

Based on these integrated datasets, for the company, we created various dashboards to provide valuable insights and support decision-making processes. For example, we developed a customer demographic dashboard, a subscription sales performance dashboard, and a top songs/artist and genre dashboard. These dashboards enabled the company to analyze data, improve sales strategies, and make informed marketing decisions. Moreover, we designed individualized dashboards for each user, allowing them to view their most listened-to song genres, popular songs, and top artists on the platform. Additionally, we created artist-specific dashboards that provided artists with valuable insights, such as their top songs, most listened-to tracks, and popular song genres. By leveraging Tableau's visualization capabilities and our comprehensive dataset, we were able to create impactful dashboards that empowered the company and its users to make data-driven decisions, optimize marketing strategies, and enhance the overall user experience.

In our project, we also developed a recommendation system to predict the potential popularity of songs if they were streamed on the company's platform. To accomplish this, we experimented with several machine learning algorithms, including Logistic Regression, GaussianNB, XGBoost, and Decision Tree. We utilized various input features to train our models, including song length, song genre, language, and target user characteristics such as age, gender, and location. These features were selected based on their potential influence on song popularity. After training and evaluating the performance of the different algorithms, we found that XGBoost achieved the highest validation accuracy among the models tested. Therefore, we selected XGBoost as the final model for our recommendation system.
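The popularity-prediction setup can be sketched with a simple logistic regression trained by gradient descent (a NumPy stand-in for the scikit-learn/XGBoost models we compared; the two features and the data are synthetic):

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=500):
    """Binary logistic regression via full-batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid of the linear score
        grad = p - y                             # gradient of the logistic loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Synthetic features: e.g. [normalised song length, listener-age match score]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)      # toy "popular" rule
w, b = train_logreg(X, y)
print((predict(w, b, X) == y).mean())            # training accuracy, close to 1.0
```

The real system used the same features-in, label-out shape, with XGBoost chosen for its higher validation accuracy.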

By leveraging the trained XGBoost model, the recommendation system can predict the likelihood of a song becoming popular on the platform based on the provided inputs. This information can be valuable for the company in deciding which songs to promote and stream, potentially enhancing user engagement and increasing overall platform success. By incorporating machine learning techniques and considering various input features, our recommendation system provides the company with valuable insights for making informed decisions regarding song promotion and content curation on their platform.

Project Presentation Slides: Business intelligence system for Music Streaming Platform Slides

Project Report: Business intelligence system for Music Streaming Platform Report

Automatic Transport Vehicle

This project, undertaken as part of the microprocessor class during my bachelor's study, focused on controlling the direction of a mobile robot and displaying its status on an LCD screen. To achieve this, our team utilized MATLAB Simulink code with Waijung blocks to control the microprocessor, specifically the STM32F417IV. This microprocessor facilitated the control of output devices such as motors and the LCD screen of the mobile robot. By implementing the Simulink code and integrating it with the microprocessor, we were able to effectively control the movement and direction of the robot. Additionally, the LCD screen provided real-time status updates, allowing for easy monitoring and understanding of the robot's actions.

This project showcased our proficiency in microprocessor programming and demonstrated our ability to interface with external devices to control the behavior of a mobile robot. The successful implementation of this project contributed to our understanding of embedded systems and their applications in robotics. Overall, this project served as a practical application of microprocessor concepts and highlighted our team's ability to design and implement control systems for mobile robots using MATLAB Simulink and microprocessors like the STM32F417IV.

Project Report: Automatic Transport Vehicle

Testing Video Records:

Automatic Coin Sorting Machine

This project involved the development of a coin sorting machine model using PLC (Programmable Logic Controller) and HMI (Human Machine Interface) devices as part of the mechatronic engineering laboratory in my bachelor's study. The backend system was designed using Ladder Programming, allowing for efficient control and automation of the coin sorting process. The PLC played a crucial role in accurately counting the number of each Thai coin inserted into the machine. On the front-end, an HMI application was implemented to provide a user-friendly interface for interacting with the coin sorting machine. The HMI allowed users to input coins and view the count of each coin type, enhancing the overall user experience.

By combining PLC and HMI technologies, this project showcased my ability to integrate hardware and software components in mechatronic systems. The coin sorting machine model served as a practical application of automation and control techniques, contributing to the field of mechatronic engineering. Overall, this project demonstrated my proficiency in designing and implementing a coin sorting machine using PLC and HMI devices, providing an efficient and user-friendly solution for counting Thai coins.

Project Report: Automatic Coin Sorting Machine Report

Intermediate Model-Based Design Training Program with VCU using MPC5744P

In my senior project as a mechatronics student at Assumption University, our team of three engineering students undertook this project under the guidance and supervision of Asst. Prof. Dr. Narong Aphiratsakun, Dean of the Vincent Mary School of Engineering at Assumption University of Thailand. His expertise, mentorship, and support played a crucial role in the successful execution of the project and enabled us to achieve our objectives.

This senior project, conducted in collaboration with Mine Mobility Research Company, a subsidiary of Energy Absolute Thailand, and Assumption University of Thailand, aimed to develop a training course using a model-based design system with an industry-grade vehicle control unit (VCU) that uses an MPC5744P microcontroller board. The primary objective of this project was to enhance the skills and knowledge of engineering students at Assumption University and workers at the company in VCU control. The training program consisted of seven topics, covering various aspects of VCU control, including flashing the MPC5744P bootloader, digital and analog input/output control, timer and alarm configuration, DC stepper motor control, Controller Area Network (CAN) communication, brushed DC motor control, and electric vehicle DC-DC converter control.

The entire program was designed using MATLAB Simulink codes, allowing participants to gain hands-on experience and practical knowledge. The training began with learning how to control the devkit MPC5744P microcontroller board, and as participants became familiar with the board, they progressed to practicing with the VCU. This practical experience provided participants with valuable skills for future employment in the electrical vehicle industry. By collaborating with Mine Mobility Research Company and Assumption University, this senior project aimed to bridge the gap between academic knowledge and industry requirements. The training program equipped participants with practical experience in VCU control, preparing them for future roles in electrical vehicle companies. The project showcased our ability to design and deliver a comprehensive training program, fostering the development of skilled professionals in the field.

Project Report: Intermediate Model-Based Design Training Program with VCU using MPC5744P