Farhaan Nazirkhan

I am a software engineer, Medium article writer, and entrepreneur who loves to create content for the internet.

About

About Me

Farhaan Nazirkhan

Software Engineer

Hi, my name is Farhaan, and creating content for the internet is something I enjoy doing. My interest in programming began in 2018, when I started building tools to simplify tasks with just a few clicks.

Today, I’m a software engineer with full-stack expertise in IoT, AI, RPA, Flutter, Firebase, and more. I also write on Medium, sharing insights and solutions in tech. I'm passionate about continuous learning and excited by projects that challenge and expand my skills. Let’s connect and make the internet even better!

  • Nationality: Mauritian
  • Study: University of Mauritius
  • Degree: Bachelor
  • Interest: R&D
  • Freelance: Available

Programming Skills

Java: 95%
Python 3: 65%
SQL Server, SQLite, MySQL: 80%
HTML 5 & CSS 3: 80%
Flutter, Firebase & Figma: 95%
AI & Machine Learning: 80%

Language Skills

English: 90%
French: 76%
Urdu: 50%
Hindi: 50%
Creole: 100%

Knowledge

  • Flutter, Python, C++, C#, Java, HTML 5, CSS 3
  • Arduino IDE, Android Studio, VSCode, Visual Studio
  • GitHub Version Control System
  • Jira Software, Monday.com
  • SQL Server MS 20, Firebase (Complete)
  • OS (Windows/macOS/Linux)

Interests

  • UI/UX design
  • Mobile app development
  • Machine learning models
  • Custom websites
  • E-commerce & investment

Education

  • 2021 - 2024

    University of Mauritius

    BSc (Hons) Software Engineering
  • 2019 - 2021

    BR SSS

    A Levels (A* A A)

Experience

  • 2023 - 2023

    CLARITY

    Software Engineer (Trainee)
  • 2021 - Now

    WFH

    Freelance Software Dev

Testimonials

Portfolio

Some Things I've Built

  • Dive into our exploration of machine learning techniques for predicting heart attacks, where we compare Decision Tree and Multilayer Perceptron models on a labeled dataset for optimized accuracy.

    Abstract

    This study implements two supervised machine learning models, Decision Tree and Multilayer Perceptron (MLP), to predict heart attack likelihood using a labeled dataset of 1,888 rows and 14 features. Leveraging significant features identified in prior research, our optimized models achieved accuracy and F1-score of 92.33%, evaluated through metrics like precision, recall, and specificity. Compared to similar studies, the models showed enhanced performance due to the larger dataset and hyperparameter tuning. This research demonstrates the potential of machine learning for early heart disease diagnosis, aiming for future real-time clinical applications.

    Introduction

    Cardiovascular diseases (CVDs) are a major global health concern, responsible for a substantial proportion of worldwide mortality. According to the World Health Organization (2021), approximately 17.9 million people died from CVDs in 2019, representing 32% of all deaths globally. Of these, 85% were due to heart attacks and strokes. While major progress has been made in medical diagnostics, early and accurate prediction of heart attack risk remains critical in reducing mortality rates.

    In recent years, machine learning has emerged as a promising tool for predictive healthcare analytics, offering the potential to enhance early diagnosis by identifying patterns in complex datasets that may not be apparent to traditional medical analyses. In this project, we employed two machine learning models — Decision Tree and Multi-Layer Perceptron (MLP) neural network — trained on a large dataset to predict the likelihood of heart attacks. We specifically chose features proven to be significant in heart attack prediction based on a review of more than five research papers.

    The dataset used in this study was generated by combining five publicly available datasets, creating a comprehensive dataset of 1,888 rows and 14 attributes after dropping missing data. Hyperparameter tuning and performance evaluation were conducted for both models. Additionally, we calculated feature importance to understand which factors played a critical role in the model’s predictions. However, while feature importance was analyzed, it was not directly used in model tuning.

    The following sections will detail the methodology, hyperparameter tuning processes, and the resulting performance of both models. We will also compare our findings with existing research to highlight the advancements made in heart attack prediction using machine learning.

    You can access the dataset we used and uploaded on Kaggle here, and the code for the model implementation can also be found on my Kaggle notebook here. Additionally, the full source code is available on GitHub.

    You can also check out the video presentation of the system down below.

    Research Gap

    Although several studies on heart attack prediction using machine learning have been conducted, several gaps remain:

    • Limited Dataset Size: Many studies have relied on small datasets, limiting the generalizability of their models. By merging four public datasets, we sought to address this gap and provide a more robust dataset to train our models (Alshraideh, et al., 2024).
    • Exclusion of Critical Features: Age, sex, cp, restecg, thalach, exang, oldpeak, slope, ca, and thal are found as most relevant attributes in predicting heart diseases (Chellammal & Sharmila, 2019). However, some research models exclude these critical features (Hossain, et al., 2023). Our dataset incorporates the critical attributes for predicting heart diseases.

    The findings from this study aim to fill these gaps by using a larger, more diverse dataset and by incorporating critical health features that are often overlooked in prior research.

    Data Collection & Pre-processing

    We compiled a comprehensive dataset by merging five public heart disease datasets from Kaggle and one from Figshare. This larger dataset provides a richer set of patient data, which will enhance the training and testing of the machine learning models. The datasets used are detailed in the table below.

    Dataset Details

    Key Features in the Dataset:
    1. age: The age of the patient
    2. sex: Gender (1 = male, 0 = female)
    3. cp: Chest pain type (four categories)
    4. trestbps: Resting blood pressure (in mm Hg)
    5. chol: Serum cholesterol in mg/dl
    6. fbs: Fasting blood sugar (1 = >120 mg/dl, 0 = otherwise)
    7. restecg: Resting electrocardiographic results (three categories)
    8. thalach: Maximum heart rate achieved
    9. exang: Exercise-induced angina (1 = yes, 0 = no)
    10. oldpeak: ST depression induced by exercise
    11. slope: Slope of the peak exercise ST segment
    12. ca: Number of major vessels colored by fluoroscopy
    13. thal: Thalassemia (four categories)
    14. target: Risk of heart attack (1 = high, 0 = low)

    Preprocessing

    Data Cleaning

    The initial combined dataset contained 2,181 rows and fourteen columns. Upon inspection, 293 rows were found to contain missing data across key features. The most significant missing data was in the ca (291 missing values), thal (266 missing values), and slope (190 missing values) columns. Rather than imputing the missing values, we chose to delete these rows, leaving 1,888 rows for training and testing.

    The decision to delete rows instead of imputing was driven by:

    • High Proportion of Missing Data: Features like ca and thal had sizable portions of missing values. Imputing such extensive missing data could introduce bias and reduce the model’s reliability.
    • Maintaining Data Integrity: Deleting incomplete rows ensured the dataset’s consistency, reducing the risk of introducing unreliable or biased data through imputation.
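
    A minimal pandas sketch of this cleaning step is shown below; the file name of the merged dataset is an assumption for illustration.

    ```python
    # Sketch of the row-deletion step described above; the file name is an assumption.
    import pandas as pd

    df = pd.read_csv("heart_attack_combined.csv")      # hypothetical merged dataset
    print(df.shape)                                     # roughly (2181, 14) before cleaning
    print(df[["ca", "thal", "slope"]].isna().sum())     # the columns with the most missing values

    df = df.dropna()                                    # delete incomplete rows rather than imputing
    print(df.shape)                                     # roughly (1888, 14) after cleaning
    ```
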
    Standardization of Feature Names

    The datasets used different naming conventions for features such as trestbps, exang, ca, thal, target, and slope. To ensure consistency across the combined dataset, we standardized all feature names to maintain uniformity. This step was essential for proper feature alignment during the merging process and subsequent model training and evaluation.
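
    For illustration, a rename map along the following lines could align the differing conventions before merging; the left-hand source names are hypothetical, while the right-hand names match the feature list above.

    ```python
    # Hypothetical rename map: the left-hand names are illustrative only, since each
    # source dataset used its own convention; the right-hand names match this write-up.
    rename_map = {
        "resting_bp": "trestbps",
        "exercise_angina": "exang",
        "num_major_vessels": "ca",
        "thalassemia": "thal",
        "st_slope": "slope",
        "heart_disease": "target",
    }
    df = df.rename(columns=rename_map)   # df from the cleaning sketch above
    ```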

    Feature Selection

    All 14 features were retained based on their proven significance in predicting heart disease risk, as highlighted in multiple research studies. These features include age, cholesterol levels, resting blood pressure, and ECG outcomes. By retaining all features, we ensured the models had access to sufficient information to accurately predict heart attack risk.

    Machine Learning Models

    1. Decision Tree Classifier

    Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The objective is to build a model that, by utilizing basic decision rules deduced from the data features, predicts the value of a target variable (scikit-learn, n.d.).

    Best Parameters:
    • Criterion: Gini impurity
    • Splitter: Best
    • Max Depth: 5
    • Type of Pruning: ccp_alpha
    • Random State: 8412 (yielded the best accuracy)
    Key Metrics for Decision Tree Classifier:
    • Accuracy: 92.33%
    • Precision: 92.33%
    • Recall: 92.33%
    • F1-Score: 92.33%

    The Decision Tree model’s performance exceeded expectations, classifying high and low heart attack risk with over 92% accuracy.
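
    For illustration, a minimal scikit-learn sketch using the parameters listed above is shown below; the file name, the train/test split, and the exact ccp_alpha value are assumptions rather than the exact training code from the notebook.

    ```python
    # Minimal sketch: Decision Tree configured with the parameters reported above.
    # The CSV path, split, and exact ccp_alpha value are assumptions for illustration.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    df = pd.read_csv("heart_attack_combined.csv")        # hypothetical file name
    X, y = df.drop(columns=["target"]), df["target"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=8412, stratify=y
    )

    dt = DecisionTreeClassifier(
        criterion="gini",       # Gini impurity
        splitter="best",
        max_depth=5,
        ccp_alpha=0.001,        # cost-complexity pruning; exact value not reported
        random_state=8412,
    )
    dt.fit(X_train, y_train)

    pred = dt.predict(X_test)
    print("Accuracy :", accuracy_score(y_test, pred))
    print("Precision:", precision_score(y_test, pred, average="weighted"))
    print("Recall   :", recall_score(y_test, pred, average="weighted"))
    print("F1-score :", f1_score(y_test, pred, average="weighted"))

    # Feature importance, as analysed (but not used for tuning) in this study
    print(dict(zip(X.columns, dt.feature_importances_.round(3))))
    ```
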

    2. Multilayer Perceptron (MLP)

    The Multilayer Perceptron (MLP) is a type of artificial neural network (ANN) that consists of multiple layers of neurons, including an input layer, hidden layers, and an output layer (Chan, et al., 2023).

    Best Hyperparameters:
    • Hidden Layers: 2 layers with 50 neurons each (after hyperparameter tuning)
    • Activation Function: Logistic (best-performing activation function)
    • Batch Size: 200 (best-performing batch size)
    • Learning Rate: Constant (best-performing learning rate)
    • Epochs: 1000 (optimal number of epochs)
    Key Metrics for MLP Classifier:
    • Accuracy: 92.33%
    • Precision: 92.39%
    • Recall: 92.33%
    • F1-Score: 92.33%

    We performed extensive hyperparameter tuning to find the best set of parameters that yielded the highest accuracy. The MLP classifier algorithm was adjusted to loop through various hyperparameters, including the number of neurons, hidden layers, activation functions, and batch sizes. The best-performing configuration consisted of 2 hidden layers with 50 neurons each, a batch size of 200, a constant learning rate, logistic activation function, and 1000 epochs.
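
    As an illustration of that tuning process, the sketch below loops over a small grid with scikit-learn's MLPClassifier; the grid values beyond the best-performing configuration reported above are assumptions, and the train/test split is reused from the Decision Tree sketch.

    ```python
    # Sketch of the MLP configuration and a simple hyperparameter loop.
    # The search grid is illustrative; X_train/X_test/y_train/y_test come from
    # the same split as in the Decision Tree sketch above.
    from itertools import product
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    best_acc, best_cfg = 0.0, None
    for layers, activation, batch in product(
        [(50,), (50, 50), (100,)],         # hidden layer configurations
        ["logistic", "relu", "tanh"],      # activation functions
        [100, 200],                        # batch sizes
    ):
        mlp = MLPClassifier(
            hidden_layer_sizes=layers,
            activation=activation,
            batch_size=batch,
            learning_rate="constant",
            max_iter=1000,                 # epochs
            random_state=8412,
        )
        mlp.fit(X_train, y_train)
        acc = accuracy_score(y_test, mlp.predict(X_test))
        if acc > best_acc:
            best_acc, best_cfg = acc, (layers, activation, batch)

    print(best_cfg, best_acc)   # best run: two layers of 50 neurons, logistic, batch 200
    ```
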

    Results & Comparison with Other Research

    1. Decision Tree Performance:

    Our Model: Achieved an accuracy of 92.33%, with precision, recall, and F1-score of 0.923.

    Comparison 1: A study that applied the Jellyfish Optimization Algorithm to a Decision Tree model reported an accuracy of 97.55% (Ahmad & Polat, 2023). Their higher accuracy could suggest that the Jellyfish Optimization Algorithm would be a better fit for this scenario.

    Comparison 2: Another study using the Particle Swarm Optimization (PSO) technique reported an accuracy of 85.71% with a Decision Tree (Alshraideh, et al., 2024). Our model’s 92.33% accuracy significantly exceeds this by 6.62%, showing the substantial impact of using a larger dataset and the importance of carefully tuning model parameters.

    2. Neural Network (ANN) Performance:

    Our Model: Achieved an accuracy of 92.33%, with precision, recall, and F1-score all around 0.923.

    Comparison 1: In another study, an ANN-based model reported an accuracy of 73.33% using the same dataset attributes (Rabbi, et al., 2018). The 19.00% increase in accuracy for our model demonstrates the critical role that dataset size and hyperparameter tuning play in model performance. Our larger dataset, combined with optimized ANN configurations, allowed for significantly better results.

    Comparison 2: A CNN-based heart disease prediction model achieved an accuracy of 91.71% (Arooj, et al., 2022). While CNNs are known for their power in image and structured data classification, our simpler ANN model slightly outperformed this with 92.33% accuracy. This further highlights the effectiveness of dataset size and optimization in achieving competitive results even with a relatively simpler model architecture.

    Future Work

    For future work, we plan to use optimization algorithms, such as Particle Swarm Optimization (PSO) or Genetic Algorithms, to enhance both the Decision Tree and MLP models. Additionally, incorporating real-time clinical data and validating the model in a live healthcare environment would further demonstrate its applicability in real-world scenarios.

    Conclusion

    Our project highlights the potential of machine learning models to significantly improve heart attack prediction. By leveraging a large and diverse dataset, employing rigorous preprocessing methods, and optimizing hyperparameters, we were able to achieve high accuracy rates. These results suggest that machine learning, when properly tuned, can be a valuable tool in assisting healthcare professionals with the early diagnosis of heart disease, ultimately saving lives.

    Thank you for reading!

    References

  • In Game of Code 2024, Team A4 developed an engaging web app using HTML, JavaScript, and CSS to teach coding concepts through interactive puzzles, real-time feedback, and an AI-driven game assistant.

    Introduction

    In October 2024, our team—Team A4, made up of Levyn Kwong, Sarwin Rajiah, Farhaan Nazirkhan, and Feiz Roojee—took on the intense challenge of Game of Code 2024. The task: to design a web or mobile app in just 12 hours that makes learning to code engaging through gaming elements. Our goal was ambitious: to create an educational web app that transforms coding into an interactive, fun experience, blending key programming concepts with an accessible and engaging format.

    The Challenge

    Game of Code 2024 set the bar high with its 12-hour challenge. The objective was to design a game-driven educational app that not only taught coding but did so in a way that felt natural and enjoyable. Our goal was to structure coding puzzles so users could solve them as if they were playing a game, gradually learning key concepts while navigating obstacles and challenges.

    Our Solution: An Immersive Coding Web App

    With our concept in mind, we got to work building a web app that used HTML, JavaScript, and CSS as its core technologies. Our solution is a web application where users solve coding puzzles to move a character through a series of tiles, progressively learning coding principles. Every element was designed to encourage learning while keeping players engaged.

    Key Features

    • Code Editor with Syntax Highlighting: Our embedded code editor allows users to write and test code, supporting syntax highlighting to improve readability.
    • Real-Time Console Feedback: As players write code, the console provides immediate feedback on their input, displaying errors, warnings, or successful execution outputs.
    • Tile-Based Puzzle Game: Users navigate a character across tiles by solving coding problems, applying coding concepts practically as they progress.
    • Developer Tool Exploration (F12 Encouragement): We encourage users to explore their browser’s developer tools, fostering a deeper understanding of web development.
    • AI Assistant (GPT-4 Turbo): Our AI assistant provides hints, making the game challenging yet manageable with a witty, interactive personality.

    Why We Chose This Approach

    Our primary objective was to make coding feel accessible and enjoyable. By turning coding puzzles into game levels, we aimed to alleviate the intimidation many beginners feel. The AI assistant adds humor and interaction, creating a unique learning experience that encourages critical thinking.

    Building the App: Challenges and Lessons Learned

    Working under a strict 12-hour deadline presented challenges. Key issues included:

    • Time Constraints: Balancing rapid development with quality required feature prioritization for maximum learning impact.
    • User Experience Design: Designing an interface that minimized learning curves for beginners.
    • AI Integration: Ensuring relevant hints required focused logic, which was challenging under the time constraint but rewarding in the final product.
    Reflections and Future Development

    Game of Code 2024 was a memorable journey. While our app is a functional learning tool, we see room for future expansion:

    • Advanced Coding Challenges: Adding complex levels for intermediate learners.
    • Personalized AI Interactions: Improving the AI assistant to adapt more closely to each user’s skill level.
    • Mobile Optimization: Expanding the app to work seamlessly on mobile devices.
    Conclusion

    Game of Code 2024 pushed Team A4 to create an immersive, educational coding app. Our app combines coding fundamentals with interactive learning, providing a game-like experience. We’re excited about future possibilities and hope our app inspires others to dive into coding with confidence and curiosity.

  • This project enhances e-commerce logistics by optimizing air cargo delivery routes using Ant Colony Optimization, addressing delivery challenges in speed, cost, and reliability with a scalable, adaptive algorithmic approach.

    1. Introduction

    In the dynamic landscape of the e-commerce industry, the efficiency and timeliness of package delivery stand as pivotal elements in achieving customer satisfaction. Major online retail giants such as AliExpress, Amazon, and Temu constantly struggle with the complexities of delivery logistics. The most important part of maintaining a competitive edge in this sphere lies in the thorough planning and execution of delivery routes. These companies strive to navigate the multifaceted challenges of ensuring deliveries are not only prompt but also cost-effective and reliable. This project is dedicated to implementing an algorithm designed to refine and enhance the delivery process, marking a step towards optimizing operational efficiencies within the realm of e-commerce logistics.

    1.1 Selecting an NP-Hard Problem

    The Traveling Salesman Problem (TSP) was chosen due to its well-known computational complexity and its relevance to logistical operations in e-commerce. It exemplifies a class of problems without polynomial-time solutions, making it a staple challenge in optimization and algorithm design, especially for logistics and route planning.

    2. Understanding the Problem

    The core challenge of streamlining e-commerce deliveries, notably by air, lies in the multifaceted nature of logistics planning. Additionally, optimizing the grouping, splitting, and routing of packages adds to the challenge of delivering efficiently and reliably. Understanding the magnitude of the challenge at hand requires analyzing the factors that make logistics complex.

    Geographic and Regulatory Constraints

    Geographically, e-commerce deliveries face no-fly zones, presenting a significant challenge for air transit. These areas are off-limits for varied reasons, including national security, environmental protection, and urban planning considerations. The dynamic nature of these restrictions necessitates real-time adaptable routing algorithms that uphold both efficiency and regulatory compliance.

    Operational Complexities

    E-commerce logistics still has huge challenges when it comes to multiple packages, especially in the air delivery realm, as all those packages may originate from different places and reach various airports. This complicates routing since traditional logistic models assume a single start point for multiple routes. The need to execute many travel paths concurrently, depending on package origins and destinations, presents a complex problem. Each route must be optimized to reduce travel time and expenses while improving service efficiency. Moreover, the variability in package sizes, priorities, and delivery deadlines demands intricate solutions for effectively managing and optimizing these diverse delivery routes.

    Economic and Environmental Considerations

    The economic viability of air delivery is dependent on a variety of factors. Fuel prices, aircraft sizes, capacity utilization, and distance traveled all have a substantial impact on the bottom line. Additionally, smaller and lighter packages may be less economical compared to other means of transport. It is important to carefully weigh costs against the benefits of speedy delivery and customer satisfaction.

    Environmentally, the focus is shifting towards minimizing the carbon footprint of delivery methods. It is imperative to develop logistics strategies that are not only economically viable but also environmentally responsible. The ultimate challenge is restructuring the logistic framework to tackle these multifaceted issues effectively.

    3. Agent Based Techniques

    In response to these challenges, utility-based agent modeling serves as a powerful tool for simulating complex systems. Each agent in the system, representing an airline, package, or routing node, operates under set rules, allowing intricate interactions and emergent behaviors to be modeled. This approach is particularly effective in scenarios where multiple entities with varying objectives must operate cohesively within a shared environment. Agent-based models provide the flexibility and granularity required for analyzing and optimizing complex logistics operations.

    4. Optimization Algorithm Chosen

    The Ant Colony Optimization (ACO) algorithm was chosen for its proven effectiveness in solving complex routing problems such as the TSP. By mimicking the pheromone trail-laying and following behavior of ants, the ACO algorithm facilitates the discovery of optimized paths through a probabilistic search process. This algorithm excels in environments where the search space is too vast for exhaustive exploration and traditional optimization methods are impractical. ACO's ability to adapt to changes in real-time makes it well-suited to address the dynamic and multifaceted nature of e-commerce logistics.

    5. Solution Approach

    The proposed solution synthesizes agent-based techniques with the ACO algorithm to address the challenges detailed above. Below is the detailed explanation of the approach used.

    Distance Calculation with Cost & Path Construction

    • For airport and route data loading, two global datasets provided by OpenFlights are used. The airports dataset contains 6072 airports and crucial information such as 'Airport_ID', 'Name', 'City', 'Country', 'IATA', 'ICAO', 'Latitude', 'Longitude', 'Altitude', 'Timezone', 'DST', 'Tz_database_time_zone', 'Type', and ‘Source'. The routes dataset includes information such as 'Airline', 'Airline_ID', 'Source_Airport', 'Source_Airport_ID', 'Destination_Airport', 'Destination_Airport_ID', 'Codeshare', 'Stops', and 'Equipment'.
    • For data filtering, only entries from the airports dataset classified as airports and having a non-null IATA code are included; this selection is made either using a defined number of nodes or using a Boolean flag that includes at least one airport from each country in the dataset. The routes data is filtered to remove any entries where the destination airport ID is null.
    • For list construction, the algorithm ensures that each route is processed only once.
    • For distance calculation and path construction, the algorithm identifies all the routes originating from each airport. The haversine formula, which takes into account the curvature of the Earth, is then used to calculate the distance between the origin and each destination, and a cost function computes the cost from the distance and a constant K = 1.15. The cost of each route increases exponentially with distance so that the agent is encouraged to take shorter routes (see the sketch after this list).
    • For data structuring, a JSON object is constructed with each key corresponding to an airport ID, containing details about that airport and a list of all the paths (with their cost and distance) leading from it. The entire object is then written to a JSON file.
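
    Below is a minimal sketch of the distance and cost computation; the haversine formula is standard, while the exact functional form of the cost (and the distance scale) is an assumption built around the constant K = 1.15 mentioned above.

    ```python
    # Haversine distance plus an exponential-style cost as described above.
    # K = 1.15 comes from the write-up; the exact cost formula and scale are assumptions.
    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_KM = 6371.0
    K = 1.15

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, accounting for Earth's curvature."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

    def route_cost(distance_km, scale=1000.0):
        """Cost grows faster than linearly with distance, so shorter routes are preferred."""
        return distance_km * (K ** (distance_km / scale))

    # Illustrative pair of coordinates (roughly MRU -> VIE)
    d = haversine_km(-20.4302, 57.6836, 48.1103, 16.5697)
    print(round(d, 1), round(route_cost(d), 1))
    ```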

    Optimization Algorithm & Multipackage Handling

    • The algorithm initializes the pheromone levels and a distance matrix for the selected airports. Each package has its own pheromone level. There is also a global pheromone level that nudges the ants toward paths where the packages converge.
    • Each ant constructs a tour by visiting nodes using a probability distribution that is a function of the distance and cost to the next node and the amount of pheromone on the connecting edge.
    • The algorithm then keeps track of the best tour found so far and maintains a list of the top 10 tours, ensuring diversity in the solutions and avoiding early convergence to suboptimal paths.
    • After all the ants have completed their tours, the pheromone levels are updated, with more pheromone deposited on shorter and more desirable paths while pheromone on other paths evaporates according to the decay rate.
    • The simulation is repeated for a set number of iterations, over which the algorithm converges to a near-optimal solution.
    • To account for multiple packages that may have different start and end nodes, separate tours are created for each package, ensuring that the algorithm finds an optimal route for each one, given that the starting point of each package may differ.

    The final output will include the best tour and its length. The results of the algorithm will be illustrated in the experimental evaluation section.
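
    For illustration, a compact, self-contained sketch of the ACO loop described above is shown below, run on a toy distance matrix for a single package; the parameter values and the probability rule follow the textbook ACO formulation and are assumptions rather than the exact Colab implementation.

    ```python
    # Minimal Ant Colony Optimization sketch for a single package (toy distance matrix).
    # Alpha weights pheromone, beta weights inverse distance; all values are illustrative.
    import random

    dist = [                      # symmetric toy distances between 4 airports
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0],
    ]
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    alpha, beta, decay = 1.0, 2.0, 0.1
    n_ants, n_iters = 20, 100
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                cur = tour[-1]
                cands = list(unvisited)
                # Probability weight: pheromone ** alpha times (1/distance) ** beta
                weights = [
                    (pheromone[cur][j] ** alpha) * ((1.0 / dist[cur][j]) ** beta)
                    for j in cands
                ]
                nxt = random.choices(cands, weights=weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)

        # Evaporate, then deposit pheromone inversely proportional to tour length
        pheromone = [[p * (1 - decay) for p in row] for row in pheromone]
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(len(tour) - 1):
                a, b = tour[i], tour[i + 1]
                pheromone[a][b] += 1.0 / length
                pheromone[b][a] += 1.0 / length

    print("best tour:", best_tour, "length:", best_len)
    ```

    Handling multiple packages, as described above, would amount to running one such tour per package while sharing a global pheromone layer across packages.
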

    6. Implementation

    The algorithm was implemented in Python and hosted on Google Colab with detailed annotations for each code segment, which collectively perform the tasks of node generation, distance calculation, ACO processing, multipackage handling, and result visualization.

    To access the Google Colab notebook, click on the following link: Google Colab Notebook

    7. Experimental Evaluation

    7.1 Effect of Number of Ants

    The number of ants affects the rate at which the optimal path (according to the algorithm) is found. More ants result in better paths being found earlier in the iterations.

    7.2 Effect of Number of Iterations

    One of the routes improved in one of the last iterations, indicating that there may still be better paths for the ants to find. Where there was no improvement in path length after 150 iterations and no change over a further 350 iterations, we can safely assume there will be no further improvement.

    7.3 Effect of Pheromone Decay

    Pheromone decay is the rate at which pheromones diminish over time. As pheromones decay, the ants are encouraged to explore new paths.

    7.4 Effect of Alpha

    The variable alpha controls the importance of local pheromone trails when making decisions. Compared to gamma (which controls the importance of the global pheromone), this ensures that the trails of other packages are maintained.

    7.5 Effect of Beta

    The variable beta controls the importance of distance while making decisions. It will try to prioritize paths with shorter distances.

    8. Conclusion

    Through the implementation of the ACO algorithm to tackle an NP-hard problem, the final solution achieved a level of optimization that streamlined deliveries in terms of cost and time. The ACO algorithm is able to handle packages from different origins and with different priorities. The approach can be extended by exploring other parameters or experimenting with other optimization algorithms, such as PSO or a Genetic Algorithm, adapted to the context to achieve the desired outcome. Moreover, additional datasets can be introduced, allowing the algorithm's decision-making to be more accurate across the various distances and costs involved and to satisfy multiple objectives.

  • MyLawer, developed for the University of Mauritius App Cup 2023, features an AI-powered chatbot that assists users with Mauritian law, including filling out insurance claims, and offers a comprehensive legal database.

    MyLawer: Empowering Individuals Through Technology

    In a remarkable feat, MyLawer was developed for the University of Mauritius App Cup 2023 in just two days. This innovative application aims to empower individuals by simplifying the complexities of Mauritian law, making it more accessible to the general public.

    Chatbot Assistance

    At the heart of MyLawer is a powerful chatbot designed to assist users in navigating various legal scenarios. Whether it’s filling out insurance claims or understanding specific legal processes, the chatbot provides step-by-step guidance to ensure users can accurately and efficiently complete necessary forms. This feature not only saves time but also reduces the stress often associated with dealing with legal documentation.

    User-Friendly Interface

    MyLawer boasts a user-friendly interface that enhances the overall experience, facilitating seamless navigation through the intricacies of the Mauritian legal system. Users can explore a comprehensive database of laws, regulations, and legal procedures, equipping them with the knowledge to search for specific legislation or browse through various categories. This functionality encourages a deeper understanding of the legal framework, placing vital information at users' fingertips.

    Conclusion

    In summary, MyLawer stands as a valuable resource for anyone seeking clarity in Mauritian law, demonstrating how technology can bridge the gap between legal complexities and user accessibility.

  • MyRecipes is an innovative recipe app that has been developed as part of our semester one university project. With a user-friendly interface and a vast collection of delicious recipes, it's the perfect tool to explore and enhance your culinary journey.

    Due to the software's closed-source status, descriptions of its functionality and design are not included.

  • The Intelligent Traffic Light System dynamically adjusts light timings using real-time traffic data, reducing congestion, emissions, and delays. Interconnected systems enhance traffic flow, safety, and environmental sustainability in urban areas.

    Due to the software's closed-source status, descriptions of its functionality and design are not included.

  • The purpose of Harmony is to provide an online platform for consumers to easily access and purchase eco-friendly products that are better for the environment and their personal health.

    Introduction

    Description

    In the dynamic world of e-commerce, keeping the website up to date is essential for a successful business. This project empowers individuals within our organization to actively contribute to expanding the online inventory, ensuring that all restocked products are entered into our system with complete and accurate details and providing up-to-date information for our customers.

    Scope

    The primary objective of this project is to enhance the functionality of our online shop by utilizing powerful markup languages, such as XML, XSD, XSLT, and AJAX. In this updated version, product information can be added to the database either by uploading an XML file or manually. When a product ID is clicked, all related information is displayed on the page, allowing for streamlined access and management of product data.

    Technology Used

    • XML
    • XSD
    • XSLT
    • AJAX
    • HTML
    • PHP
    • PICO CSS

    Requirements

    Functional Requirements

    • The system shall display a list of products on the inventory page.
    • The system shall display the product details on the product page.
    • The system shall allow the website administrator to enter product inventories.
    • The system shall allow the website administrator to add an entire XML file to include products.
    • The system shall allow the website administrator to add an XML file to include product details.
    • The system shall enable the website administrator to enter product attributes such as brand name, name, category, description, thumbnail, weight, barcode, quantity, unit, production date, expiry date, cost price, and selling price.

    Non-Functional Requirements

    • The system shall use a MySQL database.
    • The system should be adaptable to different platforms with minimal modifications.
    • The system should be reliably available whenever needed.
    • The system should upload new product listings in under 3 seconds.
    • The system should have a graphical user interface.
    • The system should retrieve and display details from the database in under 3 seconds.

    Code Repository

    The entire code is available on GitHub.
    Link: Admin Side

    Link: User Side

  • In the game Infinite Loop, we implemented a game environment that features 3D views of objects with a moving camera, lighting and material modifications, and texture mapping. To improve the gaming environment, we used several libraries to integrate sophisticated features such as collision detection, AI, and sound effects. The defining aspect of our game is its mysterious environment, which makes it unique, and features such as in-game help, NPC interaction, and background music make the game more challenging while remaining intriguing and entertaining to play. Thanks to all of our amazing team members, we managed to complete this game on time.

    Due to the software's closed-source status, descriptions of its functionality and design are not included.

Fun Facts

  • 15+

    Projects Completed
  • 300+

    Contributions (2024)
  • 100K+

    Lines of Code

Most Used Languages

Python 3: 37.62%
CSS 3: 20.08%
JavaScript: 17.28%
Dart: 11.56%
C++: 7.38%
HTML 5: 6.09%

Writings

Top Writings

Contact

Get in Touch
