
Welcome to

Andrea Lombardo's Site


About Me

I'm a 28-year-old computer science engineer based in Rome. I'm an IT enthusiast with a particular passion for data science and AI, which grew during my academic path. During my master's degree I developed several projects with different teams, framing and solving problems mathematically in academic courses. I'm self-motivated with strong analytical and problem-solving skills, as well as a solid background in engineering, programming, mathematics, and statistics. I would describe myself as a self-motivated team player with strong organizational and interpersonal skills.

I graduated from the Liceo Scientifico Cavour high school in 2014 and then enrolled at university. I received my Bachelor's degree in Computer Science Engineering from the University of Tor Vergata in 2019, and my Master's degree in Computer Science Engineering from La Sapienza University in Rome on 20 October 2021.

I'm a creative learner, always eager to figure out things I've never seen before, such as new technologies and tools. I'm a determined, positive, detail-oriented person, able to adapt to different environments and to listen, interact and share information and ideas with the team. I'm able to analyze problems, design solutions within the system's capabilities and build algorithms for the task at hand. I like discussing my ideas with other people and comparing perspectives. I'm passionate about creating truly beautiful, efficient digital products: cyber-physical systems, data lakes and data warehouses, digital infrastructure in general, machine learning systems, and also applications that make people's lives better with technology.

I'm looking forward to growing my management skills. In my spare time I really enjoy exploring the beauty of landscapes in places of the world I haven't seen yet.

Work & Education

November 2022 - Present

Enel Group

Data & DevOps Engineer

As a Data Engineer I've supported teams in several activities:

Development of a full-stack application, taking care of metrics such as user-interaction latency, and maintenance of the AWS solution.

Creation of libraries for specific projects.

Creation of custom containers using frameworks like FastAPI, which make it possible to expose functionality as hosted REST APIs (a minimal sketch follows this list).

Maintenance and monitoring of applications hosted in Kubernetes clusters and of pipeline flows.
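
A minimal sketch of the kind of containerized FastAPI service mentioned above, exposing functionality as REST endpoints; the service name, endpoint paths and payload are hypothetical and not actual project code.

# Hypothetical FastAPI service exposing two REST endpoints; a minimal sketch of
# the pattern, not the actual project code. Run with: uvicorn app:app
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-metrics-service")

class Measurement(BaseModel):
    sensor_id: str
    value: float

@app.post("/measurements")
def store_measurement(m: Measurement):
    # In a real service this would write to a database or a queue.
    return {"status": "accepted", "sensor_id": m.sensor_id}

@app.get("/health")
def health():
    return {"status": "ok"}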

December 2021 - November 2022

EY - Ernst & Young

Junior Information Technology

As a Solution Developer I've supported the following activities:

Planning, development and maintenance of cloud solutions based on Microsoft Power Platform

Support for the Cloud Operating Model design and the definition of the implementation roadmap, according to the EY framework.

February 2019 - October 2021

University of La Sapienza

Master's Degree in Engineering in Computer Science

Master's thesis: Smart manufacturing in aerospace industries: analysis and prediction in Ruag case study
Time range: 02/2021-5/11/2021

The goal of the project is to improve RUAG's production line using a smart-manufacturing approach. ESA is exploring new experimental projects to increase the production of small satellites for launching constellations, as billionaires such as E. Musk and J. Bezos have done. The architecture digitalizes the system by inserting a component able to handle a huge amount of data through Flink and to inspect it through data-analytics methodologies. In particular, Kibana is used for generating dashboards, and with the eland and altair libraries (installed with pip) we are able to create single Lucene/JSON visualizations; on top of these, the single panels can be filtered with comboboxes. At the same time, we created a Markov decision process able to predict the next step of the automatic panelling machine built by RUAG, in order to estimate whether human intervention is needed on the production line. The idea is to help analyze the critical points while also being able to predict some errors in advance, so that errors or missing resources can be corrected directly on the production line. The project started because ESA would like to launch a constellation of satellites and RUAG has the contract to produce these panels.
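
The thesis builds a Markov decision process over the states of the panelling machine; as a much-simplified, hedged illustration of next-step prediction from a transition matrix, here is a small Python sketch. The state names and probabilities are invented and are not taken from the RUAG case study.

# Simplified, hypothetical sketch of next-step prediction with a Markov model.
# States and transition probabilities are invented for illustration only.
import numpy as np

states = ["loading", "bonding", "curing", "inspection", "manual_rework"]
# transition[i][j] = probability of moving from states[i] to states[j]
transition = np.array([
    [0.00, 0.90, 0.00, 0.00, 0.10],
    [0.00, 0.00, 0.85, 0.00, 0.15],
    [0.00, 0.00, 0.00, 0.95, 0.05],
    [0.70, 0.00, 0.00, 0.00, 0.30],
    [0.00, 0.50, 0.00, 0.50, 0.00],
])

def predict_next(current_state: str) -> str:
    """Return the most likely next state of the machine."""
    i = states.index(current_state)
    return states[int(np.argmax(transition[i]))]

print(predict_next("bonding"))                                              # -> curing
print(transition[states.index("bonding"), states.index("manual_rework")])  # P(human intervention needed)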

September 2014 - February 2019

University of Tor Vergata

Bachelor's Degree in Engineering in Computer Science
Track: Software & Web Systems

September 2009 - July 2014

Liceo Scientifico Cavour

Diploma di Maturità Scientifica (scientific high-school diploma)

Portfolio


Workshops done during my MASTER'S DEGREE:


  • The task of this project is to implement S-shaped Rectified Linear Units (SReLU) from scratch and compare them with other activation functions, analyzing the performance and behaviour of different convolutional neural networks. The comparison is between non-saturating activation functions such as ReLU, Leaky ReLU, PReLU and SReLU; in addition, exponential activation functions have also been tested. A minimal code sketch of SReLU follows the links below.

    Presentation
    Paper
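
    As mentioned above, here is a minimal NumPy sketch of the SReLU activation implemented for the comparison; in the project the four parameters are learnable per channel, while here they are fixed scalars for illustration.

    # Minimal NumPy sketch of the S-shaped ReLU (SReLU): three linear segments
    # joined at the thresholds t_l and t_r. Parameter values here are illustrative.
    import numpy as np

    def srelu(x, t_l=-1.0, a_l=0.1, t_r=1.0, a_r=0.1):
        return np.where(
            x <= t_l, t_l + a_l * (x - t_l),
            np.where(x >= t_r, t_r + a_r * (x - t_r), x),
        )

    x = np.linspace(-3.0, 3.0, 7)
    print(srelu(x))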

  • In the link above you can find the reference documentation for the tasks of the app.

    The idea is to create a mobile app that helps readers who enjoy books: they can save favourite sentences, track the number of books they have read so far and compare it with their friends. There is also the possibility to set a preference list with the book genres you prefer and, for each book you read, mark which one is the best in your opinion. The app is developed using Android Studio. After authenticating with Firebase, through the most common login forms (email and password, and Google login), the user enters a new fragment (HOME), which belongs to a set of fragments reachable from the navigation drawer; there a ListView shows all the books in the user's preference list. From the hamburger menu it is possible to LOGOUT (go back to the authentication form), open SETTINGS (manage the light sensor, the media player and other book-preference information) or open the PROFILE (a summary of the user's preferences). In the Home fragment it is possible to interact with a book entry to insert new user information, such as sentences or whether the user has already read the book; otherwise a book can be added through the Google Books API, searching by TITLE and AUTHOR (a small sketch of that lookup follows the links below). The communication with the database is handled by a server developed in Node.js, which implements a REST interface, manages the requests coming from the app and responds by querying a cloud MongoDB database.

    Backend NodeJS
    Develop project
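
    The app itself is Android plus a Node.js backend; as a hedged illustration of the kind of title/author lookup it performs against the Google Books API, here is a small Python sketch (the query uses the public volumes endpoint; error handling is omitted and the example book is arbitrary).

    # Hedged sketch of a Google Books lookup by title and author, analogous to the
    # one the Android app performs; illustrative Python, not the app's code.
    import requests

    def search_book(title: str, author: str):
        resp = requests.get(
            "https://www.googleapis.com/books/v1/volumes",
            params={"q": f"intitle:{title} inauthor:{author}", "maxResults": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json().get("items", [])
        return items[0]["volumeInfo"] if items else None

    info = search_book("The Hobbit", "Tolkien")
    if info:
        print(info["title"], "-", ", ".join(info.get("authors", [])))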


  • The idea is to focus on a seminar topic and try to explain something new; in my case, a recent paper on that topic: Remote Core Locking: Migrating Critical-Section Execution to Improve the Performance of Multithreaded Applications, written by Jean-Pierre Lozi, Florian David, Gaël Thomas, Julia Lawall and Gilles Muller.
    Some applications that work well on a small number of cores do not scale to the number of cores available in today's multicore architectures. The performance of lock algorithms is influenced by: 1. access contention (solution: reduce the number of threads that simultaneously require access to the critical section); 2. cache misses (solution: improve locality).
    Example of the problem: memcached is an application that exhibits this behaviour, reaching its best performance with 10 cores for a GET operation and with only 2 cores for a SET operation. One of the bottlenecks of this application is its critical sections, in which information has to be accessed atomically and which are protected by locks. High contention means more processing time and is therefore more expensive, and it becomes a problem when the number of cores starts increasing.
    Test system: an Opteron 6172 machine with 48 cores, running a 3.0.0 Linux kernel with glibc 2.13. The measurements are: 1) time spent in critical sections; 2) number of cache misses; 3) other measurements. RCL performs better than the other lock algorithms as the number of clients increases. For the memcached application there is no Flat Combining comparison, because memcached periodically blocks on condition variables, which Flat Combining does not support.
    Other studies for optimizing the execution of critical sections on multicore architectures exist: a software solution in which the server is an ordinary client thread and the server role is distributed among the client threads (an approach that produces overhead for the management of the server), and a hardware-based solution that introduces new instructions to perform the transfer of control and uses a special fast core to execute critical sections. RCL instead inserts a fast transfer of control from the other client cores to the server, to reduce access contention, and executes a succession of critical sections on a single server core, to improve cache locality. In the last 20 years several approaches have been developed for optimizing critical-section execution on multicore architectures. The real causes of the low performance of lock-based approaches are: 1) cache misses when executing the critical section; 2) bus saturation caused by spinlocks, which induce frequent broadcasts on the bus. RCL is introduced to address both issues simultaneously; the solution is to design better locks.
    RCL key features. Goal: improve the performance of critical-section execution in legacy applications running on top of multicore architectures. 1) It is developed entirely in software on the x86 architecture. 2) It works better than other kinds of locks: POSIX locks, CAS spinlocks, MCS, Flat Combining. 3) It replaces the management of the critical section with an optimized remote procedure call to a dedicated server core; the shared information stays in the server core's cache, so there is no need to transfer data from one core to another.
    How it works, overview: the execution and management of the critical section are transferred to a server core, chosen through a profiler as the core associated with the most frequent lock usage; locks are implemented as remote procedure calls of the critical sections. Core algorithm: the remote call is transformed into a communication between clients and server through an array of request structures of size C*L, unique for each server, where C is the maximum number of clients and L is the size of the hardware cache line; each request represents a request made by a client to the server and is mapped onto a single cache line. Each request contains, in order: 1) the address of the lock associated with the critical section; 2) the address of the structure holding the context; 3) the address of the function that includes the critical section (a non-NULL address means the client has requested access, NULL means no request).

    From the server side: a servicing thread analyzes all the requests and waits for those whose function address refers to a critical section. It iterates over each entry: if the function value is a valid address and the lock is free, the server thread acquires the lock, executes the critical section, resets the element and resumes the iteration. From the client side: after writing the entry's cache line with all the information, the client waits until the address of the function points back to NULL. If the number of clients is lower than the number of available cores, the SSE3 monitor/mwait routine is used so that the client sleeps until the server answers. Profiler: it is developed by the authors to collect information about the locks (lock-usage frequency and time spent in critical sections); this information is used to identify the core on which to run the server and which locks need to be replaced from POSIX to RCL. A tool, Coccinelle, is used to transform critical sections into remote procedure calls; critical sections then look like separate functions, which raises problems with shared variables and requires some additional elements.
    Implementation of the RCL runtime (built on top of POSIX threads). The runtime ensures responsiveness and liveness, respectively by avoiding threads being blocked at the OS level (or priority inversion) and by managing at run time a pool of threads for each server: if the servicing thread is blocked or waiting, it is replaced with another thread from the pool. The management thread handles the pool of threads: it has the highest priority and, every time it wakes up, it checks the progress of the threads and either 1) modifies their priority or 2) changes nothing. The backup thread is used when all threads are blocked at the OS level; in that case it wakes up the management thread. 1) The runtime implements a POSIX FIFO scheduling policy so that a thread executes until it is blocked or preempted: 1.1) this could induce priority inversion between threads; 2) the delay is reduced by minimizing the length of the FIFO queue. There are situations to avoid that would generate a deadlock, because the server would be unable to execute the critical sections of other locks. The core algorithm is applied to a thread and requires that the thread is never blocked at the OS level and never spins in a wait loop. Regarding the liveness and responsiveness of the RCL runtime, different situations are possible: the thread could be blocked at the OS level, it could spin if the critical section tries to acquire a spinlock, or it could be preempted at the OS level. Critical sections are executed on all cores except one, which manages the lifecycle of the threads.
    Evaluation: the degree of contention on the lock is varied by varying the delay between executions of the critical section, and the locality of the critical section is varied by changing the number of shared cache lines each one accesses; cache-line accesses are not pipelined, since the address of the next memory access is constructed from the previously read value. The comparison when varying the degree of contention is the average of 30 runs.
    False serialization: to adapt the Berkeley DB application to the use of RCL, the 2 most used locks must be allocated first and then the other 9; all 11 locks are implemented as RCLs on the same server, so their critical sections are artificially serialized. The impact of this serialization is studied with two metrics: the use rate, which measures the server workload, and the false serialization rate, i.e. the ratio of the number of iterations over the request array. It is important how the rate changes between one and two servers: with one server the rate is high, while eliminating the false serialization increases the throughput by about 50%.
    Analysis of performance: the execution time incurred when each critical section accesses 5 cache lines is reported as the average number of L2 cache misses (top) and the average execution time (bottom), over 5000 iterations, when critical sections access one shared cache line. RCL is a technique focused on reducing lock-acquisition time and improving the execution speed of critical sections through increased data locality and the migration of their execution to the server core. RCL is most powerful when an application relies on highly contended locks.
    Future work: design new applications with these strategies; consider the design and implementation of an adaptive RCL runtime, i.e. a system able to dynamically switch between locking strategies, with the capability to migrate locks between multiple servers in order to balance the load dynamically and avoid false serialization. A purely conceptual sketch of the request-array idea is given below.
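
    The RCL runtime itself is low-level C targeting x86; as a purely conceptual, hedged illustration of the request-array idea described above (each client publishes the critical section it wants executed in its own slot, and a dedicated server thread loops over the slots and runs them), here is a simplified Python sketch. The slot layout, cache-line alignment and all names are invented and do not reproduce the real system.

    # Purely conceptual Python sketch of the RCL request-array idea: each client
    # publishes a (lock_id, function) request in its slot; a single server thread
    # executes every critical section. Cache-line details of the real C runtime
    # are ignored; all names here are hypothetical.
    import threading

    NUM_CLIENTS = 4
    slots = [None] * NUM_CLIENTS          # slots[i] = (lock_id, critical_section) or None
    done = [threading.Event() for _ in range(NUM_CLIENTS)]
    shared_counter = 0

    def server_loop(stop):
        while not stop.is_set():
            for i, req in enumerate(slots):
                if req is not None:
                    lock_id, critical_section = req
                    critical_section()    # executed only by the server thread
                    slots[i] = None       # reset the slot (NULL = no pending request)
                    done[i].set()

    def remote_lock_call(client_id, lock_id, critical_section):
        done[client_id].clear()
        slots[client_id] = (lock_id, critical_section)
        done[client_id].wait()            # the client waits until the server resets its slot

    def increment():
        global shared_counter
        shared_counter += 1

    stop = threading.Event()
    threading.Thread(target=server_loop, args=(stop,), daemon=True).start()
    workers = [threading.Thread(target=remote_lock_call, args=(i, 0, increment))
               for i in range(NUM_CLIENTS)]
    for w in workers: w.start()
    for w in workers: w.join()
    stop.set()
    print(shared_counter)                 # -> 4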



  • In the links below you can find the project documentation, written in LaTeX using Overleaf, starting from a first draft of the project that we modified during development. The app serves as a medium to connect people who share the same passion for a sport and, in general, for physical activity. The main objective of the app is to let users connect with others by joining sport events or creating them. In addition, we designed the app so that future developments can cover other activities, such as sharing workouts and statistics or launching public challenges, with the possibility of importing data directly from the users' favourite sport apps.
    Presentation of the project


  • In the links below you can find my reports, written in LaTeX using Overleaf, together with the related Python code; I used PyCharm for the first homework, while for the second I decided to use Google Colab. The code shows how I obtained my results, so that I can then discuss them.
    The data sets contain the information used for the experiments and are described in the seminars; the blind set, when provided, lets us analyze our results in terms of accuracy, precision and recall. The first dataset consists only of a set of label pairs, while the second dataset is a collection of pictures to be classified.

    Project development of homework 1
    Solve the two classification problems: A) optimization prediction, B) compiler prediction. For each classification problem, realize at least two variants (varying feature extraction, learning algorithm, learning hyper-parameters, etc.). Note: use any method of your choice, except neural networks, which are the subject of the second homework. Evaluate each variant in a proper way. Find the best model and motivate the choice. For each classification problem, apply the best model to predict the output for the blind test set, and comment on all results with a report explaining all the work done: design and implementation choices, evaluation procedure and results. A hedged sketch of two such variants follows the links below.

    Data set
    Blind test set
    Report
    Code
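
    As a hedged illustration of what "two variants" can look like for one of these problems, here is a small scikit-learn sketch; the file name and column names are hypothetical and do not reflect the actual homework data layout.

    # Hedged sketch of two variants for one classification problem; the CSV file
    # name and the "instructions"/"target" columns are hypothetical placeholders.
    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC
    from sklearn.ensemble import RandomForestClassifier

    df = pd.read_csv("train_dataset.csv")            # hypothetical file
    X_text, y = df["instructions"], df["target"]     # hypothetical columns

    # Variant 1: bag-of-words features + linear SVM
    variant1 = make_pipeline(CountVectorizer(), LinearSVC())
    # Variant 2: TF-IDF features + random forest
    variant2 = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=200))

    for name, model in [("BoW + LinearSVC", variant1), ("TF-IDF + RandomForest", variant2)]:
        scores = cross_val_score(model, X_text, y, cv=5, scoring="accuracy")
        print(name, scores.mean())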
    Project development of homework 2
    Solve the image classification problem in two modes:
    A) define a CNN and train it from scratch,
    B) apply transfer learning and fine tuning from a pre-trained model. For training you can use any subset of the MWI dataset.
    Evaluate the two models in a proper way. Discuss the best model and motivate the choice. Testing can be done either with cross-validation on MWI or with the SMART-I weather test set.
    Write a report explaining all the work done: design and implementation choices, evaluation procedure and results.
    Submit a set of images to be used to define a new test set and your best model. (A hedged sketch of mode B, transfer learning, follows the links below.)

    Data set
    Report
    Code
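
    As a hedged illustration of mode B, here is a short Keras transfer-learning sketch; the backbone, image size, class count and dataset path are assumptions for illustration, not the actual MWI/SMART-I setup.

    # Hedged sketch of transfer learning + fine tuning with Keras; MobileNetV2,
    # the 224x224 input size, the 4 classes and the directory path are hypothetical.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False                      # freeze the pre-trained backbone first

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(4, activation="softmax"),  # e.g. 4 weather classes (hypothetical)
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "mwi_subset/train", image_size=(224, 224), batch_size=32)  # hypothetical path
    model.fit(train_ds, epochs=5)

    # Fine tuning: unfreeze the backbone and continue with a much lower learning rate.
    base.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=3)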

  • In the links below you can find my solutions to the three homeworks, delivered in .gz format together with the respective requirements. The aim of the course is to provide the knowledge needed to configure and manage LANs and WANs under Unix-like OSs using the Netkit framework, developed by Uniroma3, which allows emulating a switched network environment under Linux.

    The first homework deals with different topics such as: Netkit round-up, physical interfaces and MAC addressing, static IP addressing & DHCP, and NAT.

    Requirements of homework 1                     Solution
    The second homework deals with the same topics as the first one, with the addition of new concepts such as: network debug tools, static IP routing & OSPF, iptables, SSH and VPN.

    Requirements of homework 2                   Solution
    The third homework deals with the same topics, with different requirements, and adds new concepts such as: x509 & VPN and DNS.

    Requirements of homework 3                   Solution
    In this part I would also like to mention some notes I wrote during the course.

    Question and Answer for written part
    Notes of the practical part

  • In the links below you can find my reports, written in LaTeX using Overleaf.

    The first homework deals with the following topics: dynamic programming, shortest paths, network flow, the union-find structure, and minimum spanning trees with algorithms such as Kruskal's, which lets me find the required information, plus the difference between P, NP, NP-hard and NP-complete problems. A small code sketch of Kruskal's algorithm with union-find follows the links below.

    Requirement of each exercise
    Solution of homework 1
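
    As mentioned above, here is a minimal sketch of Kruskal's algorithm with a union-find structure; the example graph at the bottom is invented for illustration.

    # Minimal sketch of Kruskal's minimum-spanning-tree algorithm with union-find.
    def kruskal(n, edges):
        """edges: list of (weight, u, v); nodes are 0..n-1. Returns (total_weight, mst_edges)."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        mst, total = [], 0
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:                        # the edge joins two different components
                parent[ru] = rv
                mst.append((u, v, w))
                total += w
        return total, mst

    example = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 1, 3), (5, 2, 3)]
    print(kruskal(4, example))                  # -> (6, [(0, 1, 1), (1, 2, 2), (1, 3, 3)])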
    The second homework, made together with a colleague, F. Di Spazio, deals with different topics compared with the first one: we looked at applications of linear programming, approximation algorithms and randomized algorithms, together with the technique that lets me derandomize an algorithm (the method of conditional expectations), applications of game theory, and applications of the Chernoff bound in two variants for exercises 4 and 5.

    Requirement of each exercise
    Solution of homework 2


  • Using Social Media to Enhance Emergency Situation Awareness

    The project aims to detect emergency situations in real time by scanning the flow of tweets, using machine learning algorithms and users as sensors.

    Link to the SlideShare presentation. Scientific paper on the repository.
    Authors: Daniele Davoli, Danilo Marzilli, Andrea Lombardo

    Dataset: for training and validating our machine learning system, we used a dataset of 5,642 manually annotated tweets in Italian. The tweets are related to 4 different natural disasters that occurred in Italy between 2009 and 2014.
    For each tweet the following fields are reported:
    1) tweet ID;
    2) text;
    3) source;
    4) nickname of the user;
    5) ID of the author;
    6) latitude and longitude (if available);
    7) time;
    8) disaster ID (see below);
    9) class

    Tweets have been manually annotated by humans and divided into 3 classes according to the information they convey:
    damage class: tweets related to the disaster and carrying information about damage to infrastructure or to the population;
    no damage class: tweets related to the disaster but not carrying relevant information for the assessment of damage;
    not relevant class: tweets collected while building the dataset but not related to any disaster (noise).
    We process our dataset in this order:
    1) import the data from the .csv file;
    2) preprocess the tweets in order to remove punctuation, stop words and digits, and apply a stemming algorithm;
    3) transform the tweets into vectors in a vector space whose axes are the vocabulary terms, giving each vector a TF-IDF (Term Frequency - Inverse Document Frequency) weight;
    4) cluster the tweets (now vectors) into main topics;
    5) train an SVM classifier in order to distinguish relevant from not-relevant tweets.
    A hedged sketch of this pipeline is given after the link below.

    At the following link it is possible to download the paper we wrote, in which we describe our experiments, carried out following the reference paper's experiments.
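
    As a hedged sketch of the processing pipeline listed above (preprocessing, TF-IDF weighting, clustering, SVM classification), here is a small scikit-learn example; the toy tweets, labels and cluster count are invented and are not part of the real dataset.

    # Hedged sketch of the tweet pipeline: TF-IDF vectorization, topic clustering,
    # and an SVM separating relevant from not-relevant tweets. Toy data only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    tweets = ["crollato un ponte dopo il terremoto",   # invented examples
              "scossa avvertita ma nessun danno",
              "stasera pizza con gli amici"]

    # Step 3: transform tweets into TF-IDF vectors (stop-word removal and stemming
    # would normally happen before this step).
    X = TfidfVectorizer(lowercase=True).fit_transform(tweets)

    # Step 4: cluster the tweets into main topics (cluster count is illustrative).
    topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Step 5: train an SVM to distinguish relevant from not-relevant tweets.
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(tweets, ["relevant", "relevant", "not relevant"])
    print(topics, clf.predict(["allagamenti in citta dopo il nubifragio"]))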



Workshops done during my BACHELOR'S DEGREE:




  • It's made with the CLion IDE in a Linux environment; the testing process is done with the Ubuntu bash.
    The main goal of this project is to build a client-server application in the C programming language, able to exchange information reliably over the TCP transport protocol. The clients communicate with the server through the Berkeley sockets API. In this application I had to develop a booking system for a cinema room. I built a multithreaded server, able to respond to several client requests at the same time. After a stable connection is established, the system must:
    - return the map of available seats;
    - send to the server the seats I would like to book, returning a unique response code that lets me cancel the reservation;
    - cancel a reservation;
    - handle loss of the connection.
    The seat map is built as a linked list, which lets me identify row and column. Persistence is obtained by saving the system state to a file. A hedged Python sketch of the same request/response pattern is given below.
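
    The actual project is written in C with Berkeley sockets; as a hedged illustration of the same multithreaded request/response pattern, here is a Python analogue. The command names ("MAP", "BOOK r c"), the 3x3 room and the reservation code are invented.

    # Hedged Python analogue of the C cinema-booking server: a multithreaded TCP
    # server answering simple text commands. All names and sizes are hypothetical.
    import socket
    import threading

    seats = [["." for _ in range(3)] for _ in range(3)]   # "." free, "X" booked
    lock = threading.Lock()

    def handle(conn):
        with conn:
            for line in conn.makefile("r"):
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == "MAP":
                    with lock:
                        reply = "\n".join(" ".join(row) for row in seats)
                elif parts[0] == "BOOK" and len(parts) == 3:
                    r, c = int(parts[1]), int(parts[2])
                    with lock:
                        if seats[r][c] == ".":
                            seats[r][c] = "X"
                            reply = f"OK code={r}{c}"     # toy reservation code
                        else:
                            reply = "ALREADY BOOKED"
                else:
                    reply = "UNKNOWN COMMAND"
                conn.sendall((reply + "\n").encode())

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 5050))
    server.listen()
    while True:                                           # one thread per client
        client, _ = server.accept()
        threading.Thread(target=handle, args=(client,), daemon=True).start()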




  • It's made with the CLion IDE in a Linux environment; the testing process is done with the Ubuntu bash.
    The main goal of this project is to build a client-server application in the C programming language, able to exchange information (more precisely, a file) reliably over the UDP transport protocol. The clients communicate with the server through the Berkeley sockets API. Furthermore, as required, the reliability that UDP does not guarantee is implemented with the selective-repeat protocol, under a configurable probability of losing packets. The software must:
    - connect the client and server processes without any login, only with request and response messages;
    - through the "list" command, sent by the client to the server, return the list of all files available in the server's directory;
    - through the "get" command, sent by the client to the server, let the client download a file, provided it is present in the server's directory;
    - through the "put" command, sent by the client to the server, upload a file from the client's directory to the server's directory.
    A conceptual sketch of the sender-side selective-repeat idea is given below.






  • The main goal of this project is to build a web application and a desktop application following the BCE (Boundary-Control-Entity) architectural pattern.
    The task is to define the management and maintenance of resources: first the booking of events (such as exams, tests, degree sessions or conferences), then the definition, from the start of the academic year, of the exam sessions.
    The application must allow users to log in with one of two profiles (Professor or Secretary); the difference between them is the type and number of operations (actions) they can perform. The communication between the client application and the database is handled with JDBC. The patterns used in this project are BCE for the architecture and the Singleton pattern for the creation of the database connection and the controllers.
    I've implemented 4 use cases:
    a) visualization of the active bookings
    b) booking an exam
    c) booking an event
    d) visualization of the historical booking
    Furthermore, through JUnit we tested whether everything works correctly.
    In this part I also developed a thread which uses the different classes of this part to make a random reservation of an exam or an event. Here the logic of the room reservation is important: a dedicated class holds all the options and facilities a professor might want for a lecture. I chose to build the web part with the Java Servlet technology, which, following the BCE pattern, uses Java for the presentation logic of web applications while providing dynamic content in HTML. An additional tool used in the implementation was MATERIALIZE, a collection of templates for website design; it gives the system a touch of graphics and makes it not only able to respond to and conform to the specifications but also aesthetically pleasing. A hedged sketch of the Singleton idea used for the database connection is given below.
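
    The project itself is Java with JDBC; as a hedged, language-agnostic illustration of the Singleton idea used for the database connection and the controllers, here is a short Python stand-in (the sqlite3 in-memory connection replaces the real PostgreSQL one).

    # Hedged Python illustration of the Singleton used for the database connection;
    # the real project uses Java with JDBC and PostgreSQL. sqlite3 is a stand-in.
    import sqlite3

    class DatabaseConnection:
        _instance = None

        def __new__(cls):
            if cls._instance is None:                 # create the connection only once
                cls._instance = super().__new__(cls)
                cls._instance.conn = sqlite3.connect(":memory:")
            return cls._instance

        def query(self, sql, params=()):
            return self.conn.execute(sql, params).fetchall()

    a = DatabaseConnection()
    b = DatabaseConnection()
    print(a is b)          # -> True: both names refer to the same connection object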


  • The main goal of this project was to develop, during the course, a mobile application for Android, made with Android Studio, the official Integrated Development Environment (IDE) for Android app development.
    This app was made for the final mark, and we only had to develop one part, the one related to the description of races.
    At the beginning, the user logs in in the first activity using only a username and a password recognized by the app.
    Then, through REST calls (POST and GET), the app exchanges JSON objects with a server (WampServer) that accepts the requests through a PHP file.
    The task is to book a runner for a race.




  • It's made with Eclipse and SceneBuilder for the implementation, StarUML for the documentation, PostgreSQL for the management of the database, and JUnit for the testing process.
    The main goal of this Java project is to build a stand-alone desktop application that lets me import information from CSV files into a database. The data collected in the CSV files come from the Istituto Nazionale di Astrofisica; in fact, we have information on different celestial bodies (more precisely, stars and filaments) provided by several satellites. The documentation is inside this directory and includes:
    - the Entity-Relationship diagrams
    - the logical model (data dictionary, business rules, class diagram, test cases and the database dump made on PostgreSQL).
    The application must allow users to log in with one of two profiles (Administrator or User).
    The difference between them is the type and number of operations (actions) they can perform.
    The communication between the client application and the database is handled with JDBC.
    The patterns used in this project are MVC for the architecture and the Singleton pattern for the creation of the database connection and the controllers.



Skills Acquired

All ideas grow out of other ideas

Publications:
  • An industry 4.0 approach to large scale production of satellite constellations. The case study of composite sandwich panel manufacturing

    Elsevier - Acta Astronautica, March 2022, Vol. 192, pp. 276-290
    Reference          Assignment

Certifications:
  • Credly certifications profile

    Here you can see all my certifications, kept up to date
    Certificates          Badge



Skills

Technical skills
- Software lifecycle: analysis and specification of requirements, design, implementation, testing, ...
- Ability to design a machine learning solution and analyze it on the basis of the data delivered for the case at hand
- Operating systems: Linux, Unix, Windows
- Knowledge of algorithmic techniques and concepts, gained in the two courses held by G. P. Italiano at Tor Vergata during my first degree and by S. Leonardi at Sapienza during the Master's
- Web applications: TCP/IP and UDP protocols, HTTP servers (Apache, IIS)
- DBMS: (relational) PostgreSQL, MySQL; (non-relational) Firebase, MongoDB
- Routing protocols: DHCP, NAT, RIP, OSPF
- Network infrastructure concepts: access, telephone and core networks
Programming skills
- Knowledge of machine learning techniques (classification, regression, unsupervised algorithms, reinforcement learning, basic concepts of neural networks)
- Knowledge of SOA and Web Services (REST and SOAP)
- Knowledge of different development approaches: CMMI, SCRUM, AGILE, waterfall and iterative
- Knowledge of microservices
- Knowledge of web information retrieval and NLP algorithms (SVM, Rocchio, KNN)
- Skills to create and manage web pages: XML, JSP, HTML
- Query languages: SQL, NoSQL
- Assembly on the MIPS architecture
- Mobile programming (Android) with Android Studio
- UML, design patterns
- C programming using POSIX
- Python, Jython
- Java, JavaFX with usage of JDBC for connecting to relational DBs and of Servlets for managing requests to a server

Get In Touch

l.andrea195@live.it

I'm happy to connect, listen and help. Let's work together, turn your idea into reality, and build something awesome. Email me.