Panagiotis Christakakis
Resume | LinkedIn | GitHub | Google Scholar
Working as a Research Assistant at CERTH (Centre for Research and Technology Hellas).
Currently pursuing a Master's degree in Artificial Intelligence and Data Analytics at the University of Macedonia.
Thessaloniki, Central Macedonia, Greece
Authors: D Kapetas, P Christakakis, V Goglia, EM Pechlivani
Conference: IEEE IPAS 2025 (published in IEEE Xplore)
Authors: EM Pechlivani, G Gkogkos, P Christakakis, D Kapetas, I Hadjigeorgiou, D Ioannidis
Conference: IEEE IPAS 2025 (published in IEEE Xplore)
Authors: D Kapetas, P Christakakis, S Faliagka, N Katsoulas, EM Pechlivani
Journal: AgriEngineering - MDPI
Authors: D Kapetas, E Kalogeropoulou, P Christakakis, C Klaridopoulos, EM Pechlivani
Journal: Agriculture - MDPI
Authors: P Christakakis, N Giakoumoglou, D Kapetas, D Tzovaras, EM Pechlivani
Journal: AI - MDPI
Authors: P Christakakis, G Papadopoulou, G Mikos, N Kalogiannidis, D Ioannidis, D Tzovaras, EM Pechlivani
Journal: Technologies - MDPI
Authors: N Giakoumoglou, E Kalogeropoulou, C Klaridopoulos, EM Pechlivani, P Christakakis, E Markellou, N Frangakis, D Tzovaras
Journal: Smart Agricultural Technology - Elsevier
Authors: D Kapetas, E Kalogeropoulou, P Christakakis, C Klaridopoulos, EM Pechlivani
Journal: Smart Agricultural Technology - Elsevier
Authors: CS Kouzinopoulos, EM Pechlivani, N Giakoumoglou, A Papaioannou, S Pemas, P Christakakis, D Ioannidis, D Tzovaras
Journal: Journal of Low Power Electronics and Applications - MDPI
Smoking Recommendations with Item-Item CF: A personal project implementing a collaborative filtering-based recommendation system that analyzes smoking habits and provides personalized recommendations.
| | User_1 | User_2 | User.. | User_N |
|---|---|---|---|---|
| Tobacco Brand_1 | 4 | | … | |
| Tobacco Brand_2 | | 1 | … | |
| Tobacco Brand_3 | 3 | 5 | … | 4 |
| Tobacco Brand_.. | … | … | … | … |
| Tobacco Brand_M | 2 | | … | 2 |
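A minimal sketch of item-item collaborative filtering on a matrix laid out like the table above (rows are brands, columns are users). The ratings and the `predict` helper are illustrative assumptions, not the project's actual code: item similarity is plain cosine similarity over row vectors, and a missing rating is estimated as a similarity-weighted average of the user's known ratings.

```python
import numpy as np

# Toy user-item matrix matching the table above: rows are tobacco brands,
# columns are users, 0 marks a missing rating (illustrative values only).
R = np.array([
    [4, 0, 0],   # Brand_1
    [0, 1, 0],   # Brand_2
    [3, 5, 4],   # Brand_3
    [2, 0, 2],   # Brand_M
], dtype=float)

def item_similarity(R):
    """Cosine similarity between item (row) rating vectors."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    norms[norms == 0] = 1.0            # avoid division by zero
    unit = R / norms
    return unit @ unit.T

def predict(R, sim, item, user):
    """Estimate a missing rating as a similarity-weighted average
    over the items this user has already rated."""
    rated = R[:, user] > 0
    weights = sim[item, rated]
    if weights.sum() == 0:
        return 0.0
    return float(weights @ R[rated, user] / weights.sum())

sim = item_similarity(R)
estimate = predict(R, sim, item=1, user=0)  # Brand_2's rating for User_1
```

In a real run the similarity matrix would be computed once from the full rating matrix, then reused to rank unrated brands per user.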
Classification using Computer Vision: Implementation of the Bag of Visual Words (BoVW) technique for an image dataset (Mammals Classification).
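The core BoVW step can be sketched as follows. This is an assumption-laden toy: random vectors stand in for local descriptors (e.g. SIFT) and for the codebook that k-means would learn from training descriptors; the histogram construction itself is the real technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: in the real pipeline the descriptors come from a local feature
# extractor (e.g. SIFT) and the codebook from k-means over training descriptors.
codebook = rng.normal(size=(8, 16))       # 8 visual words, 16-D descriptors
descriptors = rng.normal(size=(100, 16))  # local descriptors of one image

def bovw_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest visual word and return
    an L1-normalised word-count histogram (the image's BoVW vector)."""
    # Pairwise squared distances, shape (n_descriptors, n_words)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

h = bovw_histogram(descriptors, codebook)
```

Each image's histogram then becomes a fixed-length feature vector for any standard classifier.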
Building a simple chatbot using Microsoft’s MetaLWOz dataset.
• (1) Data pre-processing. Preparing the training data (pairs of sentences from the provided dataset).
• (2) Neural Network Structure. Choosing an appropriate neural network architecture to model the problem.
• (3) Loss Function. Selecting an appropriate loss function.
• (4) Training. Training the model on sentence pairs ([input, output]).
• (5) Testing - Inference. A .txt file as well as a .gif with sample test conversations with the chatbot are provided.
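For step (3), a common choice for sequence models is categorical cross-entropy over the output vocabulary. A minimal NumPy sketch (the 5-word vocabulary and logit values are made up for illustration):

```python
import numpy as np

def cross_entropy(logits, target):
    """Categorical cross-entropy of one target token given raw logits,
    computed with the max-shift (log-sum-exp) trick for stability."""
    logits = logits - logits.max()           # stabilise exp()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[target]

# Toy example: 5-word vocabulary, the model strongly favours token 2.
logits = np.array([0.1, 0.2, 3.0, -1.0, 0.5])
loss = cross_entropy(logits, target=2)
```

Per-token losses like this are averaged over every position of every output sentence during training.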
Unsupervised learning in Image Clustering: Developing and combining deep learning models and clustering techniques on the Fashion-MNIST dataset.
Simple Classification: Comparing different models on numerical financial-indicator data for businesses in order to classify them as bankrupt or not.
NLP: Sentence creation from bigrams and trigrams extracted from Project Gutenberg books. Training a classifier to distinguish positive from negative movie reviews.
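The bigram half of this can be sketched in pure Python: build a table of which word follows which, then random-walk it to generate a sentence. The tiny corpus is a stand-in for the Project Gutenberg texts.

```python
import random
from collections import defaultdict

def train_bigrams(tokens):
    """Map each word to the list of words observed right after it."""
    model = defaultdict(list)
    for w1, w2 in zip(tokens, tokens[1:]):
        model[w1].append(w2)
    return model

def generate(model, start, length=8, seed=0):
    """Random-walk the bigram table to create a sentence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break                       # dead end: no observed successor
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny stand-in corpus; the project uses Project Gutenberg books instead.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
model = train_bigrams(corpus)
sentence = generate(model, "the")
```

The trigram version is the same idea with `(w1, w2)` tuples as keys.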
CNNs: Comparing various convolutional neural network architectures on the CIFAR-10 dataset, seeking the best result while trying different loss functions.
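The operation all of these architectures stack is 2-D convolution (implemented as cross-correlation in the usual frameworks). A minimal NumPy sketch of one valid-mode filter application, with made-up toy inputs:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation (the 'convolution' of CNN layers):
    slide the kernel over the image and take the elementwise-product sum."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A 3x3 vertical-edge kernel applied to a toy 5x5 image.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1, 0, -1]] * 3, dtype=float)
feature_map = conv2d(image, kernel)
```

Real CNN layers add channels, padding, stride and learned kernels, but the inner loop is exactly this product-and-sum.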
R in simple use cases: Using the simplest R functions to transform the dataset into tidy format.
R in simple use cases combined with ggplot2: Plotting the simplest possible R plots. Dataset queen.csv contains characteristics of each song on the Queen albums, as derived from Spotify. Mcdonalds.csv contains the price of a Big Mac in local currency for various countries and years. In general, this R Markdown contains Faceted ScatterPlots, BoxPlots, Histograms and BarPlots.
Editrules, Correction and Imputing missing values with R: With the use of deducorrect, editrules and VIM, the dataset is transformed into tidy format. In general, this R Markdown contains numerical and categorical rules, violations, hotdeck imputation and, lastly, some plots.
Olympic Games with R: Dataset results.csv contains results of the track-and-field events of all Olympic Games up to 2016. In general, this R Markdown contains the preprocessing steps as well as univariate and multivariate analysis, time series analysis techniques and interesting graphs for this dataset.
Using interactive maps with R: Managing and visualizing geographic data with the world dataset from cshapes. This folder contains both an R Markdown and a Shiny Dashboard. Distance thresholds, buffers from capitals and distance from each country’s centroid are included.
Network Data with R: Managing and visualizing network data, again with the world dataset from cshapes. This folder also contains both an R Markdown and a Shiny Dashboard. Included are a directed graph of capitals and their distances, and shortest paths between capitals weighted by distance or by total number of nodes.
Music Albums Interactive Visualizations: Managing data about music albums, their genres, titles, year and artists.
NYPD shooting incidents: Managing data about NYC shootings and trying to plot something interesting.
Production and Network Measurements: Studying and analyzing social networks. Includes a comparison of the networks' properties, commentary on the results, the parameter choices for the synthetic networks, and the reasoning used to arrive at them. Finally, plots are added and commented.
Detect communities with different techniques: Working with the polblogs network, a directed graph of hyperlinks between blogs on US politics recorded in 2005 by Adamic and Glance, the performance of different community detection algorithms is compared against the given ground-truth communities.
Production and evaluation of Node Embeddings: Working with the polbooks network (the Books about US Politics dataset), we produce node embeddings using Node2Vec and then evaluate them via Link Prediction and K-Means Clustering against the given ground-truth communities.
Data Insertion and Retrieval Queries using Neo4j: Data from an online lending library that records information about its books, its users/readers and the borrowing of books by users. The library provides book search services and recommendations of books users may find interesting. Users can also rate books, and lists of the most popular and highest-rated books can be provided to them. The online library uses a relational database to store and manage the relevant information.
Network Traffic Analysis from PCAP file: Analyzing network traffic in data centers based on a trace from a PCAP file, and extracting traffic characteristics in the form of distributions. The extracted results are plotted as distributions (e.g. CDFs). The dpkt library is used for the implementation.
Flow Migration Technique: Implementation of a flow migration technique for transferring a network flow to another path of a network topology, while assessing communication complications such as increased delay, packet loss and packet reordering. The flow transfers are done using OpenFlow and D-ITG.
CSR: Representation of a matrix A by three (one-dimensional) arrays, that respectively contain nonzero values, the extents of rows, and column indices.
CSC: Representation of a matrix A by three (one-dimensional) arrays, that respectively contain nonzero values, the extents of columns, and row indices.
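The CSR layout described above can be sketched with its classic use, sparse matrix-vector multiplication. The example matrix and helper names are illustrative, but the three arrays match the description: nonzero values, row extents (`row_ptr`), and column indices.

```python
import numpy as np

# Dense example matrix A; the zeros are not stored in CSR.
A = np.array([
    [10, 0, 0, 2],
    [0, 3, 0, 0],
    [0, 0, 7, 8],
])

def to_csr(A):
    """Build the three CSR arrays: nonzero values, row extents
    (row_ptr, length n_rows + 1) and column indices."""
    values, col_idx, row_ptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))     # end of this row's slice
    return np.array(values), np.array(row_ptr), np.array(col_idx)

def csr_matvec(values, row_ptr, col_idx, x):
    """y = A @ x touching only the stored nonzeros."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = values[start:end] @ x[col_idx[start:end]]
    return y

values, row_ptr, col_idx = to_csr(A)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = csr_matvec(values, row_ptr, col_idx, x)
```

CSC is the same construction with the roles of rows and columns swapped.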
Equilibration: Rows and columns of a matrix A are multiplied by positive scalars and these operations lead to non-zero numerical values of similar magnitude.
Arithmetic Mean: This method aims to decrease the variance between the nonzero elements in the coefficient matrix A. Each row is divided by the arithmetic mean of the absolute value of the elements in that row and each column is divided by the arithmetic mean of the absolute value of the elements in that column.
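A sketch of one pass of the arithmetic-mean scaling described above, on a made-up badly-scaled matrix. The function name and the single row-then-column pass are illustrative assumptions; real presolvers typically iterate until the scaling converges.

```python
import numpy as np

def arithmetic_mean_scaling(A, passes=1):
    """Divide each row, then each column, by the arithmetic mean of the
    absolute values of its nonzero entries, shrinking their spread."""
    A = A.astype(float).copy()
    for _ in range(passes):
        for i in range(A.shape[0]):                 # row pass
            nz = np.abs(A[i, A[i] != 0])
            if nz.size:
                A[i] /= nz.mean()
        for j in range(A.shape[1]):                 # column pass
            nz = np.abs(A[A[:, j] != 0, j])
            if nz.size:
                A[:, j] /= nz.mean()
    return A

# Nonzero magnitudes spanning five orders of magnitude before scaling.
A = np.array([[1000.0, 0.0, 1.0],
              [0.0, 0.1, 10.0]])
S = arithmetic_mean_scaling(A)
```

After the pass, the ratio between the largest and smallest nonzero magnitudes is much smaller, which is exactly the variance reduction the description aims for.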
k-ton: Identifying and eliminating singleton, doubleton, tripleton, and more general k-ton equality constraints in order to reduce the size of the problem and discover whether an LP is unbounded or infeasible.
Exterior Point Algorithm: An implementation of the Exterior Point Algorithm.
TSP Parser: With the help of tsplib95, a complete parser was made to read instances of type TSP, HCP, ATSP, SOP and CVRP. It also supports Edge_Weight_Types of EXPLICIT, EUC_2D, EUC_3D, XRAY1, XRAY2, GEO, ATT, UPPER_ROW, LOWER_ROW and many more. The main goal of this parser is to return important information about a selected problem so that heuristics and metaheuristics can be applied later. Note that this work was part of a group project; my part concerned Hamiltonian Cycle Problems (HCP). Contributors are mentioned inside the files.
TSP solver: With the help of the elkai library and the parser from TSP Parser, the Lin-Kernighan-Helsgaun heuristic algorithm is applied to HCP, TSP, ATSP and SOP files to find optimal tours and plot them.
CVRP solver: With the help of the VRPy Python framework and the TSP parser from code (6), the best routes for CVRP files are found, as well as their weights.