Job offers: visits, internships, PhD and permanent positions
Feel free to contact us at contact@algomus.fr if you are interested in joining our team, even if the subject is not detailed below. You can also have a look at job offers from previous years to see the kind of subjects we like to work on.
Permanent position (French required, with French qualification). An Associate Professor (MdC) position in CNU section 27, whose research profile includes “Music Information Retrieval, computer-based music analysis and generation, music perception”, will be published as part of the 2026 synchronized faculty recruitment session by our collaborators at UPJV in Amiens. (soon)
Visits by PhD students or post-docs (from 2-3 weeks to 2-3 months) can also be arranged through mobility funding. Contact should be made several months in advance. (open)
2026-27 Engineer position (15 months), music structure on Dezrann, full-stack web development (open)
2026 Internship positions are available (2-6 months), this year including the following themes:
- Co-creativity, Machine learning: Machine Listening for Reinforcement Learning with Agent-based Music Performance Systems (K. Déguernel, C. Panariello) (open)
- Music structure: Similarities across scales and musical dimensions (F. Levé, in Amiens, Y. Teytaut) (open)
- Pedagogy / Music training: Generating high-quality melodic and rhythmic exercises (M. Giraud, F. Levé, with Solfy) (closed)
- Real-time Analysis / Music games: Real-time analysis and evaluation of musical streams for pedagogy (M. Giraud, Y. Teytaut, with Dobox) (open)
- Video Game Music / Corpus / Music Perception, Loop in Video Game Music: building and analyzing the UFO-50 corpus (Y. Teytaut, F. Levé) (open)
PhD positions (2026-29) will be published soon.
2026 research and/or development internships in music computing
Machine Listening for Reinforcement Learning with Agent-based Music Performance Systems
- Final-year Master’s degree internship
- Duration: 4-6 months, with the legal internship “gratification” (€550-€600/month)
- Location: Lille (Villeneuve d’Ascq, Laboratoire CRIStAL, mĂ©tro 4 Cantons); partial remote work is possible
- Supervisors and contacts: Ken Déguernel (CR CNRS) & Claudio Panariello (postdoc Univ. Lille)
- Full offer, and links: https://www.algomus.fr/jobs
- Open applications
Context
This internship takes place in the scope of the MICCDroP project, which aims to integrate continual-learning mechanisms for long-term partnerships in human-AI musical interactions with agent-based music performance systems. In particular, one aspect of this project is the use of methods based on reinforcement learning and curiosity-driven learning to equip an AI agent with mechanisms for adaptive engagement in creative processes across multiple practice sessions.
This project sits at the intersection of several key areas, primarily Music Information Retrieval (MIR), Lifelong Learning for Long-Term Human-AI Interaction (LEAP-HRI), and New Interfaces for Musical Expression (NIME). It will be supervised by Ken Déguernel (CNRS Researcher) and Claudio Panariello (Univ. Lille postdoc and composer).
Objective
When using reinforcement learning, user feedback can easily be gathered during the interaction through external means (a button, pedal, or gesture) indicating whether the interaction is good, or a posteriori during a reflective session. The goal of this internship, however, is to develop machine listening methods that gather this feedback through the musical interaction itself. Several interaction dynamics will be tested: level of engagement, consistency of play, a/synchronicity…
Tasks:
- Explore and implement different machine listening methodologies.
- Integrate the developed machine listening system into the existing MICCDroP musical agent.
- Test the system in situ with professional performers.
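As a toy illustration of this feedback loop, the sketch below maps an audio buffer to a scalar reward from simple energy-based cues. The feature choices (positive energy flux as an onset proxy, mean RMS) and the `engagement_reward` function are illustrative assumptions, not the MICCDroP design.

```python
import numpy as np

def engagement_reward(y, frame=1024):
    """Map an audio buffer to a reward in [0, 1] from simple energy cues."""
    n = len(y) // frame
    frames = y[: n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))    # per-frame energy
    peak = rms.max()
    if peak == 0.0:                              # silence: no engagement
        return 0.0
    flux = np.maximum(np.diff(rms), 0.0)         # positive flux ~ onset activity
    act = flux.sum() / (peak * len(flux))
    nrg = rms.mean() / peak
    return float(np.clip((act + nrg) / 2, 0.0, 1.0))

# A silent buffer vs. a gated tone as stand-ins for live input
silence = np.zeros(22050)
t = np.linspace(0, 1.0, 22050, endpoint=False)
gated = np.sin(2 * np.pi * 440 * t) * (np.sin(2 * np.pi * 4 * t) > 0)
assert engagement_reward(silence) == 0.0
assert 0.0 < engagement_reward(gated) <= 1.0
```

In a real agent, such a reward would be computed on short sliding windows during the interaction and fed to the reinforcement-learning update instead of an external button or pedal.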
Qualifications
Needed:
- Final year of a Master’s degree in Machine Learning or Music Computing
- Strong background in Signal Processing for Audio
Preferred:
- Experience with music programming languages (Max/MSP, SuperCollider, …)
- Personal music practice
References
- Jordanous (2017). Co-creativity and perceptions of computational agents in co-creativity. International Conference on Computational Creativity.
- Nika et al. (2017). DYCI2 agents: merging the “free”, “reactive” and “scenario-based” music generation paradigms. International Computer Music Conference.
- Collins, N. (2014). Automatic Composition of Electroacoustic Art Music Utilizing Machine Listening. Computer Music Journal 36(3).
- Tremblay, P.A. et al. (2021). Enabling Programmatic Data Mining as Musicking: The Fluid Corpus Manipulation Toolkit. Computer Music Journal, 45(2).
- Scurto et al. (2021). Designing deep reinforcement learning for human parameter exploration. ACM Transactions on Computer-Human Interaction, 28(1).
- Parisi et al. (2019). Continual lifelong learning with neural networks: A review. Neural networks, 113.
- Small, C. (1998). Musicking: The meanings of performing and listening. Wesleyan University Press.
2026 research internship (M2): Similarities across scales and musical dimensions
- Final-year Master’s degree internship
- Duration: 5-6 months, with the legal internship “gratification” (€550-€600/month)
- Themes: MIR, symbolic music
- Location: Amiens (MIS laboratory, UPJV, in collaboration with Algomus)
- Supervisors and contacts: Florence Levé (MIS) and Yann Teytaut (CRIStAL)
- Full offer and links: https://www.algomus.fr/jobs
- Open applications
Context
Musical data carry a considerable amount of information of different natures. However, despite the impressive results of recent generative models, the structural properties of music remain underutilized, due to the lack of a generic paradigm that can account for similarity relationships between elements, temporal sequences, and sections of one or several pieces, at various levels of representation and across different musical dimensions (sound objects, melody, harmony, texture, etc.). Current tools fail to render this multiscale structure in musical data; in particular, they do not allow real control over the parameters of the music being composed (or generated). The goal of the ANR project MUSISCALE is to develop methods and software tools that account for the relationships between similar elements at different scales, finely enough to recompose a complete musical object from these elements, or to create variations of it.
Objectives
The goal of this internship is to:
- explore different notions of similarity between symbolic objects (not only pitch-based: rhythm, texture, and other similarity criteria at different scale levels will be considered);
- model and implement algorithms to automatically analyze music scores;
- study the relation between the resulting segmentations and the global form of the pieces;
- propose transformations of the objects for creative purposes, and study the impact of those transformations on the global form.
Although this subject mostly concerns symbolic music, timbre or audio analysis/transformation could be considered as extensions.
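As a minimal illustration of one such similarity notion (a sketch only, not the MUSISCALE method), two melodic segments can be compared through their pitch-interval sequences with a normalized edit distance, making the measure transposition-invariant:

```python
def intervals(pitches):
    """MIDI pitch list -> successive-interval list (transposition-invariant)."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(xs, ys):
    """Classic Levenshtein distance, single-row dynamic programming."""
    d = list(range(len(ys) + 1))
    for i, x in enumerate(xs, 1):
        prev, d[0] = d[0], i
        for j, y in enumerate(ys, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (x != y))
    return d[-1]

def similarity(seg_a, seg_b):
    """1.0 for identical contours (up to transposition), lower for divergence."""
    ia, ib = intervals(seg_a), intervals(seg_b)
    if not ia and not ib:
        return 1.0
    return 1.0 - edit_distance(ia, ib) / max(len(ia), len(ib))

theme = [60, 62, 64, 65, 67]          # C D E F G
transposed = [67, 69, 71, 72, 74]     # same contour, a fifth higher
assert similarity(theme, transposed) == 1.0
assert similarity([60, 62, 64], [60, 62, 65]) == 0.5
```

Other similarity criteria (rhythmic, textural) could replace the interval sequence while keeping the same comparison scheme.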
Qualifications
- Final year of a Master’s degree in Computer Science or Music Computing
- Knowledge of music theory is recommended
- Ideally, musical practice
- Candidates at ease both with symbolic and audio analysis are welcome
Opportunities
The ANR funding includes opportunities to pursue a PhD in our lab on this topic or related topics.
References
- ALLEGRAUD P., BIGO L., FEISTHAUER L., GIRAUD M., GROULT R., LEGUY E., LEVÉ F. “Learning Sonata Form Structure on Mozart’s String Quartets”. Transactions of the International Society for Music Information Retrieval (TISMIR), 2(1):82–96, 2019.
- BHANDARI K., COLTON S. “Motifs, Phrases, and Beyond: The Modelling of Structure in Symbolic Music Generation”. In International Conference on Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar), pp. 33-51, Springer, 2024.
- BUISSON M., McFEE B., ESSID S., CRAYENCOUR H.-C. “Learning Multi-level Representations for Hierarchical Music Structure Analysis”. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2022.
- CALANDRA J., CHOUVEL J.-M., DESAINTE-CATHERINE M. “Hierarchisation Algorithm for MORFOS: a Music Analysis Software”. In Proceedings of the International Computer Music Conference (ICMC), 2025.
- COUTURIER L., BIGO L., LEVÉ F. “Comparing Texture in Piano Scores”. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2023.
- NIETO O., MYSORE G.J., WANG C., et al. “Audio-Based Music Structure Analysis: Current Trends, Open Challenges, and Applications”. Transactions of the International Society for Music Information Retrieval (TISMIR), 3(1):246–263, 2020.
Research internship: Generating high-quality melodic and rhythmic exercises
- M2 internship, 2026, 4-5 months, with the legal internship gratification (€550-€600/month)
- Themes: music computing, pedagogy, music theory training (solfège), melody, rhythm
- Location: Lille (Villeneuve d’Ascq, Laboratoire CRIStAL, métro 4 Cantons); partial remote work is possible
- Supervisors and contacts: Mathieu Giraud (Algomus), Florence Levé (UPJV), Anicet Bart (Solfy)
- Full offer and links: https://www.algomus.fr/jobs
- Applications closed. Send an email and CV to the supervisors.
Context
Solfy is an innovative EdTech startup developing a playful application for learning music theory (solfège). Based at La Plaine Images (Tourcoing), the digital ecosystem dedicated to creative industries, the Solfy team is convinced that the best way to learn is by having fun. Joining Solfy means joining a passionate team dedicated to musical awakening and learning.
Algomus is the music computing team of the CRIStAL laboratory at the University of Lille. The team studies high-level modeling, analysis, and co-creative generation of music, as well as interaction with scores and other musical data. It collaborates with musicians, teachers, artists, and companies on topics combining music and technology.
Within its application, Solfy offers reading exercises generated from probabilistic models. Solfy and Algomus have been collaborating since early 2025 on the generation of high-quality content, notably around the Ur platform.
The goal of the internship is to achieve note-reading and rhythm-reading generation of high musical and pedagogical quality. In particular, we will aim for a certain melodic, harmonic, and rhythmic coherence, while fixing difficulty criteria (notes, rhythmic figures).
Concretely, the internship will start with a state of the art in music computing, more specifically on difficulty estimation and on melody and rhythm generation, together with learning the Ur platform and the previously written code, as well as discussions with Solfy and with music teachers to better model the stakes. It will also be relevant to review a set of music training methods.
The internship will consolidate and propose new methods for generating and/or filtering melodies, and will test these methods through a prototype within the Ur platform. It will also involve software engineering on this platform. The developed methods are intended both to be published open-source and to be integrated into the Solfy application.
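As a minimal sketch of the kind of generation involved (illustrative only, not the Ur or Solfy implementation), a sight-reading melody can be drawn as a constrained random walk, where a difficulty level fixes the allowed pitch set and the maximum melodic leap:

```python
import random

def generate_melody(scale, n_notes=8, max_leap=4, start=None, seed=None):
    """Random walk over `scale` (MIDI pitches), leaps bounded in scale steps."""
    rng = random.Random(seed)
    i = scale.index(start) if start in scale else len(scale) // 2
    melody = [scale[i]]
    for _ in range(n_notes - 1):
        step = rng.randint(-max_leap, max_leap)     # bounded melodic leap
        i = min(max(i + step, 0), len(scale) - 1)   # stay inside the pitch set
        melody.append(scale[i])
    return melody

c_major = [60, 62, 64, 65, 67, 69, 71, 72]          # C4..C5, an easy pitch set
m = generate_melody(c_major, n_notes=8, max_leap=2, seed=1)
assert len(m) == 8 and all(p in c_major for p in m)
```

In practice such raw generation would be followed by filtering on melodic and harmonic coherence, which is precisely where the internship’s contribution lies.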
Desired profile
Master’s degree in computer science, with programming and algorithmics skills. Musical knowledge and practice appreciated.
Opportunities
Opportunities to pursue a PhD at CRIStAL on this subject could also be considered through a CIFRE PhD with the company Solfy, as well as other opportunities on other subjects with the team or its collaborators.
References, resources
- The Ur platform
- Solfy: student application, teacher application
Research internship: Real-time analysis and evaluation of musical streams for pedagogy
- L3, M1, or M2 internship, 2026, 3-5 months, with the legal internship gratification
- Themes: music computing, pedagogy, melody, rhythm, harmony, real-time evaluation
- Location: Villeneuve d’Ascq, Laboratoire CRIStAL, métro 4 Cantons
- Supervisors: Mathieu Giraud and Yann Teytaut, Univ. Lille, in collaboration with Olivier Sacré, Dobox
- Open applications; send a CV and email to the supervisors
Context
This internship takes place within the Algomus music computing team, in connection with the young company Dobox. The collaboration focuses on the real-time analysis and evaluation of MIDI streams against a reference, via the open-source platform Nadi.
The Algomus music computing team (www.algomus.fr) of the CRIStAL laboratory (University of Lille, CNRS) has recognized expertise in high-level modeling, analysis, and generation of music, both in academic research (60+ publications in international journals and conferences) and in artistic or industrial applications, notably in the pedagogical domain. The company Dobox offers an innovative platform for music pedagogy, based on new hardware and software devices.
The internship subject concerns new evaluation models. Given a target score or chord grid, the objective is to evaluate a MIDI stream in real time. The idea is to go beyond a simple note-to-note evaluation, for example to analyze the playing with respect to a chord or to a more global musical structure (chords, texture, motifs…). Concretely, the internship will include:
- A literature review and state of the art on the evaluation of musical performance and on musical representations;
- Building a test set of MIDI streams following a target music more or less precisely;
- Proposing evaluation methods, prototyping them, and testing them.
This work will take place within the open-source (LGPLv3+) development of the real-time music analysis platform Nadi.
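As a toy example of evaluation beyond note-to-note comparison (a sketch independent of the Nadi codebase; `chord_fit` is a hypothetical helper), incoming MIDI notes can be scored against a target chord by pitch-class membership:

```python
def chord_fit(played, chord_pcs):
    """Fraction of played MIDI notes whose pitch class belongs to the chord."""
    if not played:
        return 0.0
    hits = sum(1 for p in played if p % 12 in chord_pcs)
    return hits / len(played)

C_MAJOR = {0, 4, 7}                                  # pitch classes C, E, G
assert chord_fit([60, 64, 67, 72], C_MAJOR) == 1.0   # all chord tones
assert chord_fit([60, 61, 67, 72], C_MAJOR) == 0.75  # one wrong note
```

A real evaluator would run such scores over sliding time windows of the MIDI stream and combine them with rhythmic and structural criteria.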
Desired profile
Master’s or Bachelor’s degree in computer science. Programming and algorithmics skills. Musical knowledge and practice appreciated.
Opportunities
Opportunities to pursue a PhD, or other opportunities on other subjects with the team or its collaborators, could be considered.
Research Internship – “Looping Structure in Video Game Music: Corpus Analysis and Annotation”
- Internship L3/M1/M2 (2026), 2-3 months
- Themes: video game music, music structure, MIR
- Location: Lille (Villeneuve d’Ascq, Laboratoire CRIStAL, métro 4 Cantons); partial remote work possible
- Encadrement et contacts: Yann Teytaut (CRIStAL), Florence Levé (MIS + CRIStAL)
- Annonce et liens: https://www.algomus.fr/jobs/#loop
- Open applications
Context
Video Game Music (VGM) refers to the musical genre of soundtracks accompanying interactive gameplay, aiming to deepen immersion in virtual worlds and enhance the player’s overall gaming experience [Gibbons24]. Back in the 70s-80s, early game soundtracks were limited by hardware constraints and relied on chiptune melodies that became iconic through their memorable, repetitive structure [Collins08]. Following progress in computer music and music technology, VGM has evolved into intricate compositions that now play a crucial role in intensifying emotions and supporting storytelling, via true orchestral scores or even original songs [Phillips14].
Today, from its presence on audio streaming platforms, to specialized training courses in musical conservatories, as well as CD/vinyl releases and themed concerts, VGM fully contributes to the broader cultural landscape, showcasing the unique capabilities of interactive media, and has therefore become a concrete area of study in digital humanities known as “ludomusicology” [Lipscomb04; Kamp16]. Yet, it remains only marginally explored in the Music Information Retrieval (MIR) community.
Additionally, one of the key features of VGM is its “seamless loop” structure: most VGM is composed in repeating patterns, or loops, designed to repeat continuously and as subtly as possible so that the listener hardly notices the transition [Margulis13]. The analysis of musical structure is still a subject of research in MIR [Nieto20], so it seems natural to study VGM, which is inherently looping in nature.
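As a toy illustration of loop-oriented structure analysis (an assumption-laden sketch, not an established VGM tool), a candidate loop length can be estimated from the autocorrelation of frame energies:

```python
import numpy as np

def loop_length_frames(y, frame=4, min_lag=4):
    """Return the lag (in frames) where frame-energy autocorrelation peaks."""
    n = len(y) // frame
    feat = np.sqrt((y[: n * frame].reshape(n, frame) ** 2).mean(axis=1))
    feat = feat - feat.mean()                            # remove DC before correlating
    ac = np.correlate(feat, feat, mode="full")[n - 1:]   # lags 0..n-1
    return int(min_lag + ac[min_lag:].argmax())          # strongest non-trivial lag

# Synthetic "loop": an 8-frame energy pattern repeated six times
pattern = np.array([1.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
y = np.tile(np.repeat(pattern, 4), 6)
assert loop_length_frames(y, frame=4) == 8
```

On real game audio, richer features (chroma, MFCCs) and self-similarity matrices would replace raw frame energies, but the principle of locating the repetition period is the same.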
The purpose of this internship is thus twofold: on the one hand, to provide the MIR community with a resource for analyzing VGM by studying a representative VGM corpus; on the other hand, to derive structural annotations from it.
Related works and objectives
While several VGM datasets already exist, they offer only a partial view of the diverse audio landscape in modern video games. Indeed, available corpora lack diversity: they may (1) provide only MIDI data (e.g., VGMIDI, Lakh MIDI); (2) focus on one specific game (e.g., GameSound) or aesthetic (e.g., NES-MDB); or (3) include non-official arrangements instead of original works (e.g., VGMix Archive, VGMIDI). As a result, they fall short of representing the full spectrum of game audio.
One corpus, however, seems particularly promising: UFO 50 [May25], a collection of 50 diverse, unreleased games (RPGs, shooters, arcade, etc.). This selection ensures: (1) consistency, as a single composer minimizes stylistic variation, isolating gameplay as the primary variable; (2) prototypical game design, encouraging the music to follow expected genre conventions; (3) retro-gaming aesthetics, associated with a strong cultural sedimentation; and (4) validated genre tags for each game.
The first part of this internship will be dedicated to reflecting on and identifying relevant factors (e.g., video game genres, years, stations, etc.) to study and understand UFO 50’s aesthetics. The corpus will then be annotated in terms of structural patterns, following conventions established in other music structure datasets [Nieto20] or analysis platforms [Giraud18].
Organization
During this internship, the candidate will be encouraged to become familiar with Video Game Music (VGM), study the related literature on this genre (i.e., ludomusicology) and existing datasets, reflect on and study the UFO 50 dataset, and find ways to annotate its structural patterns.
Duration: 2-3 months, starting March 2026 or later
Environment: The intern will be integrated in the Algorithmic Musicology (Algomus) Team at the CRIStAL Lab of the University of Lille, and will profit from the team’s knowledge of both music digital humanities and Music Information Retrieval. The annotated corpus could notably be integrated in the Dezrann platform [Giraud18].
Desired profile: Master of Research in either (computational) musicology, sound and audio, or computer science; interest in video game music and musical structure. Previous experience in musical algorithmics would be appreciated but is not necessary.
Bibliography
[Gibbons24] Gibbons, William and Grimshaw-Aagaard, Mark (eds.). The Oxford Handbook of Video Game Music and Sound. Oxford University Press, 2024.
[Collins08] Collins, Karen. Game sound: an introduction to the history, theory, and practice of video game music and sound design, MIT Press, 2008.
[Lipscomb04] Lipscomb, Scott D. and Zehnder, Sean M. Immersion in the virtual environment: The effect of a musical score on the video gaming experience. Journal of Physiological Anthropology and Applied Human Science, vol. 23, no 6, p. 337-343, 2004.
[Kamp16] Kamp, Michiel, Summers, Tim, Sweeney, Mark, et al. Ludomusicology: Approaches to video game music. Intersections: Canadian Journal of Music/Revue Canadienne de Musique, vol. 36, no 2, p. 117-124, 2016.
[Collins07] Collins, Karen. In the loop: Creativity and constraint in 8-bit video game audio. Twentieth-Century Music, vol. 4, no. 2, pp. 209-227, 2007.
[Margulis13] Margulis, Elizabeth Hellmuth. On repeat: How music plays the mind. Oxford University Press, 2013.
[Nieto20] Nieto, O., Mysore, G.J., Wang, C., Smith, J.B.L., SchlĂŒter, J., Grill, T. and McFee, B. Audio-Based Music Structure Analysis: Current Trends, Open Challenges, and Applications. Transactions of the International Society for Music Information Retrieval, 3(1), pp. 246-263, 2020. DOI: https://doi.org/10.5334/tismir.54
[May25] UFO 50: A low-res, high-concept anthology of imaginary retro games. The Guardian, ISSN 0261-3077, accessed January 5, 2026.
[Giraud18] Giraud, Mathieu, Groult, Richard, and Leguy, Emmanuel. Dezrann, a web framework to share music analysis. In TENOR 2018, pp. 104-110, 2018.
VGMIDI - https://github.com/lucasnfe/VGMIDI
VGMix Archive - https://vgmixarchive.com/
Lakh MIDI - https://colinraffel.com/projects/lmd/
GameSound - https://michaeliantorno.com/gamesound/
NES-MDB - https://github.com/chrisdonahue/nesmdb