Job offers / Visits, internships, PhD and job positions
Feel free to contact us at contact@algomus.fr if you are interested in joining our team, even if your subject of interest is not detailed below. You can also have a look at job offers from previous years to see the kind of subjects we like to work on.
Visits by PhD students or post-docs (from 2-3 weeks to 2-3 months) can also be arranged through mobility funding. Please get in touch several months in advance. (open)
2025 post-doc position on co-creativity (K. Déguernel): Lifelong learning for co-creative music generation (open).
2025 internship positions (2-6 months) are available, this year including the following themes:
- Co-creativity, Machine learning: Curiosity-Driven Adaptive AI for Co-Creative Music Generation (K. DĂ©guernel, F. Berthaut (MINT)) (open)
- Music analysis / Harpsichord: Modeling basso continuo realization on the harpsichord (M. Giraud, Y. Teytaut) (closed)
- Corpus / Web development: Generic creation of music corpora on the Dezrann web platform (E. Leguy, M. Giraud) (open)
- Video game music / Corpus / Music perception: Loop Perception in Video Game Music: Corpus Constitution and Experimental Study (Y. Teytaut, C. Canonne (IRCAM), F. Levé) (open)
- Difficulty modeling / Guitar: Personalized suggestion of pedagogical paths for guitar learning (A. D'Hooge, M. Giraud, with Guitar Social Club) (open)
PhD positions (2025-28) will be published soon.
2025 research and/or development internships in music computing
Research Internship: Curiosity-Driven Adaptive AI for Co-Creative Music Generation
- Internship for final-year Master's students
- Duration: 4-6 months, with legal “gratification”
- Location: Lille (Villeneuve d’Ascq, Laboratoire CRIStAL, mĂ©tro 4 Cantons); partial remote work is possible
- Supervisors and contacts: Ken DĂ©guernel (CR CNRS) & Florent Berthaut (MCF Univ. Lille)
- Open for applications
I. Context
Recent advancements in Human-AI co-creative systems, especially within Mixed Initiative Co-Creative (MICC) frameworks, have begun to reshape musical creation, enabling new processes and interactions [1-2]. However, current systems often fall short in adapting to users over time, typically requiring unilateral adaptation: either the user learns to operate the AI, or the system is manually updated by an engineer-musician [3-4]. This internship will focus on laying the foundation for curiosity-driven learning [5] within MICC musicking [6], equipping the AI with mechanisms for adaptive, long-term engagement in creative processes [7-8].
This project sits at the intersection of several key areas, primarily Music Information Retrieval (MIR), Lifelong Learning for Long-Term Human-AI Interaction, and New Interfaces for Musical Expression (NIME). Together, these fields provide a foundation for implementing and exploring AI-driven curiosity in musical settings, ultimately facilitating expressive, intuitive engagement between musicians and adaptive AI systems.
This internship takes place within the scope of the French National Research Agency (ANR) project MICCDroP, led by Ken Déguernel, and is part of a collaboration between the Algomus and MINT teams at CRIStAL.
II. Objective
The goal of this internship is to explore and implement curiosity-driven learning mechanisms in an AI music generation model, laying a groundwork for sustained adaptive interactions within a co-creative musical setting. This will involve:
- Implementing basic curiosity-driven algorithms to encourage the AI to explore its generative space.
- Testing and refining these algorithms to balance novelty and musical coherence in generated outputs.
- Conducting preliminary user interaction tests to assess how the AI's adaptive behavior aligns with user preferences, guiding future refinement.
III. Internship Work Plan
The internship will be structured in three phases, allowing for iterative development and experimentation within a manageable scope for a 4 to 6-month project.
Phase 1: Research and Model Familiarization
The intern will begin by conducting a literature review focused on music generation, music interaction, curiosity-driven learning and novelty search methods. They will then become familiar with the technical tools and frameworks relevant to this project, and gain hands-on experience with existing music generation models. During this phase, the intern will implement a basic curiosity-driven mechanism, such as novelty search or diversity maximization, within an AI-based music generation model.
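To make this first phase concrete, here is a minimal sketch (in Python) of such a curiosity-driven generation loop. The `generate` and `embed` functions are hypothetical placeholders for the music generation model and feature extractor to be chosen during the internship; novelty is scored, as in standard novelty search, as the mean distance to the k nearest neighbors in an archive of past outputs, and all constants are illustrative.

```python
import numpy as np

K_NEAREST = 5            # neighbors used for the novelty score
ARCHIVE_THRESHOLD = 0.6  # outputs more novel than this enter the archive

def novelty(embedding, archive, k=K_NEAREST):
    """Mean Euclidean distance to the k nearest archived embeddings."""
    if not archive:
        return float("inf")
    dists = sorted(np.linalg.norm(embedding - a) for a in archive)
    return float(np.mean(dists[:k]))

def curiosity_loop(generate, embed, n_steps=100):
    """Keep the fragments that explore new regions of the output space.

    `generate(seed)` -> musical fragment and `embed(fragment)` -> feature
    vector are assumed interfaces of the chosen music generation model.
    """
    archive, kept = [], []
    for step in range(n_steps):
        fragment = generate(seed=step)
        e = embed(fragment)
        score = novelty(e, archive)
        if score > ARCHIVE_THRESHOLD:  # novel enough: remember it
            archive.append(e)
            kept.append((fragment, score))
    return kept
```

The archive threshold then becomes the main knob controlling how aggressively the AI is pushed toward unexplored regions of its generative space.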
Phase 2: Implementing and Testing Curiosity-Driven Learning
Building on Phase 1, the intern will now extend the model by refining curiosity-driven exploration methods, specifically focusing on maximizing novelty in generated music. The intern will develop simple evaluation metrics, such as novelty scores and coherence scores, to quantitatively measure the model's performance.
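The exact metrics are open design choices. One simple instantiation, sketched below on symbolic (MIDI pitch) data, scores novelty against previously generated fragments and coherence against a reference corpus, both from pitch-class histograms; the representation and weighting are illustrative assumptions, not a fixed specification.

```python
import numpy as np

def pc_histogram(pitches):
    """Normalized pitch-class histogram of a sequence of MIDI pitches."""
    h = np.zeros(12)
    for p in pitches:
        h[p % 12] += 1
    return h / max(h.sum(), 1)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def novelty_score(fragment, previous_fragments):
    """Near 1 when the fragment differs from every earlier output."""
    if not previous_fragments:
        return 1.0
    return 1.0 - max(cosine(pc_histogram(fragment), pc_histogram(p))
                     for p in previous_fragments)

def coherence_score(fragment, reference_corpus):
    """High when the fragment stays close to the reference style."""
    ref = np.mean([pc_histogram(p) for p in reference_corpus], axis=0)
    return cosine(pc_histogram(fragment), ref)
```

A combined objective such as `0.5 * novelty_score(...) + 0.5 * coherence_score(...)` then turns the novelty/coherence trade-off into an explicit, tunable parameter.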
Phase 3: User Interaction Testing and Analysis
In the final phase, the intern will design a basic interactive setup, allowing musicians to experiment with and provide feedback on the curiosity-driven AI model. They will conduct initial testing sessions, collecting qualitative insights on how well the AI's outputs align with user creativity and aesthetic preferences, thereby evaluating its effectiveness in a co-creative context. Following these sessions, the intern will analyze the data to assess how curiosity-driven adaptation affects user engagement and identify areas for potential improvement in personalization and co-creative alignment.
IV. Learning Outcomes for the Intern
This project will give the intern foundational experience in adaptive AI, curiosity-driven learning, and interactive system design for music. They will also gain practical experience in music generation, user interaction testing, and iterative model refinement, key competencies for further research in the field. A follow-up Ph.D. in our lab on this topic will be strongly considered.
Qualifications
Needed:
- Final year of a Master's in Computer Science, Artificial Intelligence, or Music Computing
- Strong background in Machine Learning
Preferred:
- Experience with music programming languages (Max/MSP, PureData…)
- Personal musical practice
References
- [1] Jordanous (2017). Co-creativity and perceptions of computational agents in co-creativity. International Conference on Computational Creativity.
- [2] Herremans et al. (2017). A functional taxonomy of music generation systems. ACM Computing Surveys, 50(5).
- [3] Nika et al. (2017). DYCI2 agents: merging the "free", "reactive" and "scenario-based" music generation paradigms. International Computer Music Conference.
- [4] Lewis (2021). Co-creation: Early steps and future prospects. Artisticiel/Cyber-Improvisations.
- [5] Colas et al. (2020). Language as a cognitive tool to imagine goals in curiosity-driven exploration. NeurIPS.
- [6] Small, C. (1998). Musicking: The meanings of performing and listening. Wesleyan University Press.
- [7] Parisi et al. (2019). Continual lifelong learning with neural networks: A review. Neural Networks, 113.
- [8] Scurto et al. (2021). Designing deep reinforcement learning for human parameter exploration. ACM Transactions on Computer-Human Interaction, 28(1).
Research Internship - “Loop Perception in Video Game Music: Corpus Constitution and Experimental Study”
- M1/M2 internship, 2025, with legal "gratification", 4-6 months
- Themes: music computing, video games, perception
- Location: Lille (Villeneuve d'Ascq, Laboratoire CRIStAL, métro 4 Cantons); partial remote work possible
- Supervisors and contacts: Yann Teytaut (CRIStAL), Clément Canonne (IRCAM), Florence Levé (MIS + CRIStAL)
- Announcement and links: https://www.algomus.fr/jobs/#loop
- Open for applications
Context
Video Game Music (VGM) refers to the musical genre of soundtracks accompanying interactive gameplay, aiming to deepen immersion within virtual worlds and enhance the player's overall gaming experience [Gibbons24].
Back in the 1970s-80s, early game soundtracks were limited by hardware constraints and relied on chiptune melodies that became iconic through their memorable and repetitive structure [Collins08]. Following progress in computer music and music technology, VGM has evolved into intricate compositions that now play a crucial role in intensifying emotions and supporting storytelling, via full orchestral scores or even original songs [Phillips14].
Today, through its presence on audio streaming platforms, specialized training courses in music conservatories, CD/vinyl releases, and themed concerts, VGM fully contributes to the broader cultural landscape, showcasing the unique capabilities of interactive media, and has therefore become a genuine area of study in digital humanities [Lipscomb04; Kamp16]. Yet, it remains only marginally explored in the Music Information Retrieval (MIR) community.
Additionally, one of the key features of VGM is its "seamless loop" structure: most VGM is composed in repeating patterns, or loops, designed to repeat continuously and as subtly as possible so that the listener hardly notices the transition [Margulis13]. However, whether listeners are actually sensitive to the "seam" in looping remains an unexplored research question.
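To illustrate what locating such a seam could involve computationally, the following naive Python sketch (using librosa; the function name, brute-force search, and all parameter values are illustrative assumptions) looks for the pair of positions whose chroma content matches best across a short window, yielding a candidate loop point. Actual experiments would require more robust estimation and manual verification.

```python
import librosa
import numpy as np

def find_loop_point(path, min_loop_s=20.0, window=8):
    """Return (start_s, end_s) of the best candidate loop, naively.

    Compares a short window of chroma frames at every (strided) pair of
    positions and keeps the pair, far enough apart, that matches best.
    """
    y, sr = librosa.load(path)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)  # shape (12, n_frames)
    hop_s = 512 / sr                                 # default hop length
    n = chroma.shape[1] - window
    min_gap = int(min_loop_s / hop_s)                # minimal loop length

    best, best_pair = np.inf, (0, 0)
    for i in range(0, n, 4):                         # stride 4 for speed
        for j in range(i + min_gap, n, 4):
            d = np.linalg.norm(chroma[:, i:i+window] - chroma[:, j:j+window])
            if d < best:
                best, best_pair = d, (i, j)
    start, end = best_pair
    return start * hop_s, end * hop_s
```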
The purpose of this internship is thus twofold. On the one hand, it aims to provide the MIR community with a resource for analyzing VGM by creating a representative VGM corpus with structural annotations. On the other hand, it seeks to study participants' ability to accurately predict the looping point in VGM, based on the constituted database.
Related work and objectives
While several VGM datasets already exist, they offer only a partial view of the diverse audio landscape of modern video games. Indeed, available corpora lack sufficient diversity, as they may (1) provide solely MIDI data (e.g., VGMIDI, Lakh MIDI); (2) focus on one specific game (e.g., GameSound) or aesthetic (e.g., NES-MDB); or (3) include non-official arrangements instead of original works (e.g., VGMix Archive, VGMIDI). As a result, they fall short of representing the full spectrum of game audio, particularly the high-quality recordings of recent orchestral and/or pop/rock soundtracks.
In order to bridge this gap, the first part of this internship will be dedicated to reflecting on and identifying relevant factors (e.g., video game genres, years, platforms, etc.) to build a VGM database (about 100 pieces) that is both diverse and representative. The corpus will then be annotated in terms of structural patterns, following conventions established for other music structure datasets [Smith11].
Finally, experimental studies will be conducted to assess the extent to which listeners are sensitive to the "loop-based" nature of most VGM, by contrasting, first, several types of VGM relying on various compositional techniques and, second, passive (listening only) versus interactive (listening while playing the game, or a proxy) reception.
Organization
During the course of this internship, the candidate will be expected to become familiar with Video Game Music (VGM), study the related literature on this genre (i.e., ludomusicology) and existing datasets, reflect on and identify relevant factors to build a dedicated dataset, annotate its structural patterns, and conduct experimental studies from a perception perspective.
Environment: The intern will join the Algorithmic Musicology (Algomus) team at the CRIStAL lab of the University of Lille, and will benefit from the team's expertise in both music digital humanities and Music Information Retrieval. The annotated corpus could notably be integrated into the Dezrann platform [Giraud18]. For both the theoretical framework and the perceptual studies, the intern will also benefit from the expertise of IRCAM's Musical Practices Analysis (APM) team.
Desired profile:
- Master of Research (M2) in either computer science, (audio) signal processing or computational musicology;
- interest in video game music and musical structure;
- previous experience in musical algorithmics or perception would be appreciated but is not necessary.
Bibliography
- [Gibbons24] Gibbons, William and Grimshaw-Aagaard, Mark (eds.). The Oxford Handbook of Video Game Music and Sound. Oxford University Press, 2024.
- [Collins08] Collins, Karen. Game Sound: An Introduction to the History, Theory, and Practice of Video Game Music and Sound Design. MIT Press, 2008.
- [Phillips14] Phillips, Winifred. A Composer's Guide to Game Music. MIT Press, 2014.
- [Lipscomb04] Lipscomb, Scott D. and Zehnder, Sean M. Immersion in the virtual environment: The effect of a musical score on the video gaming experience. Journal of Physiological Anthropology and Applied Human Science, vol. 23, no. 6, p. 337-343, 2004.
- [Kamp16] Kamp, Michiel, Summers, Tim, Sweeney, Mark, et al. Ludomusicology: Approaches to video game music. Intersections: Canadian Journal of Music/Revue canadienne de musique, vol. 36, no. 2, p. 117-124, 2016.
- [Collins07] Collins, Karen. In the loop: Creativity and constraint in 8-bit video game audio. Twentieth-Century Music, vol. 4, no. 2, p. 209-227, 2007.
- [Margulis13] Margulis, Elizabeth Hellmuth. On Repeat: How Music Plays the Mind. Oxford University Press, 2013.
- [Smith11] Smith, Jordan B. L., Burgoyne, John Ashley, Fujinaga, Ichiro, et al. Design and creation of a large-scale database of structural annotations. In ISMIR 2011, p. 555-560, 2011.
- [Giraud18] Giraud, Mathieu, Groult, Richard, and Leguy, Emmanuel. Dezrann, a web framework to share music analysis. In TENOR 2018, p. 104-110, 2018.
- VGMIDI - https://github.com/lucasnfe/VGMIDI
- VGMix Archive - https://vgmixarchive.com/
- Lakh MIDI - https://colinraffel.com/projects/lmd/
- GameSound - https://michaeliantorno.com/gamesound/
- NES-MDB - https://github.com/chrisdonahue/nesmdb
Research internship: Personalized suggestion of pedagogical paths for guitar learning
- M1/M2 internship, 2025, with legal "gratification", about 4-5 months
- Themes: difficulty prediction, music computing
- Location: Lille (Villeneuve d'Ascq, Laboratoire CRIStAL, métro 4 Cantons); partial remote work possible
- Supervisors and contacts: Mathieu Giraud, Alexandre D'Hooge
- Announcement and links: https://www.algomus.fr/jobs
- Open for applications
Context
This internship takes place within a collaboration between the Algomus team (Université de Lille, CRIStAL) and the start-up Guitar Social Club (GSC). The Algomus music computing team works on computational modeling of music, whether scores, tablatures, or, in the scope of this internship, data on music learning. The start-up GSC was incubated at Plaine Images in Lille and develops an application that eases learning the guitar for pop/rock music, through automatic suggestions of songs and exercises.
The collaboration between the Algomus team and GSC consists in developing a model to suggest to users the next pieces to practice, based on their wishes and current level. The application is already available in beta, and full deployment is planned during 2025.
Objectives
Many factors influence the difficulty of a score or a tablature [1, 2, 3], such as, for the guitar, the complexity of finger placement in a chord position. Building on the analysis data provided by GSC, the algorithm already produces satisfying suggestions for a given level. However, these suggestions currently lack pedagogical intent: they are relevant to help guitarists learn new songs at their level, but they do not guarantee medium/long-term progression such as a guitar teacher could design. The goal of the internship is thus to design a pedagogical path, which could be seen, more generally, as a progression graph.
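As a toy illustration of this progression-graph view (a sketch with made-up data and a hypothetical 0-10 difficulty scale, unrelated to the actual GSC model), songs can be nodes, with an edge toward any song that is only slightly harder; a pedagogical path is then simply a path in this graph:

```python
from collections import deque

# Hypothetical catalogue: song -> difficulty (arbitrary 0-10 scale)
SONGS = {"Song A": 2.0, "Song B": 3.1, "Song C": 4.4,
         "Song D": 5.2, "Song E": 6.5}
MAX_STEP = 1.5  # largest acceptable difficulty increase between two songs

def edges(songs, max_step=MAX_STEP):
    """Directed edges toward slightly harder songs."""
    return {s: [t for t in songs
                if 0 < songs[t] - songs[s] <= max_step]
            for s in songs}

def pedagogical_path(start, target, songs=SONGS):
    """Shortest sequence of songs from `start` to `target` (BFS)."""
    graph = edges(songs)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(pedagogical_path("Song A", "Song E"))
# ['Song A', 'Song B', 'Song C', 'Song D', 'Song E']
```

Actual paths would of course weight edges with pedagogical criteria (techniques introduced, user preferences, usage data) rather than a single difficulty score.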
The first step of the internship will consist in reading the state of the art on musical difficulty analysis (for instance [1, 2, 3]) and studying the principles of the GSC application [4], in order to get familiar with the context of this project. In a second step, reading papers on recommender systems [5] and on the construction of learning paths [6] will be necessary. The internship will notably involve designing test pedagogical paths in collaboration with GSC, as well as processing real usage data.
Desired profile
Master's in computer science, with skills in programming, algorithmics, graphs, and machine learning. Musical knowledge and practice are appreciated (especially guitar).
Perspectives
Follow-up PhD opportunities on this topic could be considered through a CIFRE PhD with the company Guitar Social Club. There are also other possibilities on related topics in music computing, whether at CRIStAL or within our network of academic and industrial collaborators in France and abroad.
References
- [1] Vélez Vásquez, M. A., Baelemans, M., Driedger, J., Zuidema, W., & Burgoyne, J. A. (2023). Quantifying the Ease of Playing Song Chords on the Guitar.
- [2] Ramoneda, P., Jeong, D., Valero-Mas, J. J., & Serra, X. (2023). Predicting Performance Difficulty from Piano Sheet Music Images.
- [3] Rodriguez, R. C., & Marone, V. (2021). Guitar Learning, Pedagogy, and Technology: A Historical Outline.
- [4] D'Hooge, A., Giraud, M., Abbou, Y., & Guillemain, G. Suggestions Pédagogiques Personnalisées pour la Guitare.
- [5] Hafsa, M., Wattebled, P., Jacques, J., & Jourdan, L. A Multi-Objective E-learning Recommender System at Mandarine Academy.
- [6] Siren, A., & Tzerpos, V. Automatic Learning Path Creation Using OER: A Systematic Literature Mapping.
R&D internship: Generic creation of music corpora on the Dezrann web platform
- M1/M2 internship, 2025, with legal "gratification", 3 to 5 months
- Themes: music computing, data manipulation, agile back-end (and/or full-stack) web development, Node.js, TypeScript
- Location: Lille (Villeneuve d'Ascq, Laboratoire CRIStAL, métro 4 Cantons); partial remote work possible
- Supervisors and contacts: Emmanuel Leguy and Mathieu Giraud (CRIStAL)
- Announcement and links: https://www.algomus.fr/jobs
- Open for applications
Context and objectives
The Algomus team develops Dezrann, an open-source full-stack web application (TypeScript, Vue.js, Node.js) to read and annotate music scores. Annotation is done by adding labels, graphical elements laid over the score that describe, for instance, structure, harmony, rhythm, melody, or texture.
Dezrann is used on the one hand by middle-school classes to actively discover music, and on the other hand by musicologists annotating corpora. It is a full-fledged service giving access to several hundred music pieces: more than ten music corpora are available today, produced by the team as well as by international collaborators.
Work to be done
To make the platform more widely usable, new back-end features are needed to provide greater flexibility in the management of corpora and users.
- The primary objective is a better, reproducible solution to manage the creation of pieces and corpora, through a robust API usable both by external scripts and by a web interface developed with Vue.js (see the sketch after this list).
- A secondary objective could be to improve the management of user profiles, in particular the management of their pieces and permissions.
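As an illustration of the intended usage from external scripts, a corpus-creation client could look like the Python sketch below. The endpoint paths, payload fields, and authentication scheme are assumptions made for the sake of the example, to be designed during the internship; they do not describe the current Dezrann API.

```python
import requests

BASE = "https://example-dezrann-server/api"  # hypothetical server URL
TOKEN = "my-api-token"                       # hypothetical auth token

def create_corpus(corpus_id, title):
    """Create an empty corpus (hypothetical endpoint)."""
    r = requests.post(f"{BASE}/corpora",
                      headers={"Authorization": f"Bearer {TOKEN}"},
                      json={"id": corpus_id, "title": title})
    r.raise_for_status()
    return r.json()

def add_piece(corpus_id, piece_id, score_path):
    """Upload a score file as a new piece of the corpus."""
    with open(score_path, "rb") as f:
        r = requests.post(f"{BASE}/corpora/{corpus_id}/pieces",
                          headers={"Authorization": f"Bearer {TOKEN}"},
                          data={"id": piece_id},
                          files={"score": f})
    r.raise_for_status()
    return r.json()

create_corpus("bach-chorales", "Bach Chorales")
add_piece("bach-chorales", "bwv-0253", "scores/bwv-0253.musicxml")
```

The same operations would also be reachable from the Vue.js interface, keeping scripts and UI on a single, well-tested code path.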
Development will be in TypeScript/Node.js, and possibly Python, within an agile and reproducible development process (design, tests, continuous integration, documentation). The intern will also be in contact with our users in France and abroad, notably the secondary-school classes of the region using Dezrann, and will take part in the outreach projects set up by the team.
References
- Garczynski et al., Modeling and Editing Cross-Modal Synchronization on a Label Web Canvas, MEC 2022, https://hal.science/hal-03583179
- Balke et al., Bridging the Gap: Enriching YouTube Videos with Jazz Music Annotations, 2018, https://www.frontiersin.org/articles/10.3389/fdigh.2018.00001/full
- Weiss et al., Schubert Winterreise Dataset: A Multimodal Scenario for Music Analysis, 2021, https://dl.acm.org/doi/10.1145/3429743