Composing Jazz Music with Deep Learning

Author: Shubham Gupta
May 29, 2018
15 min read

Deep Learning is on the rise, extending its applications across every field, from computer vision and natural language processing to healthcare, speech recognition, art generation, adding sound to silent movies, machine translation, advertising, and self-driving cars. In this blog, we will extend the power of deep learning to the domain of music production and talk about how we can use it to generate new musical beats.

Recent technological advancements have transformed the way we produce, listen to, and work with music. With the advent of deep learning, it is now possible to generate music without instruments that artists previously may not have had access to or the skills to play. This gives artists more creative freedom and the ability to explore different domains of music.

Recurrent Neural Networks

Since music is a sequence of notes and chords, it does not have a fixed dimensionality. Traditional deep neural network techniques cannot be applied directly to generate music, as they assume that inputs and targets/outputs have fixed dimensionality and that outputs are independent of each other. A domain-independent method that learns to map sequences to sequences is therefore needed.

Recurrent neural networks (RNNs) are a class of artificial neural networks that make use of sequential information present in the data.

Fig. 1 A basic RNN unit.

A recurrent neural network has looped, or recurrent, connections which allow the network to hold information across inputs. These connections can be thought of as memory cells. In other words, an RNN can make use of information learned in previous time steps. As seen in Fig. 1, the hidden/activation state from the previous time step is fed into the hidden layer at the current time step. Such an architecture is well suited to learning from sequential data.
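
To make the recurrence concrete, here is a minimal NumPy sketch of a single recurrent step; the dimensions, weights, and inputs are arbitrary and for illustration only:

 import numpy as np

 # one recurrent step: the new hidden state depends on the current input x_t
 # and on the hidden state h_prev carried over from the previous time step
 def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
     return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

 # toy dimensions (assumed): 8-dimensional inputs, 16-dimensional hidden state
 np.random.seed(0)
 W_xh = np.random.randn(8, 16)
 W_hh = np.random.randn(16, 16)
 b_h = np.zeros(16)

 h = np.zeros(16)
 for x_t in np.random.randn(5, 8):          # a sequence of 5 inputs
     h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # information is carried across time steps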

In this blog, we will be using the Long Short-Term Memory (LSTM) architecture. LSTM is a type of recurrent neural network (proposed by Hochreiter and Schmidhuber, 1997) that can retain a piece of information over many time steps.

Dataset

Our dataset includes piano tunes stored in the MIDI format. MIDI (Musical Instrument Digital Interface) is a protocol which allows electronic instruments and other digital musical tools to communicate with each other. Since a MIDI file only represents performance information, i.e., a series of messages like ‘note on’ and ‘note off’, it is compact, easy to modify, and can be adapted to any instrument.

Before we move forward, let us understand some music-related terminology (illustrated with a short music21 snippet after the list):

  • Note: A note is either a single sound or its representation in notation. Each note consists of a pitch, an octave, and an offset.
  • Pitch: Pitch refers to the frequency of the sound.
  • Octave: An octave is the interval between one musical pitch and another with half or double its frequency.
  • Offset: The position of the note in time within the piece.
  • Chord: Playing multiple notes at the same time constitutes a chord.
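
To make these terms concrete, here is a small illustrative snippet using music21 (the specific pitches chosen are arbitrary):

 from music21 import note, chord

 # a single note: pitch class C, octave 4, placed half a beat into the piece
 n = note.Note('C4')
 n.offset = 0.5
 print(n.pitch, n.octave, n.offset)     # C4 4 0.5

 # a chord: several notes sounding at the same time
 c = chord.Chord(['C4', 'E4', 'G4'])
 print(c.normalOrder)                   # [0, 4, 7] -- the integer form used later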

Data Preprocessing

We will use the music21 toolkit (a Python toolkit for computer-aided musicology developed at MIT) to extract data from these MIDI files.

  1. Notes Extraction

     import glob
     import pickle
     from music21 import converter, instrument, note, chord

     def get_notes():
         notes = []
         # list of MIDI file paths (directory name assumed)
         songs = glob.glob('midi_songs/*.mid')
         for file in songs:
             # convert the .mid file to a music21 stream object
             midi = converter.parse(file)
             parts = None
             try:
                 # given a single stream, partition into a part for each unique instrument
                 parts = instrument.partitionByInstrument(midi)
             except Exception:
                 pass
             if parts:  # the file has instrument parts
                 notes_to_parse = parts.parts[0].recurse()
             else:      # the file has notes in a flat structure
                 notes_to_parse = midi.flat.notes
             for element in notes_to_parse:
                 if isinstance(element, note.Note):
                     # if the element is a note, store its pitch
                     notes.append(str(element.pitch))
                 elif isinstance(element, chord.Chord):
                     # if the element is a chord, store the normal form of the
                     # chord (a list of integers) joined by dots
                     notes.append('.'.join(str(n) for n in element.normalOrder))
         with open('data/notes', 'wb') as filepath:
             pickle.dump(notes, filepath)
         return notes
      

    The function get_notes returns a list of the notes and chords present in the .mid files. We use the converter.parse function to convert each MIDI file into a stream object, which in turn is used to extract the notes and chords present in the file. The list returned by get_notes() looks as follows:

     Out:  
         ['F2', '4.5.7', '9.0', 'C3', '5.7.9', '7.0', 'E4', '4.5.8', '4.8', '4.8', '4', 'G#3',  
         'D4', 'G#3', 'C4', '4', 'B3', 'A2', 'E3', 'A3', '0.4', 'D4', '7.11', 'E3', '0.4.7', 'B4', 'C3', 'G3', 'C4', '4.7', '11.2', 'C3', 'C4', '11.2.4', 'G4', 'F2', 'C3', '0.5', '9.0', '4.7', 'F2', '4.5.7.9.0', '4.8', 'F4', '4', '4.8', '2.4', 'G#3',  
        '8.0', 'E2', 'E3', 'B3', 'A2', '4.9', '0.4', '7.11', 'A2', '9.0.4', ...........]  

    We can see that the list consists of pitches and chords (each chord represented as a list of integers separated by dots). We treat each distinct chord as its own entry in the list. Just as letters are combined to form words and sentences, the vocabulary used to generate music is defined by the unique pitches and chords in the notes list.

  2. Generating Input and Output Sequences

    A neural network accepts only numerical values as input, and since the pitches in the notes list are strings, we need to map each pitch in the notes list to an integer. We can do so as follows:

     # extract the unique pitches in the list of notes
     pitchnames = sorted(set(item for item in notes))
     # create a dictionary to map pitches to integers
     note_to_int = dict((note, number) for number, note in enumerate(pitchnames))
      

    Next, we will create arrays of input and output sequences to train our model. Each input sequence will consist of 100 notes, while the output array stores the 101st note for the corresponding input sequence; the objective of the model is to predict this next note.

     sequence_length = 100   # each input sequence is 100 notes long
     network_input = []
     network_output = []
     # create input sequences and the corresponding outputs
     for i in range(0, len(notes) - sequence_length, 1):
         sequence_in = notes[i: i + sequence_length]
         sequence_out = notes[i + sequence_length]
         network_input.append([note_to_int[char] for char in sequence_in])
         network_output.append(note_to_int[sequence_out])
      

    Next, we reshape and normalize the input vector sequence before feeding it to the model. Finally, we one-hot encode our output vector.

     import numpy as np
     from keras.utils import np_utils

     n_patterns = len(network_input)
     n_vocab = len(set(notes))   # number of unique pitches/chords in the vocabulary
     # reshape the input into a format compatible with LSTM layers
     network_input = np.reshape(network_input, (n_patterns, sequence_length, 1))
     # normalize the input
     network_input = network_input / float(n_vocab)
     # one-hot encode the output vector
     network_output = np_utils.to_categorical(network_output)
      

Model Architecture

We will use Keras to build our model architecture. We train the model at the character (note) level, so each input note in the music file is used to predict the next note in the file, i.e., each LSTM cell takes the activation from the previous time step (a⟨t−1⟩) and the actual output of the previous time step (y⟨t−1⟩) as input at the current time step t. This is depicted in the following figure (Fig. 2).

Fig 2. One to Many LSTM architecture

Our model architecture is defined as:

 from keras.models import Sequential
 from keras.layers import LSTM, Dropout, Flatten, Dense, Activation

 model = Sequential()
 model.add(LSTM(128, input_shape=network_in.shape[1:], return_sequences=True))
 model.add(Dropout(0.2))
 model.add(LSTM(128, return_sequences=True))
 model.add(Flatten())
 model.add(Dense(256))
 model.add(Dropout(0.3))
 model.add(Dense(n_vocab))
 model.add(Activation('softmax'))
 model.compile(loss='categorical_crossentropy', optimizer='adam')
  

Our music model consists of two LSTM layers, each with 128 hidden units. We use categorical cross-entropy as the loss function and Adam as the optimizer. Fig. 3 shows the model summary.

Fig 3. Model summary

Model Training

To train the model, we call model.fit with the input and output sequences. We also create a model checkpoint callback which saves the best model weights.

 from keras.callbacks import ModelCheckpoint

 def train(model, network_input, network_output, epochs):
     """
     Train the neural network
     """
     filepath = 'weights.best.music3.hdf5'
     # save the weights whenever the training loss improves
     checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=0, save_best_only=True)
     model.fit(network_input, network_output, epochs=epochs, batch_size=32, callbacks=[checkpoint])

 def train_network():
     epochs = 200
     notes = get_notes()
     print('Notes processed')
     n_vocab = len(set(notes))
     print('Vocab generated')
     network_in, network_out = prepare_sequences(notes, n_vocab)
     print('Input and Output processed')
     model = create_network(network_in, n_vocab)
     print('Model created')
     print('Training in progress')
     train(model, network_in, network_out, epochs)
     print('Training completed')
     return model
  

The train_network method gets the notes, creates the input and output sequences, creates a model, and trains the model for 200 epochs.
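
Once training is complete, the best checkpointed weights can be reloaded before generating music. Here is a minimal sketch, assuming the get_notes, prepare_sequences, and create_network helpers above and the checkpoint file name used during training:

 # rebuild the network and load the best checkpointed weights before generation
 notes = get_notes()
 n_vocab = len(set(notes))
 network_in, network_out = prepare_sequences(notes, n_vocab)
 model = create_network(network_in, n_vocab)
 model.load_weights('weights.best.music3.hdf5')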

Music Sample Generation

Now that we have trained our model, we can use it to generate some new notes. To generate new notes, we need a starting point, so we randomly pick an index and use the corresponding input sequence as the seed.

 def generate_notes(model, network_input, pitchnames, n_vocab):
     """ Generate notes from the neural network based on a sequence of notes """
     # pick a random index into the input sequences
     start = np.random.randint(0, len(network_input) - 1)
     int_to_note = dict((number, note) for number, note in enumerate(pitchnames))
     # pick a random sequence from the input as a starting point for the prediction
     pattern = list(network_input[start])
     prediction_output = []
     print('Generating notes........')
     # generate 500 notes
     for note_index in range(500):
         prediction_input = np.reshape(pattern, (1, len(pattern), 1))
         prediction_input = prediction_input / float(n_vocab)
         prediction = model.predict(prediction_input, verbose=0)
         # the predicted output is the argmax(P(h|D))
         index = np.argmax(prediction)
         # map the predicted integer back to the corresponding note
         result = int_to_note[index]
         # store the predicted output
         prediction_output.append(result)
         # slide the window: append the prediction and drop the first note
         pattern.append(index)
         pattern = pattern[1:len(pattern)]
     print('Notes Generated...')
     return prediction_output
  

Next, we use the trained model to predict the next 500 notes. At each time step, the output of the previous time step (ŷ⟨t−1⟩) is provided as the input (x⟨t⟩) to the LSTM layer at the current time step t. This is depicted in the following figure (see Fig. 4).

Fig 4. Sampling from a trained network.

Since the model's output is an array of probabilities over the vocabulary, we choose the index with the maximum probability, map it back to the actual note, and append it to the list of predicted output. Because this predicted output is just a list of note and chord strings, it cannot be played directly; we therefore encode it into the MIDI format using the create_midi method.
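
The create_midi method itself is not shown in this post; a minimal sketch of how it could look with music21 is given below (the piano instrument and the 0.5 offset increment between notes are assumptions):

 from music21 import instrument, note, chord, stream

 def create_midi(prediction_output, filename='test_output3.mid'):
     """ Sketch: convert the predicted note/chord strings into a MIDI file """
     offset = 0
     output_notes = []
     for pattern in prediction_output:
         if ('.' in pattern) or pattern.isdigit():
             # the pattern is a chord: build a Chord from the dot-separated integers
             chord_notes = []
             for current_note in pattern.split('.'):
                 new_note = note.Note(int(current_note))
                 new_note.storedInstrument = instrument.Piano()
                 chord_notes.append(new_note)
             new_chord = chord.Chord(chord_notes)
             new_chord.offset = offset
             output_notes.append(new_chord)
         else:
             # the pattern is a single note
             new_note = note.Note(pattern)
             new_note.offset = offset
             new_note.storedInstrument = instrument.Piano()
             output_notes.append(new_note)
         # increase the offset so notes do not stack on top of each other (step assumed)
         offset += 0.5
     midi_stream = stream.Stream(output_notes)
     midi_stream.write('midi', fp=filename)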

 ### Convert the predicted output to MIDI format
   create_midi(prediction_output)
  

To create some new jazz music, you can simply call the generate() method, which calls all the related methods and saves the predicted output as a MIDI file.

 #### Generate new jazz music
   generate()  
   Out:   
     Initiating music generation process.......  
     Loading Model weights.....  
     Model Loaded  
     Generating notes........  
     Notes Generated...  
     Saving Output file as midi....  
  

To play the generated MIDI in a Jupyter notebook, you can import the play_midi method from the play.py file, use an external MIDI player, or convert the MIDI file to mp3. Let’s listen to our generated jazz piano music.

 ### Play the Jazz music  
   play.play_midi('test_output3.mid')  
[Audio: “Generated Track 1”]

Conclusion

Congratulations! You can now generate your own jazz music. You can find the full code in this GitHub repository. I encourage you to play with the parameters of the model and train it with input sequences of different lengths. Try implementing the code for another instrument (such as guitar). Furthermore, such a character-based model can also be applied to a text corpus to generate sample texts, such as a poem.

Also, you can showcase your own personal composer, or any similar idea, in the World Music Hackathon by HackerEarth.

Have anything to say? Feel free to comment below for any questions, suggestions, and discussions related to this article. Till then, happy coding.
