Up to now, we have successfully generated shapes, numbers, images, and text. In this chapter and the next, we will explore two different ways of generating lifelike music. This chapter applies the techniques from image GANs, treating a piece of music as a multidimensional object akin to an image. The generator will produce a complete piece of music and present it to the critic (so called, rather than discriminator, because we use the Wasserstein distance with gradient penalty, as discussed in chapter 5) for evaluation. The generator will then adjust its output based on the critic’s feedback until the music closely resembles real music from the training dataset. In the next chapter, we will instead treat music as a sequence of musical events and employ natural language processing (NLP) techniques. We will use a GPT-style Transformer to predict the most probable next musical event in a sequence based on the previous events. This Transformer will generate a long sequence of musical events that can be converted into realistic-sounding music.
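To make the image analogy concrete, here is a minimal sketch of one common way to represent music as an image-like tensor: a binary piano roll, with one axis for pitch and one for time. The function name `to_piano_roll` and the shape `(128, 64)` (128 MIDI pitches, 64 time steps) are illustrative assumptions, not the representation used later in this chapter:

```python
import numpy as np

def to_piano_roll(notes, n_pitches=128, n_steps=64):
    """Render notes given as (pitch, start_step, steps_held) tuples
    into a binary matrix of shape (n_pitches, n_steps).

    Illustrative sketch: the exact encoding used in the chapter may differ.
    """
    roll = np.zeros((n_pitches, n_steps), dtype=np.float32)
    for pitch, start, length in notes:
        roll[pitch, start:start + length] = 1.0
    return roll

# A C-major triad (MIDI pitches 60, 64, 67) held for four time steps.
roll = to_piano_roll([(60, 0, 4), (64, 0, 4), (67, 0, 4)])
print(roll.shape)  # (128, 64)
```

Once music is in this form, a generator can output such a matrix directly and a convolutional critic can score it, exactly as with images.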