  • Q1: how many synsets are there in your WordNet? (sketch below)
  • Q2: draw on paper the hypernym graph above “car.n.01”; visually map the outputs of the hypernym_paths() and tree() methods onto this graph (sketch below)
  • Q3 (sketch below):
    • find another synset that has several hypernym paths
    • is the car closer to the bike than to the truck?
    • is the car closer to the bike than to the airplane?
  • Q4: print the hypernyms of ‘person.n.01’ with hypernym_paths(), tree() and the transitive closure; what kinds of structures are returned in each case? What about duplicates? (sketch below)
  • Q5: by looking at examples, what is the difference between the following relations between synsets? (sketch below)
    • similar_tos()
    • also_sees()
  • Q6: print the tree and closure of ‘americana.n.01’ through the topic_domains() relation (sketch below)
  • Q7: define a function that returns frequent hypernym lemmas (sketch below)
  • Q8: define a function that returns the hypernyms of a synset, excluding overly general ones (minimum depth of 5) (sketch below)
  • Q9: define a function that returns the common hypernyms of two synsets, excluding overly general ones (minimum depth of 5; same sketch)
  • Q10: define a function neighbors(synset, k, m) that returns the top k nearest neighbors of synset according to a similarity metric m (sketch below)
  • Q11: list all synonyms and antonyms of “evil” (sketch below)
  • Q12: find a way to query the NLTK WordNet with French words (without an external translator!) (sketch below)
    • how many synsets does the French word “fanion” belong to, and how is it translated in English?
    • how many French lemmas are there in WordNet?
  • Q13 (optional, difficult; sketch below)
    • use in Python any contextual word embedding model you can (GPT-J, GPT-NeoX, GPT-2, T5)
    • check whether the embeddings correctly encode WordNet’s synonyms by:
      • extracting all synonym pairs from WordNet and computing their embeddings
      • computing the mean and variance of the distances between these embeddings
      • doing the same for word pairs that are not synonyms
      • comparing the two Gaussians
    • is it possible to differentiate synonyms from antonyms in the embedding space?
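
The sketches below are hedged starting points for the questions above, not reference solutions. They assume NLTK with the WordNet data installed (nltk.download('wordnet')); exact outputs depend on the WordNet version shipped with NLTK.

For Q1, all_synsets() can simply be counted:

```python
from nltk.corpus import wordnet as wn

# all_synsets() is a generator over every synset in the database
print(sum(1 for _ in wn.all_synsets()))
```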
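For Q2, printing both structures for ‘car.n.01’ makes them easy to map onto the hand-drawn graph: hypernym_paths() returns one root-to-synset path per branch, while tree() returns a nested list built by following a relation function.

```python
from nltk.corpus import wordnet as wn
from pprint import pprint

car = wn.synset('car.n.01')

# One line per path, from the root ('entity.n.01') down to car.n.01
for path in car.hypernym_paths():
    print(' -> '.join(s.name() for s in path))

# tree() follows the given relation function and returns a nested list
pprint(car.tree(lambda s: s.hypernyms()))
```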
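For Q3, one possible reading of “closer” is path_similarity(); ‘bicycle.n.01’, ‘truck.n.01’ and ‘airplane.n.01’ are assumed to be the intended synsets.

```python
from nltk.corpus import wordnet as wn

car = wn.synset('car.n.01')
bike = wn.synset('bicycle.n.01')
truck = wn.synset('truck.n.01')
plane = wn.synset('airplane.n.01')

# Higher path_similarity means closer in the hypernym hierarchy
print('car-bike :', car.path_similarity(bike))
print('car-truck:', car.path_similarity(truck))
print('car-plane:', car.path_similarity(plane))

# A brute-force way to find another synset with several hypernym paths
multi = next(s for s in wn.all_synsets('n') if len(s.hypernym_paths()) > 1)
print(multi.name(), len(multi.hypernym_paths()))
```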
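For Q4, the three traversals of the same hypernym relation return different structures; printing their types is enough to compare them.

```python
from nltk.corpus import wordnet as wn
from pprint import pprint

person = wn.synset('person.n.01')
hyper = lambda s: s.hypernyms()

paths = person.hypernym_paths()        # list of lists of synsets, one per path
tree = person.tree(hyper)              # nested list mirroring the graph
closure = list(person.closure(hyper))  # flat sequence produced breadth-first

print(type(paths), len(paths))
pprint(tree)
print(closure)
```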
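For Q5, both relations are mainly populated for adjectives; the synsets below are only example choices and may return an empty list for one of the two relations.

```python
from nltk.corpus import wordnet as wn

for name in ('dry.a.01', 'beautiful.a.01', 'concrete.a.01'):
    s = wn.synset(name)
    print(s.name())
    print('  similar_tos:', [x.name() for x in s.similar_tos()])
    print('  also_sees  :', [x.name() for x in s.also_sees()])
```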
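For Q6, tree() and closure() both accept an arbitrary relation function, so topic_domains() can be passed the same way as hypernyms().

```python
from nltk.corpus import wordnet as wn
from pprint import pprint

americana = wn.synset('americana.n.01')
topic = lambda s: s.topic_domains()

pprint(americana.tree(topic))          # nested list through the topic_domains relation
print(list(americana.closure(topic)))  # flat transitive closure of the same relation
```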
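Q7 leaves the exact scope open; the sketch below assumes one reading of it, namely counting the lemmas of the direct hypernyms over all synsets of a given word and returning the most frequent ones. The function name and the 'bank' example are illustrative choices, not part of the question.

```python
from collections import Counter
from nltk.corpus import wordnet as wn

def frequent_hypernym_lemmas(word, n=10):
    """Most frequent lemmas among the direct hypernyms of all synsets of `word`."""
    counts = Counter()
    for synset in wn.synsets(word):
        for hyper in synset.hypernyms():
            counts.update(hyper.lemma_names())
    return counts.most_common(n)

print(frequent_hypernym_lemmas('bank'))
```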
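For Q8 and Q9, a single depth filter can serve both functions; the sketch assumes “depth” means Synset.min_depth() and that Q8 wants all transitive hypernyms rather than only the direct ones.

```python
from nltk.corpus import wordnet as wn

MIN_DEPTH = 5  # hypernyms closer than this to the root are considered too general

def specific_hypernyms(synset, min_depth=MIN_DEPTH):
    """Transitive hypernyms of `synset` that are at least `min_depth` below the root."""
    return [h for h in synset.closure(lambda s: s.hypernyms())
            if h.min_depth() >= min_depth]

def specific_common_hypernyms(s1, s2, min_depth=MIN_DEPTH):
    """Common hypernyms of s1 and s2 that are not too general."""
    return [h for h in s1.common_hypernyms(s2) if h.min_depth() >= min_depth]

car, bike = wn.synset('car.n.01'), wn.synset('bicycle.n.01')
print(specific_hypernyms(car))
print(specific_common_hypernyms(car, bike))
```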
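For Q10, the simplest (and slow) approach scans every synset with the same part of speech; m is assumed to be any metric callable such as wn.path_similarity or wn.wup_similarity.

```python
from nltk.corpus import wordnet as wn

def neighbors(synset, k, m):
    """Top k synsets of the same POS, ranked by the similarity metric m."""
    scores = []
    for other in wn.all_synsets(synset.pos()):
        if other == synset:
            continue
        sim = m(synset, other)
        if sim is not None:  # some metrics return None for unrelated synsets
            scores.append((sim, other))
    scores.sort(key=lambda pair: pair[0], reverse=True)
    return scores[:k]

# Warning: this iterates over all noun synsets, so it takes a while
print(neighbors(wn.synset('car.n.01'), 5, wn.path_similarity))
```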
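For Q11, synonyms can be read off the lemmas of every synset of the word, and antonyms off each lemma’s antonyms() (antonymy is a lemma-level relation in WordNet).

```python
from nltk.corpus import wordnet as wn

synonyms, antonyms = set(), set()
for synset in wn.synsets('evil'):
    for lemma in synset.lemmas():
        synonyms.add(lemma.name())
        for ant in lemma.antonyms():
            antonyms.add(ant.name())

print('synonyms:', sorted(synonyms))
print('antonyms:', sorted(antonyms))
```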
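For Q12, the Open Multilingual Wordnet maps non-English lemmas onto the English synsets, so no translator is needed; depending on the NLTK version, the corpus to download is named 'omw-1.4' or 'omw'.

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download('wordnet')
nltk.download('omw-1.4')  # may be 'omw' on older NLTK versions

# Synsets of the French word "fanion" and their English lemmas
fanion = wn.synsets('fanion', lang='fra')
print(len(fanion))
for s in fanion:
    print(s.name(), '->', s.lemma_names('eng'))

# Number of distinct French lemmas available through the NLTK WordNet
print(len(set(wn.all_lemma_names(lang='fra'))))
```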
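For Q13, the sketch below uses GPT-2 through the transformers library purely because it is small; it assumes transformers and torch are installed, approximates “non-synonyms” by random word pairs, and embeds isolated words (a rough proxy, since these models are contextual). The sample sizes and the mean-pooling choice are arbitrary.

```python
import random
import torch
from transformers import AutoModel, AutoTokenizer
from nltk.corpus import wordnet as wn

tokenizer = AutoTokenizer.from_pretrained('gpt2')
model = AutoModel.from_pretrained('gpt2')
model.eval()

def embed(word):
    """Mean-pooled last hidden state of a word typed in isolation."""
    inputs = tokenizer(word, return_tensors='pt')
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, n_tokens, dim)
    return hidden.mean(dim=1).squeeze(0)

def cosine_distance(a, b):
    return 1.0 - torch.nn.functional.cosine_similarity(a, b, dim=0).item()

# Synonym pairs: two different lemmas of the same noun synset
synonym_pairs = set()
for synset in wn.all_synsets('n'):
    lemmas = [l.replace('_', ' ') for l in synset.lemma_names()]
    if len(lemmas) >= 2:
        synonym_pairs.add((lemmas[0], lemmas[1]))
synonym_pairs = random.sample(sorted(synonym_pairs), 500)  # subsample to stay fast

# "Non-synonym" pairs: random pairs from the same vocabulary (a crude approximation)
vocab = sorted({w for pair in synonym_pairs for w in pair})
random_pairs = [tuple(random.sample(vocab, 2)) for _ in range(500)]

def distance_stats(pairs):
    dists = torch.tensor([cosine_distance(embed(a), embed(b)) for a, b in pairs])
    return dists.mean().item(), dists.var().item()

print('synonym pairs    :', distance_stats(synonym_pairs))
print('non-synonym pairs:', distance_stats(random_pairs))
```

Running the same pipeline on the antonym pairs collected as in Q11 gives a first answer to whether antonyms can be separated from synonyms in the embedding space.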