N01-1010 a clustering technique called tree-cut. We compare our lexicon to WordNet
J98-2002 should employ to select the best tree-cut model. We adopt the Minimum
N01-1010 Systematic Polysemy Using the tree-cut technique described above, our
N01-1010 distribution of the clusters in the tree-cut Γ. It is calculated as L(θ|Γ) = (k/2) × log |S|
N01-1010 work. 2.1 Tree-cut Models The tree-cut technique is applied to data
N01-1010 abstract semantic classes. A tree-cut is a partition of a thesaurus
N01-1010 best model is the one with the tree-cut [AIRCRAFT, ball, kite
N01-1010 shows the MDL lengths for all five tree-cut models. The best model is the
N01-1010 as a cluster. Clusters in a tree-cut exhaustively cover all leaf nodes
N01-1010 In (Li and Abe, 1998), the tree-cut technique was applied to the
N01-1010 of the description length for a tree-cut model is as follows. Given a
N01-1010 MDL Principle To select the best tree-cut model, (Li and Abe, 1998)
N01-1010 . 2 The Tree-cut Technique The tree-cut technique is an unsupervised
N01-1010 problem of selecting the best tree-cut model that estimates the true
N01-1010 model M is a pair consisting of a tree-cut Γ and a probability parameter
N01-1010 sample corpus data. Formally, a tree-cut model M is a pair consisting
N01-1010 we give a brief summary of this tree-cut technique using examples from
N01-1010 thesaurus tree and one possible tree-cut [AIRCRAFT, ball, kite
N01-1010 Figure 2 shows parts of the final tree-cuts for the ARTIFACT and MEASURE
N01-1010 third step, clusters in those two tree-cuts are matched up, and the pairs
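To make the selection criterion in these snippets concrete, here is a minimal Python sketch of MDL-based tree-cut selection in the spirit of (Li and Abe, 1998): it enumerates every cut of a small thesaurus tree and scores each by parameter plus data description length. The toy tree, its word frequencies, and all function names are assumptions made for illustration; they are not taken from N01-1010 or J98-2002.

```python
import math
from itertools import product

# Hypothetical toy thesaurus (not the paper's data).  An internal node
# is (label, children); a leaf is (word, frequency).
TREE = ("ARTIFACT", [
    ("AIRCRAFT", [("airplane", 4), ("helicopter", 2)]),
    ("TOY", [("ball", 8), ("kite", 2), ("puzzle", 0)]),
])

def is_leaf(node):
    return isinstance(node[1], int)

def leaves(node):
    """All (word, freq) leaves dominated by node."""
    if is_leaf(node):
        return [node]
    return [l for child in node[1] for l in leaves(child)]

def cuts(node):
    """Every tree-cut below node: either the node itself as a single
    class, or one cut from each child, concatenated."""
    if is_leaf(node):
        return [[node]]
    child_cuts = [cuts(c) for c in node[1]]
    return [[node]] + [sum(combo, []) for combo in product(*child_cuts)]

def description_length(cut, n):
    """L(theta|Gamma) + L(S|Gamma,theta) in bits; the model length
    L(Gamma) is constant across cuts and therefore omitted."""
    k = len(cut) - 1                     # k free parameters for k+1 classes
    param_dl = (k / 2) * math.log2(n)    # parameter description length
    data_dl = 0.0
    for cls in cut:                      # each class's probability is
        cls_leaves = leaves(cls)         # spread uniformly over its leaves
        p = sum(f for _, f in cls_leaves) / n / len(cls_leaves)
        data_dl -= sum(f * math.log2(p) for _, f in cls_leaves if f)
    return param_dl + data_dl

n = sum(f for _, f in leaves(TREE))      # sample size |S| = 16
for cut in cuts(TREE):                   # five candidate models here
    print([c[0] for c in cut], round(description_length(cut, n), 2))
```

This toy tree happens to yield five candidate cuts, echoing the "five tree-cut models" compared in the snippets: a coarser cut spends fewer bits on parameters at the price of a longer data description, and MDL picks the cut that minimizes the sum.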