C04-1147 (abstract)

We present a framework for the fast computation of <term>lexical affinity models</term>. The framework is composed of a novel algorithm to efficiently compute the <term>co-occurrence distribution</term> between pairs of <term>terms</term>, an <term>independence model</term>, and a <term>parametric affinity model</term>. In comparison with previous <term>models</term>, which either use arbitrary <term>windows</term> to compute <term>similarity</term> between <term>words</term> or use <term>lexical affinity</term> to create <term>sequential models</term>, in this paper we focus on <term>models</term> intended to capture the <term>co-occurrence patterns</term> of any pair of <term>words</term> or <term>phrases</term> at any distance in the <term>corpus</term>. We apply it in combination with a <term>terabyte corpus</term> to answer <term>natural language tests</term>, achieving encouraging results.
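The contrast drawn above between a distance-aware co-occurrence distribution and an independence baseline can be sketched in a few lines. This is only a toy illustration under our own assumptions, not the paper's algorithm: the function names, the brute-force position counting, and the unigram independence estimate are all ours, and a real system over a terabyte corpus would need the efficient counting algorithm the paper describes.

```python
from collections import Counter

def cooccurrence_distribution(tokens, pair, max_distance=10):
    """Count how often the two words of `pair` co-occur at each
    distance up to `max_distance` in a token sequence."""
    a, b = pair
    positions_a = [i for i, t in enumerate(tokens) if t == a]
    positions_b = [i for i, t in enumerate(tokens) if t == b]
    counts = Counter()
    for i in positions_a:
        for j in positions_b:
            d = abs(i - j)
            if 0 < d <= max_distance:
                counts[d] += 1
    return counts

def independence_expectation(tokens, pair, distance):
    """Expected co-occurrence count at a given distance if the two
    words occurred independently (unigram frequencies only)."""
    n = len(tokens)
    freq = Counter(tokens)
    p_a = freq[pair[0]] / n
    p_b = freq[pair[1]] / n
    # roughly 2 * (n - distance) ordered position pairs at this distance
    return 2 * (n - distance) * p_a * p_b

tokens = "the cat sat on the mat the cat".split()
observed = cooccurrence_distribution(tokens, ("the", "cat"))
expected = independence_expectation(tokens, ("the", "cat"), 1)
# When the observed count at a distance exceeds the independence
# expectation, the pair shows positive lexical affinity there.
print(observed[1], expected)
```

An affinity model in this spirit would then fit a parametric curve to the observed-vs-expected ratio as a function of distance, rather than fixing an arbitrary window size.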