What is the right way to use the DiscoFuse dataset?
Below is my understanding of how to use it. Is it correct?
The columns/features from the DiscoFuse dataset that will be the input to the encoder and decoder are:
coherent_first_sentence
coherent_second_sentence
incoherent_first_sentence
incoherent_second_sentence
The encoder will take these four columns as input and encode them into a sequence of hidden states. The decoder will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.
The discourse_type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder. These columns provide additional information about each example, but they are not necessary for the sentence-fusion task.
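For reference, this is roughly how I am loading and inspecting the data. This is a minimal sketch; it assumes the Hugging Face discofuse dataset with the discofuse-sport configuration (there is also a discofuse-wikipedia configuration), so adjust the names if your setup differs:

```python
from datasets import load_dataset

# Load one of the two DiscoFuse configurations
# ("discofuse-sport" or "discofuse-wikipedia"); config name assumed here.
dataset = load_dataset("discofuse", "discofuse-sport")

# Inspect the available columns and one example.
print(dataset["train"].column_names)
example = dataset["train"][0]
print("incoherent:", example["incoherent_first_sentence"],
      example["incoherent_second_sentence"])
print("coherent:  ", example["coherent_first_sentence"],
      example["coherent_second_sentence"])
```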
Please correct me if I am wrong. Otherwise, if this understanding is right, how should I implement this task practically?
Yes, it looks correct to me. Maybe you can train a seq2seq model to predict coherent_first_sentence + coherent_second_sentence from incoherent_first_sentence + incoherent_second_sentence?
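Something like the following could work. It is a minimal sketch, not a tuned recipe: it assumes a t5-small checkpoint, the discofuse-sport configuration, and the standard train/validation split names, and it uses the Hugging Face Seq2SeqTrainer. Swap in whatever model and hyperparameters suit you:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "t5-small"  # assumed checkpoint; any encoder-decoder model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dataset = load_dataset("discofuse", "discofuse-sport")  # config name assumed

def preprocess(batch):
    # Input: the two unfused (incoherent) sentences, concatenated.
    # Target: the fused (coherent) text; the second coherent sentence
    # may be empty when the fusion is a single sentence.
    inputs = [f"fuse: {a} {b}" for a, b in
              zip(batch["incoherent_first_sentence"],
                  batch["incoherent_second_sentence"])]
    targets = [f"{a} {b}".strip() for a, b in
               zip(batch["coherent_first_sentence"],
                   batch["coherent_second_sentence"])]
    model_inputs = tokenizer(inputs, max_length=128, truncation=True)
    labels = tokenizer(text_target=targets, max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="discofuse-fusion",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=3e-4,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],  # split name assumed
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

At inference time you would feed the two unfused sentences through the same `fuse: ...` prompt and call `model.generate` to produce the fused text.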