Use BERT to answer sentence questions, extract fixed feature vectors, and analyze sentence semantics.
[Operation steps and instructions]
The BERT APP is divided into three parts: SQuAD2.0 performs question-answering training and inference, ELMo extracts fixed feature vectors, and MRPC performs sentence semantic analysis.
Press "1. Fine_Tuning" to train on data/SQuAD2.0/train-v2.0.json.
After training starts, the process may pause for a while at the "impossible example" line; this is normal.
After training, you can select the trained model to answer questions. The questions come from data/SQuAD2.0/dev-v2.0.json, and the answers are written to data/output/SQuAD2.0/nbest_predictions.json.
If a question has no answer, its no-answer score is recorded in data/output/SQuAD2.0/null_odds.json.
When you select a model file, three files are read: model.ckpt-XXX.data-00000-of-00001, model.ckpt-XXX.index, and model.ckpt-XXX.meta. Do not delete the .data or .index files.
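The three checkpoint files above always travel together; if any one is missing, the model cannot be loaded. The sketch below shows one way to verify a checkpoint is complete before selecting it. The step number (1000) and the throwaway directory are assumptions for illustration, not paths used by the APP.

```python
import os
import tempfile

# Check that all three files of a TensorFlow checkpoint are present.
def checkpoint_complete(model_dir, step):
    prefix = f"model.ckpt-{step}"
    required = [f"{prefix}.data-00000-of-00001",
                f"{prefix}.index",
                f"{prefix}.meta"]
    return all(os.path.exists(os.path.join(model_dir, name)) for name in required)

# Demonstration with a temporary directory holding empty placeholder files.
with tempfile.TemporaryDirectory() as d:
    for name in ("model.ckpt-1000.data-00000-of-00001",
                 "model.ckpt-1000.index",
                 "model.ckpt-1000.meta"):
        open(os.path.join(d, name), "w").close()
    complete = checkpoint_complete(d, 1000)    # all three files exist
    incomplete = checkpoint_complete(d, 2000)  # no such checkpoint
print(complete, incomplete)
```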
The orange line in nbest_predictions.json corresponds to the question id in the source file data/SQuAD2.0/dev-v2.0.json, and the candidate answers to that question are listed in [ ].
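A minimal sketch of how nbest_predictions.json can be read, assuming the usual BERT output layout in which each question id maps to a ranked list of candidate answers with probabilities. The question id and probabilities below are invented for illustration.

```python
import json

# Hypothetical miniature of data/output/SQuAD2.0/nbest_predictions.json.
sample_json = json.dumps({
    "56ddde6b9a695914005b9628": [
        {"text": "France", "probability": 0.91},
        {"text": "in France", "probability": 0.07},
        {"text": "", "probability": 0.02},
    ]
})

nbest = json.loads(sample_json)  # in practice: json.load(open(path))
for qid, candidates in nbest.items():
    # Pick the highest-probability candidate; an empty string means "no answer".
    best = max(candidates, key=lambda c: c["probability"])
    print(qid, "->", best["text"] or "(no answer)")
```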
If you need to adjust the threshold used to decide whether a question is answerable, execute "3. Evaluate" and set null_score_diff_threshold according to the best_f1_thresh value in the execution result.
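The sketch below illustrates how such a threshold is typically applied: each question's no-answer score (as in null_odds.json) is compared against null_score_diff_threshold, and when the score exceeds the threshold the prediction becomes "no answer". The question ids, scores, and answers are assumed values, not output from a real run.

```python
# Illustrative no-answer scores, keyed by question id (higher = more likely
# unanswerable), and the corresponding best candidate answers.
null_odds = {"q1": -3.2, "q2": 5.8}
best_answers = {"q1": "Paris", "q2": "1066"}

null_score_diff_threshold = 0.0  # set this to best_f1_thresh from "3. Evaluate"

def final_answer(qid):
    # If the no-answer score exceeds the threshold, predict "no answer".
    if null_odds[qid] > null_score_diff_threshold:
        return ""
    return best_answers[qid]

print(final_answer("q1"))  # score below threshold -> keep the answer
print(final_answer("q2"))  # score above threshold -> empty (no answer)
```

Raising the threshold makes the model answer more questions; lowering it makes the model abstain more often.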
Press "Extract Fixed Feature" to extract fixed feature vectors from the input file data/glue-data/ELMo/input.txt; the results are stored in data/output/ELMo/output.json.
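A sketch of reading one record of output.json, assuming the layout used by BERT's extract_features.py (one JSON object per input line, each token carrying the values of the requested layers); the tokens and values below are invented, and the real file's layout should be checked against this assumption.

```python
import json

# One hypothetical record of data/output/ELMo/output.json.
record = json.loads(json.dumps({
    "linex_index": 0,
    "features": [
        {"token": "[CLS]",
         "layers": [{"index": -1, "values": [0.12, -0.34, 0.56]}]},
        {"token": "hello",
         "layers": [{"index": -1, "values": [0.98, -0.11, 0.07]}]},
    ],
}))

# Map each token to its last-layer fixed feature vector.
vectors = {f["token"]: f["layers"][0]["values"] for f in record["features"]}
print(vectors["hello"])
```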
Press "1. Fine_Tuning" to train on the files in the data/glue-data/MRPC folder.
After training, confirm the model file and press "2. Inference" to perform sentence semantic analysis on data/glue-data/MRPC/test.tsv, determining the probability that each pair of sentences has the same meaning.
In test.tsv, the green line is sentence 1 and the blue line is sentence 2. After analysis, in the orange line of result_test.tsv, the first number (0.29910564) is the probability that the two sentences have different meanings, and the second number (0.70089436) is the probability that they have the same meaning.
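The result line above can be parsed with a few lines of Python. This sketch assumes each result_test.tsv entry is two tab-separated probabilities in the order [P(different meaning), P(same meaning)], matching the example values in the text.

```python
# One hypothetical result line: [P(different meaning), P(same meaning)].
line = "0.29910564\t0.70089436"
p_diff, p_same = (float(x) for x in line.split("\t"))

# The two probabilities describe one sentence pair and should sum to 1.
verdict = "same meaning" if p_same > p_diff else "different meanings"
print(f"{verdict} (p={p_same:.4f})")
```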
Contact Us and How to Buy
You are welcome to contact us. Please refer to the following link: