For our application we relied primarily on the GPT-3.5 model. After testing several local models, we concluded that none of them produced sufficiently effective answers. To improve the answers that GPT-3.5 produces, we adjusted the temperature and top-p generation parameters; top-k cannot be modified through the GPT-3.5 API, so we left it unchanged.

The replies generated by GPT-2 were generally unrelated to the discussion we asked about. We attempted to improve the accuracy of its replies by modifying the generation parameters (temperature, top-p, and top-k), but this proved ineffective, so we discontinued the use of GPT-2. We also evaluated several of Hugging Face's text-generation models, but they either took an excessive amount of time to run or did not produce satisfactory results. We ran the Hugging Face models locally using the Transformers package, and we used the PyQt5 package to construct the user interface.
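The sketch below illustrates how the temperature and top-p parameters can be set when calling GPT-3.5, as described above. It is a minimal example assuming the openai Python package (version 1.0 or later) and an API key in the environment; the prompt text is a placeholder rather than one of our actual queries.

```python
# Minimal sketch: tuning temperature and top_p for GPT-3.5.
# Assumes openai >= 1.0 and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize the discussion above."}],  # placeholder prompt
    temperature=0.7,  # lower values make answers more deterministic
    top_p=0.9,        # nucleus sampling: keep the top 90% of probability mass
    # Note: the API exposes no top_k parameter, which is why top-k
    # could not be modified for GPT-3.5.
)
print(response.choices[0].message.content)
```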
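For the local models, the following sketch shows how GPT-2 can be run through the Transformers package with all three generation parameters (temperature, top-p, and top-k) exposed. The parameter values here are illustrative, not the exact settings we tried.

```python
# Minimal sketch: running GPT-2 locally with the Transformers package,
# with the temperature, top_p, and top_k generation parameters exposed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The main topic of the discussion was", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,   # sampling must be enabled for these parameters to apply
    temperature=0.7,
    top_p=0.9,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```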
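Finally, a minimal PyQt5 sketch of the kind of window the user interface is built from; the widget layout here is illustrative rather than our actual interface.

```python
# Minimal sketch: a PyQt5 window with a prompt label and a send button.
import sys
from PyQt5.QtWidgets import QApplication, QLabel, QPushButton, QVBoxLayout, QWidget

app = QApplication(sys.argv)

window = QWidget()
window.setWindowTitle("Chat Assistant")  # illustrative title
layout = QVBoxLayout(window)
layout.addWidget(QLabel("Ask a question:"))
layout.addWidget(QPushButton("Send"))
window.show()

sys.exit(app.exec_())
```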