E-MAIL TEMPLATE GENERATION

Email is a part of daily communication, and people invest a lot of time in it: generating the mail, writing its content, and also verifying the text before it is sent.

Wondering why you would need a template? 🤔
Well, there are several reasons; check the list below, some might fit your case as well.
1. Content generation is a repetitive task
2. Very time consuming ⏳⏰
3. Difficult to generate bulk mail
4. No randomness in the text used
5. The text always needs to be thoroughly checked ✔ 🎯
So, to save all the time spent creating a custom mail template, here is a demo model that generates text and aligns it in the format of a mail template.
To approach the SOLUTION, I trained the GPT-2 model on a small dataset.
The results achieved were not very good as the dataset was limited. 😞
GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.
GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with input and have it generate a lengthy continuation. Also, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. On language tasks like question answering, reading comprehension, summarization, and translation, GPT-2 begins to learn these tasks from the raw text, using no task-specific training data. While scores on these downstream tasks are far from state-of-the-art, they suggest that the tasks can benefit from unsupervised techniques, given sufficient (unlabelled) data and compute.
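As a rough illustration, fine-tuning GPT-2 on a small corpus of mail bodies can be done with the Hugging Face transformers library. The snippet below is a minimal sketch, assuming a hypothetical mails.txt file with one mail body per line; the file name and hyperparameters are assumptions, not the exact setup used for this demo.

```python
# Minimal sketch: fine-tuning GPT-2 on a small mail corpus with Hugging Face
# transformers. "mails.txt" and the hyperparameters are assumptions, not the
# exact setup used in this project.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A plain-text file with one mail body per line (hypothetical).
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="mails.txt",
                            block_size=128)

# GPT-2 is a causal language model, so masked-language-modelling is disabled.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-mail",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    data_collator=data_collator,
    train_dataset=train_dataset,
)
trainer.train()
trainer.save_model("gpt2-mail")
```

With such a small dataset, the fine-tuned model tends to overfit, which is consistent with the limited results mentioned above.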
I have trained a custom BERT model for the classification of the mail content.
BERT and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after, hence the name: Bidirectional Encoder Representations from Transformers.
BERT models are usually pre-trained on a large corpus of text, then fine-tuned for specific tasks.
Steps taken to perform the action were as below (a minimal sketch of these steps follows the list):
1. Loading the model from TensorFlow Hub
2. Choosing the model
3. Pre-processing the text with the matching preprocessing model
4. BERT implementation
5. Defining the model with dense and dropout layers
6. Model training
7. Optimising the model
8. Evaluating the model
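Here is a minimal sketch of steps 1 to 5, assuming a small uncased BERT encoder and its matching preprocessing model from TensorFlow Hub, with a binary classification head; the specific model handles, number of classes, and hyperparameters are assumptions and may differ from the ones actually used.

```python
# Minimal sketch of the TF Hub BERT classifier described above. The model
# handles and the binary output are assumptions.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 (registers ops used by the preprocessing model)

PREPROCESS_URL = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
ENCODER_URL = "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1"

# Raw mail text goes in; the preprocessing layer handles tokenisation.
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name="mail_text")
encoder_inputs = hub.KerasLayer(PREPROCESS_URL, name="preprocessing")(text_input)
outputs = hub.KerasLayer(ENCODER_URL, trainable=True, name="bert_encoder")(encoder_inputs)

# Pooled output = one embedding per mail; add dropout + dense layers on top.
x = tf.keras.layers.Dropout(0.1)(outputs["pooled_output"])
predictions = tf.keras.layers.Dense(1, activation="sigmoid", name="classifier")(x)

model = tf.keras.Model(text_input, predictions)
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=4)  # steps 6-8
```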

The flow chart followed for achieving results is as shown below:

Test Case of the Pretrained Text Model
INPUT
This critical condition has deteriorated my health a lot. I cannot eat properly and have a really bad body rash. I can’t concentrate on anything. I can attest to
OUTPUT
This critical condition has deteriorated my health a lot. I cannot eat properly and have a really bad body rash. I can’t concentrate on anything. I can attest to it. Just keep drinking it, but at no point will that give you my rash. This, after all, is why I don’t want to know a little better. We’re dealing with a disease that is in remission — a brain. I just finished up the last question of what went wrong and how to fix it. I would like to leave you with the following things: I don’t have a chance to let go of the “I’ve never, ever had this”. I’m not a smart person, I don’t think I’m a bad person. It’s very hard to get out of bed and have the same condition. I have about a million other symptoms and I take it for granted. People are all saying “I’ve never had this.” But is there anything better? I do know many people don’t believe in this disease, but unfortunately, the only explanation given for it is that it lives in your brain. The brain is plastic. It’s not only a plastic being, but it’s plasticized. So here in the United States, it’s a very expensive organ to administer — a few weeks a week, perhaps the same person is getting sick — to treat that. The doctors prescribe what treatment I offer to patients, they go right in. It’s also a very big problem, of course, because we’re getting death comes from this disease. But the most important thing for me is to figure out how to help and treat the patient. I’ve been in the hospital for about a year but this problem has gotten me into a lot of different issues in the past several years. I had to have a broken bone all over my body, then had to take over it all until I came down and my legs had to retract them.
So the pretrained model gives output that is only loosely in the context of the mail being written. Hence, rigorous training on a large dataset is very much needed to achieve an excellent result on text generation in the context of the mail.
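A continuation like the one above can be reproduced (with different random text on each run) by priming a pretrained GPT-2 checkpoint with the input and sampling a completion. The sketch below assumes the Hugging Face gpt2 checkpoint and illustrative sampling settings.

```python
# Minimal sketch: priming GPT-2 with the test input and sampling a continuation.
# The sampling parameters are illustrative assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("This critical condition has deteriorated my health a lot. I cannot "
          "eat properly and have a really bad body rash. I can't concentrate "
          "on anything. I can attest to")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output_ids = model.generate(
    input_ids,
    max_length=300,          # prompt plus continuation length, in tokens
    do_sample=True,          # sampling gives a different continuation each run
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```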
Text Analytics Performance
The text was analysed with the various parameters mentioned below (a sketch of how such an analysis can be run follows the list):
1. Sentiment Analysis
2. Language Detection
3. Key Phrase Extraction
4. Entity Recognition
5. Personally Identifiable Information (PII) recognition
6. Opinion mining
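These six analyses correspond to the features exposed by the Azure Text Analytics service; the snippet below is a hedged sketch assuming that service is the one used, with a placeholder endpoint, key, and mail text.

```python
# Hedged sketch: running the six analyses with the azure-ai-textanalytics
# client. The service choice, endpoint, key, and sample text are assumptions.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Hypothetical mail body assembled from the kind of phrases shown below.
documents = ["Dear Sir/Ma'am, I have been a dedicated employee of the firm for "
             "years. Due to a sudden illness and my critical condition, I request "
             "a leave of absence this month until complete recovery."]

sentiment = client.analyze_sentiment(documents, show_opinion_mining=True)[0]
language = client.detect_language(documents)[0]
key_phrases = client.extract_key_phrases(documents)[0]
entities = client.recognize_entities(documents)[0]
pii = client.recognize_pii_entities(documents)[0]

print(sentiment.sentiment, language.primary_language.name)
print(key_phrases.key_phrases)
print([(e.text, e.category) for e in entities.entities])
print([(e.text, e.category) for e in pii.entities])
```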
The key phrases extracted from the document are mentioned below:
the firm, dedicated employee, month, employee regularity record, additional leave, leave of absence, evidence, health, appearance, critical condition, complete recovery, bad body rash, Dear Sir, ma’am, attentive inclination, lot, consideration, office, work, years, sudden illness, kindness, plea, claim, Cancer
Keyword Extraction Implemented using RAKE
You can find the Colab file here
The keywords obtained as results were interesting and helpful for the generation of text.
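RAKE itself is straightforward to run. The sketch below uses the rake_nltk package, which is an assumption; the Colab notebook may use a different RAKE implementation.

```python
# Minimal sketch: RAKE keyword extraction with the rake_nltk package
# (an assumed implementation; the Colab notebook may differ).
import nltk
from rake_nltk import Rake

nltk.download("stopwords")
nltk.download("punkt")

# Hypothetical mail body, echoing the key phrases listed above.
mail_text = (
    "Dear Sir/Ma'am, I have been a dedicated employee of the firm for years. "
    "Due to a sudden illness and my critical condition, I request a leave of "
    "absence this month until complete recovery."
)

rake = Rake()                        # English stopwords from NLTK by default
rake.extract_keywords_from_text(mail_text)
print(rake.get_ranked_phrases())     # candidate key phrases, highest score first
```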
The output achieved after the completion of training of the model is shown below:
The UI is created using Quill JS, and the content of the mail is added dynamically with reference to the person writing it and the reason stated.
The output achieved after generation of the template is:

References
- Fig 1: Template Gif (https://images.app.goo.gl/LdTgzTcTb2qXSBkV6)
- Fig 2: Text Gif (https://images.app.goo.gl/urHNRnBzqs83sFWG6)
- GPT-2 (https://openai.com/blog/better-language-models/)
- Text Generation (OpenAI) (https://deepai.org/machine-learning-model/text-generator)