Job Description
As a General AI Engineer at Nallas, a software services and product company, you will develop, implement, and maintain advanced AI systems that drive innovation and solve complex problems. You will collaborate with cross-functional teams to design and deploy AI solutions that enhance products and services, pushing the boundaries of what AI can achieve.

The role requires expertise in Python, data structures, and API calls, providing a solid foundation for working with generative AI models and frameworks. Strong communication skills, in both documentation and presentations, are essential for conveying complex technical concepts to technical and non-technical audiences. You will also need to work effectively both in teams, collaborating on large projects, and independently, driving research and development efforts.

Proficiency in data mining and text processing is crucial for extracting valuable insights from various data sources to train and improve generative models. Experience building retrieval-augmented generation (RAG) pipelines, a key technique in generative AI, is highly desired. Your expertise should also cover Machine Learning (ML), Natural Language Processing (NLP), Generative Adversarial Networks (GANs), Transformer architectures and BERT variants, and Hugging Face model implementation and fine-tuning, with a solid understanding of these core concepts for generative AI development. Hands-on experience with vector databases such as Chroma DB, Pinecone, Milvus, FAISS, and ArangoDB for data storage and retrieval in generative AI projects is expected.

Strong collaboration skills are essential for working with Business Analysts (BAs), development teams, and DevOps teams to bring generative AI solutions to fruition. Familiarity with deploying generative models on resource-constrained devices, such as the OpenAI Ada Embedding 002 model, can be advantageous. Proficiency with proof-of-concept (POC) tools such as Streamlit and Gradio is required for rapid prototyping and showcasing generative AI concepts.

Cloud experience, particularly with AWS Bedrock or similar platforms, is crucial for managing and deploying large-scale generative AI models, and basic knowledge of cloud services such as EC2, ECS, ECR, S3, and SageMaker is expected. Depending on the project focus, expertise in specific LLMs (such as OpenAI models, Jurassic-1 Jumbo, LLaMA, Mistral, Mixtral, Gemma, and Gemini Pro) may also be necessary.

Your adeptness in these areas will contribute to the successful development and deployment of generative AI solutions at Nallas. If you are passionate about working on cutting-edge technologies and eager to advance your career in AI and ML, this role presents an exciting opportunity for personal and professional growth. For further consideration, you may share your resume with karthicc@nallas.com.