CentricaSoft

2 Job openings at CentricaSoft
Lead AI Engineer - Chatbot & NLP | Karnataka | 5-9 years | INR Not disclosed | On-site | Full Time

As an experienced and hands-on Chatbot Development Lead, you will spearhead the design, development, and deployment of advanced AI chatbot solutions. Your proficiency in natural language processing (NLP) and text-to-SQL conversion, and in leading full-stack development with Streamlit and ReactJS, will be key in this role. You will need strong expertise in AWS Bedrock, ECS deployment, and secure authentication integration using AWS SSO. This leadership position involves both technical execution and mentoring a cross-functional development team.

**Key Responsibilities:**
- Lead end-to-end development of AI-powered chatbot solutions with text-to-SQL capabilities (see the sketch after this section).
- Architect and implement intuitive custom UIs using Streamlit for internal tools and ReactJS for production-facing applications.
- Build and optimize NLP pipelines integrated with AWS Bedrock foundation models.
- Manage containerized deployment using AWS ECS, including task definitions, load balancers, and autoscaling.
- Integrate enterprise-grade security via AWS Single Sign-On (SSO).
- Drive sprint planning, code reviews, and delivery milestones for the chatbot team.
- Collaborate with product managers, data scientists, and DevOps engineers to ensure seamless delivery.
- Stay current with the latest in AI, LLMs, and cloud technologies to drive innovation.

**Required Skills:**
- Strong hands-on experience in Python, Streamlit, ReactJS, and full-stack development.
- Expertise in text-to-SQL modeling, fine-tuning LLMs, and natural language interfaces.
- Proficiency with AWS Bedrock and deploying generative AI applications on AWS.
- Experience with Docker and ECS (Elastic Container Service) for scalable deployment.
- Knowledge of IAM roles, security best practices, and AWS SSO integration.
- Proven ability to lead and mentor a team, manage priorities, and deliver high-quality solutions.
- Familiarity with CI/CD pipelines, logging/monitoring, and Git-based workflows.

**Preferred Qualifications:**
- Experience with vector databases and semantic search (e.g., OpenSearch, Pinecone).
- Prior experience building internal developer tools or enterprise chat interfaces.
- Understanding of RAG (Retrieval-Augmented Generation) and LangChain or similar frameworks.
- AWS certifications (e.g., AWS Certified Solutions Architect, Machine Learning Specialty) are a plus.

In this role, you will work with cutting-edge LLM and AI technologies, hold a leadership position with autonomy and decision-making power, receive competitive compensation and benefits, and gain exposure to the full cloud-native deployment lifecycle on AWS.
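To make the text-to-SQL responsibility above concrete, here is a minimal sketch of invoking an AWS Bedrock foundation model from Python via boto3's `bedrock-runtime` Converse API to translate a natural-language question into SQL. The model ID, table schema, and prompt are illustrative assumptions, not part of the posting.

```python
# Minimal sketch: natural-language question -> SQL via AWS Bedrock (Converse API).
# Assumes boto3 is configured with credentials and Bedrock model access is enabled;
# the model ID and table schema below are illustrative, not from the posting.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

SCHEMA = "orders(order_id INT, customer_id INT, amount DECIMAL, created_at DATE)"  # hypothetical

def text_to_sql(question: str) -> str:
    """Ask a Bedrock foundation model to translate a question into a SQL query."""
    prompt = (
        f"Given the table schema: {SCHEMA}\n"
        f"Write a single SQL query answering: {question}\n"
        "Return only the SQL, with no explanation."
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model choice
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.0},
    )
    return response["output"]["message"]["content"][0]["text"].strip()

if __name__ == "__main__":
    print(text_to_sql("Total order amount per customer in 2024"))
```

In a production chatbot of the kind described here, the generated SQL would additionally be validated and executed against the database, with a Streamlit or ReactJS front end rendering the result.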

Data Engineer | Bengaluru, Karnataka, India | 2-5 years | INR Not disclosed | Remote | Full Time

**Title:** Data Engineer - PySpark

**About CentricaSoft:** CentricaSoft is a data-driven technology partner delivering end-to-end data solutions for our clients. We design, build, and scale modern data platforms that empower business decisions. We're growing our data engineering team and seeking a hands-on PySpark Developer who thrives in a fast-paced, collaborative environment.

**Role Summary**
We are looking for a PySpark Developer with solid AWS, Azure, or GCP experience and in-depth knowledge of CI/CD pipelines, working with Parquet/JSON/Avro/Iceberg/Databricks. The candidate will work on multiple ETL data processing integrations, including API data pulls, database extracts, and handling of semi-structured data. Strong SQL skills and excellent communication are essential. At least one cloud certification in the data engineering area from AWS (preferred), Azure, or GCP is required.

**Key Responsibilities**
- Develop, optimize, and maintain PySpark-based data processing pipelines for large-scale data workloads (see the sketch after this section).
- Design and implement ETL/ELT processes, including API data ingestion, database extracts, and ingestion of semi-structured data (JSON, Parquet, Avro, etc.).
- Build and maintain CI/CD pipelines for data engineering workloads (code, tests, deployments, and monitoring) with data quality checks, logging, and error handling to ensure robust data pipelines.
- Optimize SQL queries and data models for performance and scalability.
- Contribute to documentation, best practices, and knowledge sharing within the team.

**Qualifications**
- 2-5 years of hands-on experience in data engineering.
- Proficiency in PySpark and Spark-based data processing with Parquet/JSON/Avro/Iceberg/Databricks.
- Solid cloud experience (AWS, Azure, or GCP) with data services (e.g., AWS Glue/SageMaker, EMR/Redshift; Azure Data Factory/Databricks; GCP BigQuery/Dataflow/Dataproc; Airflow, etc.).
- Strong understanding of CI/CD concepts and experience implementing pipelines (e.g., Git, CI servers, containerization, automated testing, deployment automation).
- Deep SQL expertise (query tuning, performance optimization, complex joins, window functions).
- Excellent communication skills and the ability to collaborate with cross-functional teams.
- Familiarity with big data processing frameworks, data modeling, and data governance concepts.
- Certifications: at least one cloud certification from AWS or Azure in the data engineering area.

**Nice-to-have**
- Experience with streaming data (Kafka/Kinesis) and real-time processing.
- Knowledge of data visualization or BI tools (Power BI, Tableau).

**What We Offer**
- Hybrid work culture with a mix of on-site and remote work
- Competitive salary and comprehensive benefits
- Flexible work arrangements and a supportive, collaborative team
- Opportunities to work on impactful, scalable data platforms
- Professional development support and certification encouragement
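To illustrate the kind of pipeline this role describes, here is a minimal sketch of a PySpark job that ingests semi-structured JSON, applies a basic data quality filter, uses a window function, and writes partitioned Parquet. The paths, column names, and partitioning scheme are illustrative assumptions, not from the posting.

```python
# Minimal sketch: JSON ingest -> data quality filter -> window function -> Parquet.
# Paths and column names are illustrative assumptions, not from the posting.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Ingest semi-structured JSON (schema inferred here; a production job would pin one).
orders = spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical path

# Basic data quality check: drop rows missing keys or with non-positive amounts.
clean = orders.filter(
    F.col("order_id").isNotNull()
    & F.col("customer_id").isNotNull()
    & (F.col("amount") > 0)
)

# Window function: rank each customer's orders by amount and keep the top 3.
w = Window.partitionBy("customer_id").orderBy(F.col("amount").desc())
top_orders = (
    clean.withColumn("rank", F.row_number().over(w))
    .filter(F.col("rank") <= 3)
)

# Write partitioned Parquet for downstream analytics.
(
    top_orders.withColumn("order_date", F.to_date("created_at"))
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/top_orders/")  # hypothetical path
)

spark.stop()
```

In practice, a job like this would run largely unchanged on EMR, Databricks, or Glue, wrapped in the CI/CD workflow (tests, deployment automation, monitoring) the posting calls for.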