11 Job openings at Bot Consulting
About Bot Consulting

Bot Consulting is a management consulting company based at 16 Rue Rabelais, Angers, France.

Data Scientist

Kolkata, Mumbai, New Delhi, Hyderabad, Pune, Chennai, Bengaluru

2 - 5 years

INR 4.0 - 7.0 Lacs P.A.

Work from Office

Full Time

We are seeking a skilled Data Scientist with 2 to 5 years of experience, specializing in Machine Learning, PySpark, and Databricks, with a proven track record in long-range demand and sales forecasting. This role is crucial to the development and implementation of an automotive OEM's next-generation Intelligent Forecast Application. The position involves building, optimizing, and deploying large-scale machine learning models for complex, long-term forecasting challenges using distributed computing frameworks, specifically PySpark on the Databricks platform. The work will directly support strategic decision-making across the automotive value chain, including long-term demand planning, production scheduling, and inventory optimization. The ideal candidate will have hands-on experience developing and deploying ML models for forecasting, particularly long-range predictions, in a production environment using PySpark and Databricks. This role requires strong technical skills in machine learning, big data processing, and time series forecasting, combined with the ability to work effectively within a technical team to deliver robust and scalable long-range forecasting solutions.

Role & Responsibilities:
- Machine Learning Model Development & Implementation for Long-Range Forecasting: Design, develop, and implement scalable and accurate machine learning models specifically for long-range demand and sales forecasting challenges.
- Data Processing and Feature Engineering with PySpark: Build and optimize large-scale data pipelines for ingesting, cleaning, transforming, and engineering features relevant to long-range forecasting from diverse, complex automotive datasets using PySpark on Databricks.
- Deployment and MLOps on Databricks: Develop and implement robust code for model training, inference, and deployment of long-range forecasting models directly within the Databricks platform.
- Performance Evaluation & Optimization: Evaluate long-range forecasting model performance using relevant metrics (e.g., MAE, RMSE, MAPE, including metrics suitable for longer horizons) and optimize models and data processing pipelines for improved accuracy and efficiency within the PySpark/Databricks ecosystem.
- Work effectively as part of a technical team, collaborating with other data scientists, data engineers, and software developers to integrate ML long-range forecasting solutions into the broader forecasting application built on Databricks.
- Communicate technical details and forecasting results effectively within the technical team.

Requirements:
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Applied Mathematics, or a closely related quantitative field.
- 2 to 5 years of hands-on experience in a Data Scientist or Machine Learning Engineer role.
- Proven experience developing and deploying machine learning models in a production environment.
- Demonstrated experience in long-range demand and sales forecasting.
- Significant hands-on experience with PySpark for large-scale data processing and machine learning.
- Extensive practical experience with the Databricks platform, including notebooks, jobs, and ML capabilities.
- Expert proficiency in PySpark and the Databricks platform.
- Strong proficiency in Python and SQL.
- Experience with machine learning libraries compatible with PySpark (e.g., MLlib, or integrating other libraries).
- Experience with advanced time series forecasting techniques and their implementation.
- Experience with distributed computing concepts and optimization techniques relevant to PySpark.
- Hands-on experience with a major cloud provider (Azure, AWS, or GCP) in the context of using Databricks.
- Familiarity with MLOps concepts and tools used in a Databricks environment.
- Experience with data visualization tools.
- Analytical skills with a deep understanding of machine learning algorithms and their application to forecasting.
- Ability to troubleshoot and solve complex technical problems related to big data and machine learning workflows.

Preferred / Good to have:
- Experience with specific long-range forecasting methodologies and libraries used in a distributed environment.
- Experience with real-time or streaming data processing using PySpark for near-term forecasting components that might complement long-range models.
- Familiarity with automotive data types relevant to long-range forecasting (e.g., economic indicators affecting car sales, long-term market trends).
- Experience with distributed version control systems (e.g., Git).
- Knowledge of agile development methodologies.

Preferred location is Kolkata; should be open to travel to Jaipur and Bangalore.
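For reference, the point-forecast error metrics named in this posting (MAE, RMSE, MAPE) can be computed as in the following plain-Python sketch. This is an illustrative example for readers unfamiliar with the metrics, not code from the role; in practice these would typically be computed at scale with PySpark or a metrics library.

```python
import math

def forecast_errors(actual, predicted):
    """Compute MAE, RMSE, and MAPE (%) for a forecast against actuals."""
    errors = [a - p for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    # MAPE is undefined when an actual value is zero; skip those points.
    pct = [abs(e / a) for e, a in zip(errors, actual) if a != 0]
    mape = 100.0 * sum(pct) / len(pct)
    return mae, rmse, mape

# Hypothetical 4-period demand series and its forecast
actual = [100.0, 120.0, 130.0, 110.0]
predicted = [90.0, 125.0, 120.0, 115.0]
mae, rmse, mape = forecast_errors(actual, predicted)
```

For long horizons, these same formulas are usually reported per forecast step (e.g., MAPE at 6, 12, and 24 months out) so that error growth over the horizon is visible.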

Lead Data Scientist

Kolkata, Mumbai, New Delhi, Hyderabad, Pune, Chennai, Bengaluru

5 - 10 years

INR 7.0 - 12.0 Lacs P.A.

Work from Office

Full Time

[{"Salary":null , "Remote_Job":false , "Posting_Title":"Lead Data Scientist" , "Is_Locked":false , "City":"Kolkata" , "Industry":"IT Services","Job_Description":" We are seeking an exceptional and highly motivated Lead Data Scientist with a PhD in Data Science, Computer Science, Applied Mathematics, Statistics, or a closely related quantitative field, to spearhead the design, development, and deployment of an automotive OEM\u2019s next-generation Intelligent Forecast Application. This pivotal role will leverage cutting-edge machine learning, deep learning, and statistical modeling techniques to build a robust, scalable, and accurate forecasting system crucial for strategic decision-decision-making across the automotive value chain, including demand planning, production scheduling, inventory optimization, predictive maintenance, and new product introduction. The successful candidate will be a recognized expert in advanced forecasting methodologies, possess a strong foundation in data engineering and MLOps principles, and demonstrate a proven ability to translate complex research into tangible, production-ready applications within a dynamic industrial environment. This role demands not only deep technical expertise but also a visionary approach to leveraging data and AI to drive significant business impact for a leading automotive OEM. Role & Responsibilities: Strategic Leadership & Application Design: Lead the end-to-end design and architecture of the Intelligent Forecast Application, defining its capabilities, modularity, and integration points with existing enterprise systems (e.g., ERP, SCM, CRM). Develop a strategic roadmap for forecasting capabilities, identifying opportunities for innovation and the adoption of emerging AI/ML techniques (e.g., generative AI for scenario planning, reinforcement learning for dynamic optimization). 
Translate complex business requirements and automotive industry challenges into well-defined data science problems and technical specifications. Advanced Model Development & Research: Design, develop, and validate highly accurate and robust forecasting models using a variety of advanced techniques, including: Time Series Analysis: ARIMA, SARIMA, Prophet, Exponential Smoothing, State-space models. Machine Learning: Gradient Boosting (XGBoost, LightGBM), Random Forests, Support Vector Machines. Deep Learning: LSTMs, GRUs, Transformers, and other neural network architectures for complex sequential data. Probabilistic Forecasting: Quantile regression, Bayesian methods to capture uncertainty. Hierarchical & Grouped Forecasting: Managing forecasts across multiple product hierarchies, regions, and dealerships. Incorporate diverse data sources, including historical sales, market trends, economic indicators, competitor data, internal operational data (e.g., production schedules, supply chain disruptions), external events, and unstructured data. Conduct extensive exploratory data analysis (EDA) to identify patterns, anomalies, and key features influencing automotive forecasts. Stay abreast of the latest academic research and industry advancements in forecasting, machine learning, and AI, actively evaluating and advocating for their practical application within the OEM. Application Development & Deployment (MLOps): Architect and implement scalable data pipelines for ingestion, cleaning, transformation, and feature engineering of large, complex automotive datasets. Develop robust and efficient code for model training, inference, and deployment within a production environment. Implement MLOps best practices for model versioning, monitoring, retraining, and performance management to ensure the continuous accuracy and reliability of the forecasting application. 
Collaborate closely with Data Engineering, Software Development, and IT Operations teams to ensure seamless integration, deployment, and maintenance of the application. Performance Evaluation & Optimization: Define and implement rigorous evaluation metrics for forecasting accuracy (e.g., MAE, RMSE, MAPE, sMAPE, wMAPE, Pinball Loss) and business impact. Perform A/B testing and comparative analyses of different models and approaches to improve forecasting performance continuously. Identify and mitigate sources of bias and uncertainty in forecasting models. Collaboration & Mentorship: Work cross-functionally with various business units (e.g., Sales, Marketing, Supply Chain, Manufacturing, Finance, Product Development) to understand their forecasting needs and integrate solutions. Communicate complex technical concepts and model insights clearly and concisely to both technical and non-technical stakeholders. Provide technical leadership and mentorship to junior data scientists and engineers, fostering a culture of innovation and continuous learning. Potentially contribute to intellectual property (patents) and present findings at internal and external conferences. Requirements PhD in Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, or a closely related quantitative field. 5+ years of hands-on experience in a Data Scientist or Machine Learning Engineer role, with a significant focus on developing and deploying advanced forecasting solutions in a production environment. Demonstrated experience designing and developing intelligent applications, not just isolated models. Experience in the automotive industry or a similar complex manufacturing/supply chain environment is highly desirable. Technical Skills: Expert proficiency in Python (Numpy, Pandas, Scikit-learn, Stats models) and/or R. Strong proficiency in SQL. 
Machine Learning/Deep Learning Frameworks: Extensive experience with TensorFlow, PyTorch, Keras, or similar deep learning libraries. Forecasting Specific Libraries: Proficiency with forecasting libraries like Prophet, Stats models, or specialized time series packages. Data Warehousing & Big Data Technologies: Experience with distributed computing frameworks (e.g., Apache Spark, Hadoop) and data storage solutions (e.g., Snowflake, Data bricks, S3, ADLS). Cloud Platforms: Hands-on experience with at least one major cloud provider (Azure, AWS, GCP) for data science and ML deployments. MLOps: Understanding and practical experience with MLOps tools and practices (e.g., MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines). Data Visualization: Proficiency with tools like Tableau, Power BI, or similar for creating compelling data stories and dashboards. Analytical Prowess: Deep understanding of statistical inference, experimental design, causal inference, and the mathematical foundations of machine learning algorithms. Problem Solving: Proven ability to analyze complex, ambiguous problems, break them down into manageable components, and devise innovative solutions. Preferred Location is Kolkata + Should be open to travel to Jaipur & Bangalore \u200b Preferred / Good to have : Publications in top-tier conferences or journals related to forecasting, time series analysis, or applied machine learning. Experience with real-time forecasting systems or streaming data analytics. Familiarity with specific automotive data types (e.g., telematics, vehicle sensor data, dealership data, market sentiment). Experience with distributed version control systems (e.g., Git). Knowledge of agile development methodologies. Signs you may be a great fit : Impact : Play a pivotal role in shaping a rapidly growing venture studio. Culture : Thrive in a collaborative, innovative environment that values creativity and ownership. Growth : Access professional development opportunities and mentorship. 
Benefits : Competitive salary, health/wellness packages, and flexible work options. ","Work_Experience":"5+ years","Job_Type":"Full time","Job_Opening_Name":"Lead Data Scientist","State":"West Bengal" , "Country":"India" , "Zip_Code":"700001" , "id":"121722000001216035" , "Publish":true , "Date_Opened":"2025-05-21" , "Keep_on_Career_Site":false}]

Senior Technical Support Engineer

Jaipur

5 - 10 years

INR 5.0 - 9.0 Lacs P.A.

Work from Office

Full Time

We are building a team that is responsible for the developer experience. On a daily basis we work with our customers and the developer community to ensure they are building software that is efficient and secure across the whole supply chain. Our team helps customers troubleshoot and investigate issues, diagnosing and resolving core product-related problems. We expand and enhance their experience through the development and maintenance of third-party integrations that extend our API, CLI, and webhook capabilities, while curating a public knowledge base that helps customers resolve common issues independently.

Roles & Responsibilities:
- Technical Support: Provide top-notch technical support to our customers, assisting them with any product-related issues, inquiries, or challenges they may encounter.
- Problem Solving: Diagnose and troubleshoot technical issues, replicating them in local environments and collaborating closely with customers to identify root causes and provide effective solutions.
- Documentation: Create and maintain detailed documentation, including knowledge base articles and FAQs, to assist customers in resolving common issues independently.
- Software Expertise: Leverage your expertise in application programming to build and maintain third-party integrations such as Terraform, GitHub Actions, and CircleCI.
- Customer Advocacy: Act as a customer advocate within the company, sharing customer feedback, feature requests, and insights with the product development team to improve our offerings.
- Working Hours Flexibility: Be prepared to work in shifts or with on-call availability (including weekends) to accommodate our global customer base across different time zones, including AMER, APAC, and EMEA.

Requirements, Qualifications & Skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years in Technical Product Support, a similar role, or a background in software development.
- Strong application programming experience; proficient in Java, Python, or JavaScript.
- Knowledge of software packaging and distribution concepts and tools.
- Excellent problem-solving and troubleshooting skills.
- An enthusiastic and effective communicator (in English): you should be able to appeal to and communicate with both technical and non-technical listeners alike.
- Ability to work independently and as part of a team.
- Willingness to work flexible shifts to support customers in different time zones.

Preferred Skills & Certifications:
- Professional experience building, testing, deploying, debugging, and maintaining complex systems in production environments, or equivalent experience in open-source ecosystems and projects.
- Familiarity with UNIX-like systems (Linux/WSL/macOS).
- Familiarity with artifact and package management, or the software supply chain.
- Familiarity with cloud-based infrastructure, data structures and algorithms, storage systems, source control, and continuous integration.
- Amazon Web Services (AWS) / AWS certifications.
- Significant contributions to open-source projects.
- REST, GraphQL, gRPC API design.
- Docker, OCI, ORAS, Kubernetes (k3s, k8s).
- Sigstore, Cosign, keyless signatures, signature attestation.
- Grafeas, Kritis, metadata provenance.

Signs you may be a great fit:
- Impact: Play a pivotal role in shaping a rapidly growing venture studio.
- Culture: Thrive in a collaborative, innovative environment that values creativity and ownership.
- Growth: Access professional development opportunities and mentorship.
- Benefits: Competitive salary, health/wellness packages, and flexible work options.

AWS Solution Architect

Jaipur

7 - 12 years

INR 13.0 - 18.0 Lacs P.A.

Work from Office

Full Time

As an AWS Managed Services Architect, you will play a pivotal role in architecting and optimizing the infrastructure and operations of a complex Data Lake environment for BOT clients. You'll leverage your strong expertise with AWS services to design, implement, and maintain scalable and secure data solutions while driving best practices. You will work collaboratively with delivery teams across the U.S., Costa Rica, Portugal, and other regions, ensuring a robust and seamless Data Lake architecture. In addition, you'll proactively engage with clients to support their evolving needs, oversee critical AWS infrastructure, and guide teams toward innovative and efficient solutions. This role demands a hands-on approach, including designing solutions, troubleshooting, optimizing performance, and maintaining operational excellence.

Roles & Responsibilities:
- AWS Data Lake Architecture: Design, build, and support scalable, high-performance architectures for complex AWS Data Lake solutions.
- AWS Services Expertise: Deploy and manage cloud-native solutions using a wide range of AWS services, including but not limited to:
  - Amazon EMR (Elastic MapReduce): Optimize and maintain EMR clusters for large-scale big data processing.
  - AWS Batch: Design and implement efficient workflows for batch processing workloads.
  - Amazon SageMaker: Enable data science teams with scalable infrastructure for model training and deployment.
  - AWS Glue: Develop ETL/ELT pipelines using Glue to ensure efficient data ingestion and transformation.
  - AWS Lambda: Build serverless functions to automate processes and handle event-driven workloads.
  - IAM Policies: Define and enforce fine-grained access controls to secure cloud resources and maintain governance.
  - AWS IoT & Timestream: Design scalable solutions for collecting, storing, and analyzing time-series data.
  - Amazon DynamoDB: Build and optimize high-performance NoSQL database solutions.
- Data Governance & Security: Implement best practices to ensure data privacy, compliance, and governance across the data architecture.
- Performance Optimization: Monitor, analyze, and tune AWS resources for performance efficiency and cost optimization.
- Infrastructure as Code (IaC): Develop and manage IaC using AWS CloudFormation, Terraform, or equivalent tools to automate infrastructure deployment.
- Client Collaboration: Work closely with stakeholders to understand business objectives and ensure solutions align with client needs.
- Team Leadership & Mentorship: Provide technical guidance to delivery teams through design reviews, troubleshooting, and strategic planning.
- Continuous Innovation: Stay current with AWS service updates, industry trends, and emerging technologies to enhance solution delivery.
- Documentation & Knowledge Sharing: Create and maintain architecture diagrams, SOPs, and internal/external documentation to support ongoing operations and collaboration.

Requirements, Qualifications & Skills:
- 7+ years of hands-on experience in cloud architecture and infrastructure (preferably AWS).
- 3+ years of experience specifically in architecting and managing Data Lake or big data solutions on AWS.
- Bachelor's degree in Computer Science, Information Systems, or a related field (preferred).
- AWS certifications such as Solutions Architect Professional or Big Data Specialty.
- Experience with Snowflake, Matillion, or Fivetran in hybrid cloud environments.
- Familiarity with Azure or GCP cloud platforms.
- Understanding of machine learning pipelines and workflows.

Technical Skills:
- Expertise in AWS services such as EMR, Batch, SageMaker, Glue, Lambda, IAM, IoT, Timestream, DynamoDB, and more.
- Strong programming skills in Python for scripting and automation.
- Proficiency in SQL and performance tuning for data pipelines and queries.
- Experience with IaC tools like Terraform or CloudFormation.
- Knowledge of big data frameworks such as Apache Spark, Hadoop, or similar.
- Data Governance & Security: Proven ability to design and implement secure solutions, with strong knowledge of IAM policies and compliance standards.
- Problem Solving: Analytical and problem-solving mindset to resolve complex technical challenges.
- Collaboration: Exceptional communication skills to engage with technical and non-technical stakeholders; ability to lead cross-functional teams and provide mentorship.

AWS Admin

Jaipur

10 - 15 years

INR 3.0 - 7.0 Lacs P.A.

Work from Office

Full Time

We are seeking a highly skilled and experienced AWS Administrator to join a long-term project (12+ months), fully allocated and 100% hands-on. This role backfills a senior AWS Admin with 10 to 15 years of experience and requires deep technical capability across AWS infrastructure services. This is not a team leadership role: the ideal candidate will operate independently, take full ownership of AWS administration tasks, and contribute directly to maintaining and optimizing cloud operations.

Roles & Responsibilities:
- AWS Infrastructure Management: Provision, configure, and maintain AWS services such as EC2, S3, IAM, VPC, Lambda, RDS, CloudWatch, CloudTrail, and more.
- Monitoring & Incident Response: Set up monitoring, logging, and alerting solutions; respond to and resolve infrastructure issues proactively.
- Security & IAM: Manage IAM roles, policies, and user access with a strong focus on security best practices and compliance requirements.
- Automation & Scripting: Automate routine tasks using scripting (Bash, Python) and the AWS CLI/SDK.
- Infrastructure as Code (IaC): Use tools like Terraform or CloudFormation to manage and automate infrastructure deployments and changes.
- Cost Optimization: Monitor resource usage and implement cost-control strategies to optimize AWS spending.
- Backup & Disaster Recovery: Manage backup strategies and ensure systems are resilient and recoverable.
- Documentation: Maintain detailed and up-to-date documentation of AWS environments, standard operating procedures, and runbooks.

Qualifications & Skills:
- 10+ years of hands-on AWS administration experience.
- Strong understanding of AWS core services (EC2, S3, IAM, VPC, Lambda, RDS, etc.).
- Experience with scripting (Python, Bash, or PowerShell) and automation tooling.
- Proven expertise in using Terraform or CloudFormation.
- Deep knowledge of IAM policy creation and security best practices.
- Experience with monitoring tools such as CloudWatch, Prometheus, or third-party APM tools.
- Familiarity with CI/CD pipelines and DevOps principles.
- Strong troubleshooting skills with the ability to resolve complex infrastructure issues independently.
- Excellent communication skills with the ability to work effectively with remote teams.
- Comfortable working during US Eastern Time zone hours.

Preferred Qualifications:
- AWS certifications (e.g., SysOps Administrator Associate, Solutions Architect Associate/Professional).
- Experience in hybrid environments or with other cloud platforms (Azure, GCP).
- Familiarity with Snowflake, GitLab, or similar DevOps tooling.

Work Environment:
- Full-time remote, with a preference for nearshore profiles to ensure full overlap with US ET hours.
- Direct individual contribution; no team management responsibilities.
- Collaborative work with internal and client teams across multiple regions.

Signs you may be a great fit:
- Impact: Play a pivotal role in shaping a rapidly growing venture studio.
- Culture: Thrive in a collaborative, innovative environment that values creativity and ownership.
- Growth: Access professional development opportunities and mentorship.
- Benefits: Competitive salary, health/wellness packages, and flexible work options.

Lead Architect

Jaipur

6 - 11 years

INR 20.0 - 25.0 Lacs P.A.

Work from Office

Full Time

We are looking for people experienced with data architecture, design, and the development of database mapping and migration processes. This person will have direct experience optimizing new and existing databases and data pipelines and implementing advanced capabilities while ensuring data integrity and security. Ideal candidates will have strong communication skills and the ability to guide clients and project team members, acting as a key point of contact for direction and expertise.

Roles & Responsibilities:
- Design, develop, and optimize database architectures and data pipelines.
- Ensure data integrity and security across all databases and data pipelines.
- Lead and guide clients and project team members, acting as a key point of contact for direction and expertise.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Manage and support large-scale technology programs, ensuring they meet business objectives and compliance requirements.
- Develop and implement migration, DevOps, and ETL/ELT ingestion pipelines using tools such as DataStage, Informatica, and Matillion.
- Utilize project management skills to work effectively within Scrum and Agile development methods.
- Create and leverage metrics to develop actionable and measurable insights, influencing business decisions.

Qualifications & Skills:
- 7+ years of proven work experience in data warehousing, business intelligence (BI), and analytics.
- 3+ years of experience as a Data Architect.
- 3+ years of experience working on cloud platforms (AWS, Azure, GCP).
- Bachelor's degree (BA/BS) in Computer Science, Information Systems, Mathematics, MIS, or a related field.
- Strong understanding of migration processes, DevOps, and ETL/ELT ingestion pipelines.
- Proficiency in tools such as DataStage, Informatica, and Matillion.
- Excellent project management skills and experience with Scrum and Agile development methods.
- Ability to develop actionable and measurable insights and create metrics to influence business decisions.
- Previous consulting experience managing and supporting large-scale technology programs.

Nice to Have:
- 6 to 12 months of experience working with Snowflake.
- Understanding of Snowflake design patterns and migration architectures.
- Knowledge of Snowflake roles, user security, and capabilities like Snowpipe.
- Proficiency in SQL scripting.
- Cloud experience on AWS (Azure and GCP are also beneficial).
- Python scripting skills.

Signs you may be a great fit:
- Impact: Play a pivotal role in shaping a rapidly growing venture studio.
- Culture: Thrive in a collaborative, innovative environment that values creativity and ownership.
- Growth: Access professional development opportunities and mentorship.
- Benefits: Competitive salary, health/wellness packages, and flexible work options.

Lead Data Analyst - Power BI

Jaipur

5 - 8 years

INR 10.0 - 14.0 Lacs P.A.

Work from Office

Full Time

We are seeking an experienced and proactive Lead Data Analyst (Power BI) to lead the development of scalable analytics solutions and guide our growing data team in Jaipur. The ideal candidate will bring strong expertise in Power BI, SQL, and Python, along with experience on cloud data platforms such as Snowflake. You will be responsible for designing data models, leading dashboard and reporting initiatives, mentoring junior analysts, and enabling business stakeholders to make data-driven decisions.

Roles & Responsibilities:
- Lead the development and enhancement of interactive Power BI dashboards, reports, and data visualizations tailored to business requirements.
- Architect and optimize data models in Power BI for performance and scalability (including DAX and Power Query transformations).
- Build and manage robust end-to-end data pipelines, including data extraction, transformation, and loading (ETL/ELT).
- Collaborate with cross-functional stakeholders to translate business needs into technical solutions and actionable insights.
- Perform advanced data analysis using SQL and Python to uncover trends, patterns, and opportunities.
- Ensure data governance, quality, and consistency across all reporting assets and data platforms.
- Mentor junior analysts and contribute to best practices in reporting, documentation, and code review.
- Act as a bridge between business and engineering teams, ensuring alignment and impact from analytics projects.
- Work with cloud data warehouses such as Snowflake or similar platforms for scalable analytics.

Skills & Qualifications:
- 6+ years of experience in Data Analytics, Business Intelligence, or Data Engineering roles.
- Proven expertise in Power BI, including dashboard development, DAX, data modeling, and Power Query.
- Advanced proficiency in SQL and the ability to work with large, complex datasets.
- Programming experience in Python for data manipulation, automation, or machine learning (preferred).
- Strong understanding of ETL/ELT concepts, data warehousing, and modern cloud data platforms (Snowflake preferred).
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- Excellent analytical thinking, problem-solving, and attention to detail.
- Strong communication skills and the ability to present data insights to non-technical stakeholders.

Preferred Qualifications:
- Hands-on experience with Snowflake, Redshift, or BigQuery.
- Familiarity with Airflow, dbt, or other orchestration tools.
- Power BI certification (e.g., PL-300: Microsoft Power BI Data Analyst).
- Experience with Agile methodologies and managing sprint-based BI deliverables.
- Exposure to version control (Git) and CI/CD practices in data analytics projects.

Signs you may be a great fit:
- Impact: Play a pivotal role in shaping a rapidly growing venture studio.
- Culture: Thrive in a collaborative, innovative environment that values creativity and ownership.
- Growth: Access professional development opportunities and mentorship.
- Benefits: Competitive salary, health/wellness packages, and flexible work options.

Senior Cloud Engineer

Jaipur

3 - 6 years

INR 7.0 - 11.0 Lacs P.A.

Work from Office

Full Time

We are seeking a talented Senior Engineer with 3\u20136 years of hands-on experience to join our growing Cloud Architecture & Engineering team. The right candidate is someone who has deep expertise in building a variety of cloud solutions and is passionate about working with our customers, partners, and employees to drive the business forward. This role is suited for someone passionate about building robust, scalable solutions that drive business impact. The successful candidate will play a key role in the full software development lifecycle within the AWS ecosystem. This includes analyzing requirements, designing scalable solutions, writing clean and efficient code, integrating third-party systems, and ensuring performance and security best practices. The role involves close collaboration with cross-functional teams to ensure the delivery of impactful, user-friendly AWS cloud solutions across our diverse and innovative customer base. You'll work with the latest technologies and with disruptive customers looking to bring innovative ideas to the market. Roles & Responsibilities: Infrastructure as Code (IaC): Design and implement scalable infrastructure using Terraform, CDK, or Pulumi to support cloud-native applications and services. Continuous Integration & Delivery (CI/CD): Develop, maintain, and optimize CI/CD pipelines to automate the deployment and testing of cloud solutions. Solution Engineering: Collaborate closely with customer engineering teams to implement cloud solutions as defined in the project backlog and architecture design. Custom Automation: Create and maintain custom automation scripts and utilities using Bash, PowerShell, or Lambda functions to streamline operations. Troubleshooting & Optimization: Investigate and resolve system, deployment, and integration issues across development, staging, and production environments. 
Mentorship & Knowledge Sharing: Guide, train, and mentor junior engineers on best practices in cloud development, security, and DevOps tooling. Qualifications & Skills Bachelor's degree in Computer Science, Engineering, or a related technical field. 3 to 6 years of experience as a Cloud Engineer, with a proven track record of delivering successful projects on the Cloud platform. Expert understanding of cloud infrastructure and implementing Infrastructure-as-Code (IaC). Expert understanding of building and working with CI/CD pipelines and GitOps methodology. Excellent knowledge of containerization, Kubernetes, or other container management platforms. Proficient in Networking and IAM security. Proficient with any of: Python, Go, C#, Node.js. Great understanding of Agile project delivery. Great verbal and written communication skills. Enthusiasm for working in a startup environment and the ability to be cross-functional. Possess a natural curiosity and excitement for learning new technology. Experience in some of the following: Migrations, CI/CD, IaC (Terraform, CloudFormation, AWS CDK, Pulumi, etc.), Scripting, Containers (Kubernetes, Docker, ECS, Fargate, etc.), Cloud Security Best Practices, Networking, Incident, release, problem, and change management processes. Experience working in a client-facing environment. Experience working on agile projects. Consultancy/advisory experience. Willingness to work flexible shifts to support customers in different time zones. Signs You May Be a Great Fit Impact: Play a pivotal role in shaping a rapidly growing venture studio with Cloud-driven digital transformation. Culture: Thrive in a collaborative, innovative environment that values creativity, ownership, and agility. Growth: Access professional development opportunities and mentorship from experienced peers. Benefits: Competitive salary, wellness packages, and flexible work arrangements that support your lifestyle and goals.
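To give candidates a flavor of the "Custom Automation" responsibility above, here is a minimal, hypothetical sketch of a Lambda-style handler that flags EC2 instances missing a required cost-allocation tag. All names (the `CostCenter` tag, the instance dicts) are illustrative, not taken from the posting; in a real Lambda the instance list would come from boto3 rather than the event payload.

```python
# Hypothetical sketch: flag EC2 instances that lack a required tag.
# The input mimics the shape of an EC2 DescribeInstances result.

REQUIRED_TAG = "CostCenter"  # illustrative tag name

def untagged_instances(instances, required_tag=REQUIRED_TAG):
    """Return IDs of instances whose "Tags" list lacks the required key."""
    flagged = []
    for inst in instances:
        tag_keys = {t["Key"] for t in inst.get("Tags", [])}
        if required_tag not in tag_keys:
            flagged.append(inst["InstanceId"])
    return flagged

def handler(event, context=None):
    # In a real Lambda, instances would come from boto3's
    # ec2.describe_instances(); here they arrive in the event for testability.
    return {"untagged": untagged_instances(event.get("instances", []))}

if __name__ == "__main__":
    sample = [
        {"InstanceId": "i-001", "Tags": [{"Key": "CostCenter", "Value": "42"}]},
        {"InstanceId": "i-002", "Tags": [{"Key": "Name", "Value": "web"}]},
    ]
    print(handler({"instances": sample}))  # {'untagged': ['i-002']}
```

Keeping the pure filtering logic separate from the AWS calls is what makes this kind of automation easy to unit-test in a CI/CD pipeline.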

Lead Cloud Engineer

Kolkata, Mumbai, New Delhi, Hyderabad, Pune, Chennai, Bengaluru

6 - 11 years

INR 8.0 - 13.0 Lacs P.A.

Work from Office

Full Time

[{"Salary":null , "Remote_Job":false , "Posting_Title":"Lead Cloud Engineer" , "Is_Locked":false , "City":"Jaipur" , "Industry":"IT Services","Job_Description":" We are seeking a talented Senior Engineer with 6+ years of hands-on experience to join our growing Cloud Architecture & Engineering team. The right candidate is someone who has deep expertise in building a variety of cloud solutions and is passionate about working with our customers, partners, and employees to drive the business forward. This role is suited for someone passionate about building robust, scalable solutions that drive business impact. The successful candidate will play a key role in the full software development lifecycle within the AWS ecosystem. This includes analyzing requirements, designing scalable solutions, writing clean and efficient code, integrating third-party systems, and ensuring performance and security best practices. The role involves close collaboration with cross-functional teams to ensure the delivery of impactful, user-friendly AWS cloud solutions across our diverse and innovative customer base. You\u2019ll work with the latest technologies and with disruptive customers looking to bring innovative ideas to the market. Roles & Responsibilities: Infrastructure as Code (IaC): Design and implement scalable infrastructure using Terraform, CDK, or Pulumi to support cloud-native applications and services. Continuous Integration & Delivery (CI/CD): Develop, maintain, and optimize CI/CD pipelines to automate the deployment and testing of cloud solutions. Solution Engineering: Collaborate closely with customer engineering teams to implement cloud solutions as defined in the project backlog and architecture design. Custom Automation: Create and maintain custom automation scripts and utilities using Bash, PowerShell, or Lambda functions to streamline operations. 
Troubleshooting & Optimization: Investigate and resolve system, deployment, and integration issues across development, staging, and production environments. Mentorship & Knowledge Sharing: Guide, train, and mentor junior engineers on best practices in cloud development, security, and DevOps tooling. Requirements Qualifications & Skills Bachelors degree in Computer Science, Engineering, or a related technical field. 6+ years of experience as a Cloud Engineer, with a proven track record of delivering successful projects on the Cloud platform. Expert understanding of cloud infrastructure and implementing Infrastructure-as-Code (IaC). Expert understanding of working with, building CI/CD pipelines and GitOps methodology. Excellent knowledge of containerization, Kubernetes, or other container management platforms. Proficient in Networking and IAM security. Proficient with any of: Python, Go, C#, Node.js. Great understanding of Agile project delivery. Great verbal and written communication skills. Enthusiasm for working in a startup environment and the ability to be cross-functional. Possess a natural curiosity and excitement for learning new technology. Experience in some of the following: Migrations, CI/CD, IaC (Terraform, CloudFormation, AWS CDK, Pulumi, etc), Scripting, Containers (Kubernetes, Docker, ECS, Fargate, etc), Cloud Security Best Practices, Networking, Incident, release, problem and change management processes. Experience working in a client-facing environment. Experience working on agile projects. Consultancy/advisory experience. Willingness to work flexible shifts to support customers in different time zones. Signs You May Be a Great Fit Impact: Play a pivotal role in shaping a rapidly growing venture studio with Cloud-driven digital transformation. Culture: Thrive in a collaborative, innovative environment that values creativity, ownership, and agility. Growth: Access professional development opportunities, and mentorship from experienced peers. 
Benefits: Competitive salary, wellness packages, and flexible work arrangements that support your lifestyle and goals. ","Work_Experience":"3-6 years","Job_Type":"Full time","Job_Opening_Name":"Lead Cloud Engineer" , "State":"Rajasthan" , "Country":"India" , "Zip_Code":"302037" , "id":"121722000001511023" , "Publish":true , "Date_Opened":"2025-06-17" , "Keep_on_Career_Site":false}]

AI Developer

Jaipur

2 - 4 years

INR 9.0 - 12.0 Lacs P.A.

Work from Office

Full Time

[{"Salary":null , "Remote_Job":false , "Posting_Title":"AI Developer" , "Is_Locked":false , "City":"Jaipur" , "Industry":"Cloud For Good (CFG)","Job_Description":" We are seeking a highly skilled and motivated AI Engineer to join our team in developing reusable AI-powered tools and components that drive automation, scalability, and efficiency across our technical delivery ecosystem. This role will focus on leveraging large language models (LLMs) to develop intelligent, context-aware automation tools, starting with a flagship tool that automates SQL script generation for data migrations and transformations. Youll work closely with solution architects and data engineers to build generative AI assets that can be integrated into repeatable client delivery work-streamsespecially for non-profit and education clients using platforms like Salesforce Non-profit Cloud, Education Cloud, Salesforce NPSP, RaiserEdge, and Ellucian Banner. Key Responsibilities Design and Develop AI-Powered Tools Build generative AI tools and services, such as automated SQL generation engines, that reduce manual coding effort and increase accuracy. Prompt Engineering and Tuning Craft and optimize prompts to drive high-quality outputs from foundation models. Fine-tune LLMs or use APIs to adapt model behaviour to specific use cases. Integrate LLMs into Business Workflows Embed LLM-powered components into scalable, reusable applications that can be integrated into delivery processes or client-facing solutions. Data Contextualization and Reasoning Develop solutions that use structured or semi-structured data inputs to generate context-aware outputs, applying reasoning over business logic, schemas, or documentation. Leverage LLMs for Contextual Understanding Develop tooling that uses large language models to interpret source data structures, target schemas, and mapping documents to generate tailored data migration scripts. 
Collaborate with Cross-Functional Teams Work closely with data engineers, product managers, and solution architects to identify use cases and deliver effective AI-enhanced capabilities. Ensure Model Safety and Performance Monitor and evaluate LLM-generated outputs for accuracy, safety, and consistency. Implement validation, fallback logic, and usage controls as needed. Drive Reusability and Knowledge Capture Develop modular, reusable components that can be easily adapted for new use cases and contribute to a growing library of internal IP. Perpetually contribute to the betterment of Cloud for Good Be prepared to work in shifts to accommodate our global customer base across different time zones, including the AMER, APAC, and EMEA. Requirements Qualifications & Skills: Proven experience building with LLMs or other generative AI models (e.g., OpenAI, Claude, or open-source frameworks like LLaMA). Strong background in Python , with hands-on experience using AI/ML libraries (e.g., LangChain, Hugging Face, transformers). Solid understanding of SQL and data engineering workflows, particularly related to data migration and transformation. Experience working with or integrating structured data systems like SQL, Salesforce, RaiserEdge, and Banner. Ability to build context-aware, prompt-engineered solutions that integrate with APIs or internal systems. Familiarity with MLOps or DevOps practices for deploying and monitoring AI applications is a plus. Excellent communication skills and ability to translate complex AI functionality into real-world business value. Preferred Skills: Exposure to Salesforce ecosystem tools and data models. Experience contributing to internal toolsets or reusable IP in a professional services or consulting environment. Prior experience working with data migration frameworks or ETL pipelines. Signs You May Be a Great Fit Impact: Play a pivotal role in shaping a rapidly growing venture studio. 
Culture: Thrive in a collaborative, innovative environment that values creativity and ownership. Growth: Access professional development opportunities and mentorship. Benefits: Competitive salary, health/wellness packages, and flexible work options.
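As a rough illustration of the SQL-generation idea this role centers on, here is a minimal, hypothetical sketch of the deterministic templating half of such a tool. In the real product an LLM would reason over schemas and mapping documents to produce the column map; only the rendering step is shown, and every name (`build_migration_sql`, the tables, the mapping) is made up for illustration.

```python
# Hypothetical sketch: render a migration INSERT ... SELECT from a column map.
# An LLM would normally propose the column map; this shows only the
# deterministic SQL-rendering step.

def build_migration_sql(source_table, target_table, column_map):
    """Render an INSERT ... SELECT that migrates mapped columns.

    column_map maps target column -> source expression, e.g.
    {"FullName": "first_name || ' ' || last_name"}.
    """
    targets = ", ".join(column_map.keys())
    sources = ", ".join(column_map.values())
    return (
        f"INSERT INTO {target_table} ({targets})\n"
        f"SELECT {sources}\n"
        f"FROM {source_table};"
    )

if __name__ == "__main__":
    sql = build_migration_sql(
        "legacy_donors",                      # illustrative source table
        "npsp_contact",                       # illustrative target table
        {"LastName": "surname", "Email": "LOWER(email_addr)"},
    )
    print(sql)
```

Separating the LLM's mapping proposal from a deterministic renderer like this is one common way to keep generated SQL reviewable and to apply the validation and fallback logic the posting mentions.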

Data Consultant

Kota, Jaipur, Bikaner, Jodhpur

2 - 7 years

INR 7.0 - 11.0 Lacs P.A.

Work from Office

Full Time

We are seeking a Data Consultant (a Sr. Consultant role is also available and will be determined based on experience) to manage large data migration projects with non-profit organizations and educational institutions. Successful candidates will be ETL experts (i.e., Jitterbit, Informatica, Boomi, SSIS, MuleSoft) able to demonstrate expert-level skills and knowledge in designing and implementing data migrations. Experience with Salesforce, NPSP, and EDA is a plus. Candidates should have a strong understanding of relational databases, SOQL, and SQL. Because much of the work of this role is client-facing, communication skills and a genuine interest in helping people are very important. The ability to collaborate with peers and clients while building consensus is also critical for success. Key Responsibilities: Serve as the lead data resource on Salesforce implementation projects. Create and iterate on data conversion maps. Evaluate, design, and implement data migration solutions for non-profit and higher education clients. Plan all aspects of iterative data migration and coordinate with the project manager to align with the overall project plan. Maintain up-to-date, accurate knowledge of NPC and/or EDC. Assess client business requirements to design architecturally sound solutions. Deliver on project assignments on time and within budget. Perpetually contribute to the betterment of Cloud for Good. Show a commitment to customer satisfaction. Provide informal mentorship and facilitate knowledge-sharing and growth opportunities. Be prepared to work in shifts to accommodate our global customer base across different time zones, including AMER, APAC, and EMEA. 
Qualifications: Experience transforming and migrating data to Salesforce via data loading tools and ETL tools (e.g., Jitterbit, Informatica, Boomi, MuleSoft), including creating the initial data map. At least 2 years of consulting experience. Strong understanding of relational database architecture, SOQL, and SQL. Understanding of agile methodology. Salesforce.com Administrator certification (if not already held, it must be completed within the onboarding period). Strong Salesforce configuration knowledge is a plus. Familiarity working with nonprofits and/or higher education institutions. Strong consulting skills, communication, and teamwork/collaboration. Proven track record of continuously improving organizations. Preferred Skills: Strong time management skills. Strong written and verbal communication skills. Intellectual curiosity. Passion for continuous learning. Mentoring skills. Presentation skills. Signs You May Be a Great Fit Impact: Play a pivotal role in shaping a rapidly growing venture studio. Culture: Thrive in a collaborative, innovative environment that values creativity and ownership. Growth: Access professional development opportunities and mentorship. Benefits: Competitive salary, health/wellness packages, and flexible work options.
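To illustrate the "data conversion map" work this role involves, here is a minimal, hypothetical sketch of renaming source columns to Salesforce-style API names via a field map. The field names below are invented for illustration and are not an actual NPSP schema; real migrations would run through an ETL tool like those listed above.

```python
# Hypothetical sketch: apply a field map to convert source rows into
# Salesforce-shaped records. Field names are illustrative only.

FIELD_MAP = {
    "donor_name": "LastName",
    "donor_email": "Email",
    "gift_total": "Total_Gifts__c",  # made-up custom-field API name
}

def convert_row(source_row, field_map=FIELD_MAP):
    """Rename mapped source columns to target API names; drop unmapped ones."""
    return {
        target: source_row[src]
        for src, target in field_map.items()
        if src in source_row
    }

def convert_batch(rows, field_map=FIELD_MAP):
    """Convert a whole extract, one record at a time."""
    return [convert_row(r, field_map) for r in rows]

if __name__ == "__main__":
    batch = convert_batch(
        [{"donor_name": "Rivera", "donor_email": "r@x.org", "legacy_id": 7}]
    )
    print(batch)  # [{'LastName': 'Rivera', 'Email': 'r@x.org'}]
```

Keeping the map as data rather than code is what makes the iterative "create and iterate on data conversion maps" workflow practical: each client iteration changes the map, not the pipeline.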


Bot Consulting | Business Consulting and Services | Tech City | 50 Employees | 11 Jobs
