7.0 - 11.0 years
14 - 19 Lacs
Bengaluru
Work from Office
We are seeking an experienced Azure Cloud Engineer who specializes in migrating and modernizing applications to the cloud. The ideal candidate will have deep expertise in Azure Cloud, Terraform (Enterprise), containers (Docker), Kubernetes (AKS), CI/CD with GitHub Actions, and Python scripting. Strong soft skills are essential to communicate effectively with technical and non-technical stakeholders during migration and modernization projects.

Key Responsibilities:
- Lead and execute the migration and modernization of applications to Azure Cloud using containerization and re-platforming.
- Re-platform, optimize, and manage containerized applications using Docker and orchestrate them through Azure Kubernetes Service (AKS).
- Implement and maintain robust CI/CD pipelines using GitHub Actions to facilitate seamless application migration and deployment.
- Automate infrastructure and application deployments to ensure consistent, reliable, and scalable cloud environments.
- Write Python scripts to support migration automation, integration tasks, and tooling.
- Collaborate closely with cross-functional teams to ensure successful application migration, modernization, and adoption of cloud solutions.
- Define and implement best practices for DevOps, security, migration strategies, and the software development lifecycle (SDLC).
- Deploy infrastructure via Terraform (IAM, networking, security, etc.).

Non-Functional Responsibilities:
- Configure and manage comprehensive logging, monitoring, and observability solutions.
- Develop, test, and maintain Disaster Recovery (DR) plans and backup solutions to ensure cloud resilience.
- Ensure adherence to all applicable non-functional requirements, including performance, scalability, reliability, and security during migrations.

Required Skills and Experience:
- Expert-level proficiency in migrating and modernizing applications to Microsoft Azure Cloud services.
- Strong expertise in Terraform (Enterprise) for infrastructure automation.
- Proven experience with containerization technologies (Docker) and orchestration platforms (AKS).
- Extensive hands-on experience with GitHub Actions and building CI/CD pipelines specifically for cloud migration and modernization efforts.
- Proficient scripting skills in Python for automation and tooling.
- Comprehensive understanding of DevOps methodologies and the software development lifecycle (SDLC).
- Excellent communication, interpersonal, and collaboration skills.
- Demonstrable experience in implementing logging, monitoring, backups, and disaster recovery solutions within cloud environments.
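The "Python scripts to support migration automation" this listing asks for are often small, testable helpers run inside a pipeline. A minimal sketch, assuming a hypothetical naming convention (the `st<app><env>` pattern and the helper itself are illustrative; only the 3-24 character lowercase-alphanumeric constraint is Azure's documented storage-account rule):

```python
import re

# Azure storage account names must be 3-24 chars, lowercase letters and digits only.
STORAGE_NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def storage_account_name(app: str, env: str, prefix: str = "st") -> str:
    """Build a candidate storage account name and fail fast if it breaks Azure rules."""
    name = f"{prefix}{app}{env}".lower()
    name = re.sub(r"[^a-z0-9]", "", name)  # strip hyphens, underscores, etc.
    if not STORAGE_NAME_RE.match(name):
        raise ValueError(f"invalid storage account name: {name!r}")
    return name

print(storage_account_name("billing-api", "prod"))  # stbillingapiprod
```

Failing fast on an invalid name in CI is cheaper than a failed Terraform apply halfway through a migration.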
Posted 1 week ago
4.0 - 7.0 years
9 - 13 Lacs
Tamil Nadu
Work from Office
Introduction to the Role:
Are you passionate about building intelligent systems that learn, adapt, and deliver real-world value? Join our high-impact AI & Machine Learning Engineering team and be a key contributor in shaping the next generation of intelligent applications. As an AI/ML Engineer, you'll have the unique opportunity to develop, deploy, and scale advanced ML and Generative AI (GenAI) solutions in production environments, leveraging cutting-edge technologies, frameworks, and cloud platforms. In this role, you will collaborate with cross-functional teams including data engineers, product managers, MLOps engineers, and architects to design and implement production-grade AI solutions across domains. If you're looking to work at the intersection of deep learning, GenAI, cloud computing, and MLOps, this is the role for you.

Accountabilities:
- Design, develop, train, and deploy production-grade ML and GenAI models across use cases including NLP, computer vision, and structured data modeling.
- Leverage frameworks such as TensorFlow, Keras, PyTorch, and LangChain to build scalable deep learning and LLM-based solutions.
- Develop and maintain end-to-end ML pipelines with reusable, modular components for data ingestion, feature engineering, model training, and deployment.
- Implement and manage models on cloud platforms such as AWS, GCP, or Azure using services like SageMaker, Vertex AI, or Azure ML.
- Apply MLOps best practices using tools like MLflow, Kubeflow, Weights & Biases, Airflow, DVC, and Prefect to ensure scalable and reliable ML delivery.
- Incorporate CI/CD pipelines (using Jenkins, GitHub Actions, or similar) to automate testing, packaging, and deployment of ML workloads.
- Containerize applications using Docker and orchestrate scalable deployments via Kubernetes.
- Integrate LLMs with APIs and external systems using LangChain, vector databases (e.g., FAISS, Pinecone), and prompt engineering best practices.
- Collaborate closely with data engineers to access, prepare, and transform large-scale structured and unstructured datasets for ML pipelines.
- Build monitoring and retraining workflows to ensure models remain performant and robust in production.
- Evaluate and integrate third-party GenAI APIs or foundational models where appropriate to accelerate delivery.
- Maintain rigorous experiment tracking, hyperparameter tuning, and model versioning.
- Champion industry standards and evolving practices in ML lifecycle management, cloud-native AI architectures, and responsible AI.
- Work across global, multi-functional teams, including architects, principal engineers, and domain experts.

Essential Skills / Experience:
- 4-7 years of hands-on experience in developing, training, and deploying ML/DL/GenAI models.
- Strong programming expertise in Python with proficiency in machine learning, data manipulation, and scripting.
- Demonstrated experience working with Generative AI models and Large Language Models (LLMs) such as GPT, LLaMA, Claude, or similar.
- Hands-on experience with deep learning frameworks like TensorFlow, Keras, or PyTorch.
- Experience in LangChain or similar frameworks for LLM-based app orchestration.
- Proven ability to implement and scale CI/CD pipelines for ML workflows using tools like Jenkins, GitHub, GitLab, or Bitbucket Pipelines.
- Familiarity with containerization (Docker) and orchestration tools like Kubernetes.
- Experience working with cloud platforms (AWS, Azure, GCP) and relevant AI/ML services such as SageMaker, Vertex AI, or Azure ML Studio.
- Knowledge of MLOps tools such as MLflow, Kubeflow, DVC, Weights & Biases, Airflow, and Prefect.
- Strong understanding of data engineering concepts, including batch/streaming pipelines, data lakes, and real-time processing (e.g., Kafka).
- Solid grasp of statistical modeling, machine learning algorithms, and evaluation metrics.
- Experience with version control systems (Git) and collaborative development workflows.
- Ability to translate complex business needs into scalable ML architectures and systems.

Desirable Skills / Experience:
- Working knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate) and semantic search implementation.
- Hands-on experience with prompt engineering, fine-tuning LLMs, or using techniques like LoRA, PEFT, and RLHF.
- Familiarity with data governance, privacy, and responsible AI guidelines (bias detection, explainability, etc.).
- Certifications in AWS, Azure, GCP, or ML/AI specializations.
- Experience in high-compliance industries like pharma, banking, or healthcare.
- Familiarity with agile methodologies and working in iterative, sprint-based teams.

Work Environment & Collaboration:
You will be a key member of an agile, forward-thinking AI/ML team that values curiosity, excellence, and impact. Our hybrid work culture promotes flexibility while encouraging regular in-person collaboration to foster innovation and team synergy. You'll have access to the latest technologies, mentorship, and continuous learning opportunities through hands-on projects and professional development resources.
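The "vector databases and semantic search" skill in this listing reduces, at its core, to nearest-neighbor search over embedding vectors. A minimal pure-Python sketch of what FAISS or Pinecone accelerate at scale (the two-dimensional toy corpus and document IDs are illustrative only):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, corpus, k=2):
    """Brute-force semantic search: rank documents by similarity to the query vector."""
    scored = sorted(corpus.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

corpus = {"doc1": [1.0, 0.0], "doc2": [0.9, 0.1], "doc3": [0.0, 1.0]}
print(top_k([1.0, 0.05], corpus, k=2))  # ['doc1', 'doc2']
```

Production systems swap the brute-force scan for an approximate index (e.g., IVF or HNSW), but the ranking logic is the same.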
Posted 1 week ago
2.0 - 5.0 years
10 - 15 Lacs
Hyderabad
Work from Office
Roles and Responsibilities:
- Design, develop, and deploy advanced AI models with a focus on generative AI, including transformer architectures (e.g., GPT, BERT, T5) and other deep learning models used for text, image, or multimodal generation.
- Work with extensive and complex datasets, performing tasks such as cleaning, preprocessing, and transforming data to meet quality and relevance standards for generative model training.
- Collaborate with cross-functional teams (e.g., product, engineering, data science) to identify project objectives and create solutions using generative AI tailored to business needs.
- Implement, fine-tune, and scale generative AI models in production environments, ensuring robust model performance and efficient resource utilization.
- Develop pipelines and frameworks for efficient data ingestion, model training, evaluation, and deployment, including A/B testing and monitoring of generative models in production.
- Stay informed about the latest advancements in generative AI research, techniques, and tools, applying new findings to improve model performance, usability, and scalability.
- Document and communicate technical specifications, algorithms, and project outcomes to technical and non-technical stakeholders, with an emphasis on explainability and responsible AI practices.

Qualifications Required:
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. A relevant Ph.D. or research experience in generative AI is a plus.
- Experience: 2-5 years of experience in machine learning, with 2+ years in designing and implementing generative AI models or working specifically with transformer-based models.
Skills and Experience Required:
- Generative AI: Transformer Models, GANs, VAEs, Text Generation, Image Generation
- Machine Learning: Algorithms, Deep Learning, Neural Networks
- Programming: Python, SQL; familiarity with libraries such as Hugging Face Transformers, PyTorch, TensorFlow
- MLOps: Docker, Kubernetes, MLflow, Cloud Platforms (AWS, GCP, Azure)
- Data Engineering: Data Preprocessing, Feature Engineering, Data Cleaning

Why you'll love working with us:
- Opportunity to work on technical challenges with global impact.
- Vast opportunities for self-development, including online university access and sponsored certifications.
- Sponsored Tech Talks & Hackathons to foster innovation and learning.
- Generous benefits package including health insurance, retirement benefits, flexible work hours, and more.
- Supportive work environment with forums to explore passions beyond work.

This role presents an exciting opportunity for a motivated individual to contribute to the development of cutting-edge solutions while advancing their career in a dynamic and collaborative environment.
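One small, concrete piece of the text-generation stack this listing covers is temperature-scaled softmax, the knob that trades off deterministic versus diverse sampling in transformer decoding. A minimal sketch (the logit values are arbitrary illustrations):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over raw logits.

    Lower temperature sharpens the distribution (more deterministic sampling);
    higher temperature flattens it (more diverse generation).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1], temperature=0.5)
print(round(probs[0], 3))  # 0.864
```

At temperature 1.0 the same logits give the top token only about 0.66 probability; halving the temperature concentrates mass on it.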
Posted 1 week ago
30.0 - 35.0 years
14 - 19 Lacs
Hyderabad
Work from Office
Senior Software Developers collaborate with Business and Quality Analysts, Designers, Project Managers and more to design software solutions that will create meaningful change for our clients. They listen thoughtfully to understand the context of a business problem and write clean and iterative code to deliver a powerful end result, whilst consistently advocating for better engineering practices. By balancing strong opinions with a willingness to find the right answer, Senior Software Developers bring integrity to technology, ensuring all voices are heard. For a team to thrive, it needs collaboration and room for healthy, respectful debate. Senior Developers are the technologists who cultivate this environment while driving teams toward delivering on an aspirational tech vision and acting as mentors for more junior-level consultants. You will leverage deep technical knowledge to solve complex business problems and proactively assess your team's health, code quality and nonfunctional requirements.

Job responsibilities:
- You will learn and adopt best practices like writing clean and reusable code using TDD, pair programming and design patterns
- You will use and advocate for continuous delivery practices to deliver high-quality software as well as value to end customers as early as possible
- You will work in collaborative, value-driven teams to build innovative customer experiences for our clients
- You will create large-scale distributed systems out of microservices
- You will collaborate with a variety of teammates to build features, design concepts and interactive prototypes and ensure best practices and UX specifications are embedded along the way
- You will apply the latest technology thinking to solve client problems
- You will efficiently utilize DevSecOps tools and practices to build and deploy software, advocating DevOps culture and shifting security left in development
- You will oversee or take part in the entire cycle of software consulting and delivery, from ideation to deployment and everything in between
- You will act as a mentor for less-experienced peers through both your technical knowledge and leadership skills

Job qualifications

Technical Skills:
- You have experience in Golang and use one or more other development languages (Java, Kotlin, JavaScript, TypeScript, Ruby, C#, etc.)
- You can skillfully write high-quality, well-tested code and you are comfortable with Object-Oriented programming
- You are comfortable with Agile methodologies, such as Extreme Programming (XP), Scrum and/or Kanban
- You have a good awareness of TDD, continuous integration and continuous delivery approaches/tools
- Bonus points if you have working knowledge of cloud technology such as AWS, Azure, Kubernetes and Docker

Professional Skills:
- You enjoy influencing others and always advocate for technical excellence while being open to change when needed
- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
- You're resilient in ambiguous situations and can approach challenges from multiple perspectives
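The "clean and reusable code using TDD" practice this listing emphasizes can be shown test-first in miniature: write the failing assertion, then the smallest implementation that passes. Sketched in Python for brevity (the listing's primary language is Golang; `slugify` and its behavior are hypothetical examples, not a client requirement):

```python
import re

# Step 1 (red): the test exists before the implementation.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): the smallest implementation that makes it pass.
def slugify(title: str) -> str:
    """Lowercase a title and join its alphanumeric runs with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()
print("test passed")  # refactor step would follow, with the test as a safety net
```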
Posted 1 week ago
9.0 - 12.0 years
16 - 20 Lacs
Hyderabad
Work from Office
Roles and Responsibilities
1. Architect and design scalable, maintainable, and high-performance backend systems using Python.
2. Lead the development of clean, modular, and reusable code components and services across various domains.
3. Own the technical roadmap for Python-based services, including refactoring strategies, modernization efforts, and integration patterns.
4. Provide expert guidance on system design, code quality, performance tuning, and observability.
5. Collaborate with DevOps teams to build CI/CD pipelines, containerization strategies, and robust cloud-native deployment patterns.
6. Mentor and support software engineers by enforcing strong engineering principles, design best practices, and performance debugging techniques.
7. Evaluate and recommend new technologies or frameworks where appropriate, particularly in the areas of AI/ML and GenAI integration.

Desirable (Good-to-Have) GenAI / AI/ML Skills
1. Exposure to Large Language Models (LLMs) and prompt engineering.
2. Basic familiarity with Retrieval-Augmented Generation (RAG) and vector databases (FAISS, Pinecone, Weaviate).
3. Understanding of model fine-tuning concepts (LoRA, QLoRA, PEFT).
4. Experience using or integrating LangChain, LlamaIndex, or Hugging Face Transformers.
5. Familiarity with AWS AI/ML services like Bedrock and SageMaker.

Technology Stack
- Languages: Python (primary), Bash, YAML/JSON
- Web Frameworks: FastAPI, Flask, gRPC
- Databases: PostgreSQL, Redis, MongoDB, DynamoDB
- Cloud Platform: AWS (must), GCP/Azure (bonus)
- DevOps & Deployment: Docker, Kubernetes, Terraform, GitHub Actions, ArgoCD
- Observability: OpenTelemetry, Prometheus, Grafana, Loki
- GenAI Tools (Optional): Bedrock, SageMaker, LangChain, Hugging Face

Preferred Profile
- 8+ years of hands-on software development experience
- 3+ years in a solution/technical architect or lead engineer role
- Strong problem-solving skills and architectural thinking
- Experience collaborating across teams and mentoring engineers
- Passion for building clean, scalable systems and openness to learning emerging technologies like GenAI

Skills and Experience Required: Core Python & Architectural Skills
- Strong hands-on experience in advanced Python programming (7+ years), including:
  1. Language internals (e.g., decorators, metaclasses, descriptors)
  2. Concurrency (asyncio, multiprocessing, threading)
  3. Performance optimization and profiling (e.g., cProfile, py-spy)
  4. Strong testing discipline (pytest, mocking, coverage analysis)
- Proven track record in designing scalable, distributed systems:
  1. Event-driven architectures, service-oriented and microservice-based systems.
  2. Experience with REST/gRPC APIs, async queues, caching strategies, and database modeling.
- Proficiency in building and deploying cloud-native applications:
  1. Strong AWS exposure (EC2, Lambda, S3, IAM, etc.) and Infrastructure-as-Code (Terraform/CDK)
  2. CI/CD pipelines, Docker, Kubernetes, GitOps
- Deep understanding of software architecture patterns (e.g., hexagonal, layered, DDD)
- Excellent debugging, tracing, and observability skills with tools like OpenTelemetry, Prometheus, Grafana
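The "language internals (e.g., decorators…)" and caching-strategy items above meet in a common interview-grade exercise: a decorator that caches results with a time-to-live. A minimal sketch (the `timed_cache` name, TTL policy, and `fetch` stand-in are illustrative, not a named library API):

```python
import functools
import time

def timed_cache(ttl_seconds: float):
    """A minimal TTL-cache decorator: results expire after ttl_seconds."""
    def decorator(fn):
        cache = {}                       # args tuple -> (value, stored_at)
        @functools.wraps(fn)             # preserve the wrapped function's metadata
        def wrapper(*args):
            now = time.monotonic()
            if args in cache:
                value, stored_at = cache[args]
                if now - stored_at < ttl_seconds:
                    return value         # cache hit, still fresh
            value = fn(*args)
            cache[args] = (value, now)
            return value
        return wrapper
    return decorator

calls = []

@timed_cache(ttl_seconds=60)
def fetch(x):
    calls.append(x)                      # stand-in for an expensive lookup
    return x * 2

print(fetch(3), fetch(3), len(calls))  # 6 6 1
```

For production use, `functools.lru_cache` plus an eviction strategy, or an external cache like Redis (listed in the stack above), would replace this hand-rolled dictionary.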
Posted 1 week ago
4.0 - 9.0 years
12 - 16 Lacs
Gurugram
Work from Office
We are seeking a mid to senior-level Azure Cloud Engineer to deliver cloud engineering services to Rackspace's Enterprise clients. The ideal candidate will have strong, hands-on technical skills, with the experience and consulting skills to understand, shape, and deliver against our customers' requirements.

Responsibilities:
- Design and implement Azure cloud solutions that are secure, scalable, resilient, monitored, auditable, and cost-optimized.
- Build out new customer cloud solutions using cloud-native components.
- Write Infrastructure as Code (IaC) using Terraform.
- Write application/infra deployment pipelines using Azure DevOps or other industry-standard deployment and configuration tools.
- Use cloud foundational architecture and components to build out automated cloud environments, CI/CD pipelines, and supporting services frameworks.
- Work with developers to identify necessary Azure resources and automate their provisioning.
- Document automation processes.
- Create and document a disaster recovery plan.
- Strong communication skills along with customer-facing experience.
- Respond to customer support requests within response-time SLAs.
- Troubleshoot performance degradation or loss of service as time-critical incidents.
- Take ownership of issues, including collaboration with other teams and escalation.
- Participate in a shared on-call rotation.
- Support the success and development of others in the team.

SKILLS & EXPERIENCE
- Engineer with 4-12 years of experience in the Azure cloud, along with writing Infrastructure as Code (IaC) and building application/infra deployment pipelines.
- Experienced in on-prem/AWS/GCP to Azure migration using tooling like Azure Migrate.
- Expert-level knowledge of Azure products and services, scaling, load balancing, etc.
- Expert-level knowledge of Azure DevOps, Pipelines, and CI/CD.
- Expert-level knowledge of Terraform and scripting (Python/Shell/PowerShell).
- Working knowledge of containerization technologies like Kubernetes, AKS, etc.
- Working knowledge of Azure networking, like VPN gateways, VNets, etc.
- Working knowledge of Windows or Linux operating systems, with experience supporting and troubleshooting stability and performance issues.
- Working knowledge of automating the management and enforcement of policies using Azure Policy or similar.
- Good understanding of other DevOps tools like Ansible, Jenkins, etc.
- Good understanding of the design of native cloud applications, cloud application design patterns, and practices.
- Azure Admin, Azure DevOps, and Terraform certified candidates preferred.
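The policy-enforcement item above is handled natively by Azure Policy, but the underlying check is easy to sketch in the scripting languages the listing names. A toy Python version of a tag-compliance audit (the required-tag set and resource records are hypothetical examples, not an Azure API):

```python
REQUIRED_TAGS = {"owner", "cost-center"}  # hypothetical org-wide tagging policy

def non_compliant(resources):
    """Return names of resources missing any required tag."""
    return [r["name"] for r in resources
            if not REQUIRED_TAGS <= set(r.get("tags", {}))]

resources = [
    {"name": "vm-web-01", "tags": {"owner": "platform", "cost-center": "cc42"}},
    {"name": "vm-db-01", "tags": {"owner": "data"}},  # missing cost-center
]
print(non_compliant(resources))  # ['vm-db-01']
```

In practice Azure Policy evaluates this declaratively at deployment time; a script like this is more typical for one-off audits or reporting.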
Posted 1 week ago
12.0 - 16.0 years
14 - 18 Lacs
Hyderabad
Work from Office
Roles and Responsibilities:
- Design, develop, and deploy advanced AI models with a focus on generative AI, including transformer architectures (e.g., GPT, BERT, T5) and other deep learning models used for text, image, or multimodal generation.
- Work with extensive and complex datasets, performing tasks such as cleaning, preprocessing, and transforming data to meet quality and relevance standards for generative model training.
- Collaborate with cross-functional teams (e.g., product, engineering, data science) to identify project objectives and create solutions using generative AI tailored to business needs.
- Implement, fine-tune, and scale generative AI models in production environments, ensuring robust model performance and efficient resource utilization.
- Develop pipelines and frameworks for efficient data ingestion, model training, evaluation, and deployment, including A/B testing and monitoring of generative models in production.
- Stay informed about the latest advancements in generative AI research, techniques, and tools, applying new findings to improve model performance, usability, and scalability.
- Document and communicate technical specifications, algorithms, and project outcomes to technical and non-technical stakeholders, with an emphasis on explainability and responsible AI practices.

Qualifications Required:
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. A relevant Ph.D. or research experience in generative AI is a plus.
- Experience: 12-16 years of experience in machine learning, with 8+ years in designing and implementing generative AI models or working specifically with transformer-based models.
Skills and Experience Required:
- Generative AI: Transformer Models, GANs, VAEs, Text Generation, Image Generation
- Machine Learning: Algorithms, Deep Learning, Neural Networks
- LLM Tooling: LangChain, GPT-4, SageMaker/Bedrock
- Programming: Python, SQL; familiarity with libraries such as Hugging Face Transformers, PyTorch, TensorFlow
- MLOps: Docker, Kubernetes, MLflow, Cloud Platforms (AWS, GCP, Azure)
- Data Engineering: Data Preprocessing, Feature Engineering, Data Cleaning
Posted 1 week ago
6.0 - 11.0 years
18 - 22 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Overview
We are seeking an experienced Data Architect with extensive expertise in designing and implementing modern data architectures. This role requires strong software engineering principles, hands-on coding abilities, and experience building data engineering frameworks. The ideal candidate will have a proven track record of implementing Databricks-based solutions in the healthcare industry, with expertise in data catalog implementation and governance frameworks.

About the Role
As a Data Architect, you will be responsible for designing and implementing scalable, secure, and efficient data architectures on the Databricks platform. You will lead the technical design of data migration initiatives from legacy systems to modern Lakehouse architecture, ensuring alignment with business requirements, industry best practices, and regulatory compliance.

Key Responsibilities
- Design and implement modern data architectures using the Databricks Lakehouse platform
- Lead the technical design of Data Warehouse/Data Lake migration initiatives from legacy systems
- Develop data engineering frameworks and reusable components to accelerate delivery
- Establish CI/CD pipelines and infrastructure-as-code practices for data solutions
- Implement data catalog solutions and governance frameworks
- Create technical specifications and architecture documentation
- Provide technical leadership to data engineering teams
- Collaborate with cross-functional teams to ensure alignment of data solutions
- Evaluate and recommend technologies, tools, and approaches for data initiatives
- Ensure data architectures meet security, compliance, and performance requirements
- Mentor junior team members on data architecture best practices
- Stay current with emerging technologies and industry trends

Qualifications
- Extensive experience in data architecture design and implementation
- Strong software engineering background with expertise in Python or Scala
- Proven experience building data engineering frameworks and reusable components
- Experience implementing CI/CD pipelines for data solutions
- Expertise in infrastructure-as-code and automation
- Experience implementing data catalog solutions and governance frameworks
- Deep understanding of the Databricks platform and Lakehouse architecture
- Experience migrating workloads from legacy systems to modern data platforms
- Strong knowledge of healthcare data requirements and regulations
- Experience with cloud platforms (AWS, Azure, GCP) and their data services
- Bachelor's degree in Computer Science, Information Systems, or related field; advanced degree preferred

Technical Skills
- Programming languages: Python and/or Scala (required)
- Data processing frameworks: Apache Spark, Delta Lake
- CI/CD tools: Jenkins, GitHub Actions, Azure DevOps
- Infrastructure-as-code (optional): Terraform, CloudFormation, Pulumi
- Data catalog tools: Databricks Unity Catalog, Collibra, Alation
- Data governance frameworks and methodologies
- Data modeling and design patterns
- API design and development
- Cloud platforms: AWS, Azure, GCP
- Container technologies: Docker, Kubernetes
- Version control systems: Git
- SQL and NoSQL databases
- Data quality and testing frameworks

Optional - Healthcare Industry Knowledge
- Healthcare data standards (HL7, FHIR, etc.)
- Clinical and operational data models
- Healthcare interoperability requirements
- Healthcare analytics use cases
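The "data quality and testing frameworks" item above (Great Expectations and similar tools) boils down to declarative checks over rows. A toy pure-Python version of one such expectation, using hypothetical healthcare-flavored records (the function name mimics the Great Expectations naming style but is not that library's API):

```python
def expect_column_values_not_null(rows, column):
    """Toy data-quality check: every row must have a non-null value in `column`.

    Returns a small result dict in the style of expectation frameworks.
    """
    bad = [i for i, r in enumerate(rows) if r.get(column) is None]
    return {"success": not bad, "failing_rows": bad}

rows = [
    {"patient_id": "p1", "admitted": "2024-01-03"},
    {"patient_id": "p2", "admitted": None},
]
print(expect_column_values_not_null(rows, "admitted"))  # {'success': False, 'failing_rows': [1]}
```

Real frameworks add suites of such expectations, run them inside pipelines, and gate deployments on the aggregated result.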
Posted 1 week ago
5.0 - 10.0 years
19 - 25 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
As a full-spectrum integrator, we assist hundreds of companies to realize the value, efficiency, and productivity of the cloud. We take customers on their journey to enable, operate, and innovate using cloud technologies, from migration strategy to operational excellence and immersive transformation. If you like a challenge, you'll love it here, because we're solving complex business problems every day, building and promoting great technology solutions that impact our customers' success. The best part is, we're committed to you and your growth, both professionally and personally.

You will be part of a team designing, automating, and deploying services on behalf of our customers to the cloud in a way that allows these services to automatically heal themselves if things go south. We have deep experience applying cloud architecture techniques in virtually every industry. Every week is different and the problems you will be challenged to solve are constantly evolving. We build solutions using infrastructure-as-code so our customers can refine and reuse these processes again and again, all without having to come back to us for additional deployments.

Key Responsibilities
- Create well-designed, documented, and tested software features that meet customer requirements.
- Identify and address product bugs, deficiencies, and performance bottlenecks.
- Participate in an agile delivery team, helping to ensure the technical quality of the features delivered across the team, including documentation, testing strategies, and code.
- Help determine technical feasibility and solutions for business requirements.
- Remain up-to-date on emerging technologies and architecture and propose ways to use them in current and upcoming projects.
- Leverage technical knowledge to cut scope while maintaining or achieving the overall goals of the product.
- Leverage technical knowledge to improve the quality and efficiency of product applications and tools.
- Willingness to travel to client locations and deliver professional services

Qualifications
- Experience developing software in GCP, AWS, or Azure
- 5+ years of experience developing applications in Java
- 3+ years of experience with at least one other programming language, such as Scala, Python, Go, C#, TypeScript, or Ruby
- Experience with relational databases, including designing complex schemas and queries
- Experience developing within distributed systems or a microservice-based architecture
- Strong verbal and written communication skills for documenting workflows, tools, or complex areas of a codebase
- Ability to thrive in a fast-paced environment and multi-task efficiently
- Strong analytical and troubleshooting skills
- 3+ years of experience as a technical specialist in customer-facing roles
- Experience with Agile development methodologies
- Experience with Continuous Integration and Continuous Delivery (CI/CD)

Preferred Qualifications
- Experience with GCP
- Building applications using container and serverless technologies
- Cloud certifications
- Good exposure to Agile software development and DevOps practices such as Infrastructure as Code (IaC), Continuous Integration, and automated deployment
- Exposure to Continuous Integration (CI) tools (e.g., Jenkins)
- Strong practical application development experience on Linux and Windows-based systems
- Experience working directly with customers, partners, or third-party developers

Location: Remote, Bangalore, Gurgaon, Hyderabad
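The "services that automatically heal themselves" idea in this listing usually starts with something as small as retrying transient failures with exponential backoff. A minimal sketch in Python (the `flaky` function simulates a transient cloud error; the helper and its parameters are illustrative, not a specific library):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn, retrying on any exception with exponential backoff between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

failures = {"left": 2}                     # simulate two transient failures

def flaky():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky))  # ok
```

Production versions add jitter, retry only on retryable error classes, and cap total elapsed time; orchestrators like Kubernetes apply the same pattern one level up via restart policies.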
Posted 1 week ago
3.0 - 8.0 years
10 - 20 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY
Sr. Data Analytics Engineer (Databricks) - Power mission-critical decisions with governed insights
Company: Ajmera Infotech Private Limited (AIPL)
Location: Ahmedabad, Bangalore/Bengaluru, Hyderabad (On-site)
Experience: 5-9 years
Position Type: Full-time, Permanent

Ajmera Infotech builds planet-scale software for NYSE-listed clients, driving decisions that can't afford to fail. Our 120-engineer team specializes in highly regulated domains (HIPAA, FDA, SOC 2) and delivers production-grade systems that turn data into strategic advantage.

Why You'll Love It
- End-to-end impact: Build full-stack analytics from lakehouse pipelines to real-time dashboards.
- Fail-safe engineering: TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning.
- Modern stack: Databricks, PySpark, Delta Lake, Power BI, Airflow.
- Mentorship culture: Lead code reviews, share best practices, grow as a domain expert.
- Mission-critical context: Help enterprises migrate legacy analytics into cloud-native, governed platforms.
- Compliance-first mindset: Work in HIPAA-aligned environments where precision matters.

Key Responsibilities
- Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks.
- Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting.
- Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
- Deliver robust Power BI solutions: dashboards, semantic layers, paginated reports, DAX.
- Migrate legacy SSRS reports to Power BI with zero loss of logic or governance.
- Optimize compute and cost through cache tuning, partitioning, and capacity monitoring.
- Document everything from pipeline logic to RLS rules in Git-controlled formats.
- Collaborate cross-functionally to convert product analytics needs into resilient BI assets.
- Champion mentorship by reviewing notebooks, dashboards, and sharing platform standards.
Must-Have Skills
- 5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts.
- Advanced SQL (incl. windowing), expert PySpark, Delta Lake, Unity Catalog.
- Power BI mastery: DAX optimization, security rules, paginated reports.
- SSRS-to-Power BI migration experience (RDL logic replication).
- Strong Git and CI/CD familiarity, and cloud platform know-how (Azure/AWS).
- Communication skills to bridge technical and business audiences.

Nice-to-Have Skills
- Databricks Data Engineer Associate certification.
- Streaming pipeline experience (Kafka, Structured Streaming).
- dbt, Great Expectations, or similar data quality frameworks.
- BI diversity: experience with Tableau, Looker, or similar platforms.
- Cost governance familiarity (Power BI Premium capacity, Databricks chargeback).

Benefits
- Competitive salary package with performance-based bonuses.
- Comprehensive health insurance for you and your family.
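The "advanced SQL (incl. windowing)" requirement above refers to computations like `SUM(amount) OVER (PARTITION BY region ORDER BY day)`. A pure-Python sketch of the same running-total semantics, useful for checking pipeline logic without a cluster (the sales rows are made-up example data):

```python
import itertools

# (region, day, amount) rows; the SQL equivalent is
#   SUM(amount) OVER (PARTITION BY region ORDER BY day)
rows = [
    ("east", 1, 100), ("east", 2, 50), ("west", 1, 70), ("west", 2, 30),
]

def running_totals(rows):
    """Per-region running totals, ordered by day within each region."""
    out = []
    # sorted() orders by region then day, which groupby requires
    for region, group in itertools.groupby(sorted(rows), key=lambda r: r[0]):
        total = 0
        for _, day, amount in group:
            total += amount
            out.append((region, day, total))
    return out

print(running_totals(rows))
# [('east', 1, 100), ('east', 2, 150), ('west', 1, 70), ('west', 2, 100)]
```

On Databricks the same logic would be a one-line window expression in Spark SQL or PySpark, executed distributed across partitions.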
Posted 1 week ago
6.0 - 10.0 years
13 - 17 Lacs
Chennai
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what’s next for their businesses.
Your role
- Act as a contact person for our customers and advise them on data-driven projects.
- Take responsibility for architecture topics and solution scenarios in the areas of Cloud Data Analytics Platform, Data Engineering, Analytics and Reporting.
- Experience in Cloud and Big Data architecture.
- Responsibility for designing viable architectures based on Microsoft Azure, AWS, Snowflake, Google (or similar) and implementing analytics.
- Experience in DevOps, Infrastructure as Code, DataOps and MLOps.
- Experience in business development (as well as support in the proposal process).
- Data warehousing, data modelling and data integration for enterprise data environments.
- Experience in the design of large-scale ETL solutions integrating multiple/heterogeneous systems.
- Experience in data analysis, modelling (logical and physical data models) and design specific to a data warehouse/Business Intelligence environment (normalized and multi-dimensional modelling).
- Experience with ETL tools, primarily Talend and/or other data integration tools (open source/proprietary); extensive experience with SQL and SQL scripting (PL/SQL, SQL query tuning and optimization) for relational databases such as PostgreSQL, Oracle, Microsoft SQL Server and MySQL, and with NoSQL/document-based databases like MongoDB.
- Must be detail-oriented, highly motivated and able to work independently with minimal direction.
- Excellent written, oral and interpersonal communication skills with the ability to communicate design solutions to both technical and non-technical audiences.
- Ideally: experience in agile methods such as SAFe, Scrum, etc.
- Ideally: experience with programming languages like Python, JavaScript, Java/Scala, etc.
Your Profile
- Provides data services for enterprise information strategy solutions - works with business solutions leaders and teams to collect and translate information requirements into data to develop data-centric solutions.
- Design and develop modern enterprise data-centric solutions (e.g. DWH, Data Lake, Data Lakehouse).
- Responsible for designing data governance solutions.
What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 week ago
5.0 - 10.0 years
12 - 16 Lacs
Kolkata
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what’s next for their businesses.
Your role
- 5+ years of experience in creating data strategy frameworks/roadmaps.
- Relevant experience in data exploration and profiling; involvement in data literacy activities for all stakeholders.
- 5+ years in analytics and data maturity evaluation based on current as-is vs. to-be frameworks.
- 5+ years of relevant experience in creating functional requirements documents and enterprise to-be data architecture.
- Relevant experience in identifying and prioritizing use cases for the business; identification of important KPIs and OpEx/CapEx for CXOs.
- 2+ years of working knowledge in Data Strategy, Data Governance, MDM, etc.
- 4+ years of experience in Data & Analytics operating models with a vision spanning prescriptive, descriptive, predictive and cognitive analytics.
- Identify, design, and recommend internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Identify data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to create frameworks for digital twins/digital threads.
- Relevant experience in coordinating with cross-functional teams; act as SPOC for global master data.
Your Profile
8+ years of experience in a Data Strategy role, with a graduate degree in Computer Science, Informatics, Information Systems, or another quantitative field, and experience using the following software/tools:
- Understanding of big data tools: Hadoop, Spark, Kafka, etc.
- Understanding of relational SQL and NoSQL databases, including Postgres and Cassandra/MongoDB.
- Understanding of data pipeline and workflow management tools: Luigi, Airflow, etc.
- Good-to-have cloud skill sets (Azure/AWS/GCP).
- 5+ years of advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases: Postgres/SQL/Mongo.
- 2+ years of working knowledge in Data Strategy, Data Governance, MDM, etc.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable big data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 week ago
5.0 - 8.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Short Description
Seeking a skilled and experienced IBM Maximo Manage Specialist to support and enhance enterprise asset and configuration management operations. The ideal candidate will have a strong background in IT operations, IT Asset Management (ITAM), and system administration, with hands-on experience in configuring and managing IBM Maximo environments.
Key Responsibilities:
- Configure, customize, and maintain IBM Maximo Manage to support the IT asset lifecycle and operational workflows.
- Implement and manage ITAM processes including asset discovery, tracking, and compliance.
- Develop and maintain configuration management databases (CMDB) and ensure data integrity.
- Collaborate with IT operations, infrastructure, and service management teams to align Maximo with business needs.
- Perform system administration tasks including user management, security roles, and workflow automation.
- Troubleshoot and resolve technical issues related to Maximo performance, integrations, and upgrades.
- Generate reports and dashboards for asset performance, lifecycle status, and compliance metrics.
Required Skills:
- 6+ years of experience with IBM Maximo Manage in an enterprise IT environment.
- Strong understanding of IT Asset Management (ITAM) and Configuration Management principles.
- Experience with Maximo Application Designer, Database Configuration, and Automation Scripts.
- Proficiency in SQL, Java, or Python for Maximo customization and reporting.
- Familiarity with ITIL processes, especially Change, Incident, and Problem Management.
- Knowledge of system administration, including patching, upgrades, and performance tuning.
Preferred Skills:
- Experience integrating Maximo with ERP systems, discovery tools, or CMDB platforms.
- Understanding of cloud-based Maximo deployments (e.g., IBM Maximo Application Suite on AWS/Azure).
- Exposure to mobile asset management and IoT integrations.
- Certification in IBM Maximo, ITIL, or ServiceNow is a plus.
Qualifications:
- Bachelor's degree in Information Technology, Computer Science, or a related field.
- Relevant certifications or training in IBM Maximo or ITAM tools preferred.
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and documentation abilities.
- Ability to work independently and collaboratively in a fast-paced environment.
- Detail-oriented with a focus on process improvement and operational efficiency.
Posted 1 week ago
5.0 - 9.0 years
8 - 12 Lacs
Bengaluru
Work from Office
About The Role
- 5+ years of experience with DevOps tools and container orchestration platforms like Docker and Kubernetes.
- Experienced in Linux environments, Jenkins, and CI/CD integrated pipelines with Python scripting concepts.
- Experience working with GitHub repositories, managing pull requests, branching strategies, GitHub Enterprise, and automation using GitHub APIs or the GitHub CLI.
Primary Skills
DevOps tools, CI/CD integrated pipelines, Docker, Kubernetes, Linux environments, Python scripting, and Jenkins.
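The branching-strategy automation mentioned above often boils down to small scripted checks run in CI. A minimal sketch of one such check, enforcing a branch-naming convention (the allowed prefixes here are an assumed convention, not from the role description):

```python
import re

# Hypothetical branch-naming rule a CI job might enforce:
# branches must look like feature/..., bugfix/..., hotfix/..., or release/...
BRANCH_PATTERN = re.compile(r"^(feature|bugfix|hotfix|release)/[a-z0-9._-]+$")

def branch_is_valid(name: str) -> bool:
    """Return True if the branch name follows the assumed convention."""
    return bool(BRANCH_PATTERN.match(name))
```

In practice such a check would run against the branch name the CI system exposes, failing the build on a mismatch.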
Posted 1 week ago
3.0 - 8.0 years
0 - 2 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY Kubernetes Engineer - Build bulletproof infrastructure for regulated industries
At Ajmera Infotech, we're building planet-scale software for NYSE-listed clients with a 120+ strong engineering team. Our work powers mission-critical systems in HIPAA, FDA, and SOC 2-compliant domains where failure is not an option.
Why You’ll Love It
- Own production-grade Kubernetes deployments at real scale
- Drive TDD-first DevOps in CI/CD environments
- Work in a compliance-first org (HIPAA, FDA, SOC 2) with code-first values
- Collaborate with top-tier engineers in multi-cloud deployments
- Career growth via mentorship, deep-tech projects, and leadership tracks
Requirements
Key Responsibilities
- Design, deploy, and manage resilient Kubernetes clusters (k8s/k3s)
- Automate workload orchestration using Ansible or custom scripting
- Integrate Kubernetes deeply into CI/CD pipelines
- Tune infrastructure for performance, scalability, and regulatory reliability
- Support secure multi-tenant environments and compliance needs (e.g., HIPAA/FDA)
Must-Have Skills
- 3-8 years of hands-on experience in production Kubernetes environments
- Expert-level knowledge of containerization with Docker
- Proven experience with CI/CD integration for k8s
- Automation via Ansible, shell scripting, or similar tools
- Infrastructure performance tuning within Kubernetes clusters
Nice-to-Have Skills
- Multi-cloud cluster management (AWS/GCP/Azure)
- Helm, ArgoCD, or Flux for deployment and GitOps
- Service mesh, ingress controllers, and pod security policies
Benefits
- Competitive salary package with performance-based bonuses.
- Comprehensive health insurance for you and your family.
- Flexible working hours and generous paid leave.
- High-end workstations and access to our in-house device lab.
- Sponsored learning: certifications, workshops, and tech conferences.
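Integrating Kubernetes into CI/CD pipelines, as the role describes, frequently means generating Deployment manifests from pipeline variables. A hedged sketch of that idea: the field names follow the `apps/v1` Deployment schema, but the app name, image, and resource figures below are invented examples.

```python
# Build a Kubernetes Deployment manifest as a plain dict, e.g. from a
# CI/CD step that later serializes it to YAML and applies it.

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # Selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        # Illustrative requests/limits for performance tuning.
                        "resources": {
                            "requests": {"cpu": "100m", "memory": "128Mi"},
                            "limits": {"cpu": "500m", "memory": "256Mi"},
                        },
                    }],
                },
            },
        },
    }

manifest = deployment_manifest("payments-api", "registry.example.com/payments:1.4.2", replicas=3)
```

Keeping the selector and template labels derived from one variable avoids the common mismatch that makes a Deployment silently manage zero pods.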
Posted 1 week ago
3.0 - 8.0 years
0 - 2 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY CI/CD Pipeline Engineer - Build mission-critical release pipelines for regulated industries (On-site only)
At Ajmera Infotech, we engineer planet-scale software with a 120-strong dev team powering NYSE-listed clients. From HIPAA-grade healthcare systems to FDA-audited workflows, our code runs where failure isn't an option.
Why You’ll Love It
- TDD/BDD culture: we build defensible code from day one
- Code-first pipelines: GitHub Actions, Octopus, IaC principles
- Mentorship-driven growth: senior engineers help level you up
- End-to-end ownership: deploy what you build
- Audit-readiness baked in: work in HIPAA, FDA, SOC 2 landscapes
- Cross-platform muscle: deploy to Linux, macOS, Windows
Requirements
Key Responsibilities
- Design and maintain CI pipelines using GitHub Actions (or Jenkins/Bamboo)
- Own build and release automation across dev, staging, and prod
- Integrate with Octopus Deploy (or equivalent) for continuous delivery
- Configure pipelines for multi-platform environments
- Build compliance-resilient workflows (SOC 2, HIPAA, FDA)
- Manage source control (Git), Jira, Confluence, and build APIs
- Implement advanced deployment strategies: canary, blue-green, rollback
Must-Have Skills
- CI expertise: GitHub Actions, Jenkins, or Bamboo
- Deep understanding of build/release pipelines
- Cross-platform deployment: Linux, macOS, Windows
- Experience with compliance-first CI/CD practices
- Proficiency with Git, Jira, Confluence, API integrations
Nice-to-Have Skills
- Octopus Deploy or similar CD tools
- Experience with containerized multi-stage pipelines
- Familiarity with feature flagging, canary releases, rollback tactics
Benefits
- Competitive salary package with performance-based bonuses.
- Comprehensive health insurance for you and your family.
- Flexible working hours and generous paid leave.
- High-end workstations and access to our in-house device lab.
- Sponsored learning: certifications, workshops, and tech conferences.
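The canary and rollback strategies the role calls for can be reduced to two small decisions: how to step traffic toward the new version, and when to abort. A toy sketch of both, with all percentages and thresholds invented for illustration:

```python
# Illustrative canary rollout helpers a release pipeline might call
# between promotion steps. Step sizes and tolerance are assumptions.

CANARY_STEPS = [5, 25, 50, 100]  # percent of traffic on the new version

def next_traffic_share(current: int) -> int:
    """Return the next canary traffic percentage; hold at 100 once fully rolled out."""
    for step in CANARY_STEPS:
        if step > current:
            return step
    return 100

def should_rollback(canary_error_rate: float, baseline_error_rate: float,
                    tolerance: float = 0.01) -> bool:
    """Abort if the canary's error rate exceeds the baseline by more than the tolerance."""
    return canary_error_rate > baseline_error_rate + tolerance
```

A pipeline would loop: promote to `next_traffic_share`, observe metrics for a bake period, and either continue or trigger rollback when `should_rollback` fires.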
Posted 1 week ago
3.0 - 8.0 years
0 - 2 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY Sr. Cloud Infrastructure Engineer - Build the Backbone of Mission-Critical Software (On-site only)
Ajmera Infotech is a planet-scale engineering firm powering NYSE-listed clients with a 120+ strong team of elite developers. We build fail-safe, compliant software systems that cannot go down, and now we’re hiring a senior cloud engineer to help scale our infrastructure to the next level.
Why You’ll Love It
- Terraform everything: zero-click, GitOps-driven provisioning pipelines
- Hardcore compliance: build infrastructure aligned with HIPAA, FDA, and SOC 2
- Infra across OSes: automate for Linux, macOS, and Windows environments
- Own secrets & state: use Vault, Packer, and Consul like a HashiCorp champion
- Team of pros: collaborate with engineers who write tests before code
- Dev-first culture: code reviews, mentorship, and CI/CD by default
- Real-world scale: Azure-first systems powering critical applications
Requirements
Key Responsibilities
- Design and automate infrastructure as code using Terraform, Ansible, and GitOps
- Implement secure secret management via HashiCorp Vault
- Build CI/CD-integrated infra automation across hybrid environments
- Develop scripts and tooling in PowerShell, Bash, and Python
- Manage cloud infrastructure primarily on Azure, with exposure to AWS
- Optimize for performance, cost, and compliance at every layer
- Support infrastructure deployments using containerization tools (e.g., Docker, Kubernetes)
Must-Have Skills
- 3-8 years in infrastructure automation and cloud engineering
- Deep expertise in Terraform (provisioning, state management)
- Hands-on with HashiCorp Vault, Packer, and Consul
- Strong Azure experience
- Proficiency with Ansible and GitOps workflows
- Cross-platform automation: Linux, macOS, Windows
- CI/CD knowledge for infra pipelines
- REST API usage for automation tasks
- PowerShell, Python, and Bash scripting
Nice-to-Have Skills
- AWS exposure
- Cost-performance optimization experience in cloud environments
- Containerization for infra deployments (Docker, Kubernetes)
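The Terraform skills listed above center on one idea: diff desired state against recorded state and plan the changes. A deliberately tiny sketch of that reconcile step (the resource names and attributes are invented; real Terraform plans are far richer):

```python
# Toy version of the plan step at the heart of infrastructure-as-code:
# compare desired vs. current state and bucket resources into
# create / update / delete actions.

def plan(desired: dict, current: dict) -> dict:
    create = sorted(set(desired) - set(current))
    delete = sorted(set(current) - set(desired))
    update = sorted(k for k in set(desired) & set(current)
                    if desired[k] != current[k])
    return {"create": create, "update": update, "delete": delete}

# Hypothetical Azure-ish resources for illustration.
desired = {"vnet-main": {"cidr": "10.0.0.0/16"}, "kv-secrets": {"sku": "standard"}}
current = {"vnet-main": {"cidr": "10.1.0.0/16"}, "st-logs": {"tier": "Hot"}}
changes = plan(desired, current)
```

This is why state management matters so much in Terraform: if `current` drifts from reality, the plan is wrong in both directions.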
Benefits
What We Offer
- Competitive salary package with performance-based bonuses.
- Comprehensive health insurance for you and your family.
- Flexible working hours and generous paid leave.
- High-end workstations and access to our in-house device lab.
- Sponsored learning: certifications, workshops, and tech conferences.
Posted 1 week ago
5.0 - 7.0 years
7 - 9 Lacs
Thane
Work from Office
The candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The candidate must be able to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.
Associate Process Manager Roles and responsibilities:
Leadership and Mentorship
- Team Leadership: Lead and mentor a team of data scientists and analysts, guiding them in best practices, advanced methodologies, and career development.
- Project Management: Oversee multiple analytics projects, ensuring they are completed on time, within scope, and deliver impactful results.
- Innovation and Continuous Learning: Stay at the forefront of industry trends, new technologies, and methodologies, fostering a culture of innovation within the team.
Collaboration with Cross-Functional Teams
- Stakeholder Engagement: Work closely with key account managers, data analysts, and other stakeholders to understand their needs and translate them into data-driven solutions.
- Communication of Insights: Present complex analytical findings clearly and actionably to non-technical stakeholders, helping guide strategic business decisions.
Advanced Data Analysis and Modeling
- Develop Predictive Models: Create and validate complex predictive models for risk assessment, portfolio optimization, fraud detection, and market forecasting.
- Quantitative Research: Conduct in-depth quantitative research to identify trends, patterns, and relationships within large financial datasets.
- Statistical Analysis: Apply advanced statistical techniques to assess investment performance, asset pricing, and financial risk.
Business Impact and ROI
- Performance Metrics: Define and track key performance indicators (KPIs) to measure the effectiveness of analytics solutions and their impact on the firm's financial performance.
- Cost-Benefit Analysis: Perform cost-benefit analyses to prioritize analytics initiatives that offer the highest return on investment (ROI).
Algorithmic Trading and Automation
- Algorithm Development: Develop and refine trading algorithms that automate decision-making processes, leveraging machine learning and AI techniques.
- Backtesting and Simulation: Conduct rigorous backtesting and simulations of trading strategies to evaluate their performance under different market conditions.
- Advanced Statistical Techniques: Expertise in statistical methods such as regression analysis, time-series forecasting, and hypothesis testing.
- Machine Learning and AI: Proficiency in machine learning algorithms and experience with AI techniques, particularly in the context of predictive modeling, anomaly detection, and natural language processing (NLP).
- Programming Languages: Strong coding skills in languages like Python, commonly used for data analysis, modeling, and automation.
- Data Management: Experience with big data technologies and relational databases to handle and manipulate large datasets.
- Data Visualization: Proficiency in creating insightful visualizations that effectively communicate complex data findings to stakeholders.
- Cloud Computing: Familiarity with cloud platforms like AWS, Azure, or Google Cloud for deploying scalable data solutions.
- Quantitative Analysis: Deep understanding of quantitative finance, including concepts like pricing models, portfolio theory, and risk metrics.
- Algorithmic Trading: Experience in developing and backtesting trading algorithms using quantitative models and data-driven strategies.
Technical and Functional Skills:
- Bachelor's degree in a related field, such as computer science, data science, or statistics.
- Proven experience of 5 to 7 years in programming languages, machine learning, data visualization, and statistical analysis.
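The backtesting responsibility described in this posting can be illustrated with a minimal example: run a moving-average crossover rule over a fixed price series and measure the resulting growth. The price series, window sizes, and zero-cost assumptions below are all made up for illustration; a production backtest would handle costs, slippage, and look-ahead bias.

```python
# Minimal long/flat moving-average crossover backtest on synthetic prices.

def moving_average(prices, window, end):
    """Average of the `window` prices strictly before index `end`."""
    return sum(prices[end - window:end]) / window

def backtest(prices, fast=2, slow=4):
    """Go long when the fast MA is above the slow MA, else flat.
    Returns the growth factor of starting equity (1.0 = flat)."""
    equity, position = 1.0, 0  # 0 = flat, 1 = long
    for t in range(slow, len(prices)):
        if position:  # apply the day's return if we held overnight
            equity *= prices[t] / prices[t - 1]
        # Decide tomorrow's position from data available through today.
        position = 1 if moving_average(prices, fast, t) > moving_average(prices, slow, t) else 0
    return equity

prices = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110]
growth = backtest(prices)
```

Note the signal at time `t` uses only prices before `t`; computing signals on same-day closes is the classic look-ahead bug this sketch avoids.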
Posted 1 week ago
5.0 - 7.0 years
7 - 9 Lacs
Dombivli
Work from Office
The candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The candidate must be able to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.
Associate Process Manager Roles and responsibilities:
Leadership and Mentorship
- Team Leadership: Lead and mentor a team of data scientists and analysts, guiding them in best practices, advanced methodologies, and career development.
- Project Management: Oversee multiple analytics projects, ensuring they are completed on time, within scope, and deliver impactful results.
- Innovation and Continuous Learning: Stay at the forefront of industry trends, new technologies, and methodologies, fostering a culture of innovation within the team.
Collaboration with Cross-Functional Teams
- Stakeholder Engagement: Work closely with key account managers, data analysts, and other stakeholders to understand their needs and translate them into data-driven solutions.
- Communication of Insights: Present complex analytical findings clearly and actionably to non-technical stakeholders, helping guide strategic business decisions.
Advanced Data Analysis and Modeling
- Develop Predictive Models: Create and validate complex predictive models for risk assessment, portfolio optimization, fraud detection, and market forecasting.
- Quantitative Research: Conduct in-depth quantitative research to identify trends, patterns, and relationships within large financial datasets.
- Statistical Analysis: Apply advanced statistical techniques to assess investment performance, asset pricing, and financial risk.
Business Impact and ROI
- Performance Metrics: Define and track key performance indicators (KPIs) to measure the effectiveness of analytics solutions and their impact on the firm's financial performance.
- Cost-Benefit Analysis: Perform cost-benefit analyses to prioritize analytics initiatives that offer the highest return on investment (ROI).
Algorithmic Trading and Automation
- Algorithm Development: Develop and refine trading algorithms that automate decision-making processes, leveraging machine learning and AI techniques.
- Backtesting and Simulation: Conduct rigorous backtesting and simulations of trading strategies to evaluate their performance under different market conditions.
- Advanced Statistical Techniques: Expertise in statistical methods such as regression analysis, time-series forecasting, and hypothesis testing.
- Machine Learning and AI: Proficiency in machine learning algorithms and experience with AI techniques, particularly in the context of predictive modeling, anomaly detection, and natural language processing (NLP).
- Programming Languages: Strong coding skills in languages like Python, commonly used for data analysis, modeling, and automation.
- Data Management: Experience with big data technologies and relational databases to handle and manipulate large datasets.
- Data Visualization: Proficiency in creating insightful visualizations that effectively communicate complex data findings to stakeholders.
- Cloud Computing: Familiarity with cloud platforms like AWS, Azure, or Google Cloud for deploying scalable data solutions.
- Quantitative Analysis: Deep understanding of quantitative finance, including concepts like pricing models, portfolio theory, and risk metrics.
- Algorithmic Trading: Experience in developing and backtesting trading algorithms using quantitative models and data-driven strategies.
Technical and Functional Skills:
- Bachelor's degree in a related field, such as computer science, data science, or statistics.
- Proven experience of 5 to 7 years in programming languages, machine learning, data visualization, and statistical analysis.
Posted 1 week ago
5.0 - 7.0 years
7 - 9 Lacs
Chennai
Work from Office
The candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The candidate must be able to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.
Associate Process Manager Roles and responsibilities:
Leadership and Mentorship
- Team Leadership: Lead and mentor a team of data scientists and analysts, guiding them in best practices, advanced methodologies, and career development.
- Project Management: Oversee multiple analytics projects, ensuring they are completed on time, within scope, and deliver impactful results.
- Innovation and Continuous Learning: Stay at the forefront of industry trends, new technologies, and methodologies, fostering a culture of innovation within the team.
Collaboration with Cross-Functional Teams
- Stakeholder Engagement: Work closely with key account managers, data analysts, and other stakeholders to understand their needs and translate them into data-driven solutions.
- Communication of Insights: Present complex analytical findings clearly and actionably to non-technical stakeholders, helping guide strategic business decisions.
Advanced Data Analysis and Modeling
- Develop Predictive Models: Create and validate complex predictive models for risk assessment, portfolio optimization, fraud detection, and market forecasting.
- Quantitative Research: Conduct in-depth quantitative research to identify trends, patterns, and relationships within large financial datasets.
- Statistical Analysis: Apply advanced statistical techniques to assess investment performance, asset pricing, and financial risk.
Business Impact and ROI
- Performance Metrics: Define and track key performance indicators (KPIs) to measure the effectiveness of analytics solutions and their impact on the firm's financial performance.
- Cost-Benefit Analysis: Perform cost-benefit analyses to prioritize analytics initiatives that offer the highest return on investment (ROI).
Algorithmic Trading and Automation
- Algorithm Development: Develop and refine trading algorithms that automate decision-making processes, leveraging machine learning and AI techniques.
- Backtesting and Simulation: Conduct rigorous backtesting and simulations of trading strategies to evaluate their performance under different market conditions.
- Advanced Statistical Techniques: Expertise in statistical methods such as regression analysis, time-series forecasting, and hypothesis testing.
- Machine Learning and AI: Proficiency in machine learning algorithms and experience with AI techniques, particularly in the context of predictive modeling, anomaly detection, and natural language processing (NLP).
- Programming Languages: Strong coding skills in languages like Python, commonly used for data analysis, modeling, and automation.
- Data Management: Experience with big data technologies and relational databases to handle and manipulate large datasets.
- Data Visualization: Proficiency in creating insightful visualizations that effectively communicate complex data findings to stakeholders.
- Cloud Computing: Familiarity with cloud platforms like AWS, Azure, or Google Cloud for deploying scalable data solutions.
- Quantitative Analysis: Deep understanding of quantitative finance, including concepts like pricing models, portfolio theory, and risk metrics.
- Algorithmic Trading: Experience in developing and backtesting trading algorithms using quantitative models and data-driven strategies.
Technical and Functional Skills:
- Bachelor's degree in a related field, such as computer science, data science, or statistics.
- Proven experience of 5 to 7 years in programming languages, machine learning, data visualization, and statistical analysis.
Posted 1 week ago
5.0 - 7.0 years
7 - 9 Lacs
Panvel
Work from Office
The candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The candidate must be able to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.
Associate Process Manager Roles and responsibilities:
Leadership and Mentorship
- Team Leadership: Lead and mentor a team of data scientists and analysts, guiding them in best practices, advanced methodologies, and career development.
- Project Management: Oversee multiple analytics projects, ensuring they are completed on time, within scope, and deliver impactful results.
- Innovation and Continuous Learning: Stay at the forefront of industry trends, new technologies, and methodologies, fostering a culture of innovation within the team.
Collaboration with Cross-Functional Teams
- Stakeholder Engagement: Work closely with key account managers, data analysts, and other stakeholders to understand their needs and translate them into data-driven solutions.
- Communication of Insights: Present complex analytical findings clearly and actionably to non-technical stakeholders, helping guide strategic business decisions.
Advanced Data Analysis and Modeling
Develop Predictive Models: Create and validate complex predictive models for risk assessment, portfolio optimization, fraud detection, and market forecasting.
Quantitative Research: Conduct in-depth quantitative research to identify trends, patterns, and relationships within large financial datasets.
Statistical Analysis: Apply advanced statistical techniques to assess investment performance, asset pricing, and financial risk.
Business Impact and ROI
Performance Metrics: Define and track key performance indicators (KPIs) to measure the effectiveness of analytics solutions and their impact on the firm's financial performance.
Cost-Benefit Analysis: Perform cost-benefit analyses to prioritize analytics initiatives that offer the highest return on investment (ROI).
Algorithmic Trading and Automation
Algorithm Development: Develop and refine trading algorithms that automate decision-making processes, leveraging machine learning and AI techniques.
Backtesting and Simulation: Conduct rigorous backtesting and simulations of trading strategies to evaluate their performance under different market conditions.
Advanced Statistical Techniques: Expertise in statistical methods such as regression analysis, time-series forecasting, and hypothesis testing.
Machine Learning and AI: Proficiency in machine learning algorithms and experience with AI techniques, particularly in the context of predictive modeling, anomaly detection, and natural language processing (NLP).
Programming Languages: Strong coding skills in languages like Python, commonly used for data analysis, modeling, and automation.
Data Management: Experience with big data technologies and relational databases to handle and manipulate large datasets.
Data Visualization: Proficiency in creating insightful visualizations that effectively communicate complex data findings to stakeholders.
Cloud Computing: Familiarity with cloud platforms like AWS, Azure, or Google Cloud for deploying scalable data solutions.
Quantitative Analysis: Deep understanding of quantitative finance, including concepts like pricing models, portfolio theory, and risk metrics.
Algorithmic Trading: Experience in developing and backtesting trading algorithms using quantitative models and data-driven strategies.
Technical and Functional Skills:
Bachelor's degree in a related field, such as computer science, data science, or statistics.
Proven experience of 5 to 7 years in programming languages, machine learning, data visualization, and statistical analysis.
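The algorithmic-trading duties above centre on backtesting strategies against historical data. As a purely illustrative sketch (the crossover strategy, function names, and toy price series below are hypothetical, not taken from the posting), a minimal moving-average crossover backtest might look like:

```python
# Illustrative sketch only: a minimal moving-average crossover backtest.
# The strategy, helper names, and toy prices are hypothetical examples,
# not part of the role description above.

def sma(prices, window, i):
    """Simple moving average of the `window` prices ending at index i."""
    return sum(prices[i - window + 1 : i + 1]) / window

def backtest(prices, short=3, long=5):
    """Hold the asset while the short SMA is above the long SMA.

    Signals are computed on data up to i-1 (no lookahead).
    Returns the strategy's cumulative return.
    """
    equity = 1.0
    for i in range(long, len(prices)):
        if sma(prices, short, i - 1) > sma(prices, long, i - 1):
            equity *= prices[i] / prices[i - 1]  # long for one period
    return equity - 1.0

prices = [100, 101, 102, 103, 104, 105, 104, 103, 102, 101]
print(round(backtest(prices), 4))  # → -0.0192
```

A production backtest would additionally model transaction costs, slippage, and position sizing; this sketch only shows the core simulation loop.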
Posted 1 week ago
4.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Overview
This is an opportunity to join a growing team in a dynamic new business opportunity within Zebra. Zebra Technologies, located in Bangalore, revolutionizes the way work is done in large warehouses and distribution centers with innovative robotic solutions. Our team members include robotics and software experts with multidisciplinary backgrounds. We are seeking software engineers to join the Zebra team! As the key backend developer, you will interpret the business requirements provided by the product owner and translate them into a design, an application, and a supporting DB schema. Careful design with solid implementations will be key to success in this role.
Position Description:
Zebra is looking for passionate, talented software engineers to join the tight-knit team that designs and builds our cloud robotics platform, which powers fleets of robots deployed at our customer sites. This is a remarkable greenfield opportunity. You will partner closely with the cloud software, infrastructure, and robotics teams.
Responsibilities
Design, develop, and deploy core components of our next-generation cloud robotics platform.
Work closely with the product and other engineering teams to envision and drive the technical roadmap for the cloud robotics platform.
Influence the larger team's technical and collaborative culture and processes by growing and mentoring other engineers.
Work with business analysts to understand our customers' business processing needs.
Work with the DevOps teams to ensure smooth, automated testing and deployment for the application.
Qualifications:
Preferred Education: Bachelor's degree.
3-6 years of relevant work experience in server-side and/or data engineering development.
Outstanding abstraction skills to decouple complex problems into simple concepts.
Software development using programming languages such as Go and Python.
Strong knowledge of at least one cloud technology: AWS, Google Cloud, or Azure.
Experience with automated testing, unit testing, deployment pipelines, and cloud-based infrastructure.
Experience with Docker and Kubernetes.
Strong understanding of databases, NoSQL data stores, storage, and distributed persistence technologies.
Experience with cloud-based architectures: SaaS, microservices.
Experience with design principles, patterns, and system design.
Proficient understanding of code versioning tools, such as Git.
Strong knowledge of data structures, algorithms, and problem-solving.
Passionate about enabling next-generation experiences.
Good analytical, problem-solving, and debugging skills.
Experience in greenfield architecture and well-crafted, elegant systems.
Demonstrated ability to find the best solution for the problem at hand.
Highly collaborative and open working style.
Desired:
Good understanding of REST, gRPC, and GraphQL.
Experience with robotics and embedded systems development.
Experience with 3rd party data platforms (e.g., Snowflake, Spark, Databricks).
Experience integrating with 3rd party monitoring tools (e.g., Prometheus, Kibana, Grafana).
Experience working at the deployment and infrastructure level.
What We Offer:
Competitive salary.
Zebra Incentive Program (annual performance bonus).
Zebra's GEM appreciation/recognition program.
Flexible time off - work hard & play hard.
Awesome company culture and the ability to work with robots!
Zebra's culture is encouraging and collaborative, and our employees are encouraged to learn and grow together. As we celebrate five decades of success, this is a phenomenal time to join us and make your mark on the sixth. We are excited to hear from you!
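The server-side skills listed above (REST services running in a cloud microservices setup) can be illustrated with a stdlib-only sketch. The route, handler name, and the robots_online field are invented for illustration and are not part of Zebra's actual platform:

```python
# Hypothetical sketch: a tiny REST-style health endpoint of the kind a
# cloud robotics backend might expose. Route and payload are made up.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class FleetHandler(BaseHTTPRequestHandler):
    """Toy handler for an imaginary robot-fleet service."""

    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "ok", "robots_online": 12}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging in this sketch

def make_server(port=0):
    """Bind to an ephemeral port by default; call serve_forever() to run."""
    return HTTPServer(("127.0.0.1", port), FleetHandler)
```

In a real deployment this logic would sit behind a framework and a container image rather than the stdlib server; the sketch only shows the request/response shape.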
Posted 1 week ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Roles and Responsibilities
Design, develop, and implement scalable Kafka infrastructure solutions.
Collaborate with cross-functional teams to identify and prioritize project requirements.
Develop and maintain technical documentation for Kafka infrastructure projects.
Troubleshoot and resolve complex issues related to Kafka infrastructure.
Ensure compliance with industry standards and best practices for Kafka infrastructure.
Participate in code reviews and contribute to improving overall code quality.
Job Requirements
Strong understanding of Kafka architecture and design principles.
Experience with Kafka tools such as Kafka Streams, KSQL, and SCADA.
Proficiency in programming languages such as Java, Python, or Scala.
Excellent problem-solving skills and attention to detail.
Ability to work collaboratively in a team environment.
Strong communication and interpersonal skills.
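A core piece of the Kafka architecture knowledge asked for above is how producers map record keys to partitions, which is what gives Kafka per-key ordering. The toy sketch below mimics that behaviour with stdlib hashing only; real Kafka clients use the murmur2 hash and an actual broker, so everything here is purely illustrative:

```python
# Toy sketch of Kafka-style key-based partitioning. Illustrative only:
# real producers use a client library and murmur2, not sha256.
import hashlib

NUM_PARTITIONS = 3

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key to a partition index, deterministically."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# partition index -> ordered list of (key, value) records
log = {p: [] for p in range(NUM_PARTITIONS)}
for key, value in [("order-1", "created"), ("order-1", "paid"),
                   ("order-2", "created")]:
    log[partition_for(key)].append((key, value))

# All records sharing a key land in the same partition, so a consumer
# of that partition sees them in production order.
```

This is the property that makes per-entity event streams (e.g. all events for one order) safe to process sequentially.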
Posted 1 week ago
6.0 - 11.0 years
0 Lacs
Pune, Chennai, Bengaluru
Work from Office
Job Description
1. Role: Digital Workplace (SD & EUC) Automation Solution Architect
2. Required Technical Skill Set:
Sound knowledge of Service Desk operation processes, tools, and analytics.
Extensive knowledge of digital Service Desk automation solutions using market-standard virtual agent platforms (ServiceNow VA, Azure Bot, Kore.AI, Amelia, etc.), ServiceNow Orchestrator and ITSM, Avaya, AWS Connect, and digital workplace automation tools & platforms (Nexthink, Nanoheal, etc.).
Sound knowledge of Workplace or EUC processes, tools, and analytics; remote support; desktop engineering; field support; touch services; messaging & collaboration.
Digital workplace automation tools & platforms (ignio DWS, Nexthink, Nanoheal, SysTrack, etc.).
Strong experience with at least one orchestration & automation platform (ServiceNow Orchestrator, Ansible, etc.).
3. Good-to-Have Technical Skill Set:
Exposure and experience with Aternity, 1E Tachyon.
Exposure to ITSM tools (e.g., ServiceNow, BMC Remedy).
4. Desired Experience Range: 5 to 15 years
5. Location of Requirement: PAN India
Desired Competencies (Technical/Behavioral Competency)
Must-Have:
5+ years of experience in Workplace (EUC) & SD operations, or 5+ years of experience with automation solutions in Service Desk/wider IT Ops automation with exposure to Workplace (EUC) & IT operations.
At least one implementation of a workplace (EUC) automation solution and/or Service Desk automation solution.
Implementation experience with APIs (REST / web service / SOAP / XML over HTTPS) for integration with monitoring tools and ITSM (ServiceNow/Cherwell/Remedy).
Experience working with one virtual agent platform (Azure based, AWS based, ServiceNow VA).
Excellent presentation skills and the ability to articulate an automation solution for SD and EUC automation deals in a simplified manner understood by various levels of customer stakeholders.
Good-to-Have:
Knowledge of chat/voice bot integration
Experience with bots (e.g., RASA, Kore.AI).
Ability to spot and mine automation intervention opportunities in various sub-processes.
Good experience with MS Excel, PowerPoint, and Word.
Responsibilities of / Expectations from the Role:
1. Author automation solutions for SD and EUC automation deals.
2. Position and defend TCS IT Ops Automation offerings with internal stakeholders (Industry Vertical team, IT Operation Services team).
3. Defend/support the defence (including presentations and demos) of automation solutions with customers.
4. Collaborate with various offerings teams within TCS to incorporate newer and up-to-date offerings and case studies in IT operation automation solution proposals.
5. Support Geo Sales & Solution teams with customer meeting preparation.
6. Develop and share reusable solution assets/frameworks with the rest of the solution team.
7. Liaise with automation, AI, and workplace automation product vendors for product solution collateral and pricing.
8. Train, coach, and guide generalist members of the team on the latest workplace automation platforms, chat/voice bots, and contact centre products.
9. Create and present POVs on workplace automation, virtual agent, and contact centre automation solutions for wider organization and industry consumption.
10. Co-lead requirements-gathering workshops with internal business stakeholders to provide technical guidance and assist with capturing all business, functional, and technical requirements necessary to design and implement the solution.
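The ITSM integration experience described above typically comes down to REST calls against tools like ServiceNow. The sketch below assembles (but deliberately does not send) such a request; the instance URL, token, and field values are placeholders, and the /api/now/table/incident path follows ServiceNow's Table API convention:

```python
# Hypothetical sketch of an ITSM REST integration. The instance URL,
# token, and field values are placeholders, not real credentials.
import json
import urllib.request

def build_incident(short_description, caller, urgency=2):
    """Assemble a minimal incident payload for a Table API POST."""
    return {
        "short_description": short_description,
        "caller_id": caller,
        "urgency": urgency,
    }

def post_incident(instance_url, payload, token):
    """Prepare the POST request; the caller decides when to send it."""
    return urllib.request.Request(
        f"{instance_url}/api/now/table/incident",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

payload = build_incident("Laptop will not boot", "jdoe")
req = post_incident("https://example.service-now.com", payload, "<token>")
# Against a real instance: urllib.request.urlopen(req)
```

Orchestration platforms such as ServiceNow Orchestrator or Ansible wrap calls like this in workflow steps; the sketch only shows the raw request shape.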
Posted 1 week ago
2.0 - 6.0 years
4 - 5 Lacs
Hyderabad
Work from Office
NTT DATA is looking for an End User Computing Sr. Associate to join our dynamic team and embark on a rewarding career journey.
Processing requisitions and other business forms, checking account balances, and approving purchases.
Advising other departments on best practices related to fiscal procedures.
Managing account records, issuing invoices, and handling payments.
Collaborating with internal departments to reconcile any accounting discrepancies.
Analyzing financial data and assisting with audits, reviews, and tax preparations.
Updating financial spreadsheets and reports with the latest available data.
Preparing operating budgets, financial statements, and reports.
Reviewing existing financial policies and procedures to ensure regulatory compliance.
Providing assistance with payroll administration.
Keeping records and documenting financial processes.
Posted 1 week ago