9.0 - 14.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Lead Software Engineer

We are seeking a highly motivated and experienced Lead Software Engineer to join our dynamic team. As a key member of our team, you will contribute to the development of innovative, cutting-edge solutions, collaborate with cross-functional teams, and drive the delivery of new products and features that meet our customers' needs.

About the Role:
- Work closely with business partners and stakeholders to identify requirements and prioritize new enhancements and features
- Collaborate with software engineers, architects, technical management, and business partners across geographical and organizational boundaries
- Assist in setting architecture direction and finalize designs with Architects
- Break down deliverables into meaningful stories for the development team
- Provide technical leadership, mentoring, and coaching to software or systems engineering teams
- Share knowledge and best practices on using new and emerging technologies
- Provide technical support to operations or other development teams by troubleshooting, debugging, and solving critical issues
- Interpret code and solve problems based on existing standards
- Create and maintain technical documentation related to assigned components

About You:
To be successful in this role, you should have:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or equivalent experience
- 9+ years of professional software development experience
- Strong technical skills, including:
  - Python programming (3+ years)
  - AWS experience with EKS/Kubernetes
  - Experience with LLMs, AI solutions, and evaluation
  - Understanding of agentic systems and workflows
  - Experience with retrieval systems leveraging tools like OpenSearch
  - Experience with event-driven/asynchronous programming
  - Experience with high-concurrency systems
  - Experience with CI/CD using GitHub Actions and AWS services (CodePipeline/CodeBuild)
  - Strong understanding of microservices and RESTful APIs
  - FastAPI
  - Celery
  - Data engineering background
  - Experience with AWS services (Redis, DynamoDB, S3, SQS, Kinesis, KMS, IAM, Secrets Manager, etc.)
  - Performance optimization and security practices
- Self-driven with the ability to work with minimal direction
- Strong context-switching abilities
- Problem-solving mindset
- Clear communication
- Strong documentation habits
- Attention to detail

#LI-BD1

What's in it For You?
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensure you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry-Competitive Benefits: We offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute.
We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.

Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals.
To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
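The requirements above name event-driven/asynchronous programming and high-concurrency systems in Python. A minimal standard-library sketch of the bounded-concurrency pattern such roles involve; the names `fetch` and `process_all` and the workload are illustrative assumptions, not from the posting:

```python
import asyncio

async def fetch(item: int, sem: asyncio.Semaphore) -> int:
    # Bound concurrency so a burst of work cannot exhaust connections or memory.
    async with sem:
        await asyncio.sleep(0)  # stand-in for an awaited I/O call (HTTP, DB, queue)
        return item * 2

async def process_all(items, max_concurrency: int = 8):
    # A semaphore caps how many coroutines are inside fetch() at once,
    # while gather() preserves the input order of results.
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(*(fetch(i, sem) for i in items))

results = asyncio.run(process_all(range(5)))
print(results)  # [0, 2, 4, 6, 8]
```

The same semaphore-plus-gather shape scales to thousands of tasks, which is why it comes up in high-concurrency service work.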
Posted today
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.

As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets.

Key Responsibilities:
- Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena.
- Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and streaming.
- Optimize data pipelines for performance, scalability, and cost-efficiency.
- Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation).
- Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform.
- Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation.
- Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions.
- Monitor, troubleshoot, and resolve issues in production pipelines.
- Stay abreast of AWS advancements and recommend improvements where applicable.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
- Bachelor’s or master’s degree in Computer Science, Engineering, or a related field
- Over 8 years of experience in data engineering
- More than 3 years of experience with the AWS data ecosystem
- Strong experience with PySpark, SQL, and Python
- Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions
- Familiarity with data modelling concepts, dimensional models, and data lake architectures
- Experience with CI/CD, GitHub Actions, CloudFormation/Terraform
- Understanding of data governance, privacy, and security best practices
- Strong problem-solving and communication skills

Preferred Skills and Experience
- Experience working as a Data Engineer and/or in cloud modernization
- Experience with AWS Lake Formation and Data Catalog for metadata management
- Knowledge of Databricks, Snowflake, or BigQuery for data analytics
- AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus
- Strong problem-solving and analytical thinking
- Excellent communication and collaboration abilities
- Ability to work independently and in agile teams
- A proactive approach to identifying and addressing challenges in data workflows

Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
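On the streaming side of the role above: when Kinesis triggers an AWS Lambda function, each record's payload arrives base64-encoded under `record["kinesis"]["data"]`. A minimal, self-contained decoder; the sample event and field names are invented for illustration:

```python
import base64
import json

def handler(event, context=None):
    """Decode a batch of Kinesis records delivered to an AWS Lambda function."""
    decoded = []
    for record in event["Records"]:
        # Kinesis delivers the raw payload base64-encoded; decode then parse JSON.
        payload = base64.b64decode(record["kinesis"]["data"])
        decoded.append(json.loads(payload))
    return decoded

# A hand-built sample event in the Kinesis-event shape, for local testing.
sample = {"Records": [{"kinesis": {"data": base64.b64encode(
    json.dumps({"order_id": 1, "amount": 9.5}).encode()).decode()}}]}
print(handler(sample))  # [{'order_id': 1, 'amount': 9.5}]
```

In production the decoded records would feed the downstream transform or load step; failures are typically sent to a dead-letter queue rather than raised.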
Posted 1 day ago
4.0 - 7.0 years
9 - 12 Lacs
Pune
Hybrid
So, what’s the role all about?
At NiCE, a Senior Software Engineer specializes in designing, developing, and maintaining applications and systems using the Java programming language. You will play a critical role in building scalable, robust, and high-performing applications for a variety of industries, including finance, healthcare, technology, and e-commerce.

How will you make an impact?
- Working knowledge of unit testing
- Working knowledge of user stories or use cases
- Working knowledge of design patterns or equivalent experience
- Working knowledge of object-oriented software design
- Team player

Have you got what it takes?
- Bachelor’s degree in Computer Science, Business Information Systems or a related field, or equivalent work experience, is required
- 4+ years (SE) of experience in software development
- Well-established technical problem-solving skills
- Experience in Java, Spring Boot and microservices
- Experience with Kafka, Kinesis, KDA, Apache Flink
- Experience in Kubernetes operators, Grafana, Prometheus
- Experience with AWS technology, including EKS, EMR, S3, Kinesis, Lambdas, Firehose, IAM, CloudWatch, etc.

You will have an advantage if you also have:
- Experience with Snowflake or any DWH solution
- Excellent communication, problem-solving, and decision-making skills
- Experience in databases
- Experience in CI/CD, Git, GitHub Actions, Jenkins-based pipeline deployments
- Strong experience in SQL

What’s in it for you?
Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 6965
Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 4 days ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Okta is seeking a highly skilled Full-Stack Engineer with deep expertise in AWS Bedrock, generative AI, and modern software development to join our fast-moving team at the intersection of developer experience, machine learning, and enterprise software. As part of the Okta Business Technology (BT) team, you will build cutting-edge tools that make AI development intuitive, collaborative, and scalable. If you're passionate about building next-generation AI applications and empowering developers through innovative platforms, this is the role for you.

Job Duties and Responsibilities
- Design and develop full-stack applications that integrate seamlessly with Amazon Bedrock AI agents.
- Build scalable, production-grade AI/ML solutions using AWS Bedrock and the AWS Agent Development Kit.
- Implement back-end services and APIs to interact with foundation models for tasks such as automated sourcing, content generation, and prompt orchestration.
- Create intuitive and performant front-end interfaces using Angular that connect with GenAI capabilities.
- Ensure seamless integration of LLMs and foundation models into the broader application architecture.
- Explore and rapidly prototype with the latest LLMs and GenAI tools to iterate on new capabilities and features.
- Build sophisticated AI workflows using knowledge bases, guardrails, and prompt chaining/flows.
- Deploy and maintain enterprise-ready GenAI applications at scale.
- Collaborate with business analysts to understand customer needs and use cases, and work with the team to design and develop POCs to test, implement and support solutions
- Foster strong relationships with teammates, customers, and vendors to facilitate effective communication and collaboration throughout the project lifecycle
- Perform in-depth analysis of requirements, ensuring compatibility and adherence to established standards and best practices

Required Skills:
- 7+ years of robust hands-on development and design experience
- 3+ years of experience in one or more of the following areas: deep learning, LLMs, NLP, speech, conversational AI, AI infrastructure, fine-tuning, and optimization of PyTorch models
- Software development experience in Python (must have) and one of Go, Rust, or C/C++
- Experience with at least one LLM such as Llama, GPT, Claude, Falcon, Gemini, etc.
- Expertise in AWS Bedrock and the AWS Agent Development Kit is mandatory
- Hands-on experience with Python libraries including boto3, NumPy, Pandas, TensorFlow or PyTorch, and Hugging Face Transformers
- Solid understanding of the AWS ecosystem (e.g., CloudWatch, Step Functions, Kinesis, Lambda)
- Familiarity with the full software development lifecycle, including version control, CI/CD, code reviews, automated testing, and production monitoring
- Knowledge of ERP, HR technology, and legal business processes is an advantage

Education and Certifications
- Bachelor's degree in Computer Science or a related field, or equivalent practical experience
- AWS certifications (e.g., AWS Certified Solutions Architect, Machine Learning Specialty) are a plus
- Experience working with large-scale generative AI or LLM-based applications
- Knowledge of secure application development and data privacy practices in AI/ML workloads

This role requires in-person onboarding and travel to our Bengaluru, IN office during the first week of employment.
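The workflow the posting describes (guardrails plus prompt orchestration against Bedrock) can be sketched in pure Python. The guardrail check and the helper names below are illustrative assumptions; only the message shape matches the Bedrock Converse API (role plus a list of content blocks), and the actual network call is left as a comment since it needs AWS credentials:

```python
# Illustrative names (BLOCKED_TERMS, build_conversation) are assumptions, not Okta code.
BLOCKED_TERMS = {"password", "ssn"}

def passes_guardrail(prompt: str) -> bool:
    # A trivial stand-in for a Bedrock guardrail: reject prompts containing
    # sensitive terms before any model call is made.
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

def build_conversation(system_prompt: str, user_prompt: str) -> dict:
    # Message shape used by the Bedrock Converse API: a role and a list of
    # content blocks, plus a separate system-prompt list.
    return {
        "system": [{"text": system_prompt}],
        "messages": [{"role": "user", "content": [{"text": user_prompt}]}],
    }

prompt = "Summarize this sourcing report"
if passes_guardrail(prompt):
    payload = build_conversation("You are a sourcing assistant.", prompt)
    # With credentials configured, the payload would be sent via
    # boto3.client("bedrock-runtime").converse(modelId=..., **payload)
    print(payload["messages"][0]["role"])  # user
```

Prompt chaining then just means feeding one call's model output into the next `build_conversation`.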
Posted 4 days ago
2.0 - 7.0 years
13 - 18 Lacs
Pune
Work from Office
The Customer, Sales & Service Practice | Cloud

Job Title: Amazon Connect + Level 11 (Analyst) + Entity (S&C GN)
Management Level: Level 11 - Analyst
Location: Delhi, Mumbai, Bangalore, Gurgaon, Pune, Hyderabad and Chennai
Must have skills: AWS contact center, Amazon Connect flows, AWS Lambda and Lex bots, Amazon Connect Contact Center
Good to have skills: AWS Lambda and Lex bots, Amazon Connect
Experience: Minimum 2 year(s) of experience is required
Educational Qualification: Engineering Degree or MBA from a Tier 1 or Tier 2 institute

Join our team of Customer Sales & Service consultants who solve customer-facing challenges at clients spanning sales, service and marketing to accelerate business change.

Practice: Customer Sales & Service Sales I | Areas of Work: Cloud AWS Cloud Contact Center Transformation, Analysis and Implementation | Level: Analyst | Location: Delhi, Mumbai, Bangalore, Gurgaon, Pune, Hyderabad and Chennai | Years of Exp: 2-5 years

Explore an Exciting Career at Accenture
Are you passionate about scaling businesses using in-depth frameworks and techniques to solve customer-facing challenges? Do you want to design, build and implement strategies to enhance business performance? Does working in an inclusive and collaborative environment spark your interest? Then this is the right place for you! Welcome to a host of exciting global opportunities within Accenture Strategy & Consulting's Customer, Sales & Service practice.

The Practice – A Brief Sketch
The Customer Sales & Service Consulting practice is aligned to the Capability Network Practice of Accenture and works with clients across their marketing, sales and services functions. As part of the team, you will work on transformation services driven by key offerings like Living Marketing, Connected Commerce and Next-Generation Customer Care.
These services help our clients become living businesses by optimizing their marketing, sales and customer service strategy, thereby driving cost reduction, revenue enhancement, customer satisfaction and impacting front-end business metrics in a positive manner. You will work closely with our clients as consulting professionals who design, build and implement initiatives that can help enhance business performance. As part of these, you will drive the following:
- Work on creating business cases for journey to cloud, cloud strategy, and cloud contact center vendor assessment activities
- Work on creating a cloud transformation approach for contact center transformations
- Work along with Solution Architects on architecting cloud contact center technology with the AWS platform
- Work on enabling cloud contact center technology platforms for global clients, specifically on Amazon Connect
- Work on innovative assets, proofs of concept, and sales demos for AWS cloud contact center
- Support AWS offering leads in responding to RFIs and RFPs

Bring your best skills forward to excel at the role:
- Good understanding of the contact center technology landscape
- An understanding of the AWS Cloud platform and services, with solution architect skills
- Deep expertise in AWS contact-center-relevant services
- Sound experience in developing Amazon Connect flows, AWS Lambda and Lex bots
- Deep functional and technical understanding of APIs and related integration experience
- Functional and technical understanding of building API-based integrations with Salesforce, ServiceNow and bot platforms
- Ability to understand customer challenges and requirements, and to address them in a differentiated manner
- Ability to help the team implement, sell, and deliver cloud contact center solutions to clients
- Excellent communication skills
- Ability to develop requirements based on leadership input
- Ability to work effectively in a remote, virtual, global environment
- Ability to take on new challenges and to be a passionate learner

What's in it for you?
- An opportunity to work on transformative projects with key G2000 clients
- Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies
- Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional
- Personalized training modules to develop your strategy & consulting acumen to grow your skills, industry knowledge and capabilities
- Opportunity to thrive in a culture that is committed to accelerating equality for all
- Engage in boundaryless collaboration across the entire organization

About Accenture:
Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions, underpinned by the world's largest delivery network, Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With 569,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at www.accenture.com

About Accenture Strategy & Consulting:
Accenture Strategy shapes our clients' future, combining deep business insight with an understanding of how technology will impact industry and business models.
Our focus on issues such as digital disruption, redefining competitiveness, operating and business models, as well as the workforce of the future, helps our clients find future value and growth in a digital world. Today, digital is changing the way organizations engage with their employees, business partners, customers and communities. This is our unique differentiator. To bring this global perspective to our clients, Accenture Strategy's services include those provided by our Capability Network, a distributed management consulting organization that provides management consulting and strategy expertise across the client lifecycle. Our Capability Network teams complement our in-country teams to deliver cutting-edge expertise and measurable value to clients all around the world. For more information visit https://www.accenture.com/us-en/Careers/capability-network

Accenture Capability Network | Accenture in One Word: come and be a part of our team.

Qualification
Your experience counts!
- Bachelor's degree in a related field or equivalent experience; Post-Graduation in Business Management would add value
- Minimum 2 years of experience in delivering software-as-a-service or platform-as-a-service projects related to cloud contact center service providers such as the Amazon Connect Contact Center cloud solution
- Hands-on experience working on the design, development and deployment of contact center solutions at scale
- Hands-on development experience with cognitive services such as Amazon Connect, Amazon Lex, Lambda, Kinesis, Athena, Pinpoint, Comprehend, Transcribe
- Working knowledge of one of the programming/scripting languages such as Node.js, Python, Java
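The Amazon Connect plus Lambda integration described above follows a fixed contract: Connect invokes the function with caller context under `event["Details"]`, and the function must return a flat dictionary of key/value pairs that the contact flow can reference. A minimal sketch; the routing logic and parameter names are invented for illustration:

```python
def handler(event, context=None):
    """AWS Lambda function invoked from an Amazon Connect contact flow.

    Connect passes flow parameters under Details.Parameters and caller
    context under Details.ContactData; the return value must be a flat
    dict of simple key/value pairs the flow can read back.
    """
    details = event.get("Details", {})
    params = details.get("Parameters", {})
    contact = details.get("ContactData", {})
    customer_number = contact.get("CustomerEndpoint", {}).get("Address", "unknown")
    # Hypothetical routing decision based on a flow-supplied parameter.
    tier = "gold" if params.get("accountType") == "premium" else "standard"
    return {"customerNumber": customer_number, "routingTier": tier}

# A hand-built sample event in the Connect invocation shape, for local testing.
sample = {"Details": {"Parameters": {"accountType": "premium"},
                      "ContactData": {"CustomerEndpoint": {"Address": "+911234567890"}}}}
print(handler(sample))  # {'customerNumber': '+911234567890', 'routingTier': 'gold'}
```

The flat-dict return is what lets the contact flow branch on `$.External.routingTier` in a Check Contact Attributes block.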
Posted 4 days ago
4.0 - 9.0 years
8 - 13 Lacs
Bengaluru
Work from Office
JR: R00195769
Experience: Minimum 4 year(s) of experience is required
Educational Qualification: Engineering Degree or MBA from a Tier 1 or Tier 2 institute

Job Title: AWS DevOps + Level 9 (Consultant) + Entity (S&C GN)
Management Level: 9 - Team Lead/Consultant
Location: Delhi, Mumbai, Bangalore, Gurgaon, Pune, Hyderabad and Chennai
Must-have skills: AWS Cloud, Cloud Contact Center transformation, CI/CD pipelines, AWS DevOps, DynamoDB, S3, Lambda, Lex, IAM, VPC, CloudWatch, CloudTrail, API Gateway, Kinesis, WAF
Good to have skills: AWS Connect, Amazon Connect, Python, Bash, PowerShell or Node.js

Job Summary:
This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions.

Roles & Responsibilities:
Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.

Are you passionate about scaling businesses using in-depth frameworks and techniques to solve customer-facing challenges? Do you want to design, build and implement strategies to enhance business performance? Does working in an inclusive and collaborative environment spark your interest? Then this is the right place for you! Welcome to a host of exciting global opportunities within Accenture Strategy & Consulting's Customer, Sales & Service practice.

The Practice – A Brief Sketch
The Strategy & Consulting Global Network Song practice is aligned to the Global Network Song Practice of Accenture and works with clients across their marketing, sales and services functions. As part of the team, you will work on transformation services driven by key offerings like Living Marketing, Connected Commerce and Next-Generation Customer Care.
These services help our clients become living businesses by optimizing their marketing, sales and customer service strategy, thereby driving cost reduction, revenue enhancement, customer satisfaction and impacting front-end business metrics in a positive manner. You will work closely with our clients as consulting professionals who design, build and implement initiatives that can help enhance business performance. As part of these, you will drive the following:
- Work on creating business cases for journey to cloud, cloud strategy, and cloud contact center vendor assessment activities
- Work on creating a cloud transformation approach for contact center transformations
- Work along with Solution Architects on architecting cloud contact center technology with the AWS platform
- Work on enabling cloud contact center technology platforms for global clients, specifically on Amazon Connect
- Work on innovative assets, proofs of concept, and sales demos for AWS cloud contact center
- Support AWS offering leads in responding to RFIs and RFPs

Bring your best skills forward to excel at the role:
- Good understanding of the contact center technology landscape
- An understanding of the AWS Cloud platform and services, with solution architect skills
- Deep expertise in AWS contact-center-relevant services
- Sound experience in developing Amazon Connect flows and Lex bots
- Deep functional and technical understanding of APIs and related integration experience
- Functional and technical understanding of building API-based integrations with Salesforce, ServiceNow and bot platforms
- Ability to understand customer challenges and requirements, and to address them in a differentiated manner
- Ability to help the team implement, sell, and deliver cloud contact center solutions to clients
- Excellent communication skills
- Ability to develop requirements based on leadership input
- Ability to work effectively in a remote, virtual, global environment
- Ability to take on new challenges and to be a passionate learner
- Configure and manage Amazon Connect instances to enhance customer service operations
- Integrate Amazon Connect with other AWS services and third-party applications
- Work closely with cross-functional teams to understand requirements and deliver solutions that align with business goals
- Troubleshoot and resolve issues related to deployment, infrastructure, and application performance
- Provide technical support and guidance to team members and stakeholders
- Maintain comprehensive documentation of systems, processes, and configurations
- Proven experience in designing and managing cloud-based (preferably AWS) infrastructure and deployment pipelines (CI/CD)
- Proficiency in AWS services, including DynamoDB, S3, Lambda, Lex, IAM, VPC, CloudWatch, CloudTrail, API Gateway, Kinesis, WAF, etc.
- Strong experience in DevOps tools and practices, including CI/CD and infrastructure as code (Terraform, CloudFormation, CDK, etc.), covering configuration, integration, and optimization
- Working knowledge of one of the programming/scripting languages such as Python, Bash, PowerShell or Node.js
- Knowledge of security best practices and compliance requirements related to contact centers and cloud infrastructure
- Experience in setting up cloud instances and accounts/users with security profiles, and designing applications
- Experience taking a lead role in building contact center applications that have been successfully delivered to customers

What's in it for you?
- An opportunity to work on transformative projects with key G2000 clients
- Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies
- Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional
- Personalized training modules to develop your strategy & consulting acumen to grow your skills, industry knowledge and capabilities
- Opportunity to thrive in a culture that is committed to accelerating equality for all
- Engage in boundaryless collaboration across the entire organization

Professional & Technical Skills:
- Relevant experience in the required domain
- Strong analytical, problem-solving, and communication skills
- Ability to work in a fast-paced, dynamic environment

Additional Information:
- Opportunity to work on innovative projects
- Career growth and leadership exposure

About Our Company | Accenture

Qualification
Experience: Minimum 4 year(s) of experience is required
Educational Qualification: Engineering Degree or MBA from a Tier 1 or Tier 2 institute
Posted 4 days ago
3.0 - 6.0 years
2 - 6 Lacs
Chennai
Work from Office
Tech stack: AWS Lambda, Glue, Kafka/Kinesis, RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake), Gateway, CloudFormation/Terraform, Step Functions, CloudWatch, Python, PySpark. Job role & responsibilities: Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge of building data processing systems with Python, PySpark, and cloud technologies (AWS). Experience in development on AWS Cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR). Required Skills: Amazon Kinesis, Amazon Aurora, Data Warehouse, SQL, AWS Lambda, Spark, AWS QuickSight. Advanced Python skills. Data engineering, ETL, and ELT skills. Experience with cloud platforms (AWS, GCP, or Azure). Mandatory skills: data warehouse, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.
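The posting centres on ETL pipelines that land data in a warehouse such as Redshift. As a rough illustration of the extract-transform-load shape such work takes (a minimal sketch: stdlib `sqlite3` stands in for the warehouse, and the table and column names are invented for the example):

```python
import sqlite3

def extract(rows):
    """Extract step: drop malformed source records."""
    return [r for r in rows if r.get("order_id") and r.get("amount") is not None]

def transform(rows):
    """Transform step: normalise amounts to paise and derive an uppercase city."""
    return [
        (r["order_id"], int(round(r["amount"] * 100)), r.get("city", "UNKNOWN").upper())
        for r in rows
    ]

def load(conn, rows):
    """Load step: idempotent upsert into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id TEXT PRIMARY KEY, amount_paise INTEGER, city TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

source = [
    {"order_id": "A1", "amount": 12.5, "city": "Chennai"},
    {"order_id": None, "amount": 3.0},   # dropped by extract
    {"order_id": "A2", "amount": 7.25},  # city defaults to UNKNOWN
]
conn = sqlite3.connect(":memory:")
load(conn, transform(extract(source)))
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 2
```

In a Glue/PySpark job the same three stages appear as DataFrame reads, transformations, and writes; the pipeline shape is what carries over.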
Posted 1 week ago
6.0 - 10.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. AWS Data/API Gateway Pipeline Engineer responsible for designing, building, and maintaining real-time, serverless data pipelines and API services. This role requires extensive hands-on experience with Java, Python, Redis, DynamoDB Streams, and PostgreSQL, along with working knowledge of AWS Lambda and AWS Glue for data processing and orchestration. This position involves collaboration with architects, backend developers, and DevOps engineers to deliver scalable, event-driven data solutions and secure API services across cloud-native systems. Key Responsibilities API & Backend Engineering Build and deploy RESTful APIs using AWS API Gateway, Lambda, and Java and Python. Integrate backend APIs with Redis for low-latency caching and pub/sub messaging. Use PostgreSQL for structured data storage and transactional processing. Secure APIs using IAM, OAuth2, and JWT, and implement throttling and versioning strategies. Data Pipeline & Streaming Design and develop event-driven data pipelines using DynamoDB Streams to trigger downstream processing. Use AWS Glue to orchestrate ETL jobs for batch and semi-structured data workflows. Build and maintain Lambda functions to process real-time events and orchestrate data flows. Ensure data consistency and resilience across services, queues, and databases. 
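The event-driven pattern described above, DynamoDB Streams triggering Lambda functions for downstream processing, can be sketched as follows. The record envelope mirrors the documented DynamoDB Streams event format, but the handler logic, attribute names, and downstream step are illustrative only:

```python
def handler(event, context=None):
    """Minimal Lambda sketch: fold DynamoDB Stream records into plain dicts.

    Each record carries an eventName (INSERT/MODIFY/REMOVE) and type-tagged
    attribute images; here we only unwrap string/number attributes.
    """
    processed = []
    for record in event.get("Records", []):
        if record.get("eventName") not in ("INSERT", "MODIFY"):
            continue  # this sketch ignores REMOVE events
        new_image = record["dynamodb"].get("NewImage", {})
        # DynamoDB Streams encode values with type tags, e.g. {"S": "abc"}
        item = {k: v.get("S", v.get("N")) for k, v in new_image.items()}
        processed.append(item)
    # In a real pipeline this is where you'd write to Redis, PostgreSQL, or a queue.
    return {"processed": processed}

sample_event = {
    "Records": [
        {"eventName": "INSERT",
         "dynamodb": {"NewImage": {"pk": {"S": "user#1"}, "score": {"N": "42"}}}},
        {"eventName": "REMOVE", "dynamodb": {}},
    ]
}
result = handler(sample_event)
print(result)  # {'processed': [{'pk': 'user#1', 'score': '42'}]}
```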
Cloud Infrastructure & DevOps Deploy and manage cloud infrastructure using CloudFormation, Terraform, or AWS CDK. Monitor system health and service metrics using CloudWatch, SNS, and structured logging. Contribute to CI/CD pipeline development for testing and deploying Lambda/API services. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Skills and Experience Bachelor's degree in Computer Science, Engineering, or a related field. Over 6 years of experience in developing backend or data pipeline services using Java and Python. Strong hands-on experience with: AWS API Gateway, Lambda, DynamoDB Streams; Redis (caching, messaging); PostgreSQL (schema design, tuning, SQL); AWS Glue for ETL jobs and data transformation. Solid understanding of REST API design principles, serverless computing, and real-time architecture. 
Preferred Skills and Experience Familiarity with Kafka, Kinesis, or other message streaming systems Swagger/OpenAPI for API documentation Docker and Kubernetes (EKS) Git, CI/CD tools (e.g., GitHub Actions) Experience with asynchronous event processing, retries, and dead-letter queues (DLQs) Exposure to data lake architectures (S3, Glue Data Catalog, Athena) Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! 
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 1 week ago
1.0 - 4.0 years
2 - 6 Lacs
Mumbai, Pune, Chennai
Work from Office
Graph Data Engineer required for a complex Supply Chain project. Key required skills: Graph data modelling (experience with graph data models (LPG, RDF) and graph query language (Cypher); exposure to various graph data modelling techniques). Experience with Neo4j Aura and optimizing complex queries. Experience with GCP stacks like BigQuery, GCS, Dataproc. Experience in PySpark, SparkSQL is desirable. Experience in exposing graph data to visualisation tools such as NeoDash, Tableau, and Power BI. The Expertise You Have: Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science, etc.). Demonstrable experience in implementing data solutions in the graph DB space. Hands-on experience with graph databases (Neo4j preferred, or any other). Experience tuning graph databases. Understanding of graph data model paradigms (LPG, RDF) and graph query language; hands-on experience with Cypher is required. Solid understanding of graph data modelling, graph schema development, and graph data design. Relational database experience; hands-on SQL experience is required. Desirable (optional) skills: data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies. Understanding of developing highly scalable distributed systems using open-source technologies. Experience with supply chain data is desirable but not essential. Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
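Since the role leans on Cypher over an LPG (labeled property graph) model, a toy example of what such modelling looks like. All node labels, properties, and the relationship type below are hypothetical, not taken from the project itself:

```python
# Illustrative Cypher for a supply-chain LPG model, kept as parameterised
# query strings the way they would be passed to the Neo4j driver.
MODEL_SUPPLIER = """
MERGE (s:Supplier {supplier_id: $supplier_id})
MERGE (p:Part {part_id: $part_id})
MERGE (s)-[r:SUPPLIES {lead_time_days: $lead_time_days}]->(p)
"""

# A typical multi-hop traversal: which suppliers can ultimately feed a product?
UPSTREAM_SUPPLIERS = """
MATCH (s:Supplier)-[:SUPPLIES*1..4]->(p:Product {product_id: $product_id})
RETURN DISTINCT s.supplier_id AS supplier_id
"""

if __name__ == "__main__":
    # With the official neo4j driver this would run as, e.g.:
    #   session.run(MODEL_SUPPLIER, supplier_id="S1", part_id="P9", lead_time_days=14)
    print(MODEL_SUPPLIER.strip().splitlines()[0])
```

MERGE (rather than CREATE) keeps ingestion idempotent, and the variable-length `*1..4` pattern is the kind of traversal that relational SQL struggles to express.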
Posted 1 week ago
5.0 - 8.0 years
8 - 13 Lacs
Pune
Work from Office
We are staffing small, self-contained development teams with people who love solving problems, building high quality products and services. We use a wide range of technologies and are building up a next generation microservices platform that can make our learning tools and content available to all our customers. If you want to make a difference in the lives of students and teachers and understand what it takes to deliver high quality software, we would love to talk to you about this opportunity. Technology Stack You'll work with technologies such as Java, Spring Boot, Kafka, Aurora, Mesos, Jenkins, etc. This will be a hands-on coding role working as part of a cross-functional team alongside other developers, designers and quality engineers, within an agile development environment. We're working on the development of our next generation learning platform and solutions utilizing the latest in server and web technologies. Responsibilities: Build high-quality, clean, scalable, and reusable code by enforcing best practices around software engineering architecture and processes (Code Reviews, Unit testing, etc.) on the team. Work with the product owners to understand detailed requirements and own your code from design, implementation, test automation and delivery of a high-quality product to our users. Drive the design, prototype, implementation, and scale of cloud data platforms to tackle business needs. Identify ways to improve data reliability, efficiency, and quality. Plan and perform development tasks from design specifications. Provide accurate time estimates for development tasks. Construct and verify (unit test) software components to meet design specifications. Perform quality assurance functions by collaborating with the cross-team members to identify and resolve software defects. Provide mentoring on software design, construction, development methodologies, and best practices. 
Participate in production support and on-call rotation for the services owned by the team. Mentor less experienced engineers in understanding the big picture of company objectives, constraints, inter-team dependencies, etc. Participate in creating standards and ensuring team members adhere to standards, such as security patterns, logging patterns, etc. Collaborate with project architects and cross-functional team members/vendors in different geographical locations and assist team members to prove the validity of new software technologies. Promote Agile processes among development and the business, including facilitation of scrums. Have ownership over the things you build, help shape the product and technical vision, direction, and how we iterate. Work closely with your product and design teammates for improved stability, reliability, and quality. Perform other duties as assigned to ensure the success of the team and the entire organization. Run numerous experiments in a fast-paced, analytical culture so you can quickly learn and adapt your work. Promote a positive engineering culture through teamwork, engagement, and empowerment. Function as the tech lead for various features and initiatives on the team. Build and maintain CI/CD pipelines for services owned by the team, following secure development practices. Skills & Experience: 5 to 8 years' experience in a relevant software development role Excellent object-oriented design & programming skills, including the application of design patterns and avoidance of anti-patterns. Strong Cloud platform skills: AWS Lambda, Terraform, SNS, SQS, RDS, Kinesis, DynamoDB etc. Experience building large-scale, enterprise applications with ReactJS/AngularJS. Proficient with front-end technologies, such as HTML, CSS, JavaScript preferred. Experience working in a collaborative team of application developers and source code repositories. Deep knowledge of more than one programming language like Node.js/Java. 
Demonstrable knowledge of AWS and Data Platform experience: Lambda, DynamoDB, RDS, S3, Kinesis, Snowflake. Demonstrated ability to follow through with all tasks, promises and commitments. Ability to communicate and work effectively within priorities. Ability to advocate ideas and to objectively participate in design critiques. Ability to work under tight timelines in a fast-paced environment. Advanced understanding of software design concepts. Understanding of software development methodologies and principles. Ability to solve large-scale, complex problems. Ability to architect, design, implement, and maintain large-scale systems. Strong technical leadership and mentorship ability. Working experience of modern Agile software development methodologies (e.g. Kanban, Scrum, Test-Driven Development)
Posted 1 week ago
3.0 - 6.0 years
6 - 8 Lacs
Pune
Work from Office
Software Engineering at HMH is focused on building fantastic software to meet the challenges facing teachers and learners, enabling and supporting a wide range of next generation learning experiences. We design and build custom applications and services used by millions. We are creating teams full of innovative, eager software professionals to build the products that will transform our industry. We are staffing small, self-contained development teams with people who love solving problems, building high quality products and services. We use a wide range of technologies and are building up a next generation microservices platform that can make our learning tools and content available to all our customers. If you want to make a difference in the lives of students and teachers and understand what it takes to deliver high quality software, we would love to talk to you about this opportunity. Technology Stack You'll work with technologies such as Java, Spring Boot, Kafka, Aurora, Mesos, Jenkins, etc. This will be a hands-on coding role working as part of a cross-functional team alongside other developers, designers and quality engineers, within an agile development environment. We're working on the development of our next generation learning platform and solutions utilizing the latest in server and web technologies. Responsibilities: Build high-quality, clean, scalable, and reusable code by enforcing best practices around software engineering architecture and processes (Code Reviews, Unit testing, etc.) on the team. Work with the product owners to understand detailed requirements and own your code from design, implementation, test automation and delivery of a high-quality product to our users. Identify ways to improve data reliability, efficiency, and quality. Perform development tasks from design specifications. Construct and verify (unit test) software components to meet design specifications. 
Perform quality assurance functions by collaborating with the cross-team members to identify and resolve software defects. Participate in production support and on-call rotation for the services owned by the team. Adhere to standards, such as security patterns, logging patterns, etc. Collaborate with cross-functional team members/vendors in different geographical locations to ensure successful delivery of the product features. Have ownership over the things you build, help shape the product and technical vision, direction, and how we iterate. Work closely with your teammates for improved stability, reliability, and quality. Perform other duties as assigned to ensure the success of the team and the entire organization. Run numerous experiments in a fast-paced, analytical culture so you can quickly learn and adapt your work. Build and maintain CI/CD pipelines for services owned by the team, following secure development practices. Skills & Experience: 3 to 6 years' experience in a relevant software development role Excellent object-oriented design & programming skills, including the application of design patterns and avoidance of anti-patterns. Strong Cloud platform skills: AWS Lambda, Terraform, SNS, SQS, RDS, Kinesis, DynamoDB etc. Experience building large-scale, enterprise applications with ReactJS/AngularJS. Proficient with front-end technologies, such as HTML, CSS, JavaScript preferred. Experience working in a collaborative team of application developers and source code repositories. Deep knowledge of more than one programming language like Node.js/Java. Demonstrable knowledge of AWS and Data Platform experience: Lambda, DynamoDB, RDS, S3, Kinesis, Snowflake. Demonstrated ability to follow through with all tasks, promises and commitments. Ability to communicate and work effectively within priorities. Ability to work under tight timelines in a fast-paced environment. Understanding of software development methodologies and principles. 
Ability to solve large-scale, complex problems. Working experience of modern Agile software development methodologies (e.g. Kanban, Scrum, Test-Driven Development)
Posted 1 week ago
4.0 - 9.0 years
9 - 14 Lacs
Noida, Bhubaneswar, Pune
Work from Office
4+ years' experience as an IoT developer. Must have experience with AWS Cloud: IoT Core, Kinesis, DynamoDB, API Gateway. Expertise in creating applications by integrating with various AWS services. Must have worked on at least one IoT implementation on AWS. Ability to work in Agile delivery. Skills: Java, AWS Certified (Developer), MQTT, AWS IoT Core, Node.js
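The IoT Core plus Lambda combination listed above usually means an IoT rule forwarding device MQTT payloads to a Lambda handler. A minimal sketch of that handler, where the payload fields (`device_id`, `temperature_c`) and the alert threshold are invented for illustration:

```python
import json

def iot_handler(event, context=None):
    """Sketch of a Lambda invoked by an AWS IoT Core rule.

    Assumes the rule forwards the device's JSON payload as the event;
    field names and the threshold are illustrative only.
    """
    device = event.get("device_id", "unknown")
    temp = float(event.get("temperature_c", "nan"))
    alert = temp > 80.0  # hypothetical overheating threshold
    return {"device_id": device, "alert": alert}

# Simulating the JSON payload a device might publish over MQTT:
payload = json.loads('{"device_id": "sensor-7", "temperature_c": 91.5}')
print(iot_handler(payload))  # {'device_id': 'sensor-7', 'alert': True}
```

A missing reading yields NaN, which compares false against the threshold, so incomplete payloads never raise a spurious alert in this sketch.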
Posted 1 week ago
2.0 - 7.0 years
6 - 10 Lacs
Bengaluru
Work from Office
About the Position This is an opportunity for Engineering Managers to join our Data Platform organization that is passionate about scaling high volume, low-latency, distributed data-platform services & data products. In this role, you will get to work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As the manager of the Data Foundations team in the Data Platform Group, your team will be responsible for designing, building, and deploying the foundational systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, ElasticSearch, and Snowflake, and we are now looking to adopt GCP. We are seeking an Engineering Manager with a strong technical background and excellent communication skills to join us and partner with senior leadership as a thought leader in our strategic Data & ML projects. Our platform projects have a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design & implementation of the data solutions to these problems. What you will be doing: Recruit and mentor a globally distributed and talented group of diverse employees Collaborate with Product, Design, QA, Documentation, Customer Support, Program Management, TechOps, and other scrum teams. 
Engage in technical design and discussions and also help drive technical architecture Ensure the happiness and productivity of the team's software engineers Communicate the vision of our product to external entities Help mitigate risk (technical, product, personnel) Utilize professional acumen to improve Okta's technology, product, and engineering Participate in relevant Engineering workgroups and on-call rotations Foster, enable and promote innovation Define team metrics and meet productivity goals of the organization Cloud infrastructure cost tracking and management in partnership with Okta's FinOps team What you will bring to the role: A track record of leading or managing high performing platform teams (2 years' experience minimum) Experience with end-to-end project delivery; building roadmaps through operational sustainability Strong facilitation skills (design, requirements gathering, progress and status sessions) Production experience with distributed systems running in AWS; GCP is a bonus Passion for automation and leveraging agile software development methodologies Prior experience with Data Platforms Prior experience in software development with hands-on experience as an IC using cloud-based distributed computing technologies, including: Messaging systems such as Kinesis, Kafka Data processing systems like Flink, Spark, Beam Storage & compute systems such as Snowflake, Hadoop Coordinators and schedulers like the ones in Kubernetes, Hadoop, Mesos Developing and tuning highly scalable distributed systems Experience with reliability engineering, specifically in areas such as data quality, data observability and incident management And extra credit if you have experience in any of the following! Deep Data & ML experience Multi-cloud experience Federal cloud environments / FedRAMP Contributed to the development of distributed systems or used one or more at high volume or criticality such as Kafka or Hadoop
Posted 1 week ago
3.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Hybrid
Okta's Workforce Identity Cloud Security Engineering group is looking for an experienced and passionate Senior Site Reliability Engineer to join a team focused on designing and developing Security solutions to harden our cloud infrastructure. We embrace innovation and pave the way to transform bright ideas into excellent security solutions that help run large-scale, critical infrastructure. We encourage you to prescribe defense-in-depth measures, industry security standards and enforce the principle of least privilege to help take our Security posture to the next level. Our Infrastructure Security team has a niche skill-set that balances Security domain expertise with the ability to design, implement, and roll out infrastructure across multiple cloud environments without adding friction to product functionality or performance. We are responsible for the ever-growing need to improve our customer safety and privacy by providing security services that are coupled with the core Okta product. This is a high-impact role in a security-centric, fast-paced organization that is poised for massive growth and success. You will act as a liaison between the Security org and the Engineering org to build technical leverage and influence the security roadmap. You will focus on engineering security aspects of the systems used across our services. Join us and be part of a company that is about to change the cloud computing landscape forever. Bring all the passion and dedication along and there's no telling what you could accomplish! 
You will work on: Building, running, and monitoring Okta's production infrastructure Be an evangelist for security best practices and also lead initiatives/projects to strengthen our security posture for critical infrastructure Responding to production incidents and determining how we can prevent them in the future Triaging and troubleshooting complex production issues to ensure reliability and performance Identifying and automating manual processes Continuously evolving our monitoring tools and platform Promoting and applying best practices for building scalable and reliable services across engineering Developing and maintaining technical documentation, runbooks, and procedures Supporting a 24x7 online environment as part of an on-call rotation You are an ideal candidate if you: Are always willing to go the extra mile: see a problem, fix the problem. Have experience automating, securing, and running large-scale production IAM and containerized services in AWS (EC2, ECS, KMS, Kinesis, RDS), GCP (GKE, GCE) or other cloud providers. Have knowledge of CI/CD principles, Linux fundamentals, OS hardening, networking concepts, and IP protocols. Have an understanding and familiarity with configuration management tools like Chef and Terraform. Have experience in operational tooling languages such as Ruby, Python, Go and shell, and use of source control. Experience with industry-standard security tools like Nessus, Qualys, OSQuery, Splunk, etc. Experience with Public Key Infrastructure (PKI) and secrets management Bonus points for: Experience conducting threat assessments, and assessing vulnerabilities in a high-availability setting. Understand MySQL, including replication and clustering strategies, and are familiar with data stores such as DynamoDB, Redis, and Elasticsearch. 
Minimum Required Knowledge, Skills, Abilities, and Qualities: 3+ years of experience architecting and running complex AWS or other cloud networking infrastructure resources 3+ years of experience with Chef and Terraform Unflappable troubleshooting skills Strong Linux understanding and experience. Security background and knowledge. BS in Computer Science (or equivalent experience). This role requires in-person onboarding and travel to our Bengaluru, IN office during the first week of employment.
Posted 1 week ago
2.0 - 7.0 years
6 - 11 Lacs
Bengaluru
Hybrid
You will work on: Mentoring, managing, and leading a team of SREs with a broad range of expertise and experience. Being an evangelist and advocate for security best practices, leading initiatives and projects to strengthen our security posture for our most critical infrastructure. Responding to production incidents, driving us to remediation as quickly as possible and determining how we can prevent them in the future. Triaging and troubleshooting complex production issues to ensure reliability and performance. Working closely with our stakeholders across the organization to ensure our new capabilities are aligned to our competing constraints of reliability, security, and delivery velocity. Partnering directly with recruiting and people ops to hire and retain the best talent in the world. Keeping sharp eyes on our metrics, including vulnerability scanning and security posture, cloud spend, RPO and RTO, and toil overhead, and ensuring our projects are driving our metrics in the right direction. Supporting a 24x7 online environment as part of an on-call rotation. You are an ideal candidate if you: Are always willing to go the extra mile: see a problem, fix the problem. Are passionate about encouraging the development of engineering peers and leading by example. Have experience managing teams running large-scale production Java/Tomcat and containerized services in AWS (EC2, ECS, KMS, Kinesis, RDS) or other cloud providers. Have deep knowledge of CI/CD principles, Linux fundamentals, OS hardening, networking concepts, and IP protocols. Minimum Required Knowledge, Skills, Abilities, and Qualities: 2+ years of experience managing SRE or SWE teams, ideally in a cloud-native environment. Strong leadership, communication, and project management skills. Strong security background and knowledge. BS in Computer Science (or equivalent experience).
Posted 1 week ago
3.0 - 7.0 years
2 - 6 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Location: Pune, Hyderabad, Gurgaon, Bangalore [Hybrid]. Skills: Python, PySpark, SQL; AWS services: AWS Glue, S3, IAM, Athena, AWS CloudFormation, AWS CodePipeline, AWS Lambda, Transfer Family, AWS Lake Formation, and CloudWatch; CI/CD automation of AWS CloudFormation stacks.
Posted 1 week ago
9.0 - 14.0 years
11 - 16 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful. Overview about TII: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Team Overview: Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation and continuous learning. Position Overview: We are building a Machine Learning Platform to enable MLOps capabilities that help data scientists and ML engineers at Target implement ML solutions at scale. 
It encompasses building the feature store, model ops, experimentation, iteration, monitoring, explainability, and continuous improvement of the machine learning lifecycle. You will be part of a team building scalable applications by leveraging the latest technologies. Connect with us if you want to join us on this exciting journey. Roles and responsibilities: Build and maintain machine learning infrastructure that is scalable, reliable, and efficient. Familiarity with Google Cloud infrastructure and MLOps. Write highly scalable APIs. Deploy and maintain machine learning models, pipelines, and workflows in a production environment. Collaborate with data scientists and software engineers to design and implement machine learning workflows. Implement monitoring and logging tools to ensure that machine learning models are performing optimally. Continuously improve the performance, scalability, and reliability of machine learning systems. Work with teams to deploy and manage infrastructure for machine learning services. Create and maintain technical documentation for machine learning infrastructure and workflows. Stay up to date with the latest developments in technologies. Tech stack: GCP cloud skills, GCP Machine Learning Engineer skills, GCP Vertex AI skills, Python, microservices, API development, Cassandra, Elasticsearch, Postgres, Kafka, Docker, CI/CD, optional (Java + Spring Boot). Required Skills: Bachelor's or Master's degree in computer science, engineering, or a related field. 9+ years of experience in software development and machine learning engineering. 
A Lead Machine Learning Engineer specializing in Google Cloud (GCP) needs a deep understanding of machine learning (ML) principles, cloud infrastructure, and MLOps. Hands-on experience with Vertex AI to manage the ML platform for feature engineering, ML training, and deploying models. Vertex AI skills needed are: BigQuery ML; automating ML workflows using Kubeflow Pipelines (KFP) or Cloud Composer; AI APIs; endpoints for real-time inference; model monitoring; Cloud Logging & Monitoring; Cloud Dataflow for stream processing; Cloud Dataproc (Spark & Hadoop) for distributed ML workloads. Deep experience with Python, API development, and microservices. Creating ML-powered REST APIs using FastAPI, Flask, Cloud Functions. Java (optional, but useful for production ML systems). Expert in building high-performance APIs. Experience with DevOps practices, containerization, and tools such as Kubernetes, Docker, Jenkins, Git. Good understanding of machine learning concepts and frameworks, deep learning, LLMs, etc. Good to have: experience deploying machine learning models in a production environment. Good to have: experience with data streaming technologies such as Kafka, Dataflow, Kinesis, Pub/Sub, etc. Strong analytical and problem-solving skills. Good to have: GCP certification - Professional Machine Learning Engineer.
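Model monitoring, listed among the Vertex AI skills above, usually boils down to a drift score comparing the serving distribution against training data. One common such score is the Population Stability Index; a minimal pure-Python sketch (the equal-width binning and epsilon floor are simplifications of what a production monitor would do):

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two numeric samples.

    ~0 means the serving distribution matches training; larger values
    flag drift. Bin edges are taken from the expected (training) sample.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # which bin x falls into
            counts[idx] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
same = list(train)
shifted = [x + 0.5 for x in train]
print(psi(train, same) <= psi(train, shifted))  # identical data drifts least
```

In practice the same statistic is computed per feature and per prediction score, with an alerting threshold (often around 0.1 to 0.25) wired into the monitoring stack.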
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Chennai
Work from Office
Job Information Job Opening ID ZR_1668_JOB Date Opened 19/12/2022 Industry Technology Job Type Work Experience 5-8 years Job Title Sr. AWS Developer City Chennai Province Tamil Nadu Country India Postal Code 600001 Number of Positions 4 AWS Lambda, Glue, Kafka/Kinesis, RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake), Gateway, CloudFormation / Terraform, Step Functions, CloudWatch, Python, PySpark. Job role & responsibilities: Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge in building data processing systems with Python, PySpark, and cloud technologies (AWS). Experience in development in AWS Cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR). Required Skills: Amazon Kinesis, Amazon Aurora, Data Warehouse, SQL, AWS Lambda, Spark, AWS QuickSight, advanced Python skills, data engineering, ETL and ELT skills, experience with cloud platforms (AWS, GCP, or Azure). Mandatory skills: data warehouse, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.
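As a toy illustration of the ETL pattern this role centers on, here is a stdlib-only Python sketch of the extract/transform/load stages (a real Glue/PySpark job would read from S3 and write to Redshift at scale; all names and the sample data are illustrative):

```python
import csv
import io

def extract(raw_csv):
    """Extract: parse raw CSV text (stand-in for reading from S3)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: drop records with a missing amount and cast types."""
    out = []
    for r in rows:
        if r["amount"].strip():
            out.append({"region": r["region"], "amount": float(r["amount"])})
    return out

def load(rows):
    """Load: aggregate totals per region (stand-in for writing to a warehouse)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

raw = "region,amount\nsouth,10.5\nnorth,\nsouth,4.5\nnorth,3.0\n"
assert load(transform(extract(raw))) == {"south": 15.0, "north": 3.0}
```

The same three-stage shape carries over directly to PySpark: `extract` becomes a DataFrame read, `transform` a chain of `filter`/`withColumn` calls, and `load` an aggregation plus write.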
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Mumbai
Work from Office
Job Information Job Opening ID ZR_1963_JOB Date Opened 17/05/2023 Industry Technology Job Type Work Experience 5-8 years Job Title Neo4j GraphDB Developer City Mumbai Province Maharashtra Country India Postal Code 400001 Number of Positions 5 Graph Data Engineer required for a complex supply chain project. Key required skills: Graph data modelling (experience with graph data models (LPG, RDF) and graph language (Cypher); exposure to various graph data modelling techniques). Experience with Neo4j Aura and optimizing complex queries. Experience with GCP stacks like BigQuery, GCS, Dataproc. Experience in PySpark and SparkSQL is desirable. Experience exposing graph data to visualisation tools such as NeoDash, Tableau, and Power BI. The Expertise You Have: Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science). Demonstrable experience implementing data solutions in the graph DB space. Hands-on experience with graph databases (Neo4j preferred, or any other). Experience tuning graph databases. Understanding of graph data model paradigms (LPG, RDF) and graph language; hands-on experience with Cypher is required. Solid understanding of graph data modelling, graph schema development, and graph data design. Relational database experience; hands-on SQL experience is required. Desirable (optional) skills: data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies. Understanding of developing highly scalable distributed systems using open-source technologies. Experience in supply chain data is desirable but not essential. Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
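The labeled-property-graph (LPG) model this posting references can be sketched in plain Python: nodes carry labels and properties, relationships carry a type, and a two-hop traversal mirrors a Cypher pattern match. The data and function names below are illustrative; a real deployment would use the Neo4j driver and Cypher.

```python
# Nodes carry labels and properties; relationships carry a type,
# mirroring the labeled-property-graph (LPG) model Neo4j uses.
nodes = {
    1: {"label": "Supplier", "name": "Acme"},
    2: {"label": "Part", "name": "Bolt"},
    3: {"label": "Product", "name": "Bridge"},
}
rels = [
    (1, "SUPPLIES", 2),
    (2, "PART_OF", 3),
]

def neighbors(node_id, rel_type):
    """Follow outgoing relationships of one type: the one-hop
    Cypher pattern (n)-[:REL_TYPE]->(m)."""
    return [dst for src, t, dst in rels if src == node_id and t == rel_type]

def suppliers_of(product_id):
    """Two-hop traversal, roughly the Cypher pattern
    (s:Supplier)-[:SUPPLIES]->(p:Part)-[:PART_OF]->(prod)."""
    parts = [src for src, t, dst in rels if t == "PART_OF" and dst == product_id]
    return sorted(
        nodes[src]["name"]
        for src, t, dst in rels
        if t == "SUPPLIES" and dst in parts
    )

assert neighbors(1, "SUPPLIES") == [2]
assert suppliers_of(3) == ["Acme"]
```

In Cypher the second traversal would read `MATCH (s:Supplier)-[:SUPPLIES]->(:Part)-[:PART_OF]->(p:Product) RETURN s.name`; the graph database's job is to make such multi-hop matches efficient at supply-chain scale.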
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Data Analyst Location: Bangalore Experience: 8-15 years Type: Full-time Role Overview: We are seeking a skilled Data Analyst to support our platform powering operational intelligence across airports and similar sectors. The ideal candidate will have experience working with time-series datasets and operational information to uncover trends, anomalies, and actionable insights. This role will work closely with data engineers, ML teams, and domain experts to turn raw data into meaningful intelligence for business and operations stakeholders. Key Responsibilities: Analyze time-series and sensor data from various sources. Develop and maintain dashboards, reports, and visualizations to communicate key metrics and trends. Correlate data from multiple systems (vision, weather, flight schedules, etc.) to provide holistic insights. Collaborate with AI/ML teams to support model validation and interpret AI-driven alerts (e.g., anomalies, intrusion detection). Prepare and clean datasets for analysis and modeling; ensure data quality and consistency. Work with stakeholders to understand reporting needs and deliver business-oriented outputs. Qualifications & Required Skills: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Engineering, or a related field. 5+ years of experience in a data analyst role, ideally in a technical/industrial domain. Strong SQL skills and proficiency with BI/reporting tools (e.g., Power BI, Tableau, Grafana). Hands-on experience analyzing structured and semi-structured data (JSON, CSV, time-series). Proficiency in Python or R for data manipulation and exploratory analysis. Understanding of time-series databases or streaming data (e.g., InfluxDB, Kafka, Kinesis). Solid grasp of statistical analysis and anomaly detection methods. Experience working with data from industrial systems or large-scale physical infrastructure. Good-to-Have Skills: Domain experience in airports, smart infrastructure, transportation, or logistics.
Familiarity with data platforms (Snowflake, BigQuery, or custom-built using open-source tools). Exposure to tools like Airflow, Jupyter Notebooks, and data quality frameworks. Basic understanding of AI/ML workflows and data preparation requirements.
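The statistical anomaly detection skill listed above can be illustrated with the simplest possible detector: a z-score threshold over a sensor series. This is a stdlib-only sketch; the threshold and sample readings are illustrative.

```python
from statistics import mean, stdev

def zscore_anomalies(series, k=3.0):
    """Return indices of values lying more than k standard deviations
    from the series mean - the simplest statistical anomaly detector
    for sensor/time-series data."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(series) if abs(v - mu) / sigma > k]

# Steady sensor readings with one spike at index 5
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 35.0, 20.1, 19.8]
assert zscore_anomalies(readings, k=2.0) == [5]
```

Production detectors typically use a rolling window or seasonal decomposition instead of a global mean, but the score-and-threshold structure is the same.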
Posted 1 week ago
12.0 - 17.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: Databricks Unified Data Analytics Platform. Good-to-have skills: NA. A minimum of 12 years of experience is required. Educational Qualification: 15 years of full-time education. Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will collaborate with teams to ensure successful project delivery and contribute to key decisions. Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions. Build and operate very large data warehouses or data lakes. ETL optimization, designing, coding, and tuning big data processes using Apache Spark. Build data pipelines and applications to stream and process datasets at low latencies. Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data. Technical Experience: Minimum of 5 years of experience in Databricks engineering solutions on AWS Cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake. Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery. Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis. Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala. Experience using Airflow for data pipelines in at least one project. 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform. Professional Attributes: Ready to work in B shift (12 PM to 10 PM). Client-facing skills: solid experience working in client-facing environments, to be able to build trusted relationships with client stakeholders. Good critical thinking and problem-solving abilities. Healthcare knowledge. Good communication skills. Educational Qualification: Bachelor of Engineering / Bachelor of Technology. Additional Information: The candidate should have a minimum of 12 years of experience in the Databricks Unified Data Analytics Platform. This position is based at our Hyderabad office. 15 years of full-time education is required.
Posted 1 week ago
6.0 - 11.0 years
25 - 30 Lacs
Bengaluru
Hybrid
Mandatory Skills: Data Engineering, AWS Athena, AWS Glue, Redshift, Data Lake, Lakehouse, Python, SQL Server. Must-Have Experience: 6+ years of hands-on data engineering experience. Expertise with AWS services: S3, Redshift, EMR, Glue, Kinesis, DynamoDB. Building batch and real-time data pipelines. Python and SQL coding for data processing and analysis. Data modeling experience using cloud-based data platforms like Redshift, Snowflake, and Databricks. Design and develop ETL frameworks. Nice-to-Have Experience: ETL development using tools like Informatica, Talend, or Fivetran. Creating reusable data sources and dashboards for self-service analytics. Experience using Databricks for Spark workloads or Snowflake. Working knowledge of big data processing. CI/CD setup. Infrastructure-as-code implementation. Any one of the AWS Professional Certifications.
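As a sketch of the real-time pipeline work described above, here is a stdlib-only tumbling-window aggregation, the core of many stream consumers (a production Kinesis or Kafka consumer would apply the same logic over a live shard or partition; the event data and function names are illustrative):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, payload) events into fixed, non-overlapping
    windows and count events per window - the basic aggregation shape
    behind real-time dashboards and alerting."""
    counts = defaultdict(int)
    for ts, _payload in events:
        # Floor each timestamp to the start of its window
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

# Five events spread over three one-minute windows
events = [(5, "a"), (59, "b"), (61, "c"), (130, "d"), (140, "e")]
assert tumbling_window_counts(events, 60) == {0: 2, 60: 1, 120: 2}
```

Streaming engines add the hard parts on top of this shape: late-arriving events, watermarks, and checkpointed state so a restarted consumer does not double-count.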
Posted 1 week ago
6.0 - 11.0 years
20 - 22 Lacs
Gurugram
Work from Office
AWS ETL developer with 5-8 years of experience in EC2, Lambda, Glue, PySpark, Terraform, Kafka, Kinesis, Python, SQL/NoSQL, data modeling, performance tuning, and secure AWS services. Mail: kowsalya.k@srsinfoway.com
Posted 2 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: Databricks Unified Data Analytics Platform. Good-to-have skills: NA. A minimum of 3 years of experience is required. Educational Qualification: 15 years of full-time education. Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality. Key Responsibilities: Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions. Build and operate very large data warehouses or data lakes. ETL optimization, designing, coding, and tuning big data processes using Apache Spark. Build data pipelines and applications to stream and process datasets at low latencies. Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data. Technical Experience: Minimum of 5 years of experience in Databricks engineering solutions on AWS Cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake. Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery. Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis. Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala. Experience using Airflow for data pipelines in at least one project. 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform. Professional Attributes: Ready to work in B shift (12 PM to 10 PM). Client-facing skills: solid experience working in client-facing environments, to be able to build trusted relationships with client stakeholders. Good critical thinking and problem-solving abilities. Healthcare knowledge. Good communication skills. Educational Qualification: Bachelor of Engineering / Bachelor of Technology. Additional Information: Data Engineering, PySpark, AWS, Python Programming Language, Apache Spark, Databricks, Hadoop; certifications in Databricks, Python, or AWS. The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform. This position is based at our Hyderabad office. 15 years of full-time education is required.
Posted 2 weeks ago
2.0 - 7.0 years
4 - 8 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
The Customer, Sales & Service Practice | Cloud Job Title - Amazon Connect + Level 11 (Analyst) + Entity (S&C GN) Management Level: Level 11 - Analyst Location: Delhi, Mumbai, Bangalore, Gurgaon, Pune, Hyderabad and Chennai Must have skills: AWS contact center, Amazon Connect flows, AWS Lambda and Lex bots, Amazon Connect Contact Center Good to have skills: AWS Lambda and Lex bots, Amazon Connect Join our team of Customer Sales & Service consultants who solve customer-facing challenges at clients spanning sales, service, and marketing to accelerate business change. Practice: Customer Sales & Service Sales | Areas of Work: Cloud AWS Cloud Contact Center Transformation, Analysis and Implementation | Level: Analyst | Location: Delhi, Mumbai, Bangalore, Gurgaon, Pune, Hyderabad and Chennai | Years of Exp: 2-5 years Explore an Exciting Career at Accenture Are you passionate about scaling businesses using in-depth frameworks and techniques to solve customer-facing challenges? Do you want to design, build, and implement strategies to enhance business performance? Does working in an inclusive and collaborative environment spark your interest? Then this is the right place for you! Welcome to a host of exciting global opportunities within Accenture Strategy & Consulting's Customer, Sales & Service practice. The Practice: A Brief Sketch The Customer Sales & Service Consulting practice is aligned to the Capability Network Practice of Accenture and works with clients across their marketing, sales, and service functions. As part of the team, you will work on transformation services driven by key offerings like Living Marketing, Connected Commerce, and Next-Generation Customer Care. These services help our clients become living businesses by optimizing their marketing, sales, and customer service strategy, thereby driving cost reduction, revenue enhancement, and customer satisfaction, and positively impacting front-end business metrics.
You will work closely with our clients as consulting professionals who design, build, and implement initiatives that can help enhance business performance. As part of these, you will drive the following: Work on creating business cases for the journey to cloud, cloud strategy, and cloud contact center vendor assessment activities. Work on creating the cloud transformation approach for contact center transformations. Work along with solution architects on architecting cloud contact center technology with the AWS platform. Work on enabling cloud contact center technology platforms for global clients, specifically on Amazon Connect. Work on innovative assets, proofs of concept, and sales demos for AWS cloud contact center. Support AWS offering leads in responding to RFIs and RFPs. Bring your best skills forward to excel at the role: Good understanding of the contact center technology landscape. An understanding of the AWS Cloud platform and services, with solution architect skills. Deep expertise in AWS contact-center-relevant services. Sound experience in developing Amazon Connect flows, AWS Lambda, and Lex bots. Deep functional and technical understanding of APIs and related integration experience. Functional and technical understanding of building API-based integrations with Salesforce, ServiceNow, and bot platforms. Ability to understand customer challenges and requirements, and to address them in a differentiated manner. Ability to help the team implement, sell, and deliver cloud contact center solutions to clients. Excellent communication skills. Ability to develop requirements based on leadership input. Ability to work effectively in a remote, virtual, global environment. Ability to take on new challenges and to be a passionate learner. Your experience counts! Bachelor's degree in a related field or equivalent experience; post-graduation in business management would add value.
Minimum 2 years of experience in delivering software-as-a-service or platform-as-a-service projects related to cloud contact center service providers such as the Amazon Connect contact center cloud solution. Hands-on experience working on the design, development, and deployment of contact center solutions at scale. Hands-on development experience with cognitive services such as Amazon Connect, Amazon Lex, Lambda, Kinesis, Athena, Pinpoint, Comprehend, and Transcribe. Working knowledge of one of the programming/scripting languages such as Node.js, Python, or Java. What's in it for you? An opportunity to work on transformative projects with key G2000 clients. Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies. Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional. Personalized training modules to develop your strategy and consulting acumen and grow your skills, industry knowledge, and capabilities. Opportunity to thrive in a culture that is committed to accelerating equality for all. Engage in boundaryless collaboration across the entire organization. Qualifications Experience: Minimum 2 year(s) of experience is required. Educational Qualification: Engineering degree or MBA from a Tier 1 or Tier 2 institute.
Posted 2 weeks ago