6.0 - 11.0 years
12 - 22 Lacs
bengaluru
Hybrid
Job description

Role & Responsibilities:
- Work as a backend Java Developer with strong expertise in Spring Boot and Microservices architecture.
- Design and develop RESTful APIs with robust, scalable, and secure implementations.
- Develop and optimize SQL/PLSQL queries across databases such as Sybase, DB2, DynamoDB, and MongoDB.
- Hands-on experience in Docker-based Microservices deployment.
- Work with AWS cloud services including EC2, S3, ALB, NAT Gateway, EFS, Lambda, and API Gateway.
- Design AWS infrastructure, including a DR strategy at the AWS infrastructure level.
- Implement infrastructure provisioning using Terraform.
- Build and manage event-driven messaging solutions using Kafka.
- Participate in Agile ceremonies and collaborate with cross-functional teams for end-to-end solution delivery.

Preferred Candidate Profile:
- 6-9 years of relevant experience in backend development with Java & Spring Boot.
- Strong exposure to Microservices, RESTful API design, and SQL/PLSQL.
- Proficiency in AWS cloud services and Terraform-based infrastructure provisioning.
- Working knowledge of Docker, CI/CD pipelines, and Kafka event streaming.
- Excellent problem-solving, debugging, and communication skills.
Posted 1 week ago
6.0 - 11.0 years
22 - 30 Lacs
bengaluru
Remote
• Proficiency in Python, PySpark, and SQL for data processing and manipulation (see the Airflow sketch below).
• Minimum 5 years of experience in data engineering, specifically working with Apache Airflow and AWS technologies.
• Understanding of Snowflake Data Lake is preferred.
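For context, a minimal sketch of the kind of Apache Airflow DAG such a role involves: a daily job wrapping a Python transform step. The DAG id, task name, and transform body are illustrative assumptions (Airflow 2.4+ syntax), not details from the posting.

```python
# Minimal daily Airflow DAG sketch; all names here are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_orders(**context):
    # Placeholder for a real PySpark/SQL transform; in practice this might
    # submit an EMR step or run a Snowflake query for the execution date.
    print(f"Transforming orders for {context['ds']}")


with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="transform_orders", python_callable=transform_orders)
```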
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Software Developer at our organization, you will play a crucial role in all stages of the software development lifecycle. Your responsibilities will include designing, implementing, and maintaining Java-based applications that can handle high-volume, low-latency requirements. By analyzing user requirements, you will define business objectives and envision system features and functionality. You will also identify and resolve any technical issues that arise, create detailed design documentation, and propose changes to the current Java infrastructure.

Your role will further involve developing technical designs for application development; conducting software analysis, programming, testing, and debugging; and managing Java and Java EE application development. Additionally, you will translate requirements into specifications and prepare software releases.

We are looking for a candidate with Advanced Java 8 or higher and at least 5 years of experience with Java. You should have a good understanding of Jetty, Spring Boot, Struts, Hibernate, REST APIs, web services, background services, Git, OOP, and databases such as PostgreSQL, Oracle, or SQL Server. Experience with AWS Lambda, EC2, and S3 will be considered a plus.

At our organization, we are driven by the mission of enabling our customers to transform their aspirations into tangible outcomes. We foster a culture defined by agility, innovation, and a commitment to progress. Our streamlined and vibrant organizational framework prioritizes results and growth through hands-on leadership.

We offer various perks, including clear objectives aligned with our mission, engagement opportunities with customers and leadership, and guidance through progressive paths and ongoing feedback sessions. You will have the opportunity to cultivate connections within diverse communities, access continuous learning and upskilling through Nexversity, and enjoy a flexible work model promoting work-life balance. We also provide comprehensive family health insurance coverage, prioritizing the well-being of your loved ones, and accelerated career paths to help you achieve your professional aspirations.

Our organization enables high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. By applying creativity, embracing new technology, and harnessing the power of data and AI, we co-create tailored solutions to meet unique customer needs. Join our passionate team and embark on a journey of growth and innovation with us!
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Data Engineer & Architect with 10-12 years of experience, you will be responsible for designing and implementing enterprise-grade Data Lake solutions using AWS technologies such as S3, Glue, and Lake Formation. Your expertise in building Data Lakes and proficiency with AWS tools like S3, EC2, Redshift, Athena, and Airflow will be essential in optimizing cloud infrastructure for performance, scalability, and cost-effectiveness.

You will define data architecture patterns, best practices, and frameworks for handling large-scale data ingestion, storage, compute, and processing. Developing and maintaining ETL pipelines using tools like AWS Glue, creating robust Data Warehousing solutions using Redshift, and ensuring high data quality and integrity across all pipelines will be key aspects of your role.

Collaborating with business stakeholders to define key metrics and deliver actionable insights, as well as designing and deploying dashboards and visualizations using tools like Tableau, Power BI, or Qlik, will be part of your responsibilities. You will also implement best practices for data encryption, secure data transfer, and role-based access control to maintain data security.

You will lead audits and compliance certifications, work closely with cross-functional teams including Data Scientists, Analysts, and DevOps engineers, and mentor junior team members. Your role will also involve partnering with stakeholders to define and align data strategies that meet business objectives.

Clovertex offers a competitive salary and benefits package, reimbursement for AWS certifications, a hybrid work model for maintaining work-life balance, and health insurance and benefits for employees and dependents. If you have a Bachelor of Engineering degree in Computer Science or a related field, an AWS Certified Solutions Architect - Associate certification, and experience with Agile/Scrum methodologies, this role is perfect for you.
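As an illustration of the AWS Glue ETL work this role describes, here is a hedged sketch of a Glue PySpark job that reads raw JSON from S3, filters bad records, and writes partitioned Parquet back to the lake. Bucket paths and field names are hypothetical.

```python
# Sketch of an AWS Glue PySpark job: raw JSON in, curated Parquet out.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-zone/events/"]},
    format="json",
)

# Drop obviously bad records before they reach the curated zone.
clean = raw.filter(lambda rec: rec["event_id"] is not None)

glue_context.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-zone/events/",
        "partitionKeys": ["event_date"],
    },
    format="parquet",
)
job.commit()
```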
Posted 1 week ago
3.0 - 7.0 years
0 - 0 Lacs
udaipur, rajasthan
On-site
You are invited to join Katha Ads as a Tech Lead, located in Udaipur, Rajasthan. The position offers a competitive CTC ranging from 60,000 to 1,25,000 per month, depending on your skills and experience. The expected start date for this role is June 01, 2025. Please note that we require a notice period of 7-15 days, so kindly refrain from applying if you cannot meet this timeline.

As a Tech Lead at Katha Ads, you will be a vital part of our growing product engineering team. We are seeking an individual who is passionate about writing clean code and developing scalable digital products that have a positive impact on thousands of users.

To be successful in this role, you should have 3-4+ years of hands-on experience in product-based companies, with proven expertise in backend development using Ruby on Rails or Python (Django) and strong frontend skills in React JS or Next JS. Working knowledge of cloud infrastructure such as AWS, GCP, or Azure (EC2, RDS, S3, CloudFront, SES, etc.), solid experience with MySQL or PostgreSQL, and excellent problem-solving and debugging skills are essential. You should be able to work independently, take ownership of tasks, and be a fast learner who can adapt to new tools and frameworks efficiently.

Our selection process includes two rounds of interviews, covering a technical assessment with problem-solving and a culture fit evaluation. We also conduct a paid one-week live project on our current system to assess your real-world contributions. This hybrid role will initially be based in Udaipur, with the potential for full-time remote work in the future.

If you are excited about this opportunity and meet the qualifications above, please send your resume to hr@katha.today. We look forward to welcoming you to our team!
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
With 5-8 years of relevant experience, you are expected to be proficient in Informatica PWC. Hands-on experience with AWS Redshift and S3 is essential for this role. Your responsibilities will include SQL scripting, and familiarity with Redshift commands will help you contribute effectively to the team's success.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Data Engineer on contract with 4-8 years of experience, you will play a crucial role in building and maintaining data systems on AWS. Your responsibilities will include designing and constructing efficient data pipelines using Spark, PySpark, or Scala; managing complex data processes with Airflow; and cleaning, transforming, and preparing data for analysis. You will use Python for data tasks, automation, and tool development, while collaborating with the Analytics team to understand their data requirements and provide appropriate solutions.

In this role, you will work with various AWS services such as S3, Redshift, EMR, Glue, and Athena to manage the data infrastructure effectively. You will also develop and maintain a Node.js backend for data services using TypeScript, and manage settings for data tools using YAML. Setting up automated deployment processes through GitHub Actions, monitoring and resolving issues in data pipelines, implementing data accuracy checks, and designing data warehouses and data lakes will also be part of your responsibilities.

As a qualified candidate, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with strong skills in Spark, Scala, and Airflow. Proficiency in Python, experience with AWS cloud services, familiarity with Node.js and TypeScript, and hands-on experience with GitHub Actions for automated deployment are essential. A good understanding of data warehousing concepts, strong database skills, proficiency in SQL, experience with stream processing using Kafka, and excellent problem-solving, analytical, and communication skills are also expected. Additionally, you will guide and mentor junior data engineers and contribute to the development of data management rules and procedures.
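To make the clean-transform-prepare step above concrete, here is a small PySpark sketch under assumed paths and columns: read raw CSV from S3, normalize types, deduplicate, and write Parquet that Athena can query.

```python
# Illustrative PySpark prepare step; bucket paths and columns are made up.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("prepare_sales").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/sales/")

prepared = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("sale_date", F.to_date("sale_date", "yyyy-MM-dd"))
       .dropDuplicates(["order_id"])          # one row per order
       .filter(F.col("amount") > 0)           # basic accuracy check
)

prepared.write.mode("overwrite").partitionBy("sale_date").parquet(
    "s3://example-bucket/curated/sales/"
)
```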
Posted 1 week ago
2.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer, you will design and implement next-generation data pipelines, analytics, and Business Intelligence solutions, managing various AWS resources such as RDS, EMR, MWAA, and Lambda. Your primary focus will be on developing high-quality data architecture and pipelines to support business analysts and data scientists. Collaborating with other technology teams, you will extract, transform, and load data from diverse sources, and you will enhance reporting and analysis processes by automating self-service support for customers.

You should have a solid background with 7-10 years of data engineering experience, including expertise in data modeling, warehousing, and building ETL pipelines, along with proficiency in at least one modern programming language, preferably Python. The role entails working independently on end-to-end projects and requires a good understanding of distributed systems for data storage and computing. At least 2 years of experience analyzing and interpreting data using tools like Postgres and NoSQL databases is necessary, as is hands-on experience with big data frameworks such as Apache Spark, EMR, Glue, Data Lake, and BI tools like Tableau. Experience with geospatial and time series data is a plus.

You will also collaborate with cross-functional teams to design and deliver data initiatives effectively, and build fault-tolerant, scalable data solutions using technologies like Spark, EMR, Python, Airflow, Glue, and S3. The role requires a proactive approach to continuously evaluating and enhancing the strategy, architecture, tooling, and codebase to optimize performance, scalability, and availability.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
At Arctic Wolf, we are redefining the cybersecurity landscape with a global team committed to setting new industry standards. Our achievements include recognition in prestigious lists such as the Forbes Cloud 100, CNBC Disruptor 50, Fortune Future 50, and Fortune Cyber 60, as well as winning the 2024 CRN Products of the Year award for our Aurora Platform. We are proud to be named a Leader in the IDC MarketScape for Worldwide Managed Detection and Response Services and to have earned a Customers' Choice distinction from Gartner Peer Insights. Join us in shaping the future of security operations.

Our mission is to End Cyber Risk, and we are looking for a Senior Developer to contribute to this goal. In this role, you will be part of our expanding Infrastructure teams and work closely with the Observability team. Your responsibilities will include designing, developing, and maintaining solutions to monitor the behavior and performance of R&D teams' workloads, reduce incidents, and troubleshoot issues effectively. We are seeking candidates with operations backgrounds (DevOps/SysOps/TechOps) who have experience supporting infrastructure at scale. If you believe in Infrastructure as Code and continuous deployment/delivery practices, and enjoy helping teams understand their services in real-world scenarios, this role might be a great fit for you.

**Technical Responsibilities:**
- Design, configure, integrate, deploy, and operate Observability systems and tools to collect metrics, logs, and events from backend services
- Collaborate with engineering teams to support services from development to production
- Ensure the Observability platform meets availability, capacity, efficiency, scalability, and performance goals
- Build next-generation observability integrating with Istio
- Develop libraries and APIs providing a unified interface for developers using monitoring, logging, and event processing systems
- Enhance alerting capabilities with tools like Slack, Jira, and PagerDuty
- Contribute to building a continuous deployment system driven by metrics and data
- Implement anomaly detection in the observability stack
- Participate in a 24x7 on-call rotation after at least 6 months of employment

**What You Know:**
- Minimum of five years of experience
- Proficiency in Python or Go
- Strong understanding of AWS services like Lambda, CloudWatch, IAM, EC2, ECS, S3
- Solid knowledge of Kubernetes
- Experience with tools like Prometheus, Grafana, Thanos, AlertManager, etc.
- Familiarity with monitoring protocols/frameworks such as Prometheus/Influx line format, SNMP, JMX, etc.
- Exposure to the Elastic stack, syslog, CloudWatch Logs
- Comfortable with git, GitHub, and CI/CD approaches
- Experience with IaC tools like CloudFormation or Terraform

**How You Do Things:**
- Provide expertise and guidance on the right way forward
- Collaborate effectively with SRE, platform, and development teams
- Work independently and seek support when needed
- Advocate for automation and code-driven practices

Join us if you have expertise in distributed tracing tools, Java, open Observability initiatives, Kafka, monitoring in GCP and Azure, AWS certifications, SQL, and more. At Arctic Wolf, we offer a collaborative and inclusive work environment that values diversity and inclusion. Our commitment to growth and customer satisfaction is unmatched, making us the most trusted name in the industry.
Join us on our mission to End Cyber Risk and engage with a community that values unique perspectives and corporate responsibility.
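As a flavor of the observability responsibilities above, here is a minimal Python sketch using prometheus_client to expose request metrics on a scrape endpoint; the metric names, port, and simulated workload are illustrative assumptions.

```python
# Expose a counter and latency histogram for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request():
    with LATENCY.time():               # observes elapsed time on exit
        time.sleep(random.uniform(0.01, 0.1))  # simulated work
    REQUESTS.labels(status="ok").inc()

if __name__ == "__main__":
    start_http_server(9100)            # metrics served at :9100/metrics
    while True:
        handle_request()
```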
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
You will join our team as a Full Stack Developer, bringing 5+ years of experience to the table. Your main responsibilities will revolve around designing, developing, testing, and deploying applications using ReactJS for the front end and Node.js for the back end. You will ensure the performance, scalability, and security of these applications, using AWS Cloud services such as EC2, S3, Lambda, API Gateway, DynamoDB, and RDS to build scalable, fault-tolerant solutions.

Your role will involve implementing scalable, cost-effective, and resilient cloud architectures; managing application state using tools like Redux, MobX, or Zustand; and writing unit and integration tests using frameworks such as Jest, Mocha, or Chai. You will develop modern user interfaces using frameworks like Tailwind CSS, Material UI, or Bootstrap, lead the development process, collaborate with cross-functional teams, and stay updated with the latest technologies and industry trends. Mentoring junior developers, providing guidance for career growth, and ensuring best practices are followed within the team will also be part of your tasks.

You should have hands-on experience in ReactJS, Node.js, HTML, CSS, JavaScript, and TypeScript, along with expertise in AWS Cloud services, unit testing, state management, UI frameworks, version control using Git, and effective collaboration in agile environments. Preferred skills include experience with Next.js, Docker, automated deployment pipelines, and security best practices for web applications and cloud environments.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
Genpact is a global professional services and solutions firm with over 125,000 employees in 30+ countries, driven by curiosity, agility, and a commitment to creating value for clients. We serve leading enterprises worldwide, leveraging our expertise in digital operations, data, technology, and AI.

We are seeking a Lead Consultant - Databricks Developer to solve cutting-edge problems and meet functional and non-functional requirements. As a Databricks Developer, you will work closely with architects and lead engineers to design solutions and stay abreast of industry trends and standards.

Responsibilities:
- Stay updated on new technologies for potential application in service offerings.
- Collaborate with architects and lead engineers to develop solutions.
- Demonstrate knowledge of industry trends and standards.
- Exhibit strong analytical and technical problem-solving skills.
- Bring experience in the Data Engineering domain.

Minimum qualifications:
- Bachelor's degree in CS, CE, CIS, IS, MIS, or equivalent work experience.
- Proficiency in Python or Scala, preferably Python.
- Experience in Data Engineering with a focus on Databricks.
- Implementation of at least 2 end-to-end projects in Databricks.
- Proficiency in Databricks components like Delta Lake, dbConnect, db API 2.0, and Databricks workflow orchestration.
- Understanding of the Databricks Lakehouse concept and its implementation.
- Ability to create complex data pipelines, with knowledge of data structures and algorithms.
- Strong skills in SQL and Spark SQL.
- Experience in performance optimization and in working on both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Familiarity with cloud platforms like Azure, AWS, and GCP, and their related services.
- Experience in writing unit and integration test cases.
- Excellent communication skills and team collaboration experience.

Preferred qualifications:
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL Endpoint.
- Experience with CI/CD for building Databricks job pipelines.
- Exposure to migration projects for building unified data platforms.
- Familiarity with DBT, Docker, and Kubernetes.

Join us as a Lead Consultant in Hyderabad, India, on a full-time basis to contribute to our digital initiatives and shape the future of professional services.
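To illustrate the Delta Lake work named in the qualifications, here is a hedged PySpark sketch of a Delta merge (upsert), a common pipeline step on Databricks. The table paths and join key are assumptions, and it presumes a Delta-enabled Spark session (a Databricks runtime, or delta-spark locally).

```python
# Upsert a batch of updates into a Delta table: update matches, insert new rows.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/raw/customer_updates/")
target = DeltaTable.forPath(spark, "/mnt/delta/customers")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # refresh existing customers
    .whenNotMatchedInsertAll()   # add customers seen for the first time
    .execute()
)
```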
Posted 1 week ago
6.0 - 11.0 years
30 - 35 Lacs
hyderabad, chennai, bengaluru
Hybrid
Job Title: SRE Developer - AWS Serverless (Offshore, 6+ yrs)
Company: Xebia
Work Location: All Xebia locations (Hybrid, 3 days office/week)
Work Hours: 2:00 PM to 11:00 PM IST
Experience Required: 6+ years
Notice Period: Immediate to 2 weeks only; please apply only if you can join early

Job Description
We are seeking an SRE Developer (AWS Serverless) with 6+ years of experience to join our offshore team. This role is focused on building highly reliable, resilient, and automated AWS architectures, with deep expertise in serverless technologies and SRE principles.

Key Responsibilities
- Build and maintain resilient AWS serverless architectures with automation and multi-region failover.
- Apply SRE best practices: SLIs, SLOs, SLAs, error budgets, toil reduction.
- Develop automation and self-healing workflows with Python, Node.js/TypeScript, Lambda, Step Functions, and EventBridge (see the sketch below).
- Implement observability pipelines (logs, metrics, traces, events) and configure dashboards (CloudWatch, Dynatrace, Grafana).
- Conduct chaos testing, fault injection, performance optimization, and capacity planning.
- Drive AIOps adoption for anomaly detection and automated incident workflows.
- Collaborate with the Offshore Lead/Manager and Onsite Architect for delivery alignment.

Skills Required
- Hands-on experience with the AWS serverless stack (Lambda, EventBridge, Kinesis, Firehose, S3, SQS, SNS, DynamoDB, RDS).
- Strong expertise in Python and Node.js/TypeScript for automation and AWS SDK integrations.
- Experience in IAM, Security Groups, Secrets Manager, chaos testing, and performance tuning.
- Familiarity with ECS/EKS and data observability is a plus.

Who Should Apply?
- Professionals with 6+ years of experience in SRE/DevOps with AWS serverless.
- Candidates who can join immediately or within 2 weeks only.

To apply, please share your profile with the following details to Vijay.S@xebia.com: Total Experience, Relevant Experience, Current CTC, Expected CTC, Notice Period (Immediate to 2 weeks), Current Location, Preferred Location, LinkedIn Profile URL.
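As one concrete reading of the automation and self-healing bullet above, here is an illustrative Python Lambda handler that reacts to an EventBridge ECS health event by stopping an unhealthy task so the service scheduler replaces it. The event shape and names are assumptions, not details from the posting.

```python
# Self-healing sketch: EventBridge-triggered Lambda restarts a stuck ECS task.
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    detail = event.get("detail", {})
    cluster = detail.get("clusterArn", "example-cluster")  # assumed field
    task = detail.get("taskArn")                           # assumed field
    if task:
        # Stopping the task lets the ECS service scheduler launch a healthy one.
        ecs.stop_task(cluster=cluster, task=task, reason="self-healing restart")
    return {"stopped": task}
```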
Posted 1 week ago
10.0 - 15.0 years
0 - 3 Lacs
noida, hyderabad, mumbai (all areas)
Hybrid
Must-have skills:
- 15 years of experience in the design and delivery of distributed systems capable of handling petabytes of data in a distributed environment.
- 10 years of experience in the development of Data Lakes with data ingestion from disparate data sources, including relational databases, flat files, APIs, and streaming data.
- Experience in the design and development of Data Platforms and data ingestion from disparate data sources into the cloud.
- Expertise in core AWS services including AWS IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, and CloudTrail.
- Proficiency in programming languages like Python and PySpark to ensure efficient data processing, preferably Python.
- Ability to architect and implement robust ETL pipelines using AWS Glue, defining data extraction methods, transformation logic, and data loading procedures across different data sources.
- 15 years of experience using IaC tools like Terraform.
- 10 years of experience in the development of CI/CD pipelines (GitHub Actions, Jenkins).
- Experience in the development of event-driven distributed systems in the cloud using serverless architecture.
- Ability to work with the Infrastructure team on AWS service provisioning for databases, services, network design, IAM roles, and AWS clusters.
- 2-3 years of experience working with DocumentDB.
- Ability to design, orchestrate, and schedule jobs using Airflow.
- Knowledge of AWS AI services like AWS Entity Resolution and AWS Comprehend.
- Ability to run custom LLMs using Amazon SageMaker.
- Ability to use Large Language Models (LLMs) for data classification and identification of PII data entities.

Nice-to-have skills:
- 10 years of experience in the development of data audit, compliance, and retention standards for Data Governance, and in automation of the governance processes.
- Experience in data modelling with NoSQL databases like DocumentDB.
- Experience using column-oriented data file formats like Apache Parquet, and Apache Iceberg as the table format for analytical datasets.
- Expertise in the development of Retrieval-Augmented Generation (RAG) and agentic workflows for providing context to LLMs based on proprietary enterprise data.
- Ability to develop re-ranking strategies using results from index and vector stores to improve the quality of LLM output.
Posted 1 week ago
10.0 - 15.0 years
0 - 3 Lacs
pune, chennai, bengaluru
Hybrid
Must-have skills:
- 15 years of experience in the design and delivery of distributed systems capable of handling petabytes of data in a distributed environment.
- 10 years of experience in the development of Data Lakes with data ingestion from disparate data sources, including relational databases, flat files, APIs, and streaming data.
- Experience in the design and development of Data Platforms and data ingestion from disparate data sources into the cloud.
- Expertise in core AWS services including AWS IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, and CloudTrail.
- Proficiency in programming languages like Python and PySpark to ensure efficient data processing, preferably Python.
- Ability to architect and implement robust ETL pipelines using AWS Glue, defining data extraction methods, transformation logic, and data loading procedures across different data sources.
- 15 years of experience using IaC tools like Terraform.
- 10 years of experience in the development of CI/CD pipelines (GitHub Actions, Jenkins).
- Experience in the development of event-driven distributed systems in the cloud using serverless architecture.
- Ability to work with the Infrastructure team on AWS service provisioning for databases, services, network design, IAM roles, and AWS clusters.
- 2-3 years of experience working with DocumentDB.
- Ability to design, orchestrate, and schedule jobs using Airflow.
- Knowledge of AWS AI services like AWS Entity Resolution and AWS Comprehend.
- Ability to run custom LLMs using Amazon SageMaker.
- Ability to use Large Language Models (LLMs) for data classification and identification of PII data entities.

Nice-to-have skills:
- 10 years of experience in the development of data audit, compliance, and retention standards for Data Governance, and in automation of the governance processes.
- Experience in data modelling with NoSQL databases like DocumentDB.
- Experience using column-oriented data file formats like Apache Parquet, and Apache Iceberg as the table format for analytical datasets.
- Expertise in the development of Retrieval-Augmented Generation (RAG) and agentic workflows for providing context to LLMs based on proprietary enterprise data.
- Ability to develop re-ranking strategies using results from index and vector stores to improve the quality of LLM output.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
You are a highly experienced Senior Python Developer with over 10 years of experience, specializing in FastAPI and AWS. Your main responsibility will be to lead the development of scalable backend systems by designing robust API architectures, optimizing cloud deployments, and mentoring junior developers. Your deep technical expertise and problem-solving skills will be essential in driving innovation within the team.

Your key responsibilities will include developing and optimizing high-performance APIs using FastAPI; designing and implementing scalable backend solutions in Python; leading AWS cloud deployments and infrastructure management; developing and maintaining microservices-based architectures; optimizing database performance; implementing best practices for API security, authentication, and authorization; leading code reviews; mentoring junior developers; monitoring system performance; and staying updated on emerging technologies.

To excel in this role, you must have strong hands-on experience with FastAPI; deep expertise in AWS services such as EC2, Lambda, RDS, S3, and API Gateway; experience with asynchronous programming, concurrency, and event-driven architecture; a solid understanding of microservices, RESTful APIs, and GraphQL; proficiency in both SQL (PostgreSQL/MySQL) and NoSQL (DynamoDB, MongoDB) databases; familiarity with software design patterns, SOLID principles, and clean code practices; and experience working in an Agile/Scrum environment.

This is a full-time permanent position with benefits including health insurance and Provident Fund. The work schedule is day shifts at the company's office in Bangalore.
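For flavor, here is a minimal FastAPI sketch of the kind of API work this role centers on: two async routes over a toy in-memory store. The model and routes are invented for illustration; a real service would back this with a database.

```python
# Minimal async CRUD sketch with FastAPI and Pydantic validation.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

FAKE_DB: dict[int, Item] = {}  # stand-in for a real datastore

@app.post("/items/{item_id}", status_code=201)
async def create_item(item_id: int, item: Item) -> Item:
    if item_id in FAKE_DB:
        raise HTTPException(status_code=409, detail="item already exists")
    FAKE_DB[item_id] = item
    return item

@app.get("/items/{item_id}")
async def read_item(item_id: int) -> Item:
    if item_id not in FAKE_DB:
        raise HTTPException(status_code=404, detail="item not found")
    return FAKE_DB[item_id]
```

Run locally with `uvicorn app_module:app --reload` (module name assumed).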
Posted 1 week ago
2.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Python + React Developer with over 8 years of experience, you will be responsible for developing and maintaining applications using Python frameworks such as Django and Flask. Your expertise in AWS services including EC2, Lambda, S3, RDS, and DynamoDB will be essential in designing, building, and optimizing database systems using both SQL and NoSQL databases.

In this role, you will also need a strong command of React.js and modern JavaScript (ES6), along with experience working with Redux and the Context API. Your experience should include at least 5 years with AWS, at least 8 years with Python, and at least 2 years with React.

This is a full-time position based in Pune, Bangalore, Chennai, or Hyderabad with a hybrid work mode. The work schedule is Monday to Friday, and the ideal candidate should be able to join immediately. If you are passionate about developing cutting-edge applications and have the required skills and experience, we would love to have you on board.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
Capco, a Wipro company, is a global technology and management consulting firm recognized for its deep transformation execution and delivery. With a presence in 32 cities across the globe, Capco supports over 100 clients in the banking, financial, and energy sectors. Awarded Consultancy of the Year in the British Bank Awards and ranked in the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount, Capco is committed to delivering disruptive work that is changing the energy and financial services industries.

At Capco, you will have the opportunity to make an impact by providing innovative thinking, delivery excellence, and thought leadership to help clients transform their businesses. The company values diversity, inclusivity, and creativity, fostering a tolerant and open culture where everyone is encouraged to be themselves at work. With no forced hierarchy, Capco provides a platform for career advancement where individuals can take their careers into their own hands and grow along with the company.

Currently, we are looking for a Principal Consultant - Senior Data Architect with expertise in AWS Glue, Spark, and Python. This role is based in Bangalore and requires a minimum of 10 years of experience. Responsibilities include designing, developing, testing, deploying, and maintaining large-scale data pipelines using AWS Glue, as well as collaborating with cross-functional teams to deliver high-quality ETL solutions. The ideal candidate should have a good understanding of Spark/PySpark and technical knowledge of AWS services like EC2 and S3. Previous experience with AWS development for data ETL, pipeline, integration, and automation work is essential, along with a deep understanding of the BI & Analytics solution development lifecycle and familiarity with AWS services such as Redshift, Glue, Lambda, Athena, S3, and EC2.

Join Capco and be part of a diverse and inclusive team that believes in the competitive advantage of diversity in people and perspectives. Make a meaningful impact, grow your career, and contribute to transformative work in the energy and financial services industries.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
haryana
On-site
About KPMG in India
KPMG entities in India are professional services firms affiliated with KPMG International Limited, established in India in August 1993. Our professionals leverage the global network of firms and possess in-depth knowledge of local laws, regulations, markets, and competition. With offices spread across India in cities like Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Jaipur, Hyderabad, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara, and Vijayawada, KPMG entities in India offer services to national and international clients across various sectors. We aim to provide rapid, performance-based, industry-focused, and technology-enabled services that demonstrate our understanding of global and local industries and our experience in the Indian business environment.

Job Description:
We are looking for a highly skilled Senior Data AWS Solutions Engineer to join our dynamic team. The ideal candidate should have substantial experience in designing and implementing data solutions on AWS, utilizing cloud services to drive business outcomes. The candidate must have hands-on experience in implementing solutions like a Data Lake, or involvement in technical architecture reviews and discussions. Previous knowledge of the Automobile or Banking sector would be advantageous.

Key Responsibilities:
- Design, develop, and implement scalable data architectures on AWS.
- Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
- Optimize data storage and processing using AWS services such as S3, Glue, RDS, and Lambda.
- Ensure data security, compliance, and best practices in data management.
- Troubleshoot and resolve data-related issues and performance bottlenecks.
- Mentor junior engineers and contribute to team knowledge sharing.
- Stay updated with the latest AWS services and industry trends to recommend improvements.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in data engineering or solutions architecture, focusing on AWS.
- Proficiency in AWS data services, ETL tools, and data modeling techniques.
- Strong programming skills in Python, PySpark, Java, or similar languages.
- Experience with data warehousing and big data technologies (e.g., Hadoop, Spark).
- Excellent problem-solving, consulting, and analytical skills.
- Strong communication and collaboration abilities.
- Stakeholder management and experience leading teams.

Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Data Analytics).
- Experience with containerization (Docker, Kubernetes).
- Knowledge of machine learning and data science principles.

Join us to apply your expertise in AWS and data engineering to develop innovative solutions that drive our business forward.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
Job Description:
As a member of our team at DataArt, you will play a crucial role in designing and developing new features to align with ever-evolving business and technical requirements. Your responsibilities will include maintaining and enhancing existing functionality to ensure optimal reliability and performance. Collaborating directly with customers will be a key aspect of your role, as you gather requirements and comprehend business objectives. It is essential to stay abreast of the latest technologies and use them to influence project decisions and outcomes positively.

Qualifications:
- 1-3 years of hands-on experience in developing commercial applications using .NET.
- A solid understanding of the Software Development Lifecycle.
- Proficiency in C#, encompassing .NET 6/8 and .NET Framework 4.8.
- Adequate knowledge and practical experience with Azure (Azure Functions, VMs, Cosmos DB, Azure SQL) or AWS (EC2, Lambda, S3, DynamoDB).
- Skills in front-end web development, particularly React, Angular, and TypeScript.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
maharashtra
On-site
As a talented Fullstack Web Developer at our company in Mumbai, India, you will play a crucial role in building innovative tech products aimed at serving a vast user base across various sectors. Your primary focus will be on developing impactful solutions that cater to millions of users.

You should have a minimum of 1-2 years of practical experience in web development, with a strong background in building quality web software. The ability to transition seamlessly between frontend and backend development is essential, as our tech stack spans the full spectrum (Next.js with TypeScript, Node.js, and PostgreSQL). Proficiency in JavaScript and TypeScript is a must, as these languages are integral to our frontend (Next.js) and backend (Node.js) development. You should also have experience creating and managing REST APIs, using Prisma for database operations, and implementing JWT for authentication.

Writing comprehensive unit, integration, and end-to-end test cases for both frontend and backend components is a key requirement; familiarity with modern testing tools such as Jest, Cypress, and SuperTest will be beneficial. Experience with AWS services like S3, ECS with ECR, and Redis for caching is highly desirable. Your problem-solving abilities will be further enhanced by proficiency in using AI tools efficiently. It is crucial that you are comfortable working with our specific tech stack, particularly JavaScript/TypeScript.

In addition to technical skills, you should possess excellent communication skills and the ability to collaborate effectively, especially when addressing challenges. A strong sense of ownership and a drive to create impactful solutions are vital. Quick learning capabilities and the adaptability to iterate rapidly on new features are essential qualities that will contribute to our success. Leadership skills are advantageous, as you may be required to organize and motivate a team towards a shared product vision; prior experience mentoring or managing junior developers will be a valuable asset.

Our tech stack encompasses Next.js with TypeScript for frontend development and Node.js with TypeScript for backend operations. Understanding multi-tenancy support, Tailwind CSS, NextAuth, and Vercel for frontend deployment will be beneficial, as will familiarity with REST APIs, PostgreSQL, Prisma ORM, JWT authentication, and Redis for caching on the backend. As part of our infrastructure, you will work with AWS services such as S3 for storage, ECS with ECR for containerization and deployment, and a VPC setup inside AWS for backend services deployment.

Joining our early-stage startup means embracing a dynamic work environment with ambitious goals. If you are passionate about problem-solving and enjoy pushing boundaries, this role offers you the opportunity to make a substantial impact and contribute to our growth.
Posted 1 week ago
3.0 - 8.0 years
12 - 22 Lacs
chennai
Work from Office
Key Responsibilities:
- Design, develop, and optimize scalable and reliable ETL pipelines using Python and PySpark.
- Extract data from diverse data sources and transform it to meet analytical and business needs.
- Implement robust data validation, error handling, and quality checks within ETL pipelines (see the sketch below).
- Work with large-scale datasets and ensure efficient performance and scalability.
- Collaborate with data engineers, analysts, and stakeholders to gather requirements and deliver end-to-end data solutions.
- Deploy and monitor ETL processes on AWS cloud services such as S3, Glue, Lambda, EMR, Redshift, and Step Functions.
- Ensure compliance with data governance and security standards.
- Troubleshoot and resolve performance bottlenecks and data quality issues.

Mandatory Qualifications:
- 4+ years of professional experience in ETL development.
- Strong programming skills in Python and experience with PySpark for distributed data processing.
- Proficiency in SQL and experience working with relational and non-relational databases.
- Hands-on experience with AWS cloud services related to data engineering (e.g., S3, Glue, EMR, Lambda, Redshift).
- Experience in designing and implementing ETL workflows in a production environment.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
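As an example of the data validation and quality-check responsibility, here is a hedged PySpark sketch that splits a batch into valid and quarantined rows under made-up rules and paths, so bad records never reach the curated zone.

```python
# Quarantine-style validation: route failing rows aside instead of dropping them.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_quality_checks").getOrCreate()

df = spark.read.parquet("s3://example-bucket/staging/orders/")

# Illustrative rules: required keys present and quantities positive.
rules = (
    F.col("order_id").isNotNull()
    & (F.col("quantity") > 0)
    & F.col("order_date").isNotNull()
)

valid = df.filter(rules)
rejected = df.filter(~rules)

valid.write.mode("append").parquet("s3://example-bucket/curated/orders/")
rejected.write.mode("append").parquet("s3://example-bucket/quarantine/orders/")

print(f"passed={valid.count()} rejected={rejected.count()}")
```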
Posted 1 week ago
4.0 - 9.0 years
18 - 33 Lacs
gurugram, chennai
Work from Office
Key Responsibilities:
- Design, develop, and maintain scalable and secure ETL pipelines using PySpark, Python, and SQL.
- Build and orchestrate data workflows leveraging AWS services such as Glue, Lambda, Step Functions, S3, Athena, and Redshift (see the sketch below).
- Perform data profiling, cleansing, and transformation to ensure data quality and consistency.
- Optimize and tune data pipelines for performance and cost-efficiency.
- Work with data architects, analysts, and stakeholders to understand data requirements and translate them into technical solutions.
- Develop and maintain technical documentation for all ETL processes and data flows.
- Monitor data pipelines and troubleshoot issues related to performance, failures, or data inconsistencies.
- Participate in Agile/Scrum development cycles and contribute to sprint planning and retrospectives.

Required Skills & Qualifications:
- 7+ years of hands-on experience in ETL development and data engineering.
- Strong programming skills in Python and PySpark.
- Expert-level knowledge of SQL for complex data manipulation and analysis.
- Experience with AWS services, including but not limited to: AWS Glue, AWS Lambda, Amazon S3, Amazon Redshift, AWS Step Functions, and Amazon Athena.
- Solid understanding of data warehousing concepts, data modeling, and schema design.
- Experience working with large-scale distributed data systems and real-time/batch processing.
- Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Excellent problem-solving and communication skills.
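One common way Glue-based workflows like those above are driven from Python is via boto3: the sketch below starts a Glue job and polls it to a terminal state, the kind of step a Lambda or Step Functions task might wrap. The job name is hypothetical.

```python
# Start an AWS Glue job run and wait for it to finish.
import time

import boto3

glue = boto3.client("glue")

def run_glue_job(job_name: str = "example-nightly-etl") -> str:
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        run = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]
        state = run["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)  # poll until the run reaches a terminal state

if __name__ == "__main__":
    print(run_glue_job())
```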
Posted 1 week ago
6.0 - 8.0 years
6 - 10 Lacs
visakhapatnam
Work from Office
Desired Skills - Windows Server, Active Directory, AWS CloudFormation, Terraform, GitHub, Jenkins
Duration - 6 months on contract
Work Time - 2:00-3:00 PM to 10:30/11:00 PM

What you will do:
- Configuration, management, and maintenance of AWS cloud infrastructure and services such as IAM, EC2, Security Groups, VPC, Internet Gateways, Transit Gateways, Route53, RDS, S3, EBS, ALB, FSx, Amazon WorkSpaces, etc.
- Work with IaC tools like AWS CloudFormation and Terraform (see the sketch below).
- Configuration, management, and maintenance of Azure cloud infrastructure and services such as Entra ID, M365, Teams, and Exchange Online, including security and compliance settings.
- Build, configure, tune, secure, patch, and maintain Windows Servers.
- Set up infrastructure from scratch and/or migrate infrastructure (for example, on-prem to cloud).
- Work with infrastructure and services such as Windows server creation and setup, Active Directory, SCCM, MS365, and PKI setup (CAs and related services).
- Apply an excellent understanding of Active Directory identity and security, Group Policy Objects, Windows file shares (FSx or equivalent), MS365 administration, and patching/SCCM.
- Use automation frameworks and CI/CD (Jenkins, Bamboo, or Concourse) in conjunction with IaC automation tools and source code management software like GitHub.
- Author and maintain complex scripts to automate tasks, including authenticating to M365 and Exchange and automating functions using PowerShell scripts; experience with other scripting languages like Python, Ruby, Go, or Perl is a plus.
- Work with core infrastructure services such as DNS, NTP, Kerberos, RADIUS, LDAP, SMTP, CIFS, and DHCP.
- Write technical documentation, including design documents, end-user-facing documents, and runbooks.
- Monitor systems and related metrics, and handle production incidents.
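Since the posting pairs CloudFormation with Python scripting, here is a hedged boto3 sketch that deploys a stack from a local template and blocks until creation completes; the stack name and template file are made up for illustration.

```python
# Deploy a CloudFormation stack from a local template via boto3.
import boto3

cfn = boto3.client("cloudformation")

def deploy_stack(name: str = "example-network-stack",
                 template_path: str = "template.yaml") -> None:
    with open(template_path) as f:
        body = f.read()
    cfn.create_stack(
        StackName=name,
        TemplateBody=body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM roles
    )
    # Block until creation finishes; the waiter raises on failure/rollback.
    cfn.get_waiter("stack_create_complete").wait(StackName=name)

if __name__ == "__main__":
    deploy_stack()
```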
Posted 1 week ago
6.0 - 8.0 years
6 - 10 Lacs
nagpur
Work from Office
Desired Skills - Windows Server, Active Directory, AWS CloudFormation, Terraform, GitHub, Jenkins
Duration - 6 months on contract
Work Time - 2:00-3:00 PM to 10:30/11:00 PM

What you will do:
- Configuration, management, and maintenance of AWS cloud infrastructure and services such as IAM, EC2, Security Groups, VPC, Internet Gateways, Transit Gateways, Route53, RDS, S3, EBS, ALB, FSx, Amazon WorkSpaces, etc.
- Work with IaC tools like AWS CloudFormation and Terraform.
- Configuration, management, and maintenance of Azure cloud infrastructure and services such as Entra ID, M365, Teams, and Exchange Online, including security and compliance settings.
- Build, configure, tune, secure, patch, and maintain Windows Servers.
- Set up infrastructure from scratch and/or migrate infrastructure (for example, on-prem to cloud).
- Work with infrastructure and services such as Windows server creation and setup, Active Directory, SCCM, MS365, and PKI setup (CAs and related services).
- Apply an excellent understanding of Active Directory identity and security, Group Policy Objects, Windows file shares (FSx or equivalent), MS365 administration, and patching/SCCM.
- Use automation frameworks and CI/CD (Jenkins, Bamboo, or Concourse) in conjunction with IaC automation tools and source code management software like GitHub.
- Author and maintain complex scripts to automate tasks, including authenticating to M365 and Exchange and automating functions using PowerShell scripts; experience with other scripting languages like Python, Ruby, Go, or Perl is a plus.
- Work with core infrastructure services such as DNS, NTP, Kerberos, RADIUS, LDAP, SMTP, CIFS, and DHCP.
- Write technical documentation, including design documents, end-user-facing documents, and runbooks.
- Monitor systems and related metrics, and handle production incidents.
Posted 1 week ago
6.0 - 8.0 years
6 - 10 Lacs
gurugram
Work from Office
Desired Skills - Windows Server, Active Directory, AWS CloudFormation, Terraform, GitHub, Jenkins
Duration - 6 months on contract
Work Time - 2:00-3:00 PM to 10:30/11:00 PM

What you will do:
- Configuration, management, and maintenance of AWS cloud infrastructure and services such as IAM, EC2, Security Groups, VPC, Internet Gateways, Transit Gateways, Route53, RDS, S3, EBS, ALB, FSx, Amazon WorkSpaces, etc.
- Work with IaC tools like AWS CloudFormation and Terraform.
- Configuration, management, and maintenance of Azure cloud infrastructure and services such as Entra ID, M365, Teams, and Exchange Online, including security and compliance settings.
- Build, configure, tune, secure, patch, and maintain Windows Servers.
- Set up infrastructure from scratch and/or migrate infrastructure (for example, on-prem to cloud).
- Work with infrastructure and services such as Windows server creation and setup, Active Directory, SCCM, MS365, and PKI setup (CAs and related services).
- Apply an excellent understanding of Active Directory identity and security, Group Policy Objects, Windows file shares (FSx or equivalent), MS365 administration, and patching/SCCM.
- Use automation frameworks and CI/CD (Jenkins, Bamboo, or Concourse) in conjunction with IaC automation tools and source code management software like GitHub.
- Author and maintain complex scripts to automate tasks, including authenticating to M365 and Exchange and automating functions using PowerShell scripts; experience with other scripting languages like Python, Ruby, Go, or Perl is a plus.
- Work with core infrastructure services such as DNS, NTP, Kerberos, RADIUS, LDAP, SMTP, CIFS, and DHCP.
- Write technical documentation, including design documents, end-user-facing documents, and runbooks.
- Monitor systems and related metrics, and handle production incidents.
Posted 1 week ago