6.0 - 10.0 years
15 - 25 Lacs
Chennai
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As an AWS Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.

Key Responsibilities:

1. Data Pipeline Design & Development
Design and develop scalable, resilient, and secure ETL/ELT data pipelines using AWS services. Build and optimize data workflows leveraging AWS Glue, EMR, Lambda, and Step Functions. Implement batch and real-time data ingestion using Kafka, Kinesis, or AWS Data Streams. Ensure efficient data movement across S3, Redshift, DynamoDB, RDS, and Snowflake.

2. Cloud Data Engineering & Storage
Architect and manage data lakes and data warehouses using Amazon S3, Redshift, and Athena. Optimize data storage and retrieval using Parquet, ORC, Avro, and columnar storage formats. Implement data partitioning, indexing, and query performance tuning. Work with NoSQL databases (DynamoDB, MongoDB) and relational databases (PostgreSQL, MySQL, Aurora).

3. Infrastructure as Code (IaC) & Automation
Deploy and manage AWS data infrastructure using Terraform, AWS CloudFormation, or AWS CDK. Implement CI/CD pipelines for automated data pipeline deployments using GitHub Actions, Jenkins, or AWS CodePipeline. Automate data workflows and job orchestration using Apache Airflow, AWS Step Functions, or MWAA.

4. Performance Optimization & Monitoring
Optimize Spark, Hive, and Presto queries for performance and cost efficiency. Implement auto-scaling strategies for AWS EMR clusters. Set up monitoring, logging, and alerting with AWS CloudWatch, CloudTrail, and Prometheus/Grafana.

5. Security, Compliance & Governance
Implement IAM policies, encryption (AWS KMS), and role-based access controls. Ensure compliance with GDPR, HIPAA, and industry data governance standards. Monitor data pipelines for security vulnerabilities and unauthorized access.

6. Collaboration & Stakeholder Engagement
Work closely with data analysts, data scientists, and business teams to understand data needs. Document data pipeline designs, architecture decisions, and best practices. Mentor and guide junior data engineers on AWS best practices and optimization techniques.

Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist.
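To make the responsibilities above concrete, here is a minimal, illustrative sketch of the kind of Glue-based ETL job such a role involves. The database, table, bucket, and column names are hypothetical placeholders, not details taken from this posting.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and create contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw data registered in the Glue Data Catalog (hypothetical database/table).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders_raw"
)

# Cleanse and normalize: drop rows without a key, deduplicate, cast types.
df = raw.toDF().dropna(subset=["order_id"]).dropDuplicates(["order_id"])
df = df.withColumn("order_ts", df["order_ts"].cast("timestamp"))

# Land the curated set in S3 as partitioned Parquet for Athena/Redshift Spectrum.
(df.write.mode("overwrite")
   .partitionBy("order_date")
   .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```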
So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally important – you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
7+ years of experience in data engineering with a focus on AWS cloud technologies. Expertise in AWS Glue, Lambda, EMR, Redshift, Kinesis, and Step Functions. Proficiency in SQL, Python, Java, and PySpark for data transformations. Strong understanding of ETL/ELT best practices and data warehousing concepts. Experience with Apache Airflow or Step Functions for orchestration. Familiarity with Kafka, Kinesis, or other streaming platforms. Knowledge of Terraform, CloudFormation, and DevOps for AWS. Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes. Experience in data pipeline development and tooling, such as Glue, Databricks, Synapse, or Dataproc. Experience with both relational and NoSQL databases, including PostgreSQL, DB2, and MongoDB. Excellent problem-solving, analytical, and critical thinking skills. Ability to manage multiple projects simultaneously while maintaining attention to detail. Communication skills: ability to communicate with both technical and non-technical colleagues to derive technical requirements from business needs and problems.

Preferred Skills and Experience
Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey.
Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 2 weeks ago
0.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, including large-scale models, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - AWS Developer! We are looking for candidates who have a passion for cloud with knowledge of different cloud environments. Ideal candidates should have technical experience in AWS Platform Services - IAM Roles & Policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. This key role demands a highly motivated individual with a strong background in Computer Science/Software Engineering. You are meticulous, thorough, and possess excellent communication skills to engage with all levels of our stakeholders. A self-starter, you are up to speed with the latest developments in the tech world.

Responsibilities
Hands-on experience and good skills on AWS Platform Services - IAM Roles & Policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc.
Must have good working knowledge of Kubernetes and Docker.
Utilize AWS services such as AWS Glue, Amazon S3, AWS Lambda, and others to optimize performance, reliability, and cost-effectiveness.
Design, develop, and maintain AWS-based applications, ensuring high performance, scalability, and security.
Integrate AWS services into application architecture, leveraging tools such as Lambda, API Gateway, S3, DynamoDB, and RDS.
Collaborate with DevOps teams to automate deployment pipelines and optimize CI/CD practices.
Develop scripts and automation tools to manage cloud environments efficiently.
Monitor, troubleshoot, and resolve application performance issues.
Implement best practices for cloud security, data management, and cost optimization.
Participate in code reviews and provide technical guidance to junior developers.

Qualifications we seek in you!
Minimum Qualifications / Skills
Experience in software development with a focus on AWS technologies.
Proficiency in AWS services such as EC2, Lambda, S3, RDS, and DynamoDB.
Strong programming skills in Python, Node.js, or Java.
Experience with RESTful APIs and microservices architecture.
Familiarity with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline.
Knowledge of infrastructure as code using CloudFormation or Terraform.
Problem-solving skills and the ability to troubleshoot application issues in a cloud environment.
Excellent teamwork and communication skills.

Preferred Qualifications / Skills
AWS Certified Developer - Associate or AWS Certified Solutions Architect - Associate.
Experience with serverless architectures and API development.
Familiarity with Agile development practices.
Knowledge of monitoring and logging solutions like CloudWatch and ELK Stack.

Why join Genpact?
Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation.
Make an impact - Drive change for global enterprises and solve business challenges that matter.
Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
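As a rough illustration of the serverless responsibilities described in this posting (Lambda behind API Gateway writing to DynamoDB), a minimal handler might look like the sketch below. The table and field names are assumptions for the example only.

```python
import json
import os
import boto3

# Table name comes from the function's environment; "orders" is a placeholder default.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "orders"))

def lambda_handler(event, context):
    # With API Gateway proxy integration, the request body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")
    if "order_id" not in body:
        return {"statusCode": 400, "body": json.dumps({"error": "order_id is required"})}

    table.put_item(Item=body)
    return {"statusCode": 201, "body": json.dumps({"status": "created",
                                                   "order_id": body["order_id"]})}
```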
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
As a Senior Data Scientist with 5+ years of experience, you will be responsible for designing and implementing models, mining data for insights, and interpreting complex data structures to drive business decision-making. Your expertise in machine learning, including areas such as NLP, machine vision, and time series, will be essential in this role. You will be expected to have strong skills in model tuning, model validation, and supervised and unsupervised learning, and hands-on experience with model development, data preparation, training, and inference-ready deployment of models. Your proficiency in descriptive and inferential statistics, hypothesis testing, and data analysis will help in developing code for reproducible analysis of data. Experience with AWS services like SageMaker, Lambda, Glue, Step Functions, and EC2 is necessary, along with knowledge of Databricks, the Anaconda distribution, and similar data science code development and deployment IDEs. Your familiarity with ML algorithms related to time series, natural language processing, optimization, object detection, topic modeling, clustering, and regression analysis will be highly valued. You should have expertise in Hive/Impala, Spark, Python, Pandas, Keras, scikit-learn, statsmodels, TensorFlow, and PyTorch. End-to-end model deployment and production experience of at least 1 year is required, along with a good understanding of model deployment on the Azure ML platform, Anaconda Enterprise, or AWS SageMaker. Basic knowledge of deep learning algorithms such as MaskedCNN and YOLO, and familiarity with visualization and analytics/reporting tools like Power BI, Tableau, and Alteryx, will be considered advantageous for this role.
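For context on the model tuning and validation skills listed above, a small scikit-learn sketch follows; the dataset and hyperparameter grid are illustrative only, not part of the role's actual work.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Model tuning: cross-validated grid search over a small hyperparameter space.
grid = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [4, 8, None]},
    cv=5,
    scoring="neg_mean_absolute_error",
)
grid.fit(X_train, y_train)

# Model validation: hold-out evaluation before inference-ready deployment.
preds = grid.best_estimator_.predict(X_test)
print("best params:", grid.best_params_)
print("test MAE:", mean_absolute_error(y_test, preds))
```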
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
At Lilly, you are part of a global healthcare leader that is committed to uniting caring with discovery to enhance the lives of people worldwide. With our headquarters in Indianapolis, Indiana, our dedicated team of 39,000 employees collaborates to discover and deliver life-changing medicines, enhance disease understanding and management, and contribute to our communities through philanthropy and volunteerism. Our focus is on making a positive impact on people's lives around the world. As part of our ongoing efforts, we are in the process of developing and internalizing a cutting-edge recommendation engine platform. This platform aims to streamline sales and marketing operations by analyzing diverse data sources, implementing advanced personalization models, and seamlessly integrating with other Lilly operations platforms. The goal is to provide tailored recommendations to our sales and marketing teams at the individual doctor level, enabling informed decision-making and enhancing customer experience. Responsibilities: - Utilize deep learning models to optimize Omnichannel Promotional Sequences for sales teams - Analyze large datasets to identify trends and relevant information for modeling decisions - Translate business problems into statistical problem statements and propose solution approaches - Collaborate with stakeholders to effectively communicate analysis findings - Preference for familiarity with pharmaceutical datasets and industry - Experience in code refactoring, model training, deployment, testing, and monitoring for drift - Optimize model hyperparameters and adapt to new ML techniques for business problem-solving Qualifications: - Bachelor's degree in Computer Science, Statistics, or related field (preferred) - 2-6 years of hands-on experience with data analysis, coding, and result interpretation - Proficiency in coding languages like SQL or Python - Prior experience with ML techniques for recommendation engine models in healthcare sectors - Expertise in Feature Engineering, Selection, and Model Validation on Big Data - Familiarity with cloud technology, particularly AWS, and tools like Tableau and Power BI At Lilly, we are committed to promoting workplace diversity and providing equal opportunities for all individuals, including those with disabilities. If you require accommodation during the application process, please complete the accommodation request form on our website. Join us at Lilly and be part of a team dedicated to making a difference in the lives of people worldwide.,
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have at least 10 years of experience and be proficient in setting up, configuring, and integrating API gateways in AWS. Your expertise should include API frameworks, XML/JSON, REST, and data protection in software design, build, test, and documentation. Experience with various AWS services such as Lambda, S3, CDN (CloudFront), SQS, SNS, EventBridge, API Gateway, Glue, and RDS is required. You should be able to articulate and implement projects using these AWS services effectively. Your role will involve improving business processes through effective integration solutions.

Location: Bangalore, Chennai, Pune, Mumbai, Noida
Notice Period: Immediate joiner

If you meet the requirements mentioned above, please apply for this position by filling out the form with your Full Name, Email, Phone, Cover Letter, and uploading your CV/Resume (PDF, DOC, or DOCX formats accepted). By submitting this form, you agree to the storage and handling of your data by this website.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You are a Senior Software Engineer at Elevance Health, a prominent health company in America dedicated to enhancing lives and simplifying healthcare. Elevance Health is the largest managed healthcare company in the Blue Cross Blue Shield (BCBS) Association, serving over 45 million lives across 14 states. This Fortune 500 company is currently ranked 20th and led by Gail Boudreaux, a prominent figure in the Fortune list of most powerful women. Your role will be within Carelon Global Solutions (CGS), a subsidiary of Elevance Health, focused on simplifying complex operational processes in the healthcare system. CGS brings together a global team of innovators across various locations, including Bengaluru and Gurugram in India, to optimize healthcare operations effectively and efficiently. As a Senior Software Engineer, your primary responsibility involves collaborating with data architects to implement data models and ensure seamless integration with AWS services. You will be responsible for supporting, monitoring, and resolving production issues to meet SLAs, being available 24/7 for business application support. You should have hands-on experience with technologies like Snowflake, Python, AWS S3-Athena, RDS, Cloudwatch, Lambda, and more. Your expertise should include handling nested JSON files, analyzing daily loads/issues, working closely with admin/architect teams, and understanding complex job and data flows in the project. To qualify for this role, you need a Bachelor's degree in Information Technology/Data Engineering or equivalent education and experience, along with 5-8 years of overall IT experience and 2-9 years in AWS services. Experience in agile development processes is preferred. You are expected to have skills in Snowflake, AWS services, complex SQL queries, and technologies like Hadoop, Kafka, HBase, Sqoop, and Scala. Your ability to analyze, research, and solve technical problems will be crucial for success in this role. Carelon promises limitless opportunities for its associates, emphasizing growth, well-being, purpose, and belonging. With a focus on learning and development, an innovative culture, and comprehensive rewards, Carelon offers a supportive environment for personal and professional growth. Carelon is an equal opportunity employer that values diversity and inclusivity. If you require accommodations due to a disability, you can request the Reasonable Accommodation Request Form. This is a full-time position that offers a competitive benefits package and a conducive work environment.,
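One of the points above is handling nested JSON files before loading them into Snowflake or querying them with Athena. A minimal sketch of that kind of flattening step is shown below; the record structure and field names are hypothetical.

```python
import pandas as pd

record = {
    "claim_id": "C-1001",
    "member": {"id": "M-77", "state": "KA"},
    "lines": [
        {"code": "A1", "amount": 120.0},
        {"code": "B2", "amount": 45.5},
    ],
}

# Explode the nested "lines" array into one row per line, carrying the
# claim- and member-level attributes onto each row.
flat = pd.json_normalize(
    record,
    record_path="lines",
    meta=["claim_id", ["member", "id"], ["member", "state"]],
)
print(flat)  # columns: code, amount, claim_id, member.id, member.state
```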
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You will be a valuable member of the data engineering team, focusing on developing data pipelines, transforming data, exploring new data patterns, optimizing current data feeds, and implementing enhancements. Your primary responsibilities will involve utilizing your expertise in RDBMS concepts, hands-on experience with the AWS Cloud platform and services (including IAM, EC2, Lambda, RDS, Timestream, Glue, etc.), familiarity with data streaming tools like Kafka, practical knowledge of ETL/ELT tools, and understanding of Snowflake/PostgreSQL or any other database system. Ideally, you should also have a good grasp of data modeling techniques to further bolster your capabilities in this role.
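Since the role mentions streaming tools like Kafka alongside AWS services, here is a hedged sketch of a simple ingestion loop that micro-batches Kafka messages into S3 using kafka-python and boto3. The topic, broker, and bucket names are placeholders.

```python
import json
import boto3
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events-topic",
    bootstrap_servers=["broker:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:
        # Land a micro-batch as newline-delimited JSON for downstream ETL/ELT.
        key = f"raw/events/offset={message.offset}.json"
        s3.put_object(
            Bucket="example-raw-bucket",
            Key=key,
            Body="\n".join(json.dumps(r) for r in batch).encode("utf-8"),
        )
        batch = []
```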
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
About Us
6thstreet.com is one of the largest omnichannel fashion & lifestyle destinations in the GCC, home to 1200+ international brands. The fashion-savvy destination offers collections from over 150 international fashion brands such as Dune London, ALDO, Naturalizer, Nine West, Charles & Keith, New Balance, Crocs, Birkenstock, Skechers, Levi's, Aeropostale, Garage, Nike, Adidas Originals, Rituals, and many more. The online fashion platform also provides free delivery, free returns, cash on delivery, and the option for click and collect.

Job Description
We are looking for a seasoned Data Engineer to design and manage data solutions. Expertise in SQL, Python, and AWS is essential. The role includes client communication, recommending modern data tools, and ensuring smooth data integration and visualization. Strong problem-solving and collaboration skills are crucial.

Responsibilities
Understand and analyze client business requirements to support data solutions.
Recommend suitable modern data stack tools based on client needs.
Develop and maintain data pipelines, ETL processes, and data warehousing.
Create and optimize data models for client reporting and analytics.
Ensure seamless data integration and visualization with cross-functional teams.
Communicate with clients for project updates and issue resolution.
Stay updated on industry best practices and emerging technologies.

Skills Required
3-5 years in data engineering/analytics with a proven track record.
Proficient in SQL and Python for data manipulation and analysis. Knowledge of PySpark is a plus.
Experience with data warehouse platforms like Redshift and Google BigQuery.
Experience with AWS services like S3, Glue, Athena.
Proficient in Airflow.
Familiarity with event tracking platforms like GA or Amplitude is a plus.
Strong problem-solving skills and adaptability.
Excellent communication skills and proactive client engagement.
Ability to get things done, unblock yourself, and effectively collaborate with team members and clients.

Benefits
Full-time role.
Competitive salary + bonus.
Company employee discounts across all brands.
Medical & health insurance.
Collaborative work environment.
Good vibes work culture.
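As an illustration of the Airflow proficiency asked for above, a minimal daily DAG might look like the sketch below; the task names and what each callable would do are assumptions, not details from the posting.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g. pull event-tracking or order data into S3
    ...

def transform():
    # e.g. build reporting models in Redshift/BigQuery
    ...

with DAG(
    dag_id="daily_reporting_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```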
Posted 2 weeks ago
7.0 - 12.0 years
0 Lacs
Maharashtra
On-site
As a Lead Data Engineer, you will be responsible for leveraging your 7 to 12+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud (AWS), and Data Governance domains. Your expertise in a modern programming language such as Scala, Python, or Java, with a preference for Spark/PySpark, will be crucial in this role. Your role will require you to have experience with configuration management and version control tools like Git, along with familiarity working within a CI/CD framework. If you have experience in building frameworks, it will be considered a significant advantage. A minimum of 8 years of recent hands-on SQL programming experience in a Big Data environment is necessary, with a preference for experience in Hadoop/Hive. Proficiency in PostgreSQL, RDBMS, NoSQL, and columnar databases will be beneficial for this role. Your hands-on experience with AWS Cloud data engineering components, including API Gateway, Glue, IoT Core, EKS, ECS, S3, RDS, Redshift, and EMR, will play a vital role in developing and maintaining ETL applications and data pipelines using big data technologies. Experience with Apache Kafka, Spark, and Airflow is a must-have for this position.

If you are excited about this opportunity and possess the required skills and experience, please share your CV with us at omkar@hrworksindia.com. We look forward to potentially welcoming you to our team.

Regards,
Omkar
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Cloud Data Integration Consultant, you will be responsible for leading a complex data integration project that involves API frameworks, a data lakehouse architecture, and middleware solutions. The project focuses on technologies such as AWS, Snowflake, Oracle ERP, and Salesforce, with a high transaction volume POS system. Your role will involve building reusable and scalable API frameworks, optimizing middleware, and ensuring security and compliance in a multi-cloud environment. Your expertise in API development and integration will be crucial for this project. You should have deep experience in managing APIs across multiple systems, building reusable components, and ensuring bidirectional data flow for real-time data synchronization. Additionally, your skills in middleware solutions and custom API adapters will be essential for integrating various systems seamlessly. In terms of cloud infrastructure and data processing, your strong experience with AWS services like S3, Lambda, Fargate, and Glue will be required for data processing, storage, and integration. You should also have hands-on experience in optimizing Snowflake for querying and reporting, as well as knowledge of Terraform for automating the provisioning and management of AWS resources. Security and compliance are critical aspects of the project, and your deep understanding of cloud security protocols, API security, and compliance enforcement will be invaluable. You should be able to set up audit logs, ensure traceability, and enforce compliance across cloud services. Handling high-volume transaction systems and real-time data processing requirements will be part of your responsibilities. You should be familiar with optimizing AWS Lambda and Fargate for efficient data processing and be skilled in operational monitoring and error handling mechanisms. Collaboration and support are essential for the success of the project. You will need to provide post-go-live support, collaborate with internal teams and external stakeholders, and ensure seamless integration between systems. To qualify for this role, you should have at least 10 years of experience in enterprise API integration, cloud architecture, and data management. Deep expertise in AWS services, Snowflake, Oracle ERP, and Salesforce integrations is required, along with a proven track record of delivering scalable API frameworks and handling complex middleware systems. Strong problem-solving skills, familiarity with containerization technologies, and experience in retail or e-commerce industries are also desirable. Your key responsibilities will include leading the design and implementation of reusable API frameworks, optimizing data flow through middleware systems, building robust security frameworks, and collaborating with the in-house team for seamless integration between systems. Ongoing support, monitoring, and optimization post-go-live will also be part of your role.,
Posted 3 weeks ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role
Data Engineer - 1 (Experience 0-2 years)

What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is a central data org for Kotak Bank which manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering and Data Governance charters. The org sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and build one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charter. As a member of this team, you get the opportunity to learn the fintech space, which is the most sought-after domain in the current world, be an early member in the digital transformation journey of Kotak, learn and leverage technology to build complex data platform solutions including real-time, micro-batch, batch and analytics solutions in a programmatic way, and also be futuristic and build systems which can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building a centralized data lake, managed compute and orchestration frameworks including concepts of serverless data solutions, managing a central data warehouse for extremely high concurrency use cases, building connectors for different sources, building a customer feature repository, building cost optimization solutions like EMR optimizers, performing automations, and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases.
Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship and the data quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.

Your day-to-day role will include:
Drive business decisions with technical input and lead the team.
Design, implement, and support a data infrastructure from scratch.
Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
Extract, transform, and load data from various sources using SQL and AWS big data technologies.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer/SDE in Data
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience in data engineering.
Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR.
Experience with data pipeline tools such as Airflow and Spark.
Experience with data modeling and data quality best practices.
Excellent problem-solving and analytical skills.
Strong communication and teamwork skills.
Experience in at least one modern scripting or programming language, such as Python, Java, or Scala.
Strong advanced SQL skills.

PREFERRED QUALIFICATIONS
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow.
Prior experience in the Indian banking segment and/or fintech is desired.
Experience with non-relational databases and data stores.
Building and operating highly available, distributed data processing systems for large datasets.
Professional software engineering and best practices for the full software development life cycle.
Designing, developing, and implementing different types of data warehousing layers.
Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions.
Building scalable data infrastructure and understanding distributed systems concepts.
SQL, ETL, and data modelling.
Ensuring the accuracy and availability of data to customers.
Proficiency in at least one scripting or programming language for handling large-volume data processing.
Strong presentation and communication skills.
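As a concrete (though hypothetical) example of the S3-to-Redshift loading this role touches, the sketch below issues a Redshift COPY via psycopg2. The cluster endpoint, table, bucket, and IAM role ARN are placeholders, and credentials would normally come from Secrets Manager rather than the environment.

```python
import os
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password=os.environ["REDSHIFT_PASSWORD"],  # placeholder; prefer Secrets Manager
)

copy_sql = """
    COPY reporting.daily_orders
    FROM 's3://example-curated-bucket/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

# COPY is the idiomatic way to bulk-load S3 extracts into Redshift.
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
```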
Posted 3 weeks ago
8.0 - 10.0 years
13 - 18 Lacs
Chandigarh
Work from Office
Job Description Full-stack Architect Experience 8 - 10 years Architect, design, and oversee the development of full-stack applications using modern JS frameworks and cloud-native tools. Lead microservice architecture design, ensuring system scalability, reliability, and performance. Evaluate and implement AWS services (Lambda, ECS, Glue, Aurora, API Gateway, etc.) for backend solutions. Provide technical leadership to engineering teams across all layers (frontend, backend, database). Guide and review code, perform performance optimization, and define coding standards. Collaborate with DevOps and Data teams to integrate services (Redshift, OpenSearch, Batch). Translate business needs into technical solutions and communicate with cross-functional stakeholders.
Posted 3 weeks ago
6.0 - 11.0 years
11 - 16 Lacs
Gurugram
Work from Office
Project description
We are looking for a star Python Developer who is not afraid of work and challenges! Having gladly become a partner of a famous financial institution, we are gathering a team of professionals with a wide range of skills to successfully deliver business value to the client.

Responsibilities
Analyse existing SAS DI pipelines and SQL-based transformations.
Translate and optimize SAS SQL logic into Python code using frameworks such as PySpark.
Develop and maintain scalable ETL pipelines using Python on AWS EMR.
Implement data transformation, cleansing, and aggregation logic to support business requirements.
Design modular and reusable code for distributed data processing tasks on EMR clusters.
Integrate EMR jobs with upstream and downstream systems, including AWS S3, Snowflake, and Tableau.
Develop Tableau reports for business reporting.

Skills
Must have
6+ years of experience in ETL development, with at least 5 years working with AWS EMR.
Bachelor's degree in Computer Science, Data Science, Statistics, or a related field.
Proficiency in Python for data processing and scripting.
Proficiency in SQL and experience with one or more ETL tools (e.g., SAS DI, Informatica).
Hands-on experience with AWS services: EMR, S3, IAM, VPC, and Glue.
Familiarity with data storage systems such as Snowflake or RDS.
Excellent communication skills and ability to work collaboratively in a team environment.
Strong problem-solving skills and ability to work independently.

Nice to have
N/A
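To illustrate the SAS SQL-to-PySpark translation this role centres on, here is a hedged sketch of one aggregation rewritten with the DataFrame API; the table, columns, and paths are invented for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas-sql-migration").getOrCreate()

# Original SQL, conceptually:
#   SELECT region, product, SUM(sales) AS total_sales
#   FROM sales
#   WHERE year = 2024
#   GROUP BY region, product
#   HAVING SUM(sales) > 0
sales = spark.read.parquet("s3://example-bucket/sales/")

result = (sales
          .filter(F.col("year") == 2024)
          .groupBy("region", "product")
          .agg(F.sum("sales").alias("total_sales"))
          .filter(F.col("total_sales") > 0))

result.write.mode("overwrite").parquet("s3://example-bucket/sales_summary/")
```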
Posted 3 weeks ago
10.0 - 20.0 years
25 - 40 Lacs
Hyderabad
Hybrid
Role & responsibilities: We are seeking dynamic individuals to join our team as individual contributors, collaborating closely with stakeholders to drive impactful results.

Working hours - 5:30 pm to 1:30 am (Hybrid model)

Must have Skills*
1. 15 years of experience in design and delivery of Distributed Systems capable of handling petabytes of data in a distributed environment.
2. 10 years of experience in the development of Data Lakes with data ingestion from disparate data sources, including relational databases, flat files, APIs, and streaming data.
3. Experience in providing design and development of Data Platforms and data ingestion from disparate data sources into the cloud.
4. Expertise in core AWS services including AWS IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, and CloudTrail.
5. Proficiency in programming languages like Python and PySpark to ensure efficient data processing, preferably Python.
6. Architect and implement robust ETL pipelines using AWS Glue, Lambda, and Step Functions, defining data extraction methods, transformation logic, and data loading procedures across different data sources.
7. Experience in the development of event-driven distributed systems in the cloud using serverless architecture.
8. Ability to work with the Infrastructure team on AWS service provisioning for databases, services, network design, IAM roles and AWS clusters.
9. 2-3 years of experience working with a DocumentDB or MongoDB environment.

Nice to have Skills:
1. 10 years of experience in the development of Data Audit, Compliance and Retention standards for Data Governance, and automation of the governance processes.
2. Experience in data modelling with NoSQL databases like DocumentDB.
3. Experience in using column-oriented data file formats like Apache Parquet, and Apache Iceberg as the table format for analytical datasets.
4. Expertise in development of Retrieval-Augmented Generation (RAG) and agentic workflows for providing context to LLMs based on proprietary enterprise data.
5. Ability to develop re-ranking strategies using results from index and vector stores for LLMs to improve the quality of the output.
6. Knowledge of AWS AI services like AWS Entity Resolution and Amazon Comprehend.
Posted 3 weeks ago
4.0 - 6.0 years
13 - 18 Lacs
Bengaluru
Remote
About BNI: Established in 1985, BNI is the world’s largest business referral network. With over 325,000 small-to medium-size business Members in over 11,000 Chapters across 77 Countries, we are a global company with local footprints. Our proven approach provides Members with a structured, positive, and professional referral program that enables them to sharpen their business skills, develop meaningful, long-term relationships, and experience business growth. Visit to learn how BNI has impacted the lives of our Members and how it can help you achieve your business goals. Position Summary The Database Developer will be a part of BNI’s Global Information Technology Team and will primarily have responsibilities over the creation, development, maintenance, and enhancements for our databases, queries, routines and processes. The Database Developer will work closely with the Database Administrator, data team, software developers, QA engineers and DevOps Engineers located within the BNI office in Bangalore, as well as all levels of BNI Management and Leadership teams. This is an unparalleled opportunity to become part of a growing team and a growing global organization. High performers will have significant growth opportunities available to them. The candidate should be able to be an expert in both database and query design and should be able to write queries on the fly on demand, he/she should posses good hands on experience on data engineering and should be well versed with tools mentioned in the technical table below The person should be able to own the assignments and should be independent in terms of the development of queries and other aspects in data engineering. Roles and Responsibilities Design stable, reliable and effective databases Create, optimize and maintain queries, used in our software applications, as well as data extracts and ETL processes Modify and maintain databases, routines, queries in order to ensure accuracy, maintainability, scalability, and high performance of all our data systems Solve database usage issues and malfunctions Liaise with developers to improve applications and establish best practices Provide data management support for our users/clients Research, analyze and recommend upgrades to our data systems Prepare documentation and specifications for all deployed queries/routines/processes Profile, optimize and tweak queries and routines for optimal performance Support the Development and Quality Assurance teams with their needs for database development and access Be a team player and strong problem-solver to work with a diverse team Qualifications Required: Bachelor’s Degree or equivalent work experience Fluent in English, with excellent oral and written communication skills 5+ years of experience with Linux-based MySQL/MariaDB database development and maintenance 2+ years of experience with Database Design/Development/Scripting Proficient in writing and optimizing SQL Statements Strong proficiency in MySQL/MariaDB scripting, including functions, routines and complex data queries. 
Understanding of MySQL/MariaDB’s underlying storage engines, such as InnoDB and MyISAM Knowledge of standards and best practices in MySQL/MariaDB Knowledge of MySQL/MariaDB features, such as its event scheduler (Desired) Familiarity with other SQL/NoSQL databases such as PostgreSQL, MongoDB, Redis Experience with Amazon Web Services’ RDS offering Experience with Data Lakes and Big Data is a must Experience in Python is a must Experience with tools like Airflow/DBT/Data pipelines Experience with Apache Superset Knowledgeable with AWS services from Data Engineering point of view (Desired) Proficient Understanding of git/GitHub as a source control system Familiarity with working on an Agile/Iterative development framework Self-starter with positive attitude with the ability to collaborate with product managers and developers Strong SQL Experience and ability to write queries on demand. Primary Technologies: Database Stored Procedure SQL Optimization Database Management Airflow/DBT/ Data Warehousing with RedShift/Snowflake (Mandatory). Python/Linux/ Data Pipelines Physical Demands and Working Conditions Sedentary work. Exerting up to 10 pounds of force occasionally and/or negligible amount of force frequently or constantly to lift, carry, push, pull or otherwise move objects. Repetitive motion. Substantial movements (motions) of the wrists, hands, and/or fingers. The worker is required to have close visual acuity to perform an activity such as: preparing and analyzing data and figures; transcribing; viewing a computer terminal; extensive reading. External Posting Language This is a full-time position. This job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice. Learn more at BNI.com
Posted 3 weeks ago
6.0 - 11.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Required Skills: 6-12 years of experience in Azure or AWS. Please send profiles to payal.kumari@nam-it.com Regards, Payal Kumari Senior Executive Staffing NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070. Email – payal.kumari@nam-it.com Website - www.nam-it.com USA | CANADA | INDIA
Posted 3 weeks ago
6.0 - 11.0 years
20 - 30 Lacs
Hyderabad, Bengaluru
Hybrid
Notice Period - Immediate to 15 days max

Virtusa JD: 8+ years of experience in data engineering, specifically in cloud environments like AWS. (Please do not share data science profiles.)

Proficiency in Python and PySpark for data processing and transformation tasks.
Solid experience with AWS Glue for ETL jobs and managing data workflows.
Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
Deep understanding of ETL concepts and best practices.
Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
Experience with data warehousing and big data technologies, specifically within AWS.

Additional Skills:
Experience with AWS Lambda for serverless data processing and orchestration.
Understanding of AWS Redshift for data warehousing and analytics.
Familiarity with data lakes, Amazon EMR, and Kinesis for streaming data processing.
Knowledge of data governance practices, including data lineage and auditing.
Familiarity with CI/CD pipelines and Git for version control.
Experience with Docker and containerization for building and deploying applications.

Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
Posted 3 weeks ago
7.0 - 9.0 years
7 - 17 Lacs
Pune
Remote
Requirements for the candidate:
The role will require deep knowledge of data engineering techniques to create data pipelines and build data assets.
At least 4+ years of strong hands-on programming experience with PySpark/Python/Boto3, including Python frameworks and libraries, following Python best practices.
Strong experience in code optimization using Spark SQL and PySpark.
Understanding of code versioning, Git repositories, and JFrog Artifactory.
AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, CloudFormation, etc., and the ability to explain the benefits of each.
Code refactoring of legacy codebases: clean, modernize, and improve readability and maintainability.
Unit tests/TDD: write tests before code, ensure functionality, catch bugs early (see the sketch after this list).
Fixing difficult bugs: debug complex code, isolate issues, and resolve performance, concurrency, or logic flaws.
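A small TDD-style sketch with pytest, matching the "write tests before code" point above; the helper function and test cases are illustrative only.

```python
import pytest

def normalize_amount(raw: str) -> float:
    """Parse an amount string like ' 1,234.50 ' into a float; reject empty input."""
    cleaned = raw.strip().replace(",", "")
    if not cleaned:
        raise ValueError("empty amount")
    return float(cleaned)

def test_normalize_amount_strips_and_parses():
    assert normalize_amount(" 1,234.50 ") == 1234.50

def test_normalize_amount_rejects_empty():
    with pytest.raises(ValueError):
        normalize_amount("   ")
```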
Posted 3 weeks ago
8.0 - 10.0 years
12 - 18 Lacs
Zirakpur
Work from Office
AWS services (Lambda, Glue, S3, DynamoDB, EventBridge, AppSync, OpenSearch), Terraform, Python, React/Vite, unit testing (Jest, Pytest), software development lifecycle.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
Data Scientist (5+ Years of Experience)

We are seeking a highly motivated Data Scientist with over 5 years of hands-on experience in data mining, statistical analysis, and developing high-quality machine learning models. The ideal candidate will have a passion for solving real-world problems using data-driven approaches and possess strong technical expertise across various data science domains.

Key Responsibilities:
Apply advanced data mining techniques and statistical analysis to extract actionable insights.
Design, develop, and deploy robust machine learning models to address complex business challenges.
Conduct A/B and multivariate experiments to evaluate model performance and optimize outcomes.
Monitor, analyze, and enhance the performance of machine learning models post-deployment.
Collaborate cross-functionally to build customer cohorts for CRM campaigns and conduct market basket analysis.
Stay updated with state-of-the-art techniques in NLP, particularly within the e-commerce domain.

Required Skills & Qualifications:
Programming & Tools: Proficient in Python, PySpark, and SQL for data manipulation and analysis.
Machine Learning & AI: Strong experience with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch) and expertise in NLP, computer vision, recommender systems, and optimization techniques.
Cloud & Big Data: Hands-on experience with AWS services, including Glue, EKS, S3, SageMaker, and Redshift.
Model Deployment: Experience deploying pre-trained models from platforms like Hugging Face and AWS Bedrock.
DevOps & MLOps: Understanding of Git, Docker, CI/CD pipelines, and deploying models with frameworks such as FastAPI.
Advanced NLP: Experience in building, retraining, and optimizing NLP models for diverse use cases.

Preferred Qualifications:
Strong research mindset with a keen interest in exploring new data science methodologies.
Background in e-commerce analytics is a plus.

If you're passionate about leveraging data to drive impactful business decisions and thrive in a dynamic environment, we'd love to hear from you!
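Since the posting calls out deploying models with FastAPI, a minimal serving sketch is shown below. The endpoint, schema, and the stubbed model are assumptions; in practice the model would be loaded from S3 or Hugging Face at startup.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Review(BaseModel):
    text: str

def predict_sentiment(text: str) -> str:
    # Stub standing in for a real NLP model, to keep the example self-contained.
    return "positive" if "good" in text.lower() else "negative"

@app.post("/predict")
def predict(review: Review):
    return {"label": predict_sentiment(review.text)}
```

Run locally with, for example, `uvicorn app:app --reload` and POST a JSON body like `{"text": "good product"}` to /predict.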
Posted 3 weeks ago
7.0 - 12.0 years
17 - 27 Lacs
Hyderabad
Work from Office
Job Title: Data Quality Engineer Mandatory Skills Data Engineer, Python, AWS, SQL, Glue, Lambda, S3, SNS, ML, SQS Job Summary: We are seeking a highly skilled Data Engineer (SDET) to join our team, responsible for ensuring the quality and reliability of complex data workflows, data migrations, and analytics solutions across both cloud and on-premises environments. The ideal candidate will have extensive experience in SQL, Python, AWS, and ETL testing, along with a strong background in data quality assurance, data science platforms, DevOps pipelines, and automation frameworks. This role involves close collaboration with business analysts, developers, and data architects to support end-to-end testing,data validation, and continuous integration for data products. Expertise in tools like Redshift, EMR,Athena, Jenkins, and various ETL platforms is essential, as is experience with NoSQL databases, big data technologies, and cloud-native testing strategies. Role and Responsibilities: Work with business stakeholders, Business Systems Analysts and Developers to ensure quality delivery of software. Interact with key business functions to confirm data quality policies and governed attributes. Follow quality management best practices and processes to bring consistency and completeness to integration service testing. Designing and managing the testing AWS environments of data workflows during development and deployment of data products Provide assistance to the team in Test Estimation & Test Planning Design, development of Reports and dashboards. Analyzing and evaluating data sources, data volume, and business rules. Proficiency with SQL, familiarity with Python, Scala, Athena, EMR, Redshift and AWS. No SQL data and unstructured data experience. Extensive experience in programming tools like Map Reduce to HIVEQL. Experience in data science platforms like SageMaker/Machine Learning Studio/ H2O. Should be well versed with the Data flow and Test Strategy for Cloud/ On Prem ETL Testing. Interpret and analyses data from various source systems to support data integration and data reporting needs. Experience in testing Database Application to validate source to destination data movement and transformation. Work with team leads to prioritize business and information needs. Develop complex SQL scripts (Primarily Advanced SQL) for Cloud ETL and On prem. Develop and summarize Data Quality analysis and dashboards. Knowledge of Data modeling and Data warehousing concepts with emphasis on Cloud/ On Prem ETL. Execute testing of data analytic and data integration on time and within budget. Work with team leads to prioritize business and information needs Troubleshoot & determine best resolution for data issues and anomalies Experience in Functional Testing, Regression Testing, System Testing, Integration Testing & End to End testing. Has deep understanding of data architecture & data modeling best practices and guidelines for different data and analytic platforms Required Skills and Qualifications: Extensive Experience in Data migration is a must (Teradata to Redshift preferred). Extensive testing Experience with SQL/Unix/Linux scripting is a must. Extensive experience testing Cloud/On Prem ETL (e.g. Abinitio, Informatica, SSIS, Datastage, Alteryx, Glu). Extensive experience DBMS like Oracle, Teradata, SQL Server, DB2, Redshift, Postgres and Sybase. Extensive experience using Python scripting and AWS and Cloud Technologies. Extensive experience using Athena, EMR, Redshift, AWS, and Cloud Technologies. 
Experienced in large-scale application development testing Cloud/ On Prem Data warehouse, Data Lake, Data science. Experience with multi-year, large-scale projects. Expert technical skills with hands-on testing experience using SQL queries. Extensive experience with both data migration and data transformation testing. Extensive experience DBMS like Oracle, Teradata, SQL Server, DB2, Redshift, Postgres and Sybase. Extensive testing Experience with SQL/Unix/Linux. Extensive experience testing Cloud/On Prem ETL (e.g. Abinitio, Informatica, SSIS, Datastage, Alteryx, Glu). Extensive experience using Python scripting and AWS and Cloud Technologies. Extensive experience using Athena, EMR , Redshift and AWS and Cloud Technologies. API/Rest Assured automation, building reusable frameworks, and good technical expertise/acumen. Java/Java Script - Implement core Java, Integration, Core Java and API. Functional/UI/ Selenium - BDD/Cucumber, Specflow, Data Validation/Kafka, BigData, also automation experience using Cypress. AWS/Cloud - Jenkins/ Gitlab/ EC2 machine, S3 and building Jenkins and CI/CD pipelines, SouceLabs. Preferred Skills: API/Rest API - Rest API and Micro Services using JSON, SoapUI. Extensive experience in DevOps/Data Ops space. Strong experience in working with DevOps and build pipelines. Strong experience of AWS data services including Redshift, Glue, Kinesis, Kafka (MSK) and EMR/Spark, Sage Maker etc. Experience with technologies like Kubeflow, EKS, Docker. Extensive experience using No SQL data and unstructured data experience like MongoDB, Cassandra, Redis, ZooKeeper. Extensive experience in Map reduce using tools like Hadoop, Hive, Pig, Kafka, S4, Map R. Experience using Jenkins and Gitlab. Experience using both Waterfall and Agile methodologies. Experience in testing storage tools like S3, HDFS. Experience with one or more industry-standard defect or Test Case management Tools. Great communication skills (regularly interacts with cross functional team members).
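For the source-to-destination validation described above, a reconciliation helper along these lines is typical; this is only a sketch using generic DB-API cursors, with connection setup omitted and table/column names invented.

```python
def row_count(cursor, table: str) -> int:
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return cursor.fetchone()[0]

def column_total(cursor, table: str, column: str) -> float:
    # A simple aggregate fingerprint; real suites compare several such measures.
    cursor.execute(f"SELECT COALESCE(SUM({column}), 0) FROM {table}")
    return float(cursor.fetchone()[0])

def reconcile(src_cur, tgt_cur, table: str, amount_col: str) -> list:
    """Compare row counts and a column total between source and target copies of a table."""
    issues = []
    if row_count(src_cur, table) != row_count(tgt_cur, table):
        issues.append(f"{table}: row counts differ")
    if abs(column_total(src_cur, table, amount_col)
           - column_total(tgt_cur, table, amount_col)) > 1e-6:
        issues.append(f"{table}: {amount_col} totals differ")
    return issues
```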
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
Maharashtra
On-site
As a DevOps Engineer or AWS Cloud Engineer, you will be responsible for setting up AWS infrastructure using Terraform Enterprise and a Concourse (CI/CD) pipeline. Your role will involve configuring and managing various tools and infrastructure components, with a focus on automation wherever possible. You will also be troubleshooting code issues, managing databases such as PostgreSQL, DynamoDB, and Glue, and working with different cloud services. Your responsibilities will include striving for continuous improvement by implementing continuous integration, continuous delivery, and continuous deployment pipelines. Additionally, you will be involved in incident management and root cause analysis of AWS-related issues. To excel in this role, you should have a Master of Science degree in Computer Science, Computer Engineering, or a relevant field. You must have prior work experience as a DevOps Engineer or AWS Cloud Engineer, with a strong understanding of Terraform, Terraform Enterprise, and AWS infrastructure. Proficiency in AWS services, Python, PySpark, and Agile methodology is essential. Experience working with databases such as PostgreSQL, DynamoDB, and Glue will be advantageous. If you are passionate about building scalable and reliable infrastructure on AWS, automating processes, and continuously improving systems, this role offers a challenging yet rewarding opportunity to contribute to the success of the organization.
Posted 3 weeks ago
13.0 - 17.0 years
0 Lacs
Pune, Maharashtra
On-site
You are an experienced professional with over 13 years of experience engaging with clients and translating their business needs into technical solutions. You have a proven track record of working with cloud services on platforms such as AWS, Azure, or GCP. Your expertise lies in AWS data services such as Redshift, Glue, Athena, and SageMaker. Additionally, you have a strong background in generative AI frameworks such as GANs and VAEs and possess advanced skills in Python, including libraries like Pandas, NumPy, Scikit-learn, and TensorFlow.

Your role involves designing and implementing advanced AI solutions, focusing on areas such as NLP and innovative ML algorithms. You are proficient in developing and deploying NLP models and have experience in enhancing machine learning algorithms. Your knowledge extends to MLOps principles and best practices, as well as the development and maintenance of CI/CD pipelines. Your problem-solving skills enable you to analyze complex data sets and derive actionable insights. Moreover, your excellent communication skills allow you to effectively convey technical concepts to non-technical stakeholders.

In this role, you will be responsible for understanding clients' business use cases and technical requirements and translating them into technical designs that elegantly meet their needs. You will be instrumental in mapping decisions to requirements, identifying optimal solutions, and setting guidelines for non-functional requirement (NFR) considerations during project implementation. Your tasks will include writing and reviewing design documents, reviewing architecture and design aspects, and ensuring adherence to best practices.

To excel in this position, you should hold a bachelor's or master's degree in Computer Science, Information Technology, or a related field. Relevant certifications in AI, cloud technologies, or related areas would be advantageous. Your ability to innovate, design, and implement cutting-edge solutions will be crucial in this role, as will your skill in technology integration and problem resolution through systematic analysis. Conducting POCs to validate suggested designs and technologies will also be part of your responsibilities.
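As a small, hedged illustration of the NLP side of this profile, the sketch below shows a scikit-learn baseline text classifier; the example sentences and labels are toy data standing in for a real corpus, and a production solution would more likely use transformer models or SageMaker-hosted endpoints.

# Illustrative only: toy data; a baseline before moving to heavier NLP models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "The shipment arrived late and the packaging was damaged",
    "Great service, my issue was resolved in minutes",
    "I was charged twice for the same order",
    "The support agent was friendly and very helpful",
]
labels = ["complaint", "praise", "complaint", "praise"]

# TF-IDF features feeding a linear classifier, wrapped in a single pipeline object
# so the same artifact can be versioned and deployed through a CI/CD pipeline.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

print(model.predict(["My refund still has not been processed"]))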
Posted 3 weeks ago
10.0 - 17.0 years
20 - 27 Lacs
Hyderabad
Work from Office
Required Skills and Qualifications:
Extensive experience in data migration is a must (Teradata to Redshift preferred).
Extensive testing experience with SQL and Unix/Linux scripting is a must.
Extensive experience testing cloud and on-premises ETL (e.g. Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue).
Extensive experience with DBMS such as Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase.
Extensive experience using Python scripting, Athena, EMR, Redshift, AWS, and cloud technologies.
Experienced in large-scale application development testing for cloud and on-premises data warehouse, data lake, and data science platforms.
Experience with multi-year, large-scale projects.
Expert technical skills with hands-on testing experience using SQL queries.
Extensive experience with both data migration and data transformation testing.
API/REST Assured automation, building reusable frameworks, and good technical expertise/acumen (see the sketch after this list).
Java/JavaScript - core Java, integration, and API implementation.
Functional/UI/Selenium - BDD/Cucumber, SpecFlow, data validation, Kafka, big data; automation experience using Cypress.
AWS/Cloud - Jenkins, GitLab, EC2, S3, building Jenkins CI/CD pipelines, Sauce Labs.
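To give a flavour of the API automation item above, here is a minimal Python sketch of a REST smoke check; the endpoint, token, and expected fields are hypothetical, and a Java/Rest Assured framework as named in the posting would express the same assertions differently.

# Illustrative only: base URL, token, and expected schema are placeholders.
import requests

BASE_URL = "https://api.example.com"           # placeholder service under test
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def test_get_order_returns_expected_fields():
    """Smoke-check a REST endpoint: status code, content type, and required keys."""
    response = requests.get(f"{BASE_URL}/orders/12345", headers=HEADERS, timeout=10)
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")
    body = response.json()
    for field in ("order_id", "status", "total_amount"):
        assert field in body, f"missing field: {field}"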
Posted 3 weeks ago
5.0 - 10.0 years
14 - 24 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Greetings! We have multiple openings for AWS Databricks and AWS Glue. Interested candidates can fill in the relevant form below.
AWS Databricks: https://forms.office.com/r/VmydKh6H8R
AWS Glue: https://forms.office.com/r/afGKQhARkm

AWS Glue:
Primary skills: AWS Glue, Python, PySpark, Airflow.
Secondary skills: RDS, Redshift, Snowflake.
Snowflake - understanding of its architecture, data ingestion, and query optimization (existing data warehouse).
AWS services - extensive experience with AWS Glue, AWS Lambda, Amazon EMR, Amazon S3, and Apache Airflow for building data pipelines.
Python & SQL - strong programming skills for data transformation and querying.
Data warehousing - experience in managing existing Snowflake data warehouses and optimizing performance.

AWS Databricks:
Primarily looking for a Data Engineer with expertise in processing data pipelines using Databricks, PySpark, and SQL on cloud distributions such as AWS (a minimal PySpark sketch follows this posting).
Must have: AWS Databricks. Good to have: PySpark, Snowflake, Talend.

Requirements:
Expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions such as AWS EMR, Databricks, Cloudera, etc.
Very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
Familiarity with AWS compute, storage, and IAM concepts.
Experience working with an S3 data lake as the storage tier.
Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
Cloud warehouse experience (Snowflake, etc.) is a huge plus.
Carefully evaluates alternative risks and solutions before taking action and optimizes the use of all available resources.
Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
Experience with shell scripting.
Exceptionally strong analytical and problem-solving skills.
Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
Strong experience with relational databases and data access methods, especially SQL.
Excellent collaboration and cross-functional leadership skills.
Excellent communication skills, both written and verbal.
Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
Ability to leverage data assets to respond to complex questions that require timely answers.
Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.
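For context on the Databricks pipeline work referenced above, here is a minimal PySpark sketch; the bucket paths, column names, and aggregation are hypothetical placeholders, and on Databricks a SparkSession is already available so getOrCreate() simply reuses it.

# Illustrative only: S3 paths and columns are placeholders, not part of the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-load").getOrCreate()

# Read raw landing data from the S3 data lake tier.
raw = spark.read.parquet("s3://example-data-lake/raw/orders/")

# Light transformation before loading a curated zone or a warehouse staging area.
curated = (
    raw.filter(F.col("order_status").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "region")
       .agg(F.sum("order_amount").alias("daily_revenue"))
)

# Write back partitioned by date; a Snowflake or Redshift load could follow from here.
curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-data-lake/curated/daily_revenue/"
)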
Posted 3 weeks ago