
3632 Redshift Jobs - Page 44

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

0.0 - 3.0 years

7 - 10 Lacs

Bengaluru

On-site

JLL empowers you to shape a brighter way. Our people at JLL and JLL Technologies are shaping the future of real estate for a better world by combining world-class services, advisory and technology for our clients. We are committed to hiring the best, most talented people and empowering them to thrive, grow meaningful careers and find a place where they belong. Whether you’ve got deep experience in commercial real estate, skilled trades or technology, or you’re looking to apply your relevant experience to a new industry, join our team as we help shape a brighter way forward.

Data Engineer

Find your next move at JLL and build a fulfilling career. At JLL, we value what makes you unique, and we’re committed to giving you the opportunity, knowledge, and tools to own your success. Explore opportunities to advance your career from within, whether you’re looking to move up, broaden your experience or deepen your expertise.

The JLL Technologies Enterprise Data team is a newly established central organization that oversees JLL’s data strategy. We are seeking data professionals to work with our colleagues around the globe in providing solutions, developing new products, and building enterprise reporting and analytics capability to reshape the business of commercial real estate using the power of data, and we are just getting started on that journey! We are looking for a self-starting Data Engineer to join our Enterprise Data team and work in a diverse, fast-paced environment. The role is responsible for designing and developing data solutions that are strategic for the business and built on the latest technologies and patterns. This is a global role that requires partnering with the broader JLLT team at the country, regional, and global levels, utilizing in-depth knowledge of data, infrastructure, technologies, and data engineering.

Sound like you? To apply you need to meet these requirements:

- Bachelor’s degree in Computer Science, Data Engineering, or a related field (a master’s degree is a plus).
- 0-3 years of experience in data engineering or full-stack development, with a focus on cloud-based environments.
- Strong expertise in SQL and PySpark, with a proven track record on large-scale data projects.
- Experience with at least one cloud platform, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- Experience working with databases, especially SQL Server.
- Experience handling unstructured data, working in a data lake environment, leveraging data streaming, and developing event- and queue-driven data pipelines.
- Proficiency in designing and implementing data pipelines, ETL processes, and workflow automation.
- Familiarity with data warehousing concepts, dimensional modelling, and data governance best practices.
- Strong problem-solving skills and the ability to analyze complex data processing issues.
- Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams.
- Attention to detail and a commitment to delivering high-quality, reliable data solutions.
- Ability to adapt to evolving technologies and work effectively in a fast-paced, dynamic environment.

Preferred Qualifications:

- Experience managing big data technologies (e.g., Spark, Python, serverless stacks, APIs).
- Familiarity with cloud-based data warehousing platforms (e.g., AWS Redshift, Google BigQuery, Snowflake).
- Knowledge of data visualization tools (e.g., Tableau, Power BI) for creating meaningful reports and dashboards is a plus.

What you can expect from us: You’ll join an entrepreneurial, inclusive culture, one where we succeed together, across the desk and around the globe, and where like-minded people work naturally together to achieve great things. Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits and pay. Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you. Apply today!

Location: On-site, Bengaluru, KA. Scheduled Weekly Hours: 40.

If this job description resonates with you, we encourage you to apply even if you don’t meet all of the requirements. We’re interested in getting to know you and what you bring to the table!

JLL Privacy Notice: Jones Lang LaSalle (JLL), together with its subsidiaries and affiliates, is a leading global provider of real estate and investment management services. We take our responsibility to protect the personal information provided to us seriously. Generally, the personal information we collect from you is processed in connection with JLL’s recruitment process. We endeavour to keep your personal information secure with an appropriate level of security, retain it only as long as we need it for legitimate business or legal reasons, and then delete it safely and securely. For additional details, please see our career site pages for each country.

Jones Lang LaSalle (“JLL”) is an Equal Opportunity Employer and is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, including the online application and/or overall selection process, you may email us at accomodationrequest@am.jll.com. This email is only to request an accommodation. Please direct any other general recruiting inquiries to our Contact Us page.
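The role above centers on SQL and PySpark pipelines over a data lake. Below is a minimal, hedged sketch of that kind of ETL step; the bucket paths and column names are hypothetical illustrations, not details from the posting, and the S3 filesystem is assumed to be configured.

```python
# Minimal sketch of a PySpark data-lake ETL step: ingest raw JSON,
# cleanse it, and write a partitioned, query-friendly layer.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lease-events-etl").getOrCreate()

# Ingest semi-structured events from a landing zone.
raw = spark.read.json("s3a://example-lake/landing/lease_events/")

# Basic cleansing and conformance: drop null keys, derive a date, dedupe.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Partition by date so downstream queries can prune efficiently.
(clean.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-lake/curated/lease_events/"))
```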

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business.

Job Description: This role requires working from our local Hyderabad office 2-3 times a week.

WHAT YOU’LL DO:
- Work with software development teams to enable reliable and stable software services.
- Develop software solutions that enhance the reliability and performance of services.
- Optimize software release and deployment of ABC systems and cloud infrastructure in AWS.
- Advocate for availability, reliability and scalability practices.
- Help teams define and adhere to Service Level Objectives and standard processes.
- Enable the product engineering teams by supporting the automated deployment pipelines.
- Collaborate with product development as an advocate for scalable architectural approaches.
- Advocate for infrastructure and application security practices in the development process.
- Respond to production incidents in a balanced and compensated rotation with other SREs and Senior Engineers.
- Lead a culture of learning and continuous improvement through incident postmortems and retrospectives.

WHAT YOU’LL NEED:
- 5+ years of demonstrable experience in our tech stack.
- Proficiency in one programming language: Go, PHP, NodeJS or Java.
- Infrastructure running 100% in AWS; service-oriented architecture deployed on ECS Fargate and Lambda.
- Databases: MySQL, Postgres, MongoDB, DynamoDB, Redshift.
- Infrastructure automation with Terraform.
- Observability and monitoring: Honeycomb, New Relic, CloudWatch, Grafana.
- CI/CD pipelines with GitHub, CircleCI and Jenkins.
- Willingness to be part of a rotating on-call schedule.
- Openness to irregular work hours to support teams in different time zones.

WHAT’S IN IT FOR YOU:
- Purpose-led company with a values-focused culture: Best Life, One Team, Growth Mindset.
- Time off: competitive PTO plans with 15 days of earned accrued leave, 12 days of sick leave, and 12 days of casual leave per year.
- 11 holidays plus 4 Days of Disconnect: once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam
- Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling.
- Life Insurance and Personal Accident Insurance.
- Best Life Perk: we are committed to meeting you wherever you are in your fitness journey, with a quarterly reimbursement.
- Premium Calm App: enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16.
- Support for working women, with financial aid towards a crèche facility, ensuring a safe and nurturing environment for their little ones while they focus on their careers.

We’re committed to diversity and passion, and encourage you to apply even if you don’t demonstrate all the listed skillsets!

ABC’S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION: ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person’s diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative.
Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com.

ABOUT ABC: ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes, whether a multi-location chain, franchise or independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably, offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a portfolio company of Thoma Bravo, a private equity firm focused on investing in software and technology companies (thomabravo.com). If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
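This posting advocates Service Level Objectives and observability on AWS. A minimal sketch of codifying one SLO as a CloudWatch alarm via boto3 follows; the metric choice, threshold, and SNS topic ARN are hypothetical, not taken from the posting.

```python
# Hedged sketch: encode an availability SLO as a CloudWatch alarm.
# Metric, threshold, and the SNS topic are invented placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="checkout-api-availability-slo",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Statistic="Sum",
    Period=300,                      # evaluate in 5-minute windows
    EvaluationPeriods=3,             # three consecutive breaches page on-call
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:sre-oncall"],
)
```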

Posted 3 weeks ago

Apply

6.0 - 9.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Our engineering team is looking for a Data Engineer who is highly proficient in Python, has a strong understanding of AWS cloud computing and ETL pipelines, and demonstrates proficiency with SQL and relational database concepts. In this role you will be a mid-to-senior-level individual contributor guiding our migration efforts, serving as a senior data engineer and working closely with the data architects to evaluate best-fit solutions and processes for our team. You will work with the rest of the team as we move away from legacy tech and introduce new tools and ETL pipeline solutions. You will collaborate with subject matter experts, data architects, informaticists and data scientists to evolve our current cloud-based ETL to the next generation.

Responsibilities:
- Independently prototype and develop data solutions of high complexity to meet the needs of the organization and business customers.
- Design proof-of-concept solutions utilizing an advanced understanding of multiple coding languages to meet technical and business requirements, with the ability to perform iterative solution testing to ensure specifications are met.
- Design and develop data solutions that enable effective self-service data consumption, and describe their value to the customer.
- Collaborate with stakeholders in defining metrics that are impactful to the business; prioritize efforts based on customer value.
- Apply an in-depth understanding of Agile techniques; set expectations for deliverables of high complexity; assist in the creation of roadmaps for data solutions; turn vague ideas or problems into data product solutions.
- Influence strategic thinking across the team and the broader organization.
- Maintain proof-of-concept and prototype data solutions, and manage assessments of their viability and scalability, with your own team or in partnership with IT.
- Working with IT, assist in building robust systems focused on long-term, ongoing maintenance and support; ensure data solutions include the deliverables required to achieve high-quality data.
- Display a strong understanding of complex multi-tier, multi-platform systems, applying principles of metadata, lineage, business definitions, compliance, and data security to project work.
- Apply an in-depth understanding of Business Intelligence tools, including visualization and user experience techniques; work with IT to help scale prototypes.
- Demonstrate a comprehensive understanding of new technologies as needed to progress initiatives.

Requirements:
- Expertise in Python programming, with demonstrated real-world experience building out data tools in a Python environment.
- Expertise in AWS services, with demonstrated real-world experience building out data tools on AWS.
- Bachelor's degree in Computer Science, Computer Engineering, or a related discipline preferred; a master's in the same or a related discipline strongly preferred.
- 3+ years of experience coding for data management, data warehousing, or other data environments, including, but not limited to, Python and Spark; experience with SAS is preferred.
- 3+ years of experience as a developer working in an AWS cloud computing environment.
- 3+ years of experience using Git or Bitbucket.
- Experience with Redshift, RDS, and DynamoDB is preferred.
- Strong written and oral communication skills.
- Experience in the healthcare industry with healthcare data analytics products.
- Experience with healthcare vocabulary and data standards (OMOP, FHIR) is a plus.
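The posting pairs Python expertise with Redshift. One common pattern is querying Redshift from Python through the asynchronous Redshift Data API; a hedged sketch follows, with the cluster, database, user, and table names as placeholders rather than anything from the posting.

```python
# Hedged sketch: run a query on Redshift via the boto3 "redshift-data"
# client. Cluster, database, user, and table names are hypothetical.
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

resp = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="SELECT patient_id, COUNT(*) AS visits FROM encounters GROUP BY patient_id;",
)

# The Data API is asynchronous: poll until the statement finishes.
while True:
    status = client.describe_statement(Id=resp["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    for record in client.get_statement_result(Id=resp["Id"])["Records"]:
        print(record)
```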

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity: We are looking for a seasoned, strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure.

Your Key Responsibilities:
- Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3.
- Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools.
- Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies.
- Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence.
- Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases.
- Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures.
- Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation.
- Own the creation and governance of SOPs, runbooks, and technical documentation for data operations.
- Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture.

Skills And Attributes For Success:
- Expertise in AWS data services and the ability to lead architectural discussions.
- Analytical thinking, with the ability to design and optimize end-to-end data workflows.
- Excellent debugging and incident resolution skills in large-scale data environments.
- Strong leadership and mentoring capabilities, with clear communication across business and technical teams.
- A growth mindset with a passion for building reliable, scalable data systems.
- Proven ability to manage priorities and navigate ambiguity in a fast-paced environment.

To qualify for the role, you must have:
- 5–8 years of experience in DataOps, Data Engineering, or related roles.
- Strong hands-on expertise in Databricks.
- Deep understanding of ETL pipelines and modern data integration patterns.
- Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments.
- Experience with Airflow or AWS Data Pipeline for orchestration and scheduling.
- Advanced knowledge of IICS or similar ETL tools for data transformation and automation.
- SQL skills with an emphasis on performance tuning, complex joins, and window functions.
Technologies and Tools

Must haves:
- Proficient in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda.
- Expert in Databricks, with the ability to develop, optimize, and troubleshoot advanced notebooks.
- Strong experience with Amazon Redshift for scalable data warehousing and analytics.
- Solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline.
- Hands-on with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms.

Good to have:
- Exposure to Power BI or Tableau for data visualization.
- Familiarity with CDI, Informatica, or other enterprise-grade data integration platforms.
- Understanding of DevOps and CI/CD automation tools for data engineering workflows.
- SQL familiarity across large datasets and distributed databases.

What We Look For:
- Enthusiastic learners with a passion for data ops and practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer: EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
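The responsibilities above include orchestrating Glue-based pipelines with Airflow while ensuring SLA adherence. A minimal sketch of such a DAG follows, assuming the Airflow Amazon provider package is installed; the DAG id, schedule, SLA, and Glue job name are hypothetical.

```python
# Hedged sketch: an Airflow DAG that triggers an AWS Glue job nightly
# and attaches an SLA so missed deadlines surface in Airflow's SLA
# misses. DAG id, schedule, and Glue job name are invented.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_redshift_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",              # run nightly at 02:00
    catchup=False,
    default_args={"sla": timedelta(hours=2)},   # flag runs exceeding the SLA
) as dag:
    curate = GlueJobOperator(
        task_id="run_curation_job",
        job_name="curate-orders",               # hypothetical Glue job
        wait_for_completion=True,
    )
```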

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting - Data and Analytics – Manager – AWS

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services, leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers and asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity: We’re looking for senior cloud experts with design experience in big data cloud implementations.

Your Key Responsibilities:
- AWS experience with Kafka, Flume and the AWS tool stack, such as Redshift and Kinesis, is preferred.
- Experience building on AWS using S3, EC2, Redshift, Glue, EMR, DynamoDB, Lambda, QuickSight, etc.
- Experience in PySpark/Spark/Scala.
- Experience using software version control tools (Git, Jenkins, Apache Subversion).
- AWS certifications or other related professional technical certifications.
- Experience with cloud or on-premise middleware and other enterprise integration technologies.
- Experience in writing MapReduce and/or Spark jobs.
- Demonstrated strength in architecting data warehouse solutions and integrating technical components.
- Good analytical skills with excellent knowledge of SQL.
- 7+ years of work experience with very large data warehousing environments.
- 7+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools.
- 7+ years of experience with data modelling concepts.
- 7+ years of Python and/or Java development experience.
- 7+ years of experience in big data stack environments (EMR, Hadoop, MapReduce, Hive).
- Excellent communication skills, both written and verbal, formal and informal.
- Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Ability to multi-task under pressure and work independently with minimal supervision.
- A team player who enjoys working in a cooperative and collaborative team environment.
- Adaptable to new technologies and standards.

Skills And Attributes For Success:
- Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates.
- Strong communication, presentation and team-building skills, and experience producing high-quality reports, papers, and presentations.
- Experience executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.
- Exposure to tools like Tableau, Alteryx, etc.
To qualify for the role, you must have:
- BE/BTech/MCA/MBA.
- A minimum of 4 years of hands-on experience in one or more key areas.
- A minimum of 7 years of industry experience.

What We Look For:
- A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.
- An opportunity to be a part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
- Opportunities to work with EY Consulting practices globally, with leading businesses across a range of industries.

What Working At EY Offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience, to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around.
- Opportunities to develop new skills and progress your career.
- The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
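Among the qualifications above is experience writing Spark jobs against streaming sources such as Kafka or Kinesis. A hedged Spark Structured Streaming sketch follows, assuming the spark-sql-kafka connector package is on the classpath; the broker address and topic are invented.

```python
# Hedged sketch: Spark Structured Streaming job that consumes a Kafka
# topic and counts events per minute. Broker and topic are placeholders;
# requires the spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "clickstream")
         .load()
)

# Kafka delivers bytes; cast the payload, then count events per minute.
counts = (
    events.selectExpr("CAST(value AS STRING) AS body", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

query = (
    counts.writeStream.outputMode("complete")
          .format("console")   # console sink for illustration only
          .start()
)
query.awaitTermination()
```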

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description: The Amazon Transportation team is looking for an innovative, hands-on and customer-obsessed Business Analyst for the Analytics team. The candidate must be detail-oriented, with superior verbal and written communication skills, strong organizational skills and excellent technical skills, and should be able to juggle multiple tasks at once. The ideal candidate can identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done. This job requires you to constantly hit the ground running and learn quickly. Primary responsibilities include defining problems and building analytical frameworks to help operations streamline processes, identifying gaps in existing processes by analyzing data and liaising with the relevant teams to plug them, and analyzing data and metrics and sharing updates with internal teams.

Key job responsibilities:
- Apply multi-domain/process expertise in day-to-day activities and own the end-to-end roadmap.
- Translate complex or ambiguous business problem statements into analysis requirements and maintain a high bar throughout execution.
- Define the analytical approach; review and vet it with stakeholders.
- Proactively and independently work with stakeholders to construct use cases and associated standardized outputs.
- Scale data processes and reports; write queries that clients can update themselves; lead work with data engineering toward full-scale automation.
- Maintain a working knowledge of the data available to, or needed by, the wider business for more complex or comparative analysis.
- Work with a variety of data sources and pull data using efficient query development that requires less post-processing (e.g., window functions, virt usage).
- When needed, pull data from multiple similar sources to triangulate on data fidelity.
- Actively manage the timeline and deliverables of projects, focusing on interactions within the team.
- Provide program communications to stakeholders; communicate roadblocks and propose solutions.
- Represent the team on medium-size analytical projects in your own organization and communicate effectively across teams.

A day in the life:
- Solve ambiguous analyses with less well-defined inputs and outputs; drive to the heart of the problem and identify root causes.
- Handle large data sets in analysis through the use of additional tools.
- Derive recommendations from analysis that significantly impact a department, create new processes, or change existing ones.
- Understand the basics of test-and-control comparison; provide insights through basic statistical measures such as hypothesis testing.
- Identify and implement optimal communication mechanisms based on the data set and the stakeholders involved.
- Communicate complex analytical insights and business implications effectively.

About The Team: The AOP (Analytics Operations and Programs) team's mission is to standardize BI and analytics capabilities and reduce repeated analytics/reporting/BI workload for operations across the IN, AU, BR, MX, SG, AE, EG and SA marketplaces. AOP is responsible for providing visibility into operations performance and implementing programs to improve network efficiency and reduce defects. The team is a diverse mix of strong engineers, analysts and scientists who champion customer obsession.
We enable operations to make data-driven decisions by developing near-real-time dashboards and self-serve deep-dive capabilities, and by building advanced analytics capabilities. We identify and implement data-driven metric improvement programs in collaboration (co-owning) with operations teams.

Basic Qualifications:
- 4+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools.
- Experience with data modeling, warehousing and building ETL pipelines.
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift.
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling.
- Experience developing and presenting recommendations for new metrics that allow a better understanding of business performance.
- 4+ years of experience in ecommerce, transportation, finance or a related analytical field.

Preferred Qualifications:
- Experience with statistical analysis packages such as R, SAS and MATLAB.
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company: ADCI - Haryana. Job ID: A2952421
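The qualifications above stress SQL window functions that reduce post-processing. A self-contained illustration follows, using Python's stdlib sqlite3 (which supports window functions in modern SQLite builds); the same ROW_NUMBER() pattern runs unchanged on Redshift. The table and data are invented for the example.

```python
# Illustration of the window-function pattern: select the latest row
# per group in one SQL pass instead of post-processing in Python.
# Uses stdlib sqlite3 with made-up data; the SQL also works on Redshift.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shipments (lane TEXT, ship_date TEXT, cost REAL);
    INSERT INTO shipments VALUES
        ('BLR-HYD', '2024-01-01', 120.0),
        ('BLR-HYD', '2024-01-02', 95.0),
        ('DEL-BOM', '2024-01-01', 210.0),
        ('DEL-BOM', '2024-01-03', 180.0);
""")

# Latest shipment per lane via ROW_NUMBER() over a per-lane window.
rows = conn.execute("""
    SELECT lane, ship_date, cost
    FROM (
        SELECT *, ROW_NUMBER() OVER (
                   PARTITION BY lane ORDER BY ship_date DESC) AS rn
        FROM shipments
    )
    WHERE rn = 1;
""").fetchall()

for row in rows:
    print(row)   # one row per lane, already filtered in SQL
```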

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity: We are looking for a seasoned, strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure.

Your Key Responsibilities:
- Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3.
- Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools.
- Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies.
- Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence.
- Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases.
- Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures.
- Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation.
- Own the creation and governance of SOPs, runbooks, and technical documentation for data operations.
- Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture.

Skills And Attributes For Success:
- Expertise in AWS data services and the ability to lead architectural discussions.
- Analytical thinking, with the ability to design and optimize end-to-end data workflows.
- Excellent debugging and incident resolution skills in large-scale data environments.
- Strong leadership and mentoring capabilities, with clear communication across business and technical teams.
- A growth mindset with a passion for building reliable, scalable data systems.
- Proven ability to manage priorities and navigate ambiguity in a fast-paced environment.

To qualify for the role, you must have:
- 5–8 years of experience in DataOps, Data Engineering, or related roles.
- Strong hands-on expertise in Databricks.
- Deep understanding of ETL pipelines and modern data integration patterns.
- Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments.
- Experience with Airflow or AWS Data Pipeline for orchestration and scheduling.
- Advanced knowledge of IICS or similar ETL tools for data transformation and automation.
- SQL skills with an emphasis on performance tuning, complex joins, and window functions.
Technologies and Tools

Must haves:
- Proficient in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda.
- Expert in Databricks, with the ability to develop, optimize, and troubleshoot advanced notebooks.
- Strong experience with Amazon Redshift for scalable data warehousing and analytics.
- Solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline.
- Hands-on with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms.

Good to have:
- Exposure to Power BI or Tableau for data visualization.
- Familiarity with CDI, Informatica, or other enterprise-grade data integration platforms.
- Understanding of DevOps and CI/CD automation tools for data engineering workflows.
- SQL familiarity across large datasets and distributed databases.

What We Look For:
- Enthusiastic learners with a passion for data ops and practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer: EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting - Data and Analytics – Manager – AWS

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services, leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers and asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity: We’re looking for senior cloud experts with design experience in big data cloud implementations.

Your Key Responsibilities:
- AWS experience with Kafka, Flume and the AWS tool stack, such as Redshift and Kinesis, is preferred.
- Experience building on AWS using S3, EC2, Redshift, Glue, EMR, DynamoDB, Lambda, QuickSight, etc.
- Experience in PySpark/Spark/Scala.
- Experience using software version control tools (Git, Jenkins, Apache Subversion).
- AWS certifications or other related professional technical certifications.
- Experience with cloud or on-premise middleware and other enterprise integration technologies.
- Experience in writing MapReduce and/or Spark jobs.
- Demonstrated strength in architecting data warehouse solutions and integrating technical components.
- Good analytical skills with excellent knowledge of SQL.
- 7+ years of work experience with very large data warehousing environments.
- 7+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools.
- 7+ years of experience with data modelling concepts.
- 7+ years of Python and/or Java development experience.
- 7+ years of experience in big data stack environments (EMR, Hadoop, MapReduce, Hive).
- Excellent communication skills, both written and verbal, formal and informal.
- Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Ability to multi-task under pressure and work independently with minimal supervision.
- A team player who enjoys working in a cooperative and collaborative team environment.
- Adaptable to new technologies and standards.

Skills And Attributes For Success:
- Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates.
- Strong communication, presentation and team-building skills, and experience producing high-quality reports, papers, and presentations.
- Experience executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.
- Exposure to tools like Tableau, Alteryx, etc.
To qualify for the role, you must have:
- BE/BTech/MCA/MBA.
- A minimum of 4 years of hands-on experience in one or more key areas.
- A minimum of 7 years of industry experience.

What We Look For:
- A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.
- An opportunity to be a part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
- Opportunities to work with EY Consulting practices globally, with leading businesses across a range of industries.

What Working At EY Offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience, to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around.
- Opportunities to develop new skills and progress your career.
- The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

7.0 years

40 Lacs

Faridabad, Haryana, India

Remote

Experience: 7+ years. Salary: INR 4,000,000 / year (based on experience). Expected Notice Period: 15 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove). (*Note: This is a requirement for one of Uplers' clients - MatchMove.)

What do you need for this opportunity? Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python.

MatchMove is looking for a Technical Lead - Data Platform. You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs and Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal besides this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
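The requirements above call for open-table-format expertise, including partitioning strategies for Iceberg tables on S3. A hedged PySpark sketch using the DataFrameWriterV2 API follows, assuming Spark 3.x with the Iceberg runtime jar and a catalog configured; the catalog name, namespace, and paths are hypothetical.

```python
# Hedged sketch: write an Apache Iceberg table with an explicit daily
# partition strategy from PySpark. Catalog, namespace, and S3 paths are
# invented; assumes the iceberg-spark-runtime jar is on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import days, col

spark = (
    SparkSession.builder.appName("txn-iceberg")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-warehouse/")
    .getOrCreate()
)

txns = spark.read.parquet("s3://example-landing/transactions/")

# DataFrameWriterV2: create the table partitioned by day, enabling
# efficient time-range scans and Iceberg time-travel queries.
(
    txns.writeTo("lake.payments.transactions")
        .partitionedBy(days(col("txn_ts")))
        .createOrReplace()
)
```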

Posted 3 weeks ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Role Description

Role Proficiency: Independently develop data-driven solutions to difficult business challenges by utilizing analytical, statistical and programming skills to collect, analyze and interpret large data sets under supervision.

Outcomes:
- Work with stakeholders throughout the organization to identify opportunities for leveraging customer data to build models that generate business insights.
- Create new experimental frameworks or build automated tools to collect data.
- Correlate similar data sets to find actionable results.
- Build predictive models and machine learning algorithms to analyse large amounts of information and discover trends and patterns.
- Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques, business strategies, etc.
- Develop processes and tools to monitor and analyse model performance and data accuracy.
- Develop data visualizations and illustrations for given business problems.
- Use predictive modelling to increase and optimize customer experiences and other business outcomes.
- Coordinate with different functional teams to implement models and monitor outcomes.
- Set FAST goals and provide feedback on the FAST goals of reportees.

Measures Of Outcomes:
- Number of business processes changed due to vital analysis.
- Number of Business Intelligence dashboards developed.
- Number of productivity standards defined for the project.
- Number of prediction and modelling models used.
- Number of new approaches applied to understand business trends.
- Quality of data visualization done to help non-technical stakeholders comprehend easily.
- Number of mandatory trainings completed.

Outputs Expected:
- Statistical techniques: Apply statistical techniques such as regression, properties of distributions, and statistical tests to analyse data.
- Machine learning techniques: Apply machine learning techniques such as clustering, decision tree learning, and artificial neural networks to streamline data analysis.
- Creating advanced algorithms: Create advanced algorithms and statistics using regression, simulation, scenario analysis, modelling, etc.
- Data visualization: Visualize and present data for stakeholders using Periscope, Business Objects, D3, ggplot, etc.
- Management and strategy: Oversee the activities of analyst personnel and ensure the efficient execution of their duties.
- Critical business insights: Mine the business's database in search of critical business insights and communicate findings to the relevant departments.
- Code: Create efficient and reusable code meant for the improvement, manipulation and analysis of data.
- Version control: Manage the project codebase through version control tools, e.g. git, bitbucket, etc.
- Predictive analytics: Seek to determine likely outcomes by detecting tendencies in descriptive and diagnostic analysis.
- Prescriptive analytics: Attempt to identify what business action to take.
- Create reports: Create reports depicting the trends and behaviours from the analysed data, and train end users on new reports and dashboards.
- Documentation: Create documentation for your own work and perform peer review of others' documentation.
- Manage knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities.
- Status reporting: Report the status of assigned tasks and comply with project-related reporting standards and processes.

Skill Examples:
- Excellent pattern recognition and predictive modelling skills.
- Extensive background in data mining and statistical analysis.
- Expertise in machine learning techniques and creating algorithms.
- Analytical skills: ability to work with large amounts of data, facts, figures and number crunching.
- Communication skills: communicate effectively with a diverse population at various organizational levels, with the right level of detail.
- Critical thinking: look at numbers, trends and data and come to new conclusions based on the findings.
- Strong meeting facilitation and presentation skills.
- Attention to detail: vigilance in the analysis to come to correct conclusions.
- Mathematical skills to estimate numerical data.
- Ability to work in a team environment, with strong interpersonal skills for a collaborative setting; proactively ask for and offer help.

Knowledge Examples:
- Programming languages: Java/Python/R.
- Web services: Redshift, S3, Spark, DigitalOcean, etc.
- Statistical and data mining techniques: GLM/regression, random forest, boosting trees, text mining, social network analysis, etc.
- Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
- Computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
- Database languages such as SQL and NoSQL.
- Analytical tools and languages such as SAS and Mahout.
- Practical experience with ETL, data processing, etc.
- Proficiency in MATLAB.
- Data visualization software such as Tableau or Qlik.
- Proficiency in mathematics and calculations.
- Spreadsheet tools such as Microsoft Excel or Google Sheets.
- DBMS, operating systems and software platforms.
- Knowledge of the customer domain and the subdomain where the problem is solved.
- Proficiency in at least one version control tool, such as git or bitbucket.
- Experience working with a project management tool such as Jira.

Additional Comments: Must have: statistical concepts, SQL, machine learning (regression and classification), deep learning (ANN, RNN, CNN), advanced NLP, computer vision, Gen AI/LLM (prompt engineering, RAG, fine-tuning), AWS SageMaker/Azure ML/Google Vertex AI, and basic implementation experience with Docker, Kubernetes, Kubeflow, MLOps, and Python (numpy, pandas, sklearn, streamlit, matplotlib, seaborn).

Skills: Data Management, Data Science, Python
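The role lists regression among its core predictive-modelling skills. A minimal, self-contained sketch of that workflow with scikit-learn on synthetic data follows; it makes no claims about the actual project stack.

```python
# Minimal sketch of the predictive-modelling workflow this role lists:
# fit a regression model, evaluate on held-out data, inspect effects.
# The data is synthetic; nothing here comes from the posting.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                  # three synthetic features
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
print("Coefficients:", model.coef_)            # recovers ~[2.0, 0.0, -1.0]
```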

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting - Data and Analytics – Manager – AWS

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services, leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
We’re looking for senior cloud experts with design experience in big data cloud implementations.

Your Key Responsibilities
AWS
Experience with Kafka, Flume and the AWS tool stack, such as Redshift and Kinesis, is preferred
Experience building on AWS using S3, EC2, Redshift, Glue, EMR, DynamoDB, Lambda, QuickSight, etc.
Experience in PySpark/Spark/Scala
Experience using software version control tools (Git, Jenkins, Apache Subversion)
AWS certifications or other related professional technical certifications
Experience with cloud or on-premise middleware and other enterprise integration technologies
Experience in writing MapReduce and/or Spark jobs
Demonstrated strength in architecting data warehouse solutions and integrating technical components
Good analytical skills with excellent knowledge of SQL
7+ years of work experience with very large data warehousing environments
7+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools
7+ years of experience with data modelling concepts
7+ years of Python and/or Java development experience
7+ years’ experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive)
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
Excellent communication skills, written and verbal, formal and informal
Ability to multi-task under pressure and work independently with minimal supervision
Must be a team player who enjoys working in a cooperative and collaborative team environment
Adaptable to new technologies and standards

Skills And Attributes For Success
Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates
Strong communication, presentation and team-building skills, and experience in producing high-quality reports, papers, and presentations
Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint
Exposure to tools like Tableau, Alteryx etc.

To qualify for the role, you must have
BE/BTech/MCA/MBA
Minimum 4+ years of hands-on experience in one or more key areas
Minimum 7+ years of industry experience

What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment
An opportunity to be part of a market-leading, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide
Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
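
For a sense of the Spark work this role calls for, a minimal PySpark sketch follows: it reads raw events from S3, rolls them up by day and region, and writes a warehouse-ready, partitioned table. All bucket, path, and column names are hypothetical placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-sales-rollup").getOrCreate()

    # Raw events landed in S3 (hypothetical path and schema).
    events = spark.read.parquet("s3://example-raw-bucket/sales/")

    # Aggregate to a daily grain, the typical warehouse-facing shape.
    daily = (
        events
        .withColumn("sale_date", F.to_date("event_ts"))
        .groupBy("sale_date", "region")
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("txn_count"))
    )

    # Partitioned parquet output that Redshift Spectrum or Athena could read.
    daily.write.mode("overwrite").partitionBy("sale_date").parquet(
        "s3://example-curated-bucket/daily_sales/"
    )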

Posted 3 weeks ago

Apply

7.0 years

40 Lacs

Greater Hyderabad Area

Remote

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform
You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
Direct placement with client.
This is a remote role.
Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
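
A minimal sketch of the open table format work described above, assuming a Spark session with the Apache Iceberg runtime on the classpath and a catalog named "lake" (all identifiers here are hypothetical): it writes an Iceberg table and then reads it back at an earlier snapshot, the time-travel pattern useful for reconciliation.

    from pyspark.sql import SparkSession

    # Assumes the Iceberg Spark runtime JAR is available; catalog, warehouse
    # path, and table names are hypothetical placeholders.
    spark = (
        SparkSession.builder.appName("iceberg-demo")
        .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.lake.type", "hadoop")
        .config("spark.sql.catalog.lake.warehouse", "s3://example-lake/warehouse")
        .getOrCreate()
    )

    # Land raw transactions into an Iceberg table.
    txns = spark.read.parquet("s3://example-raw/transactions/")
    txns.writeTo("lake.fintech.transactions").createOrReplace()

    # Time-travel: read the table as of an earlier snapshot (hypothetical id),
    # e.g. to reconcile today's figures against yesterday's state.
    old = (
        spark.read.option("snapshot-id", 123456789)
        .format("iceberg")
        .load("lake.fintech.transactions")
    )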

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

AWS Data Engineer - Senior

We are seeking a highly skilled and motivated hands-on AWS Data Engineer with 5-10 years of experience in AWS Glue, PySpark, AWS Redshift, S3, and Python to join our dynamic team. As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and solutions that support business intelligence, analytics, and large-scale data processing. You will work closely with data scientists, analysts, and other engineering teams to ensure seamless data flow across our systems.

Technical Skills:
Must have strong experience in AWS data services such as Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow & PySpark
Strong exposure to IAM, CloudTrail, cluster optimization, Python & SQL
Should have expertise in data design, STTM, understanding of data models, data component design, automated testing, code coverage, UAT support, deployment and go-live
Experience with version control systems like SVN, Git
Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion processes across various structured and unstructured data sources
Strong experience with AWS Glue: building ETL pipelines, managing crawlers, and working with the Glue data catalog
Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance
Enable data consumption from reporting and analytics business applications using AWS services (e.g. QuickSight, SageMaker, JDBC/ODBC connectivity, etc.)

Behavioural skills:
Willing to work 5 days a week from ODC / client location (based on the project, can be hybrid with 3 days a week)
Ability to lead developers and engage with client stakeholders to drive technical decisions
Ability to do technical design and POCs: help build/analyse the logical data model, required entities, relationships, data constraints and dependencies focused on enabling reporting and analytics business use cases
Should be able to work in an Agile environment
Should have strong communication skills

Good to have:
Exposure to Financial Services, Wealth and Asset Management
Exposure to Data Science and full-stack technologies
GenAI will be an added advantage

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
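
One of the duties above is automating Glue cataloging; a minimal boto3 sketch of that task, creating and starting a crawler over an S3 prefix, might look as follows. The role ARN, database, and bucket names are hypothetical.

    import boto3

    glue = boto3.client("glue", region_name="ap-south-1")

    # Register a crawler that scans an S3 prefix and populates the Glue
    # data catalog (all names below are hypothetical placeholders).
    glue.create_crawler(
        Name="sales-raw-crawler",
        Role="arn:aws:iam::123456789012:role/ExampleGlueRole",
        DatabaseName="raw_sales",
        Targets={"S3Targets": [{"Path": "s3://example-raw-bucket/sales/"}]},
    )

    # Kick off the first crawl; downstream Glue jobs and Athena queries can
    # then reference the discovered tables by name.
    glue.start_crawler(Name="sales-raw-crawler")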

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

*Note: Only immediate joiners are required; the project start date is July 15th, 2025.

🧠 About the Role
We are looking for a skilled Automation Engineer with hands-on experience in data quality monitoring and observability tools like Soda Core (SODAS), Great Expectations, or Monte Carlo. You will play a key role in ensuring the reliability, accuracy, and trustworthiness of our data pipelines by automating data quality checks and integrating them into our data workflows.

🔧 Key Responsibilities
Design, implement, and maintain automated data quality checks using Soda Core or similar tools.
Integrate data quality validation into ETL/ELT pipelines (e.g., Airflow, dbt).
Collaborate with data engineers, analysts, and product teams to define data quality SLAs and KPIs.
Monitor data pipelines for anomalies, schema changes, and data drift.
Set up alerting and reporting mechanisms for data quality issues.
Contribute to the development of a data observability framework across the organization.

🛠️ Required Skills & Qualifications
3+ years of experience in data engineering, analytics engineering, or automation.
Strong knowledge of Soda Core, Great Expectations, or other data quality tools.
Proficiency in SQL and Python.
Experience with dbt, Airflow, or similar orchestration tools.
Familiarity with cloud data platforms (Snowflake, BigQuery, Redshift, etc.).
Understanding of CI/CD practices and version control (Git).

🌟 Nice to Have
Experience with Monte Carlo, Anomalo, or SAS Viya.
Exposure to data governance and metadata management.
Knowledge of data cataloging tools (e.g., Alation, Collibra).
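
As one concrete shape this automation can take, a minimal sketch of a programmatic Soda Core scan follows, the kind of check that could be wired into an Airflow task. The data source name, configuration file, and table/column names are hypothetical.

    from soda.scan import Scan

    scan = Scan()
    scan.set_data_source_name("warehouse")            # hypothetical data source
    scan.add_configuration_yaml_file("configuration.yml")  # connection details

    # SodaCL checks: non-empty table, no missing customer ids, no duplicate
    # order ids (table and columns are hypothetical).
    scan.add_sodacl_yaml_str("""
    checks for orders:
      - row_count > 0
      - missing_count(customer_id) = 0
      - duplicate_count(order_id) = 0
    """)

    scan.execute()
    scan.assert_no_checks_fail()  # raises if any check failed, failing the task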

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting - Data and Analytics – Manager – AWS

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services, leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
We’re looking for senior cloud experts with design experience in big data cloud implementations.

Your Key Responsibilities
AWS
Experience with Kafka, Flume and the AWS tool stack, such as Redshift and Kinesis, is preferred
Experience building on AWS using S3, EC2, Redshift, Glue, EMR, DynamoDB, Lambda, QuickSight, etc.
Experience in PySpark/Spark/Scala
Experience using software version control tools (Git, Jenkins, Apache Subversion)
AWS certifications or other related professional technical certifications
Experience with cloud or on-premise middleware and other enterprise integration technologies
Experience in writing MapReduce and/or Spark jobs
Demonstrated strength in architecting data warehouse solutions and integrating technical components
Good analytical skills with excellent knowledge of SQL
7+ years of work experience with very large data warehousing environments
7+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools
7+ years of experience with data modelling concepts
7+ years of Python and/or Java development experience
7+ years’ experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive)
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
Excellent communication skills, written and verbal, formal and informal
Ability to multi-task under pressure and work independently with minimal supervision
Must be a team player who enjoys working in a cooperative and collaborative team environment
Adaptable to new technologies and standards

Skills And Attributes For Success
Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates
Strong communication, presentation and team-building skills, and experience in producing high-quality reports, papers, and presentations
Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint
Exposure to tools like Tableau, Alteryx etc.

To qualify for the role, you must have
BE/BTech/MCA/MBA
Minimum 4+ years of hands-on experience in one or more key areas
Minimum 7+ years of industry experience

What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment
An opportunity to be part of a market-leading, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide
Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting - Data and Analytics – Manager – AWS

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services, leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
We’re looking for senior cloud experts with design experience in big data cloud implementations.

Your Key Responsibilities
AWS
Experience with Kafka, Flume and the AWS tool stack, such as Redshift and Kinesis, is preferred
Experience building on AWS using S3, EC2, Redshift, Glue, EMR, DynamoDB, Lambda, QuickSight, etc.
Experience in PySpark/Spark/Scala
Experience using software version control tools (Git, Jenkins, Apache Subversion)
AWS certifications or other related professional technical certifications
Experience with cloud or on-premise middleware and other enterprise integration technologies
Experience in writing MapReduce and/or Spark jobs
Demonstrated strength in architecting data warehouse solutions and integrating technical components
Good analytical skills with excellent knowledge of SQL
7+ years of work experience with very large data warehousing environments
7+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools
7+ years of experience with data modelling concepts
7+ years of Python and/or Java development experience
7+ years’ experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive)
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
Excellent communication skills, written and verbal, formal and informal
Ability to multi-task under pressure and work independently with minimal supervision
Must be a team player who enjoys working in a cooperative and collaborative team environment
Adaptable to new technologies and standards

Skills And Attributes For Success
Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates
Strong communication, presentation and team-building skills, and experience in producing high-quality reports, papers, and presentations
Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint
Exposure to tools like Tableau, Alteryx etc.

To qualify for the role, you must have
BE/BTech/MCA/MBA
Minimum 4+ years of hands-on experience in one or more key areas
Minimum 7+ years of industry experience

What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment
An opportunity to be part of a market-leading, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide
Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

6.0 - 8.0 years

22 - 25 Lacs

Bengaluru

Work from Office

Responsibilities
Independently prototype/develop data solutions of high complexity to meet the needs of the organization and business customers.
Design proof-of-concept solutions utilizing an advanced understanding of multiple coding languages to meet technical and business requirements, with an ability to perform iterative solution testing to ensure specifications are met.
Design and develop data solutions that enable effective self-service data consumption, and describe their value to the customer.
Collaborate with stakeholders in defining metrics that are impactful to the business; prioritize efforts based on customer value.
Has an in-depth understanding of Agile techniques; can set expectations for deliverables of high complexity; can assist in the creation of roadmaps for data solutions; can turn vague ideas or problems into data product solutions.
Influences strategic thinking across the team and the broader organization.
Maintains proof-of-concept and prototype data solutions, and manages any assessment of their viability and scalability, with own team or in partnership with IT.
Working with IT, assists in building robust systems focusing on long-term and ongoing maintenance and support.
Ensures data solutions include the deliverables required to achieve high-quality data.
Displays a strong understanding of complex multi-tier, multi-platform systems, and applies principles of metadata, lineage, business definitions, compliance, and data security to project work.
Has an in-depth understanding of Business Intelligence tools, including visualization and user experience techniques; works with IT to help scale prototypes.
Demonstrates a comprehensive understanding of new technologies as needed to progress initiatives.

Requirements
Expertise in Python programming, with demonstrated real-world experience building out data tools in a Python environment.
Expertise in AWS services, with demonstrated real-world experience building out data tools in an AWS environment.
Bachelor's degree in Computer Science, Computer Engineering, or a related discipline preferred; a master's degree in the same or related disciplines strongly preferred.
3+ years of experience coding for data management, data warehousing, or other data environments, including, but not limited to, Python and Spark; experience with SAS is preferred.
3+ years of experience as a developer working in an AWS cloud computing environment.
3+ years of experience using Git or Bitbucket.
Experience with Redshift, RDS, DynamoDB is preferred.
Strong written and oral communication skills are required.
Experience in the healthcare industry with healthcare data analytics products.
Experience with healthcare vocabulary and data standards (OMOP, FHIR) is a plus.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Credgenics: Credgenics is India's first-of-its-kind NPA resolution platform, backed by credible investors including Accel Partners and Titan Capital. We work with financial institutions, banks, NBFCs & digital lending firms to improve the efficiency of their collections using technology, automation, intelligence and optimal legal routes to facilitate the resolution of stressed assets. With all major banks and NBFCs as our clients, our SaaS-based collections platform helps them efficiently improve their NPAs, geographic reach and customer experience. We count most of India's lending majors as our clients, such as ICICI Bank, Axis Bank, Bank of Baroda, etc., and have been able to grow 100% MoM consistently, even amid the pandemic.

About the Role
We're looking for a passionate and hands-on Data Engineer (1 to 3 years of experience) to join our data team. You'll be responsible for building and maintaining reliable data pipelines and working with modern data tools to support analytics and business decisions.

Key Responsibilities
● Design and develop data pipelines using Python, SQL, and Airflow
● Transform and move data efficiently using PySpark and Redshift
● Work closely with analysts and stakeholders to deliver clean, usable datasets
● Monitor and optimize data workflows for performance and reliability
● Ensure data integrity, quality, and security across pipelines

Must-Have Skills
● Proficient in SQL for data transformation and querying
● Strong Python programming skills, especially for automation and ETL tasks
● Experience with Redshift or similar data warehouse platforms
● Hands-on experience with Airflow for workflow orchestration
● Working knowledge of PySpark for distributed data processing
● Understanding of data pipeline architecture and best practices
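
A minimal sketch of the Python/Airflow orchestration this role describes, assuming Airflow 2.4+ and purely hypothetical task bodies:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Hypothetical: pull new rows from the source system.
        print("extracting source rows")

    def load():
        # Hypothetical: copy transformed data into Redshift.
        print("loading into Redshift")

    with DAG(
        dag_id="collections_daily_etl",   # hypothetical pipeline name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_load  # load runs only after extract succeeds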

Posted 3 weeks ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Kochi

Work from Office

Job description: Seeking a skilled & proactive Data Engineer with 2-4 years of experience to support our enterprise data warehousing and analytics initiatives. The candidate will be responsible for building scalable data pipelines, transforming data for analytics, and enabling data integration across cloud and on-premise systems.

Key Responsibilities:
Build and manage data lakes and data warehouses using services like Amazon S3, Redshift, and Athena
Design and build secure, scalable, and efficient ETL/ELT pipelines on AWS using services like Glue, Lambda, Step Functions
Work on SAP Datasphere to build and maintain Spaces, Data Builders, Views, and Consumption Layers
Develop and maintain scalable data models and optimize queries for performance
Monitor and optimize data workflows to ensure reliability, performance, and cost-efficiency
Collaborate with Data Analysts and BI teams to provide clean, validated, and well-documented datasets
Monitor, troubleshoot, and enhance data workflows and pipelines
Ensure data quality, integrity, and governance policies are met

Required Skills
Strong SQL skills and experience with relational databases like MySQL or SQL Server
Proficient in Python or Scala for data transformation and scripting
Familiarity with cloud platforms like AWS (S3, Redshift, Glue), Azure

Good-to-Have Skills
AWS certification: AWS Certified Data Analytics
Exposure to modern data stack tools like Snowflake
Experience in cloud-based projects and working in an Agile environment
Understanding of data governance, security best practices, and compliance standards
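
As an illustration of the S3/Athena side of this stack, a minimal boto3 sketch that runs a query against a lake table and polls for completion; the database, table, and results bucket are hypothetical.

    import time

    import boto3

    athena = boto3.client("athena", region_name="ap-south-1")

    # Start a query over a cataloged S3 table (names are hypothetical).
    qid = athena.start_query_execution(
        QueryString="SELECT region, COUNT(*) AS orders FROM sales GROUP BY region",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Athena is asynchronous: poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]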

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities
As a Database Engineer supporting the bank's analytics platforms, you will be part of a centralized team of database engineers responsible for the maintenance and support of Citizens' most critical databases. A Database Engineer will be responsible for:
Conceptual knowledge of database practices and procedures such as DDL, DML and DCL.
Basic SQL skills, including SELECT, FROM, WHERE and ORDER BY; ability to code SQL joins, subqueries, aggregate functions (AVG, SUM, COUNT), and use data manipulation techniques (UPDATE, DELETE).
Understanding of basic data relationships and schemas; ability to develop basic entity-relationship diagrams.
Conceptual understanding of cloud computing, including the different cloud models (IaaS, PaaS, SaaS), service models, and deployment options (public, private, hybrid).
Solving routine problems using existing procedures and standard practices; looking up error codes and opening tickets with vendors.
Ability to execute explain plans and identify poorly written queries.
Reviewing data structures to ensure they adhere to database design best practices.
Developing a comprehensive backup plan.
Solving standard problems by analyzing possible solutions using experience, judgment and precedents.
Troubleshooting database issues such as integrity issues, blocking/deadlocking issues, log shipping issues, connectivity issues, security issues, memory issues, disk space, etc.
Understanding of basic security concepts like user access control and data encryption; complying with data security regulations, ensuring personal data is gathered legally and under strict conditions, and protecting the data against accidental loss, destruction, or damage.
Ability to write simple stored procedures for reusable code and improved performance.
Running simple data loads and unloads.
Attention to detail and a customer-centric approach.

Required Qualifications
3+ years of experience with database management/administration: Redshift, Snowflake or Neo4j
3+ years of experience working with incident, change and problem management processes and procedures
Experience maintaining and supporting large-scale critical database systems in the cloud
2+ years of experience working with AWS cloud-hosted databases
An understanding of multiple programming languages, including at least one front-end language
Experience with agile development methodology
SQL performance & tuning skills
Excellent communication and client interfacing skills

Desired Qualifications
Experience working in an agile development environment
Experience working in the banking industry
Experience working in cloud environments such as AWS, Azure or Google
Experience with CI/CD pipelines (Jenkins, Liquibase or equivalent)

Education and Certifications
Bachelor's degree in computer science or a related discipline
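
A small worked example of the basic SQL skills listed above (a join plus aggregates), run here through psycopg2 against a Redshift/PostgreSQL-compatible endpoint; connection details and table names are hypothetical.

    import psycopg2

    # Hypothetical connection to a Redshift cluster (port 5439) or any
    # PostgreSQL-compatible database.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        dbname="analytics",
        user="readonly",
        password="...",
        port=5439,
    )

    with conn.cursor() as cur:
        # JOIN plus aggregate functions over hypothetical tables.
        cur.execute("""
            SELECT c.region,
                   COUNT(*)      AS orders,
                   AVG(o.amount) AS avg_amount
            FROM orders o
            JOIN customers c ON c.customer_id = o.customer_id
            WHERE o.order_date >= '2025-01-01'
            GROUP BY c.region
            ORDER BY orders DESC;
        """)
        for row in cur.fetchall():
            print(row)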

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

India

Remote

Role: Senior Grafana Engineer

At Global Analytics, we’re driving HEINEKEN’s transformation into the world’s leading data-driven brewer. Our innovative spirit flows through the entire company, promoting a data-first approach in every aspect of our business. From brewery operations and logistics to IoT systems and sustainability monitoring, our smart data products are instrumental in accelerating growth and operational excellence. As we scale our analytics and observability solutions globally, we are seeking a Senior Grafana Engineer to join our dynamic Global Analytics team.

About the Team: The Global Analytics team at HEINEKEN is a diverse group of Data Scientists, Data Engineers, BI Specialists, and Translators, collaborating across continents. Our culture promotes experimentation, agility, and bold thinking. Together, we transform raw data into impactful decisions that support HEINEKEN’s vision for sustainable, intelligent brewing.

We are looking for a senior-level Grafana Developer to build and maintain real-time dashboards that support our IoT monitoring, time-series analytics, and operational excellence initiatives. This is a hands-on technical role where you will collaborate with multiple teams to bring visibility to complex data across global operations.

If you are excited to:
Build real-time dashboards and monitoring solutions using Grafana.
Work with InfluxDB, Redshift, and other time-series and SQL-based data sources.
Translate complex system metrics into clear visual insights that support global operations.
Collaborate with engineers, DevOps, IT Operations, and product teams to bring data to life.
Be part of HEINEKEN’s digital transformation journey focused on data and sustainability.

And if you like:
A remote, flexible work environment with access to cutting-edge technologies.
Working on impactful projects that monitor and optimize global brewery operations.
A non-hierarchical, inclusive, and innovation-driven culture.
Opportunities for professional development, global exposure, and knowledge sharing.

Your Responsibilities:
Design, develop, and maintain Grafana dashboards and visualizations for system monitoring and analytics.
Work with time-series data from InfluxDB, Prometheus, Elasticsearch, and relational databases like MySQL, PostgreSQL, and Redshift.
Optimize dashboard performance by managing queries, data sources, and caching mechanisms.
Configure alerts and notifications to support proactive operational monitoring.
Collaborate with cross-functional teams, including DevOps, IT Operations, and Data Analytics, to understand and address their observability needs.
Utilize Power BI (optional) to supplement dashboarding with additional reports.
Customize and extend Grafana using plugins, scripts, or automation tools as needed.
Stay current with industry trends in data visualization, real-time analytics, and the Grafana/Power BI ecosystem.

We Expect:
5–10 years of experience developing Grafana dashboards and time-series visualizations.
Strong SQL/MySQL skills and experience working with multiple data sources.
Hands-on experience with Grafana and common data backends such as InfluxDB, Prometheus, PostgreSQL, Elasticsearch, or Redshift.
Understanding of time-series data vs. traditional data warehouse architecture.
Familiarity with scripting languages (e.g., JavaScript, Python, Golang) and query languages like PromQL.
Experience configuring alerts and automating monitoring workflows.
Exposure to Power BI (nice-to-have) for report building.
Experience with DevOps/IT Ops concepts (monitoring, alerting, and observability tooling).
Knowledge of version control (Git) and working in Agile/Scrum environments.
Strong problem-solving mindset, clear communication skills, and a proactive attitude.

Why Join Us:
Be part of a globally recognized brand committed to innovation and sustainability.
Join a team that values data transparency, experimentation, and impact.
Shape the future of brewing by enabling data-driven visibility across all operations.
Work in an international, collaborative environment that encourages learning and growth.

If you are passionate about monitoring systems, making time-series data actionable, and enabling real-time decision-making, we invite you to join Global Analytics at HEINEKEN. Your expertise will help shape the future of our digital brewery operations.
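
As a flavor of the time-series querying behind such dashboards, a minimal sketch using the InfluxDB 2.x Python client and a Flux query; the URL, token, bucket, and measurement names are hypothetical.

    from influxdb_client import InfluxDBClient

    # Hypothetical connection details for an InfluxDB 2.x instance.
    client = InfluxDBClient(
        url="http://localhost:8086", token="example-token", org="brewing"
    )

    # Flux query: mean tank temperature in 5-minute windows over the last
    # hour, the kind of series a Grafana panel would plot.
    flux = """
    from(bucket: "brewery_iot")
      |> range(start: -1h)
      |> filter(fn: (r) => r._measurement == "tank_temperature")
      |> aggregateWindow(every: 5m, fn: mean)
    """

    for table in client.query_api().query(flux):
        for record in table.records:
            print(record.get_time(), record.get_value())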

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS and be responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 5 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (Bachelor of Computer Science or any related stream; master's degree preferred).
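
A minimal sketch of the AWS Lambda work listed above: a handler that parses an API Gateway-style event and returns a JSON response. The event fields are hypothetical.

    import json

    def lambda_handler(event, context):
        # API Gateway proxy integrations deliver the request body as a string;
        # the "vehicle_id" field is a hypothetical example.
        body = json.loads(event.get("body") or "{}")
        vehicle_id = body.get("vehicle_id", "unknown")

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"vehicle_id": vehicle_id, "status": "ok"}),
        }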

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

iMerit is a leading AI data solutions company specializing in transforming unstructured data into structured intelligence for advanced machine learning and analytics applications. Our clients span autonomous mobility, medical AI, agriculture, and more—powering next-generation AI systems with high-quality data services.

About the Role
We are seeking a skilled Data Engineer to help scale and enhance our internal data observability and analytics platform. This platform integrates with data annotation tools and ML pipelines to provide visibility, insights, and automation across large-scale data operations. You will design and optimize robust data pipelines, build integrations with internal platforms (e.g., AngoHub, 3DPCT) and customer platforms, and support real-time metrics, dashboards, and workflows critical to customer delivery and operational excellence.

Key Responsibilities
● Design and build scalable batch and real-time data pipelines across structured and unstructured sources.
● Integrate analytics and observability services with upstream annotation tools and downstream ML validation systems to enable full-cycle traceability.
● Collaborate with product, platform, and analytics teams to define event models, metrics, and data contracts.
● Develop ETL/ELT workflows using tools like AWS Glue, PySpark, or Airflow; ensure data quality, lineage, and reconciliation.
● Implement observability pipelines and alerts for mission-critical metrics (e.g., annotation throughput, quality KPIs, latency).
● Build data models and queries to power dashboards and insights via tools like Athena, QuickSight, or Redash.
● Contribute to infrastructure-as-code and CI/CD practices for deployment across cloud environments (preferably AWS).
● Document architecture, data flow, and support runbooks; continuously improve platform performance and resilience.
● Integrate with customer data platforms and pipelines, including bespoke data frameworks.

Minimum Qualifications
● 4–8 years of experience in data engineering or backend development in data-intensive environments.
● Proficient in Python and SQL; familiarity with PySpark or other distributed processing frameworks.
● Strong experience with cloud-native data tools and services (S3, Lambda, Glue, Kinesis, Firehose, RDS).
● Familiarity with frameworks like Apache Hadoop, Apache Spark, and related tools for handling large datasets.
● Experience with data lake and warehouse patterns (e.g., Delta Lake, Redshift, Snowflake).
● Solid understanding of data modeling, schema design, and versioned datasets.
● Data governance and security: understanding and implementing data governance policies and security measures.
● Proven experience in building resilient, production-grade pipelines and troubleshooting live systems.
● Working knowledge of messaging frameworks like Kafka, Firehose, etc.
● Working knowledge of API frameworks and robust, performant API design.
● Good working knowledge of database fundamentals, relational databases and SQL.

Preferred Qualifications
● Experience with observability/monitoring systems (e.g., Prometheus, Grafana, OpenTelemetry) is a plus.
● Familiarity with data governance, RBAC, PII redaction, or compliance in analytics platforms.
● Exposure to annotation/ML workflow tools or ML model validation platforms.
● Comfort working in Agile, distributed teams using tools like Git, JIRA, and Slack.

Why Join Us?
You’ll work at the intersection of AI, data infrastructure, and impact—contributing to platforms that ensure AI is explainable, auditable, and ethical at scale. Join a team building the next generation of intelligent data operations.
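
One lightweight way to implement the observability pipelines described above is to publish custom metrics that alarms and dashboards can watch; a minimal boto3 sketch follows, with a hypothetical namespace and metric.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Publish one datapoint for a pipeline KPI (all names are hypothetical);
    # a CloudWatch alarm or dashboard can then track this metric over time.
    cloudwatch.put_metric_data(
        Namespace="ExampleAnnotationPlatform",
        MetricData=[{
            "MetricName": "AnnotationThroughput",
            "Value": 1250.0,  # items annotated in the reporting window
            "Unit": "Count",
            "Dimensions": [{"Name": "Pipeline", "Value": "lidar-labeling"}],
        }],
    )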

Posted 3 weeks ago

Apply

5.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Data Engineer - AWS (Financial Data Reconciliation)
Experience: 5-6 years
Location: On-site, Ahmedabad

Technical Skills:
• AWS stack: Redshift, Glue (PySpark), Lambda, Step Functions, CloudWatch, S3, Athena
• Languages: Python (Pandas, PySpark), SQL (Redshift/PostgreSQL)
• ETL & orchestration: Apache Airflow (MWAA), AWS Glue Workflows, AWS Step Functions
• Data modeling: experience with financial/transactional data schemas
• Data architecture: medallion (bronze/silver/gold) design, lakehouse patterns, slowly changing dimensions
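
A minimal PySpark sketch of the medallion (bronze/silver/gold) flow named above, applied to reconciliation-style ledger data; all paths and columns are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("medallion-recon").getOrCreate()

    # Bronze: raw ledger entries landed as-is from the source system.
    bronze = spark.read.json("s3://example-lake/bronze/ledger/")

    # Silver: deduplicated and type-enforced records fit for joining.
    silver = (
        bronze.dropDuplicates(["txn_id"])
              .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
              .filter(F.col("txn_id").isNotNull())
    )
    silver.write.mode("overwrite").parquet("s3://example-lake/silver/ledger/")

    # Gold: business-level daily totals that reconciliation reports consume.
    gold = (
        silver.groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
              .agg(F.sum("amount").alias("daily_total"))
    )
    gold.write.mode("overwrite").parquet("s3://example-lake/gold/daily_totals/")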

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

About Straive: Straive is a market-leading content and data technology company providing data services, subject matter expertise, and technology solutions to multiple domains. Data Analytics & AI Solutions, Data AI Powered Operations and Education & Learning form the core pillars of the company’s long-term vision. The company is a specialized solutions provider to business information providers in finance, insurance, legal, real estate, life sciences and logistics. Straive continues to be the leading content services provider to research and education publishers.

Data Analytics & AI Services: Our Data Solutions business has become critical to our clients' success. We use technology and AI with human experts in the loop to create data assets that our clients use to power their data products and their end customers' workflows. As our clients expect us to become their future-fit Analytics and AI partner, they look to us for help in building data analytics and AI enterprise capabilities for them. With a client base spanning 30 countries worldwide, Straive’s multi-geographical resource pool is strategically located in eight countries - India, Philippines, USA, Nicaragua, Vietnam, United Kingdom, and the company headquarters in Singapore.

Website: https://www.straive.com/

Overview/Objective: We are looking for a Data Engineer with experience in building modern data platforms from the ground up. The successful candidate will build and maintain cloud-centric data processing capabilities that unleash the value of the League’s data assets to gain competitive advantage in the marketplace. They will be a hands-on contributor in the design and implementation of our cloud data platform that powers advanced analytics workloads. The Data Engineer will work in an agile environment and will be responsible for building and maintaining data integration, ingestion, curation and pipeline orchestration capabilities. They are comfortable challenging assumptions to improve existing solutions and ensure the team is building the best scalable and cost-efficient product.

Responsibilities:
Develop, test, and deploy software to generate data assets (relational, non-relational) for use by downstream BI engineers and data scientists
Work with big data and cloud technologies such as EC2, Lambda, AWS Glue, Airflow, dbt, Redshift etc.
Work closely with stakeholders to ensure successful data asset design and development
Create software artifacts and patterns for reuse within the Data Engineering team
Ensure data pipelines are scalable, resilient and produced with the highest quality standards and metadata, and validated for completeness and accuracy
Work on a cross-functional Agile team responsible for end-to-end delivery of business needs
Help improve data management processes - acquiring, transforming and storing massive volumes of structured and unstructured data
Work closely with development teams to learn about needs and current processes, and to promote best practices

Required Qualifications:
University degree in Computer Science, Mathematics, Engineering, or a related field
5+ years of experience in software engineering with a strong focus on data
Experience working with cloud data platforms, preferably AWS (Lambda, Step Functions, S3, AWS Glue, Athena, Redshift)
An expert in Python and SQL, including query optimization for relational, NoSQL and columnar databases
Sound knowledge of CI workflows and build/test/deploy automation
Strong understanding of data modelling concepts and best practices
Relevant experience with IaC (Terraform, CloudFormation)
Relevant experience with modern big data processing and orchestration tools such as dbt and Airflow
A great teammate and self-starter; strong detail orientation is critical in this role

Posted 3 weeks ago

Apply