0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Role: Database Engineer
Location: Remote

Skills and Experience
- Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
- Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
- Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
- Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
- Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
- Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
- Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
- Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
- Knowledge of SQL and an understanding of database design principles, normalization, and indexing.
- Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
- Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
- Eagerness to develop import workflows and scripts to automate data import processes (see the sketch after this posting).
- Knowledge of data security best practices, including access controls, encryption, and compliance standards.
- Strong problem-solving and analytical skills with attention to detail.
- Creative and critical thinking.
- Strong willingness to learn and expand knowledge in data engineering.
- Familiarity with Agile development methodologies is a plus.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities.
- Ability to work collaboratively in a team environment.
- Good and effective communication skills.
- Comfortable with autonomy and the ability to work independently.
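As an illustration of the import-workflow bullet above, here is a minimal, hedged sketch of an automated import in Python with Pandas and SQLAlchemy. The file name, connection string, table, and column names are all invented for the example, not taken from the posting.

```python
# Hypothetical daily import: read a CSV extract, apply light cleansing, and
# stage it into PostgreSQL. All names and credentials are placeholders.
import pandas as pd
from sqlalchemy import create_engine

def import_daily_extract(csv_path: str, table: str) -> int:
    engine = create_engine("postgresql+psycopg2://etl_user:secret@localhost:5432/warehouse")
    df = pd.read_csv(csv_path, parse_dates=["created_at"])
    df = df.drop_duplicates(subset=["record_id"])         # basic de-duplication
    df["amount"] = df["amount"].fillna(0).astype(float)   # normalize types
    df.to_sql(table, engine, if_exists="append", index=False)  # load a staging table
    return len(df)

if __name__ == "__main__":
    print(import_daily_extract("daily_orders.csv", "stg_orders"))
```

A real workflow would typically follow the staging load with a SQL merge into the target table and a validation step.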
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
About Qualitrol
Qualitrol is a leader in providing condition monitoring solutions for the electricity industry, ensuring reliability and efficiency in high-voltage electrical assets. We leverage cutting-edge technology, data analytics, and AI to transform how utilities manage their assets and make data-driven decisions.

Role Summary
We are looking for a highly skilled Senior Data Engineer to join our team and drive the development of our data engineering capabilities. This role involves designing, developing, and maintaining scalable data pipelines, optimizing data infrastructure, and ensuring high-quality data for analytics and AI-driven solutions. The ideal candidate will have deep expertise in data modeling, cloud-based data platforms, and best practices in data engineering.

Key Responsibilities
- Design, develop, and optimize scalable ETL/ELT pipelines for large-scale industrial data.
- Architect and maintain data warehouses, lakes, and streaming solutions to support analytics and AI-driven insights.
- Implement data governance, security, and quality best practices to ensure data integrity and compliance.
- Work closely with Data Scientists, AI Engineers, and Software Developers to build robust data solutions.
- Optimize data infrastructure performance for real-time and batch processing.
- Leverage cloud-based technologies (AWS, Azure, GCP) to develop and deploy scalable data solutions.
- Develop and maintain APIs and data access layers for seamless integration across platforms.
- Collaborate with cross-functional teams to define and implement data strategy and architecture.
- Stay up to date with emerging data engineering technologies and best practices.

Required Qualifications & Experience
- 5+ years of experience in data engineering, software development, or related fields.
- Proficiency in programming languages such as Python, Scala, or Java.
- Expertise in SQL and database technologies (PostgreSQL, MySQL, NoSQL, etc.).
- Hands-on experience with big data technologies (e.g., Spark, Kafka, Hadoop).
- Strong understanding of data warehousing (e.g., Snowflake, Redshift, BigQuery) and data lake architectures.
- Experience with cloud platforms (AWS, Azure, or GCP) and cloud-native data solutions.
- Knowledge of CI/CD pipelines, DevOps, and infrastructure as code (Terraform, Kubernetes, Docker).
- Familiarity with MLOps and AI-driven data workflows is a plus.
- Strong problem-solving skills, ability to work independently, and excellent communication skills.

Preferred Qualifications
- Experience in the electricity, utilities, or industrial sectors.
- Knowledge of IoT data ingestion and edge computing.
- Familiarity with GraphQL and RESTful API development.
- Experience with data visualization and business intelligence tools (Power BI, Tableau, etc.).
- Contributions to open-source data engineering projects.

What We Offer
- Competitive salary and performance-based incentives.
- Comprehensive benefits package, including health, dental, and retirement plans.
- Opportunities for career growth and professional development.
- A dynamic work environment focused on innovation and cutting-edge technology.
- Hybrid/remote work flexibility (depending on location and project needs).

How To Apply
Interested candidates should submit their resume and a cover letter detailing their experience and qualifications.

Fortive Corporation Overview
Fortive's essential technology makes the world stronger, safer, and smarter.
We accelerate transformation across a broad range of applications including environmental, health and safety compliance, industrial condition monitoring, next-generation product design, and healthcare safety solutions. We are a global industrial technology innovator with a startup spirit. Our forward-looking companies lead the way in software-powered workflow solutions, data-driven intelligence, AI-powered automation, and other disruptive technologies. We're a force for progress, working alongside our customers and partners to solve challenges on a global scale, from workplace safety in the most demanding conditions to groundbreaking sustainability solutions. We are a diverse team 17,000 strong, united by a dynamic, inclusive culture and energized by limitless learning and growth. We use the proven Fortive Business System (FBS) to accelerate our positive impact.

At Fortive, we believe in you. We believe in your potential: your ability to learn, grow, and make a difference. At Fortive, we believe in us. We believe in the power of people working together to solve problems no one could solve alone. At Fortive, we believe in growth. We're honest about what's working and what isn't, and we never stop improving and innovating. Fortive: For you, for us, for growth.

About Qualitrol
QUALITROL manufactures monitoring and protection devices for high-value electrical assets and OEM manufacturing companies. Established in 1945, QUALITROL produces thousands of different types of products on demand, customized to meet our individual customers' needs. We are the largest and most trusted global leader for partial discharge monitoring, asset protection equipment, and information products across power generation, transmission, and distribution. At Qualitrol, we are redefining condition-based monitoring.

We Are an Equal Opportunity Employer
Fortive Corporation and all Fortive Companies are proud to be equal opportunity employers. We value and encourage diversity and solicit applications from all qualified applicants without regard to race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity or expression, or other characteristics protected by law. Fortive and all Fortive Companies are also committed to providing reasonable accommodations for applicants with disabilities. Individuals who need a reasonable accommodation because of a disability for any part of the employment application process should contact us at applyassistance@fortive.com.

Bonus or Equity
This position is also eligible for a bonus as part of the total compensation package.
Posted 1 week ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
Remote
AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards. If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!

WHAT YOU WILL DO
- Be part of a small team developing multi-cloud platform services;
- Build and maintain automation frameworks to execute developer-written tests in private and public cloud environments;
- Optimize code, ensure best coding practices are followed, and support the existing team in overcoming technical challenges;
- Monitor and support service providers using the app in the field.

MUST HAVES
- 5+ years of experience in web development in similar environments;
- Bachelor's degree in Computer Science, Information Security, or a related technology field;
- Strong knowledge of Java 8 and 17, Spring, and Spring Boot;
- Experience with microservices and events;
- Strong experience with, and passion for, creating documentation for code and business processes;
- Expertise in architectural design and code review, with a strong grasp of SOLID principles;
- Skilled in gathering and analyzing complex requirements and business processes;
- Ability to contribute to the development of our internal tools and reusable architecture;
- Experience writing optimized code and improving performance for production systems and applications;
- Experience debugging, refactoring applications, and replicating scenarios to solve issues and understand the business;
- Familiarity with unit and system testing frameworks (e.g., JUnit, Mockito);
- Proficiency with Git;
- Dedication: own the apps you and your team are developing and take quality very seriously;
- Problem solving: proactively solve problems before they can become real problems;
- Constant upgrading of your skill set and applying those practices;
- Upper-Intermediate English level.

NICE TO HAVES
- Experience with Test-Driven Development;
- Experience with logistics software (delivery, transportation, route planning) and the RSA domain;
- Experience with AWS services such as ECS, SNS, SQS, and Redshift.

THE BENEFITS OF JOINING US
- Remote work & local connection: Work where you feel most productive and connect with your team in periodic meet-ups to strengthen your network and connect with other top experts.
- Legal presence in India: We ensure full local compliance with a structured, secure work environment tailored to Indian regulations.
- Competitive compensation in INR: Fair compensation in INR with dedicated budgets for your personal growth, education, and wellness.
- Innovative projects: Leverage the latest tech and create cutting-edge solutions for world-recognized clients and the hottest startups.
Posted 1 week ago
5.0 years
0 Lacs
Hubballi Urban, Karnataka, India
Remote
AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards. If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!

WHAT YOU WILL DO
- Be part of a small team developing multi-cloud platform services;
- Build and maintain automation frameworks to execute developer-written tests in private and public cloud environments;
- Optimize code, ensure best coding practices are followed, and support the existing team in overcoming technical challenges;
- Monitor and support service providers using the app in the field.

MUST HAVES
- 5+ years of experience in web development in similar environments;
- Bachelor's degree in Computer Science, Information Security, or a related technology field;
- Strong knowledge of Java 8 and 17, Spring, and Spring Boot;
- Experience with microservices and events;
- Strong experience with, and passion for, creating documentation for code and business processes;
- Expertise in architectural design and code review, with a strong grasp of SOLID principles;
- Skilled in gathering and analyzing complex requirements and business processes;
- Ability to contribute to the development of our internal tools and reusable architecture;
- Experience writing optimized code and improving performance for production systems and applications;
- Experience debugging, refactoring applications, and replicating scenarios to solve issues and understand the business;
- Familiarity with unit and system testing frameworks (e.g., JUnit, Mockito);
- Proficiency with Git;
- Dedication: own the apps you and your team are developing and take quality very seriously;
- Problem solving: proactively solve problems before they can become real problems;
- Constant upgrading of your skill set and applying those practices;
- Upper-Intermediate English level.

NICE TO HAVES
- Experience with Test-Driven Development;
- Experience with logistics software (delivery, transportation, route planning) and the RSA domain;
- Experience with AWS services such as ECS, SNS, SQS, and Redshift.

THE BENEFITS OF JOINING US
- Remote work & local connection: Work where you feel most productive and connect with your team in periodic meet-ups to strengthen your network and connect with other top experts.
- Legal presence in India: We ensure full local compliance with a structured, secure work environment tailored to Indian regulations.
- Competitive compensation in INR: Fair compensation in INR with dedicated budgets for your personal growth, education, and wellness.
- Innovative projects: Leverage the latest tech and create cutting-edge solutions for world-recognized clients and the hottest startups.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines.

As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and be able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and used to working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, or Python.

Major Responsibilities
- Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business
- Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms (a batch-pipeline sketch follows this posting)
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
- Keep up to date with big data technologies; evaluate and make decisions around the use of new or existing software products to design the data architecture
- Design, build, and own all the components of a high-volume data warehouse end to end
- Provide end-to-end data engineering support for project lifecycle execution (design, execution, and risk assessment)
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
- Own the functional and nonfunctional scaling of software systems in your ownership area
- Implement big data solutions for distributed computing

Key job responsibilities
As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, drive the database design, and spearhead the best practices needed to deliver high-quality products.

About The Team
Profit Intelligence systems measure and predict the true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what, we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done.
This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI MAA 12 SEZ
Job ID: A3006789
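The Spark stack this posting names lends itself to a short illustration. Below is a hedged sketch of a PySpark batch ETL job, not Amazon's actual pipeline; the bucket names, paths, and fields are invented for the example.

```python
# Minimal PySpark batch pipeline: extract raw events from S3, apply a
# transformation, and load the result back partitioned by date.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/2025/05/")

cleaned = (
    raw.filter(F.col("order_id").isNotNull())              # drop incomplete records
       .withColumn("order_date", F.to_date("created_at"))  # derive a partition key
       .dropDuplicates(["order_id"])
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```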
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Lead Full Stack / Senior Full Stack Developer
Experience: 5+ years
Location: Noida (Work from Office)

Job Overview:
We are seeking a highly skilled candidate with expertise in NodeJS, NestJS, React.js, Next.js, MySQL, Redshift, NoSQL, system design, and architecture. The ideal candidate will have strong workflow design and implementation skills; experience with queueing, caching, scalability, microservices, and AWS; and team leadership experience to manage a team of 10 developers. Knowledge of React Native and automation testing would be an added advantage.

Key Responsibilities:
- Architect and develop scalable backend systems using NestJS with a focus on high performance.
- Lead a team of developers, ensuring adherence to best practices and Agile methodologies.
- Work with databases including MySQL and NoSQL to ensure data integrity and performance.
- Optimize system design for scalability, caching, and queueing mechanisms.
- Collaborate with the frontend team working on Next.js and ensure seamless integration.
- Ensure a robust microservices architecture with proper API design and inter-service communication.
- Work in an Agile environment, driving sprints and standups and ensuring timely delivery of projects.

Required Skills & Experience:
- Experience in software development, system design, and system architecture.
- Strong expertise in NodeJS, NestJS, React.js, Next.js, MySQL, Redshift, NoSQL, AWS, and microservices architecture.
- Expertise in queueing mechanisms, caching, scalability, and system performance optimization.
- Good to have: knowledge of React Native and automation testing.
- Strong leadership and management skills with experience in leading development teams.
- Proficiency in Agile methodologies and sprint planning.
- Excellent problem-solving skills and ability to work under pressure.

Qualifications:
B.E / B.Tech / MCA or equivalent degree in IT/CS with good academic credentials.
Posted 1 week ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
Business Data Technologies (BDT) makes it easier for teams across Amazon to produce, store, catalog, secure, move, and analyze data at massive scale. Our managed solutions combine standard AWS tooling, open-source products, and custom services to free teams from worrying about the complexities of operating at Amazon scale. This lets BDT customers move beyond the engineering and operational burden associated with managing and scaling platforms, and instead focus on scaling the value they can glean from their data, both for their customers and their teams.

We own one of the largest data lakes at Amazon, where thousands of Amazon teams can search, share, and store exabytes (EB) of data in a secure and seamless way; using our solutions, teams around the world can schedule and process millions of workloads on a daily basis. We provide enterprise solutions that focus on compliance, security, integrity, and cost efficiency of operating and managing EBs of Amazon data.

Key job responsibilities
Core Responsibilities
- Be hands-on with ETL to build data pipelines to support automated reporting
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift (a sketch of a Redshift load step follows this posting)
- Model data and metadata for ad-hoc and pre-built reporting
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
- Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Participate in strategic and tactical planning discussions

A day in the life
As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations, and Leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions. Some of the key activities include:
- Crafting the data flow: Design and build data pipelines, the backbone of our data ecosystem. Ensure the integrity of the data journey by implementing robust data quality checks and monitoring processes.
- Architecting for insights: Translate complex business requirements into efficient data models that optimize data analysis and reporting. Automate data processing tasks to streamline workflows and improve efficiency.
- Becoming a data detective: ensure data availability and performance.

Basic Qualifications
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc.
- Knowledge of cloud services such as AWS or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3006419
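As a hedged illustration of the Redshift load step referenced in the responsibilities above (not BDT's actual tooling), the following sketch copies curated S3 data into a staging table and swaps it into the target. All identifiers, paths, and the IAM role ARN are placeholders.

```python
# Hypothetical load step for a Redshift-backed pipeline: COPY a partition of
# curated Parquet data from S3 into staging, then replace the day's rows.
import psycopg2

conn = psycopg2.connect(host="warehouse.example", port=5439,
                        dbname="dwh", user="etl", password="secret")
with conn, conn.cursor() as cur:   # the connection context commits on success
    cur.execute("TRUNCATE stg_orders")
    cur.execute("""
        COPY stg_orders
        FROM 's3://example-curated-bucket/orders/order_date=2025-05-01/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load'
        FORMAT AS PARQUET
    """)
    cur.execute("DELETE FROM orders WHERE order_date = '2025-05-01'")
    cur.execute("INSERT INTO orders SELECT * FROM stg_orders")
```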
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, Quicksight, or similar tools
- Experience with data modeling, warehousing and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

The role of the Sub Same Day (SSD) business is to provide ultrafast speeds (2-hour and same-day scheduled delivery) and reliable delivery for the selection that customers need, fast. Customers find their daily essentials and a curated selection of Amazon's top-selling items with sub-same-day promises. The program is highly cross-functional in nature, operations intensive, and requires a number of India-first solutions to be created, which then need to be scaled worldwide. In this context, SSD is looking for a talented, driven, and experienced Business Analyst. It is a pivotal role that will contribute to the evolution and success of one of the fastest growing businesses in the company. Joining the Amazon team means partnering with a dynamic and creative group who set a high bar for innovation and success in a fast-paced and changing environment.

The Business Analyst is responsible for influencing critical business decisions using data and providing insight to category teams so they can act. The successful candidate needs to have:
- A passion for numbers, data, and challenges.
- High attention to detail and a proven ability to manage multiple, competing priorities simultaneously.
- Excellent verbal and written communication skills.
- An ability to work in a fast-paced, complex environment where continuous innovation is desired.
- Bias for action and ownership.
- A history of teamwork and willingness to roll up one's sleeves to get the job done.
- Ability to work with diverse teams and people across levels in an organization.
- Proven analytical and quantitative skills (including the ability to effectively use tools such as Excel and SQL) and an ability to use hard data and metrics to back up assumptions and justify business decisions.

Key job responsibilities
- Influence business decisions with data.
- Use data resources to accomplish assigned analytical tasks relating to critical business metrics.
- Monitor key metrics and escalate anomalies as needed (a toy anomaly check follows this posting).
- Provide input on suggested business actions based on analytical findings.

- Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
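To make the "monitor key metrics and escalate anomalies" responsibility concrete, here is a toy Python check, an assumption-laden sketch rather than Amazon tooling, that flags a daily metric deviating sharply from its trailing mean. The metric, window, and threshold are all invented.

```python
# Flag days where a metric is more than 3 standard deviations from its
# 28-day trailing mean. Data here is synthetic for illustration.
import pandas as pd

def find_anomalies(df: pd.DataFrame, window: int = 28, z: float = 3.0) -> pd.DataFrame:
    rolling = df["orders"].rolling(window)
    df["zscore"] = (df["orders"] - rolling.mean()) / rolling.std()
    return df[df["zscore"].abs() > z]

daily = pd.DataFrame({
    "date": pd.date_range("2025-01-01", periods=60, freq="D"),
    "orders": [1000] * 59 + [120],   # a sudden drop on the last day
})
print(find_anomalies(daily)[["date", "orders", "zscore"]])
```

In practice the flagged rows would feed an alerting or escalation channel rather than a print statement.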
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, Quicksight, or similar tools
- Experience with data modeling, warehousing and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

At Amazon, we're working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people. We are looking for a Business Intelligence Engineer in Amazon Grocery Data and Tech. As a Business Intelligence Engineer, you will be responsible for the execution of rapidly evolving complex demands and the design, development, testing, deployment, and operations of multiple analytical solutions. You will be responsible for delivering some of our most strategic analytics initiatives. You will create insights that support all of Amazon's grocery businesses and have a significant impact on Amazon's top line and competitive position. A successful candidate will bring deep technical expertise, strong business acumen and judgment, the ability to define groundbreaking products, a desire to have an industry-wide impact, and the ability to work within a fast-moving environment in a large company to rapidly deliver services that have a broad business impact. They will define metrics to measure business performance and drive efficiency improvements.

About the team
Amazon Grocery Data and Technology (GDT) is the central data team for Amazon Grocery, consisting of engineering teams (BIEs/DEs/SDEs). We are the direct owner of (a) infrastructure and services used for reading, processing, reporting, and enrichment of grocery data; (b) productivity tools; and (c) reporting solutions for worldwide 1P and 3P customers across the following businesses: Amazon Fresh Grocery (AFG), Whole Foods Market (WFM), Amazon Go, Amazon Grocery Partners, etc.

- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Testing Engineer
Exp: 8+ years
Location: Hyderabad and Gurgaon (Hybrid)
Notice Period: Immediate to 15 days

Job Description:
- Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse.
- Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules.
- Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues (a sketch of such a check follows this posting).
- Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts.
- Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability.
- Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt).
- Experience with scripting languages to automate data testing.
- Familiarity with data visualization tools like Tableau, Power BI, or Looker.
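The source-to-target validation described above can be sketched in a few lines of Python. This is an illustrative pattern only; the connection strings, table, and column names are hypothetical, and it assumes both endpoints speak the Postgres wire protocol (as Redshift does).

```python
# Illustrative source-to-target check: compare row counts and a numeric
# checksum between a source table and its warehouse copy.
import pandas as pd
from sqlalchemy import create_engine

SOURCE = create_engine("postgresql+psycopg2://reader:secret@source-db:5432/app")
TARGET = create_engine("postgresql+psycopg2://etl:secret@warehouse:5439/dwh")  # Redshift endpoint

def validate(table: str, checksum_col: str) -> dict:
    q = f"SELECT COUNT(*) AS cnt, SUM({checksum_col}) AS checksum FROM {table}"
    src = pd.read_sql(q, SOURCE).iloc[0]
    tgt = pd.read_sql(q, TARGET).iloc[0]
    return {
        "table": table,
        "count_match": int(src["cnt"]) == int(tgt["cnt"]),
        "checksum_match": float(src["checksum"]) == float(tgt["checksum"]),
    }

print(validate("orders", "amount"))
```

A fuller suite would also compare column-level distributions and null rates, and would run inside a test framework such as pytest.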
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Fully remote possible Full Time 1 May 2025

Title: Senior Data Engineer

Job Description
A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making.

Key Responsibilities
- Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository (a hypothetical Airflow sketch follows this posting).
- Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions.
- Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency.
- Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects.
- Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability.
- Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements are met.

Experience & Skills
- Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.).
- Strong programming skills in SQL, Java, and Python.
- Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS, or API-based extraction.
- Expertise in real-time data processing frameworks.
- Strong understanding of Git and CI/CD for automated deployment and version control.
- Experience with Infrastructure-as-Code tools like Terraform for cloud resource management.
- Good stakeholder management skills to collaborate effectively across teams.
- Solid understanding of SAP ERP data and processes to integrate enterprise data sources.
- Exposure to data visualization and front-end tools (Tableau, Looker, etc.).
- Strong command of English with excellent communication skills.
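As an illustration of the pipeline responsibility above, here is a minimal, hypothetical Apache Airflow DAG (assuming Airflow 2.4+ for the `schedule` argument). The task bodies are stubbed and all IDs are invented; it is a sketch of the extract-transform-load shape, not LIXIL's actual pipeline.

```python
# Hypothetical daily ETL DAG: extract from a source system, standardize,
# and load a central repository.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():    # pull from the source system or API
    pass

def transform():  # standardize formats, validate quality
    pass

def load():       # write to the warehouse (e.g., BigQuery/Redshift)
    pass

with DAG(
    dag_id="erp_daily_load",          # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3                    # linear dependency chain
```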
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Fully remote possible Full Time 1 May 2025

Title: Data Engineer

Job Description
A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making.

Key Responsibilities
- Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository.
- Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions.
- Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency.
- Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects.
- Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability.
- Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements are met.

Experience & Skills
- Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.).
- Strong programming skills in SQL, Java, and Python.
- Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS, or API-based extraction.
- Expertise in real-time data processing frameworks.
- Strong understanding of Git and CI/CD for automated deployment and version control.
- Experience with Infrastructure-as-Code tools like Terraform for cloud resource management.
- Good stakeholder management skills to collaborate effectively across teams.
- Solid understanding of SAP ERP data and processes to integrate enterprise data sources.
- Exposure to data visualization and front-end tools (Tableau, Looker, etc.).
- Strong command of English with excellent communication skills.
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Roles And Responsibilities
- Proficiency in building highly scalable ETL and streaming-based data pipelines using Google Cloud Platform (GCP) services and products like BigQuery and Cloud Dataflow
- Proficiency in large-scale data platforms and data processing systems such as Google BigQuery, Amazon Redshift, and Azure Data Lake
- Excellent Python, PySpark, and SQL development and debugging skills; exposure to other big data frameworks like Hadoop Hive would be an added advantage
- Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g., RabbitMQ and Pub/Sub); a subscriber sketch follows this posting

Secondary Skills: Cloud Bigtable, AI/ML solutions, Compute Engine, Cloud Data Fusion
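For the event-driven ingestion point above, the following sketch consumes messages with the Google Cloud Pub/Sub Python client, closely following the library's standard streaming-pull pattern; the project and subscription IDs are hypothetical.

```python
# Hypothetical Pub/Sub consumer: stream messages from a subscription for a
# fixed window, acknowledging each one. Requires google-cloud-pubsub.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("example-project", "clickstream-sub")

def callback(message):
    print(f"Received: {message.data!r}")
    message.ack()   # mark the message as processed

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result(timeout=30)   # listen for 30 seconds
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()
```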
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Hybrid Full Time 30 June 2025

Location: Remote Working
Internal Customers: Regional Quality Teams
Team: QM System & Processes - Analytics & Digitalization

Job Description: Sr. Data Analyst

Job Summary
We are seeking a skilled Data Analyst to join our team, focusing on building and maintaining scalable, high-quality data warehouses and databases. This individual will be responsible for developing ETL processes, optimizing data pipelines, and transforming raw data from multiple sources into structured formats for analysis and reporting. The Data Analyst will also contribute to driving analytics and digitalization tools across various regions and provide support for specific data requests.

What will you do?
- Design and Build Data Warehouses: Architect and develop scalable, robust, and efficient data warehouses that support business intelligence and data analytics needs.
- Develop and Optimize ETL Processes: Design, develop, and optimize ETL (Extract, Transform, Load) processes and data pipelines to ensure seamless, efficient, and timely data flow between systems, enabling high-quality insights and reporting.
- Data Transformation: Transform raw data from multiple channels into structured, clean formats suitable for analysis, reporting, and decision-making. Ensure data integrity and consistency throughout the transformation process.
- Analytics & Digitalization Tools Development: Further develop and drive the adoption of analytics and digitalization tools to support data-driven decision-making and operational improvements across all regions. Work to optimize and scale these tools as business needs evolve.
- Support Regional Data Requests: Act as a key point of contact for supporting specific quality-data deep-dive requests from regional teams. Analyze and respond to their unique data needs, offering insights and actionable recommendations.
- Continuous Improvement: Work to continuously improve data quality, performance, and efficiency within the data ecosystem, implementing new tools or strategies to optimize overall data management and analysis.

What are we looking for?
- Bachelor's degree in Data Science, Computer Science, or a related field.
- Proven experience in data analysis, data engineering, or similar roles with a focus on building and maintaining data warehouses.
- Hands-on experience with ETL processes and data pipeline development.
- Strong background in data transformation and structuring raw data for reporting and analysis.
- Experience in developing or supporting analytics and digitalization tools across multiple regions or business units.
- Experience with data visualization tools (e.g., Tableau, Power BI) and proficiency in SQL.
- Familiarity with data modelling, database optimization, and data management best practices.

Good To Have
- Familiarity with cloud platforms (AWS, GCP, Azure) and related data services.
- Experience with data warehousing solutions like Amazon Redshift, Snowflake, or Google BigQuery.
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
India
On-site
Description
GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com.

We believe that innovative technology starts with the best talent and have been ranked one of Ad Age's Best Places to Work in 2021, 2022, 2023 & 2025! Learn more about the perks of joining our team here.

About Team
GroundTruth seeks an Associate Software Engineer to join our Reporting team. The Reporting Team at GroundTruth is responsible for designing, building, and maintaining data pipelines and dashboards that deliver actionable insights. We ensure accurate and timely reporting to drive data-driven decisions for advertisers and publishers. We take pride in building an Engineering Team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised and detail-oriented team players who are problem solvers with a maker mindset. As an Associate Software Engineer (ASE) on our Integration Team, you will build solutions that add new capabilities to our platform.

You Will
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies (an Athena query sketch follows this posting).
- Lead engineering efforts across multiple software components.
- Write excellent production code and tests, and help others improve in code reviews.
- Analyse high-level requirements to design, document, estimate, and build systems.
- Continuously improve the team's practices in code quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes.

You Have
- B.Tech./B.E./M.Tech./MCA or equivalent in computer science
- 0-3 years of experience in Data Engineering
- Experience with the AWS stack used for data engineering: EC2, S3, Athena, Redshift, EMR, ECS, Lambda, and Step Functions
- Experience in MapReduce, Spark, and Glue
- Hands-on experience with Java/Python for the orchestration of data pipelines and data engineering tasks
- Experience in writing analytical queries using SQL
- Experience in Airflow
- Experience in Docker
- Proficiency with Git

How can you impress us?
- Knowledge of REST APIs
- The following skills/certifications: Python, SQL/MySQL, AWS, Git
- Additional nice-to-have skills/certifications: Flask, FastAPI
- Knowledge of shell scripting
- Experience with BI tools like Looker
- Experience with DB maintenance
- Experience with Amazon Web Services and Docker
- Configuration management and QA practices

Benefits
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave (maternity and paternity)
- Flexible time off (earned leave, sick leave, birthday leave, bereavement leave & company holidays)
- In-office daily catered breakfast, lunch, snacks and beverages
- Health cover for any hospitalization; covers both nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/gym reimbursement
- Pet expense reimbursement
- Childcare expenses and reimbursements
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (mobile subsidy program)
- Internet reimbursement, postpaid cell phone bill, or both
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic
- Creche reimbursement
- Co-working space reimbursement
- National Pension System employer match
- Meal card for tax benefit
- Special benefits on salary account
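As a hedged example of the analytical-SQL-on-AWS work this role describes (not GroundTruth's actual code), here is a small boto3 script that runs a query through Athena and polls for completion; the database, query, and output bucket are invented.

```python
# Hypothetical Athena query run: submit SQL, poll until it finishes, and
# report the terminal state. Assumes AWS credentials are configured.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT campaign_id, COUNT(*) AS visits FROM store_visits GROUP BY campaign_id",
    QueryExecutionContext={"Database": "reporting"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = resp["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)   # poll every 2 seconds

print(f"Query {qid} finished with state {state}")
```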
Posted 1 week ago
10.0 years
0 Lacs
India
Remote
Role: Senior Azure / Data Engineer (ETL / data warehouse background)
Location: Remote, India
Duration: Long-term contract
Need: 10+ years of experience

Must-have skills:
- Minimum 5 years of experience in modern data engineering / data warehousing / data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
- 10+ years of proven experience with SQL, schema design, and dimensional data modeling.
- Solid knowledge of data warehouse best practices, development standards, and methodologies.
- Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
- An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced, dynamic environment.
- Excellent communication and teamwork abilities.

Nice-to-have skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB knowledge.
- SAP ECC / S/4 and HANA knowledge.
- Intermediate knowledge of Power BI.
- Azure DevOps and CI/CD deployments; cloud migration methodologies and processes.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Delhi, India
On-site
What is Findem:
Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time, making an individual's entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem's automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience: 5-9 years
Location: Delhi, India (hybrid, 3 days onsite)

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying, and managing various data pipelines, a data lake, and big data processing solutions using big data and ETL technologies.

Responsibilities
- Build data pipelines, big data processing solutions, and data lake infrastructure using various big data and ETL technologies
- Assemble and process large, complex data sets that meet functional and non-functional business requirements
- ETL from a wide variety of sources like MongoDB, S3, server-to-server, Kafka, etc., and processing using SQL and big data technologies
- Build analytical tools to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Build interactive and ad-hoc query self-serve tools for analytics use cases
- Build data models and data schemas for performance, scalability, and functional requirements
- Build processes supporting data transformation, metadata, dependency, and workflow management
- Research, experiment with, and prototype new tools/technologies and make them successful

Skill Requirements
- Must have: strong in Python/Scala
- Must have experience in big data technologies like Spark, Hadoop, Athena/Presto, Redshift, Kafka, etc.
- Experience with various file formats like Parquet, JSON, Avro, ORC, etc. (a small format-conversion sketch follows this posting)
- Experience with workflow management tools like Airflow
- Experience with batch processing, streaming, and message queues
- Any of the visualization tools like Redash, Tableau, Kibana, etc.
- Experience working with structured and unstructured data sets
- Strong problem-solving skills

Good to have
- Exposure to NoSQL stores like MongoDB
- Exposure to cloud platforms like AWS, GCP, etc.
- Exposure to microservices architecture
- Exposure to machine learning techniques

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area with our India headquarters in Bengaluru.

Equal Opportunity
As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic.
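To illustrate the columnar file-format point above, here is a tiny, assumption-laden sketch converting newline-delimited JSON to Parquet with pandas (a Parquet engine such as pyarrow must be installed); the file names are hypothetical.

```python
# Convert newline-delimited JSON records to a columnar Parquet file.
import pandas as pd

records = pd.read_json("events.json", lines=True)    # one JSON object per line
records.to_parquet("events.parquet", index=False)    # requires pyarrow or fastparquet
print(pd.read_parquet("events.parquet").head())      # round-trip sanity check
```

Columnar formats like Parquet and ORC compress well and let engines such as Spark, Presto, and Athena read only the columns a query needs.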
Posted 1 week ago
6.0 - 8.0 years
13 - 23 Lacs
Bengaluru
Hybrid
Job description

Primary skillsets:
- 5 years of hands-on experience in Informatica PowerCenter ETL development
- 7 years of experience in SQL, analytical STAR-schema data modeling, and Informatica PowerCenter
- 5 years of Redshift, Oracle, or comparable database experience with BI/DW deployments

Secondary skillsets:
- Good to know: cloud platforms such as AWS and their services
- Must have proven experience with STAR and SNOWFLAKE schema techniques
- Proven track record as an ETL developer, with the potential to grow as an architect leading development teams to deliver successful business intelligence solutions with complex data sources
- Strong analytical skills; enjoys solving complex technical problems
- Knowledge of additional ETL tools, e.g., Qlik Replicate
- End-to-end understanding of data from ingestion to transformation to consumption in analytics is a great benefit
Posted 1 week ago
5.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
We are looking for a skilled Data Analyst with excellent communication skills and deep expertise in SQL, Tableau, and modern data warehousing technologies. This role involves designing data models, building insightful dashboards, ensuring data quality, and extracting meaningful insights from large datasets to support strategic business decisions.

Key Responsibilities:
- Write advanced SQL queries to retrieve and manipulate data from cloud data warehouses such as Snowflake, Redshift, or BigQuery.
- Design and develop data models that support analytics and reporting needs.
- Build dynamic, interactive dashboards and reports using tools like Tableau, Looker, or Domo.
- Perform advanced analytics techniques including cohort analysis, time series analysis, scenario analysis, and predictive analytics (a cohort-query sketch follows this posting).
- Validate data accuracy and perform thorough data QA to ensure high-quality output.
- Investigate and troubleshoot data issues; perform root cause analysis in collaboration with BI or data engineering teams.
- Communicate analytical insights clearly and effectively to stakeholders.

Required Skills & Qualifications:
- Excellent communication skills are mandatory for this role.
- 5+ years of experience in data analytics, BI analytics, or BI engineering roles.
- Expert-level skills in SQL, with experience writing complex queries and building views.
- Proven experience using data visualization tools like Tableau, Looker, or Domo.
- Strong understanding of data modeling principles and best practices.
- Hands-on experience working with cloud data warehouses such as Snowflake, Redshift, BigQuery, SQL Server, or Oracle.
- Intermediate-level proficiency with spreadsheet tools like Excel, Google Sheets, or Power BI, including functions, pivots, and lookups.
- Bachelor's or advanced degree in a relevant field such as Data Science, Computer Science, Statistics, Mathematics, or Information Systems.
- Ability to collaborate with cross-functional teams, including BI engineers, to optimize reporting solutions.
- Experience in handling large-scale enterprise data environments.
- Familiarity with data governance, data cataloging, and metadata management tools (a plus but not required).
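The cohort analysis mentioned above typically reduces to a single warehouse query. Below is a hedged sketch executed from Python against a Postgres-protocol warehouse such as Redshift; the orders table, columns, and connection string are hypothetical.

```python
# Monthly retention cohorts: group users by the month of their first order,
# then count how many are active in each subsequent month.
import pandas as pd
from sqlalchemy import create_engine

COHORT_SQL = """
WITH first_order AS (
    SELECT user_id, DATE_TRUNC('month', MIN(order_date)) AS cohort_month
    FROM orders
    GROUP BY user_id
)
SELECT f.cohort_month,
       DATE_TRUNC('month', o.order_date) AS activity_month,
       COUNT(DISTINCT o.user_id)         AS active_users
FROM orders o
JOIN first_order f ON o.user_id = f.user_id
GROUP BY 1, 2
ORDER BY 1, 2
"""

engine = create_engine("postgresql+psycopg2://analyst:secret@warehouse:5439/dwh")
cohorts = pd.read_sql(COHORT_SQL, engine)
# Pivot into the familiar cohort grid: one row per cohort, one column per month.
print(cohorts.pivot(index="cohort_month", columns="activity_month", values="active_users"))
```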
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies, for various use cases built on the platform
- Experience in developing streaming pipelines (a streaming-ingestion sketch follows this posting)
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS
- Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred Technical And Professional Experience
- Certification in AWS and Databricks, or Cloudera Spark certified developers
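As a sketch of the streaming-pipeline experience mentioned above (an illustration, not IBM's codebase), here is a minimal PySpark Structured Streaming job reading from Kafka and landing on S3. It assumes the spark-sql-kafka connector is on the classpath, and all brokers, topics, and paths are invented.

```python
# Structured Streaming ingest: Kafka topic -> Parquet files on S3, with a
# checkpoint for exactly-once sink semantics.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "clickstream")
         .load()
         .select(col("key").cast("string"),
                 col("value").cast("string"),
                 col("timestamp"))
)

query = (
    events.writeStream.format("parquet")
          .option("path", "s3://example-lake/raw/clickstream/")
          .option("checkpointLocation", "s3://example-lake/checkpoints/clickstream/")
          .start()
)
query.awaitTermination()
```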
Posted 1 week ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Technology @Dream11:
Technology is at the core of everything we do. Our technology team helps us deliver a mobile-first experience across platforms (Android & iOS) while managing over 700 million rpm (requests per minute) at peak with a user concurrency of over 16.5 million. We have 190+ microservices written in Java and backed by the Vert.x framework. These work with isolated product features with discrete architectures to cater to the respective use cases. We work with terabytes of data, the infrastructure for which is built on top of Kafka, Redshift, Spark, Druid, etc., and it powers a number of use cases like Machine Learning and Predictive Analytics. Our tech stack is hosted on AWS, with distributed systems like Cassandra, Aerospike, Akka, VoltDB, Ignite, etc.

Your Role:
- Analyze requirements and design software solutions based on first design principles (e.g., Object-Oriented Design and Analysis, E-R modeling)
- Build resilient, event-driven microservices using a reactive Java-based framework, SQL and NoSQL datastores, caches, messaging, and big-data processing frameworks
- Deploy and configure cloud-native software services on public cloud
- Operate and support software services in production based on on-call schedules, using observability tools such as Datadog for logging, alerting, and monitoring

Qualifiers:
- 3+ years of coding experience with at least one object-oriented programming language, preferably Java, plus relational databases, database modeling (E-R modeling), and SQL
- Familiarity with NoSQL databases and caching frameworks preferred
- Working experience with messaging frameworks such as Kafka or MQ
- Familiarity with object-oriented design patterns
- Working experience with AWS or any cloud infrastructure

About Dream Sports:
Dream Sports is India's leading sports technology company with 250 million users, housing brands such as Dream11, the world's largest fantasy sports platform, FanCode, a premier sports content & commerce platform, and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 "Sportans". Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports' vision is to "Make Sports Better" for fans through the confluence of sports and technology. For more information: https://dreamsports.group/

Dream11 is the world's largest fantasy sports platform with 230 million users playing fantasy cricket, football, basketball & hockey on it. Dream11 is the flagship brand of Dream Sports, India's leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsibilities:
Design, develop, and manage databases on the AWS cloud platform
Develop and maintain automation scripts or jobs to perform routine database tasks such as provisioning, backups, restores, and data migrations (see the sketch after this listing)
Build and maintain automated testing frameworks for database changes and upgrades to minimize the risk of introducing errors
Implement self-healing mechanisms to automatically recover from database failures or performance degradation
Integrate database automation tools with CI/CD pipelines to enable continuous delivery and deployment of database changes
Collaborate with cross-functional teams to understand their data requirements and ensure that the databases meet their needs
Implement and manage database security policies, including access control, data encryption, and backup and recovery procedures
Ensure that database backups and disaster recovery procedures are in place and tested regularly
Develop and maintain database documentation, including data dictionaries, data models, and technical specifications
Stay up to date with the latest cloud technologies and trends, and evaluate new tools and products that could improve database performance and scalability
Requirements: (Postgres/MySQL/SQL Server, AWS CloudFormation/CDK, Python)
Bachelor's degree in Computer Science, Information Technology, or a related field
Minimum of 3-6 years of experience in designing, building, and administering databases on the AWS cloud platform
Strong experience with infrastructure as code (CloudFormation/AWS CDK) and automation experience in Python
In-depth knowledge of AWS database services such as Amazon RDS, EC2, S3, Amazon Aurora, and Amazon Redshift, along with Postgres/MySQL/SQL Server
Strong understanding of database design principles, data modeling, and normalization
Experience with database migration to the AWS cloud platform
Strong understanding of database security principles and best practices
Excellent troubleshooting and problem-solving skills
Ability to work independently and in a team environment
Good to have: AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified Database Specialty
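As a hedged illustration of the routine-backup automation the listing asks for, here is a minimal boto3 sketch that creates a timestamped manual RDS snapshot; the instance identifier and region are placeholders, and a production job would add retries and alerting.

```python
# A minimal sketch, assuming boto3 credentials are configured; the region
# and "example-postgres-instance" identifier are hypothetical.
import datetime

import boto3
from botocore.exceptions import ClientError

rds = boto3.client("rds", region_name="us-east-1")

def snapshot_instance(db_instance_id: str) -> str:
    """Create a timestamped manual snapshot of an RDS instance."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{db_instance_id}-manual-{stamp}"
    try:
        rds.create_db_snapshot(
            DBSnapshotIdentifier=snapshot_id,
            DBInstanceIdentifier=db_instance_id,
        )
    except ClientError as err:
        # In a real job this branch would page the on-call channel instead.
        raise RuntimeError(f"Snapshot failed for {db_instance_id}") from err
    return snapshot_id

if __name__ == "__main__":
    print(snapshot_instance("example-postgres-instance"))
```

Scheduled via cron, EventBridge, or a CI/CD pipeline, a script like this covers the "routine database tasks" bullet; restores and migrations follow the same client-call pattern.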
Posted 1 week ago
8.0 - 12.0 years
25 - 40 Lacs
Chennai
Work from Office
We are seeking a highly skilled Data Architect to design and implement robust, scalable, and secure data solutions on AWS Cloud. The ideal candidate should have expertise in AWS services, data modeling, ETL processes, and big data technologies, with hands-on experience in Glue, DMS, Python, PySpark, and MPP databases like Snowflake, Redshift, or Databricks.
Key Responsibilities:
Architect and implement data solutions leveraging AWS services such as EC2, S3, IAM, Glue (mandatory), and DMS for efficient data processing and storage
Develop scalable ETL pipelines using AWS Glue, Lambda, and PySpark to support data transformation, ingestion, and migration (a sketch follows this listing)
Design and optimize data models following Medallion architecture, Data Mesh, and Enterprise Data Warehouse (EDW) principles
Implement data governance, security, and compliance best practices using IAM policies, encryption, and data masking
Work with MPP databases such as Snowflake, Redshift, or Databricks, ensuring performance tuning, indexing, and query optimization
Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to design efficient data integration strategies
Ensure high availability and reliability of data solutions by implementing monitoring, logging, and automation in AWS
Evaluate and recommend best practices for ETL workflows, data pipelines, and cloud-based data warehousing solutions
Troubleshoot performance bottlenecks and optimize query execution plans, indexing strategies, and data partitioning
Job Requirement
Required Qualifications & Skills:
Strong expertise in AWS cloud services: compute (EC2), storage (S3), and security (IAM)
Proficiency in programming languages: Python, PySpark, and AWS Lambda
Mandatory experience in ETL tools: AWS Glue and DMS for data migration and transformation
Expertise in MPP databases: Snowflake, Redshift, or Databricks; knowledge of RDBMS (Oracle, SQL Server) is a plus
Deep understanding of data modeling techniques: Medallion architecture, Data Mesh, and EDW principles
Experience in designing and implementing large-scale, high-performance data solutions
Strong analytical and problem-solving skills, with the ability to optimize data pipelines and storage solutions
Excellent communication and collaboration skills, with experience working in agile environments
Preferred Qualifications:
AWS certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, or equivalent)
Experience with real-time data streaming (Kafka, Kinesis, or similar)
Familiarity with infrastructure as code (Terraform, CloudFormation)
Understanding of data governance frameworks and compliance standards (GDPR, HIPAA, etc.)
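To make the Glue-based ETL responsibility concrete, here is a minimal sketch of an AWS Glue PySpark job; the catalog database, table name, dropped field, and output path are all hypothetical, and the script assumes it runs inside the Glue job environment where the awsglue libraries are available.

```python
# A minimal Glue ETL sketch, assuming the managed Glue runtime; names
# such as "example_db" and "internal_notes" are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a source table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="example_table"
)

# Transform: drop fields that should not reach the curated zone.
trimmed = source.drop_fields(["internal_notes"])

# Load: write Parquet to the curated S3 location.
glue_context.write_dynamic_frame.from_options(
    frame=trimmed,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/"},
    format="parquet",
)

job.commit()
```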
Posted 1 week ago
5.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Strong experience with Python, SQL, PySpark, and AWS Glue
Good to have: shell scripting, Kafka
Good knowledge of DevOps pipeline usage (Jenkins, Bitbucket, EKS, Lightspeed)
Experience with AWS tools (AWS S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.)
Orchestration using Airflow (a sketch follows this listing)
Good to have: streaming technologies and processing engines such as Kinesis, Kafka, Pub/Sub, and Spark Streaming
Good debugging skills
Should have a strong hands-on design and engineering background in AWS, across a wide range of AWS services, with the ability to demonstrate working on large engagements
Strong experience with, and implementation of, data lakes, data warehousing, and data lakehouse architectures
Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures
Monitor data systems performance and implement optimization strategies
Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership
Demonstrable knowledge of applying data engineering best practices (coding practices to DS, unit testing, version control, code review)
Experience in the insurance domain preferred
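For the Airflow orchestration requirement, here is a minimal DAG sketch chaining two daily tasks; the DAG id, schedule, and task bodies are placeholders (a real pipeline would trigger Glue or EMR jobs), and the `schedule` parameter assumes Airflow 2.4 or later.

```python
# A minimal Airflow DAG sketch; task callables are illustrative stand-ins
# for real extract and transform steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")  # placeholder for an S3/Glue extract step

def transform():
    print("run Spark transformation")  # placeholder for a PySpark job trigger

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # assumes Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # run transform only after extract succeeds
```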
Posted 1 week ago
5.0 - 8.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Business Data Analyst - Healthcare
Job Summary
We are seeking an experienced and results-driven Business Data Analyst with 5+ years of hands-on experience in data analytics, visualization, and business insight generation. This role is ideal for someone who thrives at the intersection of business and data, translating complex data sets into compelling insights, dashboards, and strategies that support decision-making across the organization. You will collaborate closely with stakeholders across departments to identify business needs, design and build analytical solutions, and tell compelling data stories using advanced visualization tools.
Key Responsibilities
- Data Analytics & Insights: Analyze large and complex data sets to identify trends, anomalies, and opportunities that help drive business strategy and operational efficiency.
- Dashboard Development & Data Visualization: Design, develop, and maintain interactive dashboards and visual reports using tools like Power BI, Tableau, or Looker to enable data-driven decisions.
- Business Stakeholder Engagement: Collaborate with cross-functional teams to understand business goals, define metrics, and convert ambiguous requirements into concrete analytical deliverables.
- KPI Definition & Performance Monitoring: Define, track, and report key performance indicators (KPIs), ensuring alignment with business objectives and consistent measurement across teams.
- Data Modeling & Reporting Automation: Work with data engineering and BI teams to create scalable, reusable data models and automate recurring reports and analysis processes.
- Storytelling with Data: Communicate findings through clear narratives supported by data visualizations and actionable recommendations to both technical and non-technical audiences.
- Data Quality & Governance: Ensure accuracy, consistency, and integrity of data through validation, testing, and documentation practices.
Required Qualifications
- Bachelor's or Master's degree in Business, Economics, Statistics, Computer Science, Information Systems, or a related field.
- 5+ years of professional experience in a data analyst or business analyst role with a focus on data visualization and analytics.
- Proficiency in data visualization tools: Power BI, Tableau, or Looker (at least one).
- Strong experience in SQL and working with relational databases to extract, manipulate, and analyze data.
- Deep understanding of business processes, KPIs, and analytical methods.
- Excellent problem-solving skills with attention to detail and accuracy.
- Strong communication and stakeholder management skills, with the ability to explain technical concepts in a clear and business-friendly manner.
- Experience working in Agile or fast-paced environments.
Preferred Qualifications
- Experience working with cloud data platforms (e.g., Snowflake, BigQuery, Redshift).
- Exposure to Python or R for data manipulation and statistical analysis.
- Knowledge of data warehousing, dimensional modeling, or ELT/ETL processes.
- Domain experience in healthcare is a plus.
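As a hedged example of the SQL-plus-Python workflow this role describes, here is a minimal sketch that pulls a KPI from a relational database and computes a month-over-month trend with pandas; the connection string, "claims" table, and column names are hypothetical healthcare-flavoured placeholders.

```python
# A minimal KPI-extraction sketch, assuming pandas, SQLAlchemy, and a
# PostgreSQL driver are installed; all names below are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost:5432/analytics")

# Extract: monthly claim volumes per region (hypothetical table/columns).
query = """
    SELECT region,
           date_trunc('month', claim_date) AS month,
           COUNT(*) AS claim_count
    FROM claims
    GROUP BY region, date_trunc('month', claim_date)
"""
df = pd.read_sql(query, engine)

# Analyze: month-over-month change per region, a typical KPI trend check.
df = df.sort_values(["region", "month"])
df["mom_change_pct"] = df.groupby("region")["claim_count"].pct_change() * 100

print(df.head())
```

A result like this would typically feed a Power BI or Tableau dashboard rather than stay in a notebook, matching the dashboarding responsibilities above.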
Posted 1 week ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, Amazon Web Services' data warehouse service, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find plentiful opportunities across industries throughout the country.
The average salary range for Redshift professionals in India varies by experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as:
- Junior Developer
- Data Engineer
- Senior Data Engineer
- Tech Lead
- Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial:
- SQL
- ETL tools
- Data modeling
- Cloud computing (AWS)
- Python/R programming
As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!