5.0 years
0 Lacs
Kochi, Kerala, India
Remote
Role: Data Analyst
Open Positions: 1
Mandatory Skillset: SQL, Power BI, Python, Amazon Athena
Experience: 5+ Years
Work Location: TVM/Kochi/Remote
Notice Period: Immediate joiners only
Budget: Max 19 LPA

Job Purpose
We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization.

Job Description / Duties & Responsibilities
• Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
• Analyze large datasets to uncover trends, patterns, and actionable insights.
• Design and build dashboards and reports using Power BI.
• Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
• Ensure data accuracy, consistency, and integrity through data validation and quality checks.
• Build and maintain SQL queries, views, and data models for reporting purposes.
• Communicate findings clearly through presentations, visualizations, and written summaries.
• Partner with data engineers and architects to improve data pipelines and architecture.
• Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
• Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
• 5+ years of experience in a data analyst or business intelligence role.
• Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
• Hands-on experience in Power BI.
• Proficiency in Python, Excel, and data storytelling.
• Understanding of data modelling, ETL concepts, and basic data architecture.
• Strong analytical thinking and problem-solving skills.
• Excellent communication and stakeholder management skills.
• Adherence to Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader
▪ Able to manage and nurture a team of data engineers

Skills: data storytelling, SQL, Athena, Power BI, Excel, Python, AWS, Amazon Athena
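For illustration, a minimal sketch of the SQL-plus-Python workflow this role centers on: querying Amazon Athena into pandas for analysis. The pyathena client, staging bucket, and orders table are assumptions, not part of the posting.

```python
import pandas as pd
from pyathena import connect  # third-party Athena client; assumed available

# Connect to Athena; staging bucket and region are placeholders.
conn = connect(
    s3_staging_dir="s3://example-athena-results/",  # hypothetical bucket
    region_name="ap-south-1",
)

# Pull monthly revenue from a hypothetical 'orders' table into pandas.
sql = """
    SELECT date_trunc('month', order_date) AS month,
           SUM(amount) AS revenue
    FROM analytics.orders
    GROUP BY 1
    ORDER BY 1
"""
df = pd.read_sql(sql, conn)

# Quick trend check before handing the series to a Power BI dataset.
df["mom_growth"] = df["revenue"].pct_change()
print(df.tail())
```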
Posted 1 week ago
5.0 years
0 Lacs
Kerala, India
Remote
Hiring: Senior Data Analyst (5+ Years) – Remote/Kochi/Trivandrum/Bangalore/Chennai

📍 Location: Remote / Kochi / Trivandrum
💰 Budget: Up to 19 LPA
📆 Immediate Joiners Preferred

🚀 About the Role:
We’re looking for a Senior Data Analyst to join our Data & Analytics team! You’ll transform complex data into actionable insights, drive strategic decisions, and empower stakeholders with intuitive dashboards and reports. If you love digging into data, solving business problems, and communicating insights effectively, this role is for you!

🔧 Mandatory Key Skills (5+ years required):
✔ SQL (Advanced)
✔ Power BI (Dashboarding & Visualization)
✔ Python (Data Analysis)
✔ Amazon Athena (or similar cloud data tools)
✔ 5+ years in Data Analysis/Business Intelligence

Job Description / Duties & Responsibilities
• Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
• Analyze large datasets to uncover trends, patterns, and actionable insights.
• Design and build dashboards and reports using Power BI.
• Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
• Ensure data accuracy, consistency, and integrity through data validation and quality checks.
• Build and maintain SQL queries, views, and data models for reporting purposes.
• Communicate findings clearly through presentations, visualizations, and written summaries.
• Partner with data engineers and architects to improve data pipelines and architecture.
• Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
• Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
• 5+ years of experience in a data analyst or business intelligence role.
• Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
• Hands-on experience in Power BI.
• Proficiency in Python, Excel, and data storytelling.
• Understanding of data modelling, ETL concepts, and basic data architecture.
• Strong analytical thinking and problem-solving skills.
• Excellent communication and stakeholder management skills.
• Adherence to Information Security Management policies and procedures.

Soft Skills Required
• Must be a good team player with good communication skills
• Must have good presentation skills
• Must be a proactive problem solver and a self-driven leader
• Able to manage and nurture a team of data engineers
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Summary
Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy, or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring best-in-class medicines to patients around the world.

Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes.

Role Responsibilities
- Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes; provide guidance and lead or co-lead moderately complex projects.
- Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes.
- Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs.
- Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise.
- Collaborate effectively with contractors to deliver technical enhancements.
- Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment.
- Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency.
- Conduct root cause analysis and address production data issues.
- Lead the design, development, and implementation of AI models and algorithms that advance data analytics and supply chain initiatives.
- Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects.
- Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives.
- Document and present findings, methodologies, and project outcomes to various stakeholders.
- Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery.
- Work with large and complex datasets, including data cleaning, preprocessing, and feature selection.

Basic Qualifications
- A bachelor's or master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline.
- Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations.
- Over 2 years of experience in AI, machine learning, and large language model (LLM) development and deployment.
- Proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred.
- Strong understanding of data structures, algorithms, and software design principles.
- Programming Languages: Proficiency in Python and SQL, and familiarity with Java or Scala.
- AI and Automation: Knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect.
- Ability to use GenAI or agents to augment data engineering practices.

Preferred Qualifications
- Data Warehousing: Experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
- ETL Tools: Knowledge of ETL tools like Apache NiFi, Talend, or Informatica.
- Big Data Technologies: Familiarity with Hadoop, Spark, and Kafka for big data processing.
- Cloud Platforms: Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Containerization: Understanding of Docker and Kubernetes for containerization and orchestration.
- Data Integration: Skills in integrating data from various sources, including APIs, databases, and external files.
- Data Modeling: Understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune.
- Structured Data: Proficiency in handling structured data from relational databases, data warehouses, and spreadsheets.
- Unstructured Data: Experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch.
- Data Excellence: Familiarity with data excellence concepts, including data governance, data quality management, and data stewardship.

Non-standard Work Schedule, Travel Or Environment Requirements
Occasional travel required.

Work Location Assignment: Hybrid

The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary, and eligibility to participate in our share-based long-term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility.

Sunshine Act
Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.

EEO & Employment Eligibility
Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States.

Information & Business Tech
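As an illustration of the pipeline automation named in the basic qualifications, a minimal daily ETL DAG sketch using Apache Airflow 2.x. The DAG name and tasks are hypothetical, not Pfizer's actual pipelines.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull raw records from a source system (placeholder)."""

def transform():
    """Validate and reshape the extracted data (placeholder)."""

def load():
    """Write curated rows to the warehouse (placeholder)."""

with DAG(
    dag_id="supply_chain_etl",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the three stages in sequence once per day.
    t_extract >> t_transform >> t_load
```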
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary:
We are seeking a highly skilled and innovative Data Scientist to join our team and drive data-centric initiatives by leveraging AI/ML models, Big Data technologies, and cloud platforms like AWS. The ideal candidate will be proficient in Python, experienced in designing end-to-end machine learning pipelines, and comfortable working with large-scale data systems.

Key Responsibilities:
- Design, develop, and deploy machine learning models and AI-based solutions for business problems.
- Build robust ETL pipelines to process structured and unstructured data using tools like PySpark, Airflow, or Glue.
- Work with AWS cloud services (e.g., S3, Lambda, SageMaker, Redshift, EMR) to build scalable data science solutions.
- Perform exploratory data analysis (EDA) and statistical modeling to uncover actionable insights.
- Collaborate with data engineers, product managers, and stakeholders to identify use cases and deliver impactful data-driven solutions.
- Optimize model performance and ensure model explainability, fairness, and reproducibility.
- Maintain and improve existing data science solutions through MLOps practices (e.g., model monitoring, retraining, CI/CD for ML).

Required Skills and Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Statistics, Data Science, or a related field.
- 3+ years of experience in data science or machine learning roles.
- Strong programming skills in Python and experience with libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch.
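A minimal sketch of the model-building loop such a role involves, using scikit-learn. The churn dataset and column names are illustrative assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset with numeric features and a binary 'churned' label.
df = pd.read_csv("customers.csv")
X = df.drop(columns=["churned"])
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scale features and fit a baseline classifier in one reproducible pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```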
Posted 1 week ago
4.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are seeking a highly skilled Product Data Engineer with expertise in building, maintaining, and optimizing data pipelines using Python scripting. The ideal candidate will have experience working in a Linux environment, managing large-scale data ingestion, processing files in S3, and balancing disk space and warehouse storage efficiently. This role will be responsible for ensuring seamless data movement across systems while maintaining performance, scalability, and reliability.

Key Responsibilities:
- ETL Pipeline Development: Design, develop, and maintain efficient ETL workflows using Python to extract, transform, and load data into structured data warehouses.
- Data Pipeline Optimization: Monitor and optimize data pipeline performance, ensuring scalability and reliability in handling large data volumes.
- Linux Server Management: Work in a Linux-based environment, executing command-line operations, managing processes, and troubleshooting system performance issues.
- File Handling & Storage Management: Efficiently manage data files in Amazon S3, ensuring proper storage organization, retrieval, and archiving of data.
- Disk Space & Warehouse Balancing: Proactively monitor and manage disk space usage, preventing storage bottlenecks and ensuring warehouse efficiency.
- Error Handling & Logging: Implement robust error-handling mechanisms and logging systems to monitor data pipeline health.
- Automation & Scheduling: Automate ETL processes using cron jobs, Airflow, or other workflow orchestration tools.
- Data Quality & Validation: Ensure data integrity and consistency by implementing validation checks and reconciliation processes.
- Security & Compliance: Follow best practices in data security, access control, and compliance while handling sensitive data.
- Collaboration with Teams: Work closely with data engineers, analysts, and product teams to align data processing with business needs.

Skills Required:
- Proficiency in Python: Strong hands-on experience in writing Python scripts for ETL processes.
- Linux Expertise: Experience working with Linux servers, command-line operations, and system performance tuning.
- Cloud Storage Management: Hands-on experience with Amazon S3, including handling file storage, retrieval, and lifecycle policies.
- Data Pipeline Management: Experience with ETL frameworks, data pipeline automation, and workflow scheduling (e.g., Apache Airflow, Luigi, or Prefect).
- SQL & Database Handling: Strong SQL skills for data extraction, transformation, and loading into relational databases and data warehouses.
- Disk Space & Storage Optimization: Ability to manage disk space efficiently, balancing usage across different systems.
- Error Handling & Debugging: Strong problem-solving skills to troubleshoot ETL failures, debug logs, and resolve data inconsistencies.

Nice to Have:
- Experience with cloud data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Knowledge of message queues (Kafka, RabbitMQ) for data streaming.
- Familiarity with containerization tools (Docker, Kubernetes) for deployment.
- Exposure to infrastructure automation tools (Terraform, Ansible).

Qualifications:
- Bachelor’s degree in Computer Science, Data Engineering, or a related field.
- 4+ years of experience in ETL development, data pipeline management, or backend data engineering.
- Strong analytical mindset and ability to handle large-scale data processing efficiently.
- Ability to work independently in a fast-paced, product-driven environment.
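A short sketch of the S3 file handling and archiving described above, using boto3. Bucket names and prefixes are placeholders.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-product-data"  # hypothetical bucket

# List files under the raw prefix (first page of up to 1000 keys)
# and move each processed file to an archive prefix to keep raw lean.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="raw/")
for obj in resp.get("Contents", []):
    key = obj["Key"]
    s3.copy_object(
        Bucket=BUCKET,
        CopySource={"Bucket": BUCKET, "Key": key},
        Key=key.replace("raw/", "archive/", 1),
    )
    s3.delete_object(Bucket=BUCKET, Key=key)
    print(f"archived {key} ({obj['Size']} bytes)")
```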
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies.
- Develop streaming pipelines.
- Work with Hadoop / AWS ecosystem components (Apache Spark, Kafka, cloud computing, etc.) to implement scalable solutions that meet ever-increasing data volumes.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on Cloud Data Platforms on AWS.
- Experience with AWS EMR / AWS Glue / Databricks, Amazon Redshift, and DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical And Professional Experience
- Certification in AWS and Databricks, or Cloudera Spark Certified Developer.
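A minimal PySpark sketch of the ingest-and-transform work this role describes. The S3 paths and columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Read raw JSON events from a hypothetical landing path.
events = spark.read.json("s3://example-lake/raw/events/")

# Basic cleanup and a daily aggregate, written back as Parquet.
daily = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .count()
)
daily.write.mode("overwrite").parquet("s3://example-lake/curated/daily_events/")
```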
Posted 1 week ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Software Engineer (Backend) (SDE-1)

DViO is one of the largest independent, highly awarded, digital-first marketing companies with a team of 175+ people operating across India, the Middle East, and South East Asia. We are a full-service digital marketing agency with a focus on ROI-driven marketing.

We are looking for a Software Engineer (Backend) to join our team. The ideal candidate will have a strong background in software development and experience with backend technologies. We are looking for someone who is passionate about backend system design and is looking to grow in this field.

Responsibilities
You will be working with a team responsible for developing services for various applications, like marketing automation, campaign optimization, recommendation, and analytical systems. The candidate will work on developing backend services, including REST APIs, data processing pipelines, and database management.
- Develop backend services for various business use cases
- Write clean, maintainable code
- Collaborate with other team members
- Iterate on code based on feedback
- Work on bug fixes, refactoring, and performance improvements
- Track technology changes and keep our applications up to date

Requirements
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 0-1 year of experience in software development

Must-have skills:
- Proficient in either PHP, Python, or Node.js
- Experience with any backend MVC framework like Laravel, Rails, Express, Django, etc.
- Experience with any database like MySQL, PostgreSQL, MongoDB, etc.
- Experience with REST APIs, Docker, Bash, and Git

Good-to-have skills:
- Experience with WebSockets, Socket.io, etc.
- Experience with search technologies like Meilisearch, Typesense, Elasticsearch, etc.
- Experience with caching technologies like Redis, Memcached, etc.
- Experience with cloud platforms like AWS, GCP, Azure, etc.
- Experience with monolithic architecture
- Experience with data warehouses or data lakes like Snowflake, Amazon Redshift, Google BigQuery, Databricks, etc.

Benefits
DViO offers an innovative and challenging work environment with the opportunity to work on cutting-edge technologies. Join us and be a part of a dynamic team that is passionate about software development and builds applications that will shape the future of digital marketing.
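For flavor, a minimal Python REST endpoint of the kind this role involves. The posting names Django among other frameworks; FastAPI is used here purely as an illustrative assumption, and the campaign resource is hypothetical.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In-memory stand-in for a real database table of ad campaigns.
CAMPAIGNS = {1: {"name": "spring-sale", "budget": 50000}}

class Campaign(BaseModel):
    name: str
    budget: int

@app.get("/campaigns/{campaign_id}")
def get_campaign(campaign_id: int):
    if campaign_id not in CAMPAIGNS:
        raise HTTPException(status_code=404, detail="Campaign not found")
    return CAMPAIGNS[campaign_id]

@app.post("/campaigns", status_code=201)
def create_campaign(campaign: Campaign):
    new_id = max(CAMPAIGNS) + 1
    CAMPAIGNS[new_id] = campaign.dict()
    return {"id": new_id}
```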
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Role: Database Engineer
Location: Remote

Skills and Experience
● Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
● Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals.
● Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
● Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
● Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
● Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
● Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
● Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
● Knowledge of SQL and understanding of database design principles, normalization, and indexing.
● Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources.
● Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
● Eagerness to develop import workflows and scripts to automate data import processes.
● Knowledge of data security best practices, including access controls, encryption, and compliance standards.
● Strong problem-solving and analytical skills with attention to detail.
● Creative and critical thinking.
● Strong willingness to learn and expand knowledge in data engineering.
● Familiarity with Agile development methodologies is a plus.
● Experience with version control systems, such as Git, for collaborative development.
● Ability to thrive in a fast-paced environment with rapidly changing priorities.
● Ability to work collaboratively in a team environment.
● Good and effective communication skills.
● Comfortable with autonomy and ability to work independently.
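A small sketch of the import workflows this role calls for, using pandas and SQLAlchemy against a hypothetical PostgreSQL instance. The connection string, file, and table are placeholders.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; real credentials would come from a vault.
engine = create_engine("postgresql+psycopg2://etl_user:secret@localhost:5432/app")

# Extract: read an exported CSV. Transform: basic cleanup.
df = pd.read_csv("new_accounts.csv")
df["email"] = df["email"].str.strip().str.lower()
df = df.dropna(subset=["account_id"])

# Load: append the cleaned rows into a staging table.
df.to_sql("accounts_staging", engine, if_exists="append", index=False)
```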
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
About Qualitrol
Qualitrol is a leader in providing condition monitoring solutions for the electricity industry, ensuring reliability and efficiency in high-voltage electrical assets. We leverage cutting-edge technology, data analytics, and AI to transform how utilities manage their assets and make data-driven decisions.

Role Summary
We are looking for a highly skilled Senior Data Engineer to join our team and drive the development of our data engineering capabilities. This role involves designing, developing, and maintaining scalable data pipelines, optimizing data infrastructure, and ensuring high-quality data for analytics and AI-driven solutions. The ideal candidate will have deep expertise in data modeling, cloud-based data platforms, and best practices in data engineering.

Key Responsibilities
- Design, develop, and optimize scalable ETL/ELT pipelines for large-scale industrial data.
- Architect and maintain data warehouses, lakes, and streaming solutions to support analytics and AI-driven insights.
- Implement data governance, security, and quality best practices to ensure data integrity and compliance.
- Work closely with Data Scientists, AI Engineers, and Software Developers to build robust data solutions.
- Optimize data infrastructure performance for real-time and batch processing.
- Leverage cloud-based technologies (AWS, Azure, GCP) to develop and deploy scalable data solutions.
- Develop and maintain APIs and data access layers for seamless integration across platforms.
- Collaborate with cross-functional teams to define and implement data strategy and architecture.
- Stay up to date with emerging data engineering technologies and best practices.

Required Qualifications & Experience
- 5+ years of experience in data engineering, software development, or related fields.
- Proficiency in programming languages such as Python, Scala, or Java.
- Expertise in SQL and database technologies (PostgreSQL, MySQL, NoSQL, etc.).
- Hands-on experience with big data technologies (e.g., Spark, Kafka, Hadoop).
- Strong understanding of data warehousing (e.g., Snowflake, Redshift, BigQuery) and data lake architectures.
- Experience with cloud platforms (AWS, Azure, or GCP) and cloud-native data solutions.
- Knowledge of CI/CD pipelines, DevOps, and infrastructure as code (Terraform, Kubernetes, Docker).
- Familiarity with MLOps and AI-driven data workflows is a plus.
- Strong problem-solving skills, ability to work independently, and excellent communication skills.

Preferred Qualifications
- Experience in the electricity, utilities, or industrial sectors.
- Knowledge of IoT data ingestion and edge computing.
- Familiarity with GraphQL and RESTful API development.
- Experience in data visualization and business intelligence tools (Power BI, Tableau, etc.).
- Contributions to open-source data engineering projects.

What We Offer
- Competitive salary and performance-based incentives.
- Comprehensive benefits package, including health, dental, and retirement plans.
- Opportunities for career growth and professional development.
- A dynamic work environment focused on innovation and cutting-edge technology.
- Hybrid/remote work flexibility (depending on location and project needs).

How To Apply
Interested candidates should submit their resume and a cover letter detailing their experience and qualifications.

Fortive Corporation Overview
Fortive’s essential technology makes the world stronger, safer, and smarter. We accelerate transformation across a broad range of applications including environmental, health and safety compliance, industrial condition monitoring, next-generation product design, and healthcare safety solutions. We are a global industrial technology innovator with a startup spirit. Our forward-looking companies lead the way in software-powered workflow solutions, data-driven intelligence, AI-powered automation, and other disruptive technologies. We’re a force for progress, working alongside our customers and partners to solve challenges on a global scale, from workplace safety in the most demanding conditions to groundbreaking sustainability solutions. We are a diverse team 17,000 strong, united by a dynamic, inclusive culture and energized by limitless learning and growth. We use the proven Fortive Business System (FBS) to accelerate our positive impact. At Fortive, we believe in you. We believe in your potential—your ability to learn, grow, and make a difference. At Fortive, we believe in us. We believe in the power of people working together to solve problems no one could solve alone. At Fortive, we believe in growth. We’re honest about what’s working and what isn’t, and we never stop improving and innovating. Fortive: For you, for us, for growth.

QUALITROL manufactures monitoring and protection devices for high-value electrical assets and OEM manufacturing companies. Established in 1945, QUALITROL produces thousands of different types of products on demand, customized to meet our individual customers’ needs. We are the largest and most trusted global leader for partial discharge monitoring, asset protection equipment, and information products across power generation, transmission, and distribution. At Qualitrol, we are redefining condition-based monitoring.

We Are an Equal Opportunity Employer
Fortive Corporation and all Fortive Companies are proud to be equal opportunity employers. We value and encourage diversity and solicit applications from all qualified applicants without regard to race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity or expression, or other characteristics protected by law. Fortive and all Fortive Companies are also committed to providing reasonable accommodations for applicants with disabilities. Individuals who need a reasonable accommodation because of a disability for any part of the employment application process, please contact us at applyassistance@fortive.com.

Bonus or Equity
This position is also eligible for bonus as part of the total compensation package.
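A sketch of the streaming side of such a role, consuming condition-monitoring events with the kafka-python client. The topic, broker, and alert threshold are assumptions.

```python
import json

from kafka import KafkaConsumer  # kafka-python client; assumed available

consumer = KafkaConsumer(
    "sensor-readings",                    # hypothetical topic
    bootstrap_servers=["broker:9092"],    # placeholder broker
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

# Flag out-of-range asset temperatures as events arrive.
for msg in consumer:
    reading = msg.value
    if reading.get("temperature_c", 0) > 95:  # illustrative threshold
        print(f"ALERT asset={reading.get('asset_id')} "
              f"temp={reading['temperature_c']}C")
```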
Posted 1 week ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
Remote
AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards. If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!

WHAT YOU WILL DO
- Be part of a small team developing multi-cloud platform services;
- Build and maintain automation frameworks to execute developer-written tests in private and public cloud environments;
- Optimize code, ensure best coding practices are followed, and support the existing team in overcoming technical challenges;
- Monitor and support service providers using the app in the field.

MUST HAVES
- 5+ years of experience in web development in similar environments;
- Bachelor’s degree in Computer Science, Information Security, or a related technology field;
- Strong knowledge of Java 8 and 17, Spring, and Spring Boot;
- Experience with microservices and events;
- Great experience and passion for creating documentation for code and business processes;
- Expertise in architectural design and code review, with a strong grasp of SOLID principles;
- Skilled in gathering and analyzing complex requirements and business processes;
- Contribute to the development of our internal tools and reusable architecture;
- Experience creating optimized code and performance improvements for production systems and applications;
- Experience debugging, refactoring applications, and replicating scenarios to solve issues and understand the business;
- Familiarity with unit and system testing frameworks (e.g., JUnit, Mockito);
- Proficient in using Git;
- Dedicated: own the apps you and your team are developing and take quality very seriously;
- Problem solving: proactively solve problems before they can become real problems;
- Constantly upgrading your skill set and applying those practices;
- Upper-Intermediate English level.

NICE TO HAVES
- Experience with Test-Driven Development;
- Experience with logistics software (delivery, transportation, route planning), RSA domain;
- Experience with AWS services such as ECS, SNS, SQS, and Redshift.

THE BENEFITS OF JOINING US
- Remote work & local connection: Work where you feel most productive and connect with your team in periodic meet-ups to strengthen your network and connect with other top experts.
- Legal presence in India: We ensure full local compliance with a structured, secure work environment tailored to Indian regulations.
- Competitive compensation in INR: Fair compensation in INR with dedicated budgets for your personal growth, education, and wellness.
- Innovative projects: Leverage the latest tech and create cutting-edge solutions for world-recognized clients and the hottest startups.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines.

As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and be able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and at home in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, or Python.

Major Responsibilities
- Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business.
- Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms.
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation.
- Keep up to date with big data technologies; evaluate and make decisions around the use of new or existing software products to design the data architecture.
- Design, build, and own all the components of a high-volume data warehouse end to end.
- Provide end-to-end data engineering support for project lifecycle execution (design, execution, and risk assessment).
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources.
- Own the functional and nonfunctional scaling of software systems in your ownership area.
- Implement big data solutions for distributed computing.

Key job responsibilities
As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, drive the database design, and spearhead the best practices that deliver high-quality products.

About The Team
Profit intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what, we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI MAA 12 SEZ
Job ID: A3006789
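To illustrate the Redshift-centric ETL named in the preferred qualifications, a minimal load-and-check sketch using psycopg2 and Redshift's COPY command. The cluster endpoint, table, bucket, and IAM role are placeholders.

```python
import psycopg2

# Placeholder connection details; Redshift speaks the PostgreSQL protocol.
conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="REDACTED",
)

with conn, conn.cursor() as cur:
    # Bulk-load Parquet files from S3 into a hypothetical shipments table.
    cur.execute("""
        COPY shipments
        FROM 's3://example-lake/curated/shipments/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS PARQUET
    """)
    # Sanity check: confirm rows landed before downstream jobs run.
    cur.execute("SELECT COUNT(*) FROM shipments")
    print("rows loaded:", cur.fetchone()[0])
```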
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Lead Full Stack / Senior Full Stack Developer
Experience: 5+ years
Location: Noida (Work from Office)

Job Overview:
We are seeking a highly skilled candidate with expertise in NodeJS, NestJS, React.js, Next.js, MySQL, Redshift, NoSQL, System Design, and Architecture. The ideal candidate will have strong workflow design and implementation skills, experience with queueing, caching, scalability, microservices, and AWS, and team leadership experience to manage a team of 10 developers. Knowledge of React Native and automation testing would be an added advantage.

Key Responsibilities:
- Architect and develop scalable backend systems using NestJS with a focus on high performance.
- Lead a team of developers, ensuring adherence to best practices and Agile methodologies.
- Work with databases including MySQL and NoSQL to ensure data integrity and performance.
- Optimize system design for scalability, caching, and queueing mechanisms.
- Collaborate with the frontend team working on Next.js and ensure seamless integration.
- Ensure robust microservices architecture with proper API design and inter-service communication.
- Work in an Agile environment, driving sprints and standups, and ensuring timely delivery of projects.

Required Skills & Experience:
- Experience in software development, System Design, and System Architecture.
- Strong expertise in NodeJS, NestJS, React.js, Next.js, MySQL, Redshift, NoSQL, AWS, and Microservices Architecture.
- Expertise in queueing mechanisms, caching, scalability, and system performance optimization.
- Good to have: knowledge of React Native and Automation Testing.
- Strong leadership and management skills with experience in leading development teams.
- Proficiency in Agile methodologies and sprint planning.
- Excellent problem-solving skills and ability to work under pressure.

Qualifications:
- B.E / B.Tech / MCA or equivalent degree in IT/CS with good academic credentials.
Posted 1 week ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
Business Data Technologies (BDT) makes it easier for teams across Amazon to produce, store, catalog, secure, move, and analyze data at massive scale. Our managed solutions combine standard AWS tooling, open-source products, and custom services to free teams from worrying about the complexities of operating at Amazon scale. This lets BDT customers move beyond the engineering and operational burden associated with managing and scaling platforms, and instead focus on scaling the value they can glean from their data, both for their customers and their teams.

We own one of the largest data lakes at Amazon, where thousands of Amazon teams can search, share, and store exabytes (EB) of data in a secure and seamless way; using our solutions, teams around the world can schedule and process millions of workloads on a daily basis. We provide enterprise solutions that focus on compliance, security, integrity, and cost efficiency of operating and managing EBs of Amazon data.

Key job responsibilities
Core Responsibilities
- Be hands-on with ETL to build data pipelines to support automated reporting
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift
- Model data and metadata for ad-hoc and pre-built reporting
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
- Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Participate in strategic & tactical planning discussions

A day in the life
As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations, and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions. Some of the key activities include:
- Crafting the data flow: Design and build data pipelines, the backbone of our data ecosystem. Ensure the integrity of the data journey by implementing robust data quality checks and monitoring processes.
- Architecting for insights: Translate complex business requirements into efficient data models that optimize data analysis and reporting. Automate data processing tasks to streamline workflows and improve efficiency.
- Becoming a data detective, ensuring data availability and performance.

Basic Qualifications
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with any ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc.
- Knowledge of cloud services such as AWS or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3006419
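A small sketch of the data quality checks such a pipeline might run before publishing a dataset. The thresholds and column names are illustrative assumptions.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    if not df["record_id"].is_unique:
        failures.append("duplicate record_id values")
    null_rate = df["event_ts"].isna().mean()
    if null_rate > 0.01:  # allow at most 1% missing timestamps
        failures.append(f"event_ts null rate too high: {null_rate:.2%}")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    return failures

batch = pd.read_parquet("daily_batch.parquet")  # hypothetical input
problems = run_quality_checks(batch)
if problems:
    # Fail loudly so the batch is never published silently broken.
    raise ValueError("; ".join(problems))
```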
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

The role of the Sub Same Day business is to provide ultrafast speeds (2-hour and same-day scheduled) and reliable delivery for the selection that customers need, fast. Customers find their daily essentials and a curated selection of Amazon’s top-selling items with sub-same-day promises. The program is highly cross-functional in nature, operations intensive, and requires a number of India-first solutions to be created, which then need to be scaled worldwide. In this context, SSD is looking for a talented, driven, and experienced Business Analyst. It is a pivotal role that will contribute to the evolution and success of one of the fastest growing businesses in the company. Joining the Amazon team means partnering with a dynamic and creative group who set a high bar for innovation and success in a fast-paced and changing environment.

The Business Analyst is responsible for influencing critical business decisions using data and providing insight that category teams can act on. The successful candidate needs to have:
- A passion for numbers, data, and challenges.
- High attention to detail and proven ability to manage multiple, competing priorities simultaneously.
- Excellent verbal and written communication skills.
- An ability to work in a fast-paced, complex environment where continuous innovation is desired.
- Bias for action and ownership.
- A history of teamwork and willingness to roll up one’s sleeves to get the job done.
- Ability to work with diverse teams and people across levels in an organization.
- Proven analytical and quantitative skills (including the ability to effectively use tools such as Excel and SQL) and an ability to use hard data and metrics to back up assumptions and justify business decisions.

Key job responsibilities
- Influence business decisions with data.
- Use data resources to accomplish assigned analytical tasks relating to critical business metrics.
- Monitor key metrics and escalate anomalies as needed.
- Provide input on suggested business actions based on analytical findings.

- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

At Amazon, we're working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people. We are looking for a Business Intelligence Engineer in Amazon Grocery Data and Tech. As a Business Intelligence Engineer, you will be responsible for executing rapidly evolving, complex demands and for the design, development, testing, deployment, and operations of multiple analytical solutions. You will be responsible for delivering some of our most strategic analytics initiatives. You will create insights that support all of Amazon’s grocery businesses and have a significant impact on Amazon’s top line and competitive position. A successful candidate will bring deep technical expertise, strong business acumen and judgment, the ability to define groundbreaking products, the desire to have an industry-wide impact, and the ability to work within a fast-moving environment in a large company to rapidly deliver services that have a broad business impact. They will define metrics to measure business performance and drive efficiency improvements.

About the team
Amazon Grocery Data and Technology (GDT) is the central data team for Amazon Grocery, consisting of engineering teams (BIEs/DEs/SDEs). We are the direct owner of (a) infrastructure and services used for reading, processing, reporting, and enrichment of grocery data; (b) productivity tools; and (c) reporting solutions for worldwide 1P and 3P customers across the following businesses: Amazon Fresh Grocery (AFG), Whole Foods Market (WFM), Amazon Go, Amazon Grocery Partners, etc.

- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Testing Engineer Exp: 8+ years Location: Hyderabad and Gurgaon (Hybrid) Notice Period: Immediate to 15 days Job Description: ● Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse. ● Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules. ● Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues. ● Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts. ● Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability. ● Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt). ● Experience with scripting languages to automate data testing. ● Familiarity with data visualization tools like Tableau, Power BI, or Looker. Show more Show less
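To make the source-to-target validation duty above concrete, here is a minimal, hypothetical sketch of one such automated check. Real systems would compare a warehouse against production databases; two in-memory SQLite databases stand in for them here, and the table name is a placeholder.

```python
# Hypothetical sketch of automated source-to-target row-count validation.
# In-memory SQLite databases stand in for real source/target systems.
import sqlite3

def validate_row_counts(source_conn, target_conn, table: str) -> bool:
    """Compare row counts for one table between source and target."""
    src = source_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    tgt = target_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    if src != tgt:
        print(f"MISMATCH on {table}: source={src}, target={tgt}")
    return src == tgt

# Demo: seed both stand-in databases with the same three rows.
source, target = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for conn in (source, target):
    conn.execute("CREATE TABLE orders (id INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(3)])
    conn.commit()

assert validate_row_counts(source, target, "orders")
```

In practice, checks like this would run per table after every ETL load, extended with checksum and null-rate comparisons rather than counts alone.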
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Fully remote possible Full Time 1 May 2025 Title Senior Data Engineer Job Description A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making. Key Responsibilities Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository. Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions. Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency. Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects. Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability. Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements. Experience & Skills Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.). Strong programming skills in SQL, Java, and Python. Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS or API-based extraction. Expertise in real-time data processing frameworks. Strong understanding of Git and CI/CD for automated deployment and version control. Experience with Infrastructure-as-Code tools like Terraform for cloud resource management. Good stakeholder management skills to collaborate effectively across teams. Solid understanding of SAP ERP data and processes to integrate enterprise data sources. Exposure to data visualization and front-end tools (Tableau, Looker, etc.). Strong command of English with excellent communication skills. Show more Show less
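As a minimal, hypothetical sketch of the Airflow pipeline configuration this role involves (the DAG id, schedule, and task bodies are placeholders, assuming Airflow 2.x):

```python
# Hypothetical Airflow 2.x DAG sketch; dag_id, schedule, and task bodies
# are placeholders for a real extract-transform-load pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extract from source systems")  # placeholder task logic

def load():
    print("load into the warehouse")  # placeholder task logic

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # `schedule_interval` on older Airflow versions
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # extract must finish before load starts
```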
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Fully remote possible Full Time 1 May 2025 Title Data Engineer Job Description A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making. Key Responsibilities Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository. Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions. Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency. Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects. Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability. Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements. Experience & Skills Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.). Strong programming skills in SQL, Java, and Python. Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS or API-based extraction. Expertise in real-time data processing frameworks. Strong understanding of Git and CI/CD for automated deployment and version control. Experience with Infrastructure-as-Code tools like Terraform for cloud resource management. Good stakeholder management skills to collaborate effectively across teams. Solid understanding of SAP ERP data and processes to integrate enterprise data sources. Exposure to data visualization and front-end tools (Tableau, Looker, etc.). Strong command of English with excellent communication skills. Show more Show less
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Roles And Responsibilities Proficiency in building highly scalable ETL and streaming-based data pipelines using Google Cloud Platform (GCP) services and products like BigQuery and Cloud Dataflow Proficiency in large-scale data platforms and data processing systems such as Google BigQuery, Amazon Redshift, Azure Data Lake Excellent Python, PySpark and SQL development and debugging skills; exposure to other Big Data frameworks like Hadoop and Hive would be an added advantage Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g. RabbitMQ and Pub/Sub) Secondary Skills: Cloud Bigtable, AI/ML solutions, Compute Engine, Cloud Data Fusion (ref:hirist.tech) Show more Show less
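For illustration, querying BigQuery from Python looks roughly like the sketch below. It assumes the google-cloud-bigquery package and application-default credentials; the query runs against a real public dataset, but treat the whole setup as a hypothetical example rather than this employer's stack.

```python
# Hypothetical sketch of running a BigQuery query from Python.
# Assumes `pip install google-cloud-bigquery` and default GCP credentials.
from google.cloud import bigquery

client = bigquery.Client()  # picks up the default project/credentials
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)
```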
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Hybrid Full Time 30 June 2025 Location: Remote Working Internal Customers: Regional Quality Teams Team: QM System & Processes - Analytics & Digitalization. Job Description: Sr. Data Analyst Job Summary We are seeking a skilled Data Analyst to join our team, focusing on building and maintaining scalable, high-quality data warehouses and databases. This individual will be responsible for developing ETL processes, optimizing data pipelines, and transforming raw data from multiple sources into structured formats for analysis and reporting. The Data Analyst will also contribute to driving analytics and digitalization tools across various regions and provide support for specific data requests. What will you do? Design and Build Data Warehouses: Architect and develop scalable, robust, and efficient data warehouses that support business intelligence and data analytics needs. Develop and Optimize ETL Processes: Design, develop, and optimize ETL (Extract, Transform, Load) processes and data pipelines to ensure seamless, efficient, and timely data flow between systems, enabling high-quality insights and reporting. Data Transformation: Transform raw data from multiple channels into structured, clean formats suitable for analysis, reporting, and decision-making. Ensure data integrity and consistency throughout the transformation process. Analytics & Digitalization Tools Development: Further develop and drive the adoption of analytics and digitalization tools to support data-driven decision-making and operational improvements across all regions. Work to optimize and scale these tools as business needs evolve. Support Regional Data Requests: Act as a key point of contact for supporting specific quality data deep-dive requests from regional teams. Analyze and respond to their unique data needs, offering insights and actionable recommendations. Continuous Improvement: Work to continuously improve data quality, performance, and efficiency within the data ecosystem, implementing new tools or strategies to optimize overall data management and analysis. What are we looking for? Bachelor’s degree in Data Science, Computer Science or a related field. Proven experience in data analysis, data engineering, or similar roles with a focus on building and maintaining data warehouses. Hands-on experience with ETL processes and data pipeline development. Strong background in data transformation and structuring raw data for reporting and analysis. Experience in developing or supporting analytics and digitalization tools across multiple regions or business units. Experience with data visualization tools (e.g., Tableau, Power BI) and proficiency in SQL. Familiarity with data modelling, database optimization, and data management best practices. Good To Have Familiarity with cloud platforms (AWS, GCP, Azure) and related data services. Experience with data warehousing solutions like Amazon Redshift, Snowflake, or Google BigQuery. Show more Show less
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
India
On-site
Description GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com. We believe that innovative technology starts with the best talent and have been ranked one of Ad Age’s Best Places to Work in 2021, 2022, 2023 & 2025! Learn more about the perks of joining our team here. About Team GroundTruth seeks an Associate Software Engineer to join our Reporting team. The Reporting Team at GroundTruth is responsible for designing, building, and maintaining data pipelines and dashboards that deliver actionable insights. We ensure accurate and timely reporting to drive data-driven decisions for advertisers and publishers. We take pride in building an Engineering Team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised and detail-oriented team players who are problem solvers with a maker mindset. As an Associate Software Engineer (ASE) on our Integration Team, you will build solutions that add new capabilities to our platform. You Will Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies. Lead engineering efforts across multiple software components. Write excellent production code and tests, and help others improve in code reviews. Analyse high-level requirements to design, document, estimate, and build systems. Continuously improve the team's practices in code quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes. You Have B.Tech./B.E./M.Tech./MCA or equivalent in computer science 0-3 years of experience in Data Engineering Experience with the AWS stack used for data engineering: EC2, S3, Athena, Redshift, EMR, ECS, Lambda, and Step Functions Experience in MapReduce, Spark, and Glue Hands-on experience with Java/Python for the orchestration of data pipelines and data engineering tasks Experience in writing analytical queries using SQL Experience in Airflow Experience in Docker Proficient in Git How can you impress us? Knowledge of REST APIs The following skills/certifications: Python, SQL/MySQL, AWS, Git Additional nice-to-have skills/certifications: Flask, FastAPI Knowledge of shell scripting. Experience with BI tools like Looker. Experience with DB maintenance Experience with Amazon Web Services and Docker Configuration management and QA practices Benefits At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love. Parental leave - Maternity and Paternity Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday Leave, Bereavement Leave & Company Holidays) In-Office Daily Catered Breakfast, Lunch, Snacks and Beverages Health cover for any hospitalization.
Covers both nuclear family and parents Tele-med for free doctor consultation, discounts on health checkups and medicines Wellness/Gym Reimbursement Pet Expense Reimbursement Childcare Expenses and reimbursements Employee referral program Education reimbursement program Skill development program Cell phone reimbursement (Mobile Subsidy program). Internet reimbursement/Postpaid cell phone bill/or both. Birthday treat reimbursement Employee Provident Fund Scheme offering different tax saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% Basic Creche reimbursement Co-working space reimbursement National Pension System employer match Meal card for tax benefit Special benefits on salary account Show more Show less
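The Athena-from-Python pattern implied by the AWS stack in the posting above typically means submitting a query and polling for completion. Below is a hedged sketch using boto3; the database, table, region, and S3 output location are all placeholders.

```python
# Hypothetical sketch: run an Athena query from Python with boto3 and poll
# until it finishes. Database, table, and S3 output path are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM visits GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)["QueryExecutionId"]

# Athena queries run asynchronously, so poll the execution state.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(f"fetched {len(rows)} rows (first row is the header)")
```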
Posted 1 week ago
10.0 years
0 Lacs
India
Remote
Role: Senior Azure / Data Engineer (ETL / data warehouse background) Location: Remote, India Duration: Long-term contract Need: 10+ years of experience Must-have skills: • Min 5 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms. • 10+ years of proven experience with SQL, schema design, and dimensional data modeling • Solid knowledge of data warehouse best practices, development standards, and methodologies • Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc. • Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. • Be an independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced and dynamic environment. • Excellent communication and teamwork abilities. Nice-to-have skills: • Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge. • SAP ECC / S/4HANA and HANA knowledge. • Intermediate knowledge of Power BI • Azure DevOps and CI/CD deployments, cloud migration methodologies and processes Show more Show less
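The PySpark and Spark SQL work this role asks for usually amounts to reading raw data, transforming it, and writing curated output. Here is a minimal, hypothetical sketch; the paths, column names, and aggregation are placeholders, not this employer's pipeline.

```python
# Hypothetical PySpark sketch of a simple batch transformation; all paths
# and column names are placeholders for a real lakehouse layout.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

orders = spark.read.parquet("/mnt/raw/orders")  # placeholder input path
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))  # timestamp -> date
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)
# Write the curated aggregate back out for downstream reporting.
daily.write.mode("overwrite").parquet("/mnt/curated/daily_revenue")
```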
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Delhi, India
On-site
What is Findem: Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time - making an individual’s entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem’s automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai Experience - 5-9 years We are looking for an experienced Big Data Engineer who will be responsible for building, deploying and managing various data pipelines, data lake and Big Data processing solutions using Big Data and ETL technologies. Location - Delhi, India Hybrid - 3 days onsite Responsibilities Build data pipelines, Big Data processing solutions and data lake infrastructure using various Big Data and ETL technologies Assemble and process large, complex data sets that meet functional and non-functional business requirements ETL from a wide variety of sources like MongoDB, S3, server-to-server, Kafka, etc., and processing using SQL and big data technologies Build analytical tools to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics Build interactive and ad-hoc query self-serve tools for analytics use cases Build data models and data schemas with performance, scalability and functional requirements in mind Build processes supporting data transformation, metadata, dependency and workflow management Research, experiment with and prototype new tools/technologies and make them successful Skill Requirements Must have: Strong Python/Scala skills Must have experience in Big Data technologies like Spark, Hadoop, Athena/Presto, Redshift, Kafka, etc. Experience in various file formats like Parquet, JSON, Avro, ORC, etc. Experience in workflow management tools like Airflow Experience with batch processing, streaming and message queues Experience with visualization tools like Redash, Tableau, Kibana, etc. Experience in working with structured and unstructured data sets Strong problem-solving skills Good to have: Exposure to NoSQL stores like MongoDB Exposure to cloud platforms like AWS, GCP, etc. Exposure to microservices architecture Exposure to machine learning techniques The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area with our India headquarters in Bengaluru. Equal Opportunity As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic. Show more Show less
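The streaming-and-message-queue experience listed above often means reading from Kafka. As a hedged illustration (using the open-source kafka-python package; the broker address and topic name are placeholders):

```python
# Hypothetical sketch of consuming a Kafka stream with kafka-python;
# the broker address and topic name are placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                           # placeholder topic
    bootstrap_servers="localhost:9092", # placeholder broker
    auto_offset_reset="earliest",       # start from the oldest message
    value_deserializer=lambda b: b.decode("utf-8"),
)
for message in consumer:
    print(message.offset, message.value)
```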
Posted 1 week ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, the data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find opportunities across a wide range of industries in the country.
The average salary range for Redshift professionals in India varies with experience and location. Entry-level positions typically pay in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as:
- Junior Developer
- Data Engineer
- Senior Data Engineer
- Tech Lead
- Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial (see the sketch after this list):
- SQL
- ETL Tools
- Data Modeling
- Cloud Computing (AWS)
- Python/R Programming
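To see how the SQL and Python skills above come together in day-to-day Redshift work, here is a minimal, hypothetical sketch using the open-source redshift_connector package. Every connection detail and the query are placeholders, and the password is read from the environment rather than hard-coded.

```python
# Hypothetical sketch of pulling data from Redshift in Python; the cluster
# endpoint, database, user, table, and query are all placeholders.
import os
import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    database="dev",
    user="analyst",
    password=os.environ["REDSHIFT_PASSWORD"],  # keep credentials out of code
)
cursor = conn.cursor()
cursor.execute("SELECT order_date, SUM(amount) FROM sales GROUP BY order_date")
for order_date, total in cursor.fetchall():
    print(order_date, total)
conn.close()
```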
As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!