8.0 years
30 - 38 Lacs
Gurgaon
Remote
Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description:
We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and using AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize end-to-end pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or data lakes (a brief illustrative sketch follows this posting).
- Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS using services such as S3, Redshift, Lambda, Glue, and Kinesis.
- Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
- Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
- Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
- Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
- Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies so that compliance and security standards are met.
- Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications:

Essential Skills:
- Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
- AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
- ETL Processes: Strong understanding of ETL concepts, tools, and frameworks; experience with data integration, cleansing, and transformation.
- Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
- Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions such as AWS Redshift, Snowflake, or similar platforms.
- Data Modeling: Experience designing data models, schemas, and data architecture for analytical systems.
- Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Problem-Solving: Strong troubleshooting skills, with the ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
- Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
- Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
- Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
- Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
- Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
- Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience:
- AWS Glue Catalog: 5 years (Required)
- Data Engineering: 6 years (Required)
- AWS CDK, CloudFormation, Lambda, Step Functions: 3 years (Required)
- AWS Elastic MapReduce (EMR): 3 years (Required)
Work Location: In person
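To make the ETL responsibilities above concrete, here is a minimal PySpark sketch of the kind of S3 extract-transform-load job such a role typically involves. It is an illustration only: the bucket paths and column names are placeholders, not details from this posting.

```python
# Minimal PySpark ETL sketch: extract raw CSV files from S3, apply basic
# transformations, and load partitioned Parquet into a curated zone.
# Bucket names, paths, and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw order events landed in S3
orders = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: type casting, basic cleansing, and a derived partition column
cleaned = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["order_id", "order_ts"])
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for downstream warehouse or lake consumers
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```

Partitioning the curated output by date is one common way the posting's performance-tuning responsibility shows up in practice, since it keeps downstream queries over services such as Athena or Redshift Spectrum scanning only the partitions they need.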
Posted 2 weeks ago
3.0 years
7 - 9 Lacs
Gurgaon
On-site
Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
As a Data Engineer, you will play a crucial role in designing, building, and maintaining the data infrastructure and systems required for efficient and reliable data processing. You will collaborate with cross-functional teams, including data scientists and analysts, to ensure the availability, integrity, and accessibility of data for various business needs. This role requires a strong understanding of data management principles, database technologies, data integration, and data warehousing concepts.

Key Responsibilities
- Develop and maintain data warehouse solutions, including data modeling, schema design, and indexing strategies
- Optimize data processing workflows for improved performance, reliability, and scalability
- Identify and integrate diverse data sources, both internal and external, into a centralized data platform
- Implement and manage data lakes, data marts, or other storage solutions as required
- Ensure data privacy and compliance with relevant data protection regulations
- Define and implement data governance policies, standards, and best practices
- Transform raw data into usable formats for analytics, reporting, and machine learning purposes
- Perform data cleansing, normalization, aggregation, and enrichment operations to enhance data quality and usability
- Collaborate with data analysts and data scientists to understand data requirements and implement appropriate data transformations

What You'll Bring
- Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field
- Proficiency in SQL and experience with relational databases (e.g., Snowflake, MySQL, PostgreSQL, Oracle)
- 3+ years of experience in data engineering or a similar role
- Hands-on programming skills in languages such as Python or Java are a plus
- Familiarity with cloud-based data platforms (e.g., AWS, Azure, GCP) and related services (e.g., S3, Redshift, BigQuery) is good to have
- Knowledge of data modeling and database design principles
- Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus
- Strong problem-solving and analytical skills with attention to detail
- Experience with HR data analysis and HR domain knowledge is preferred

Who You'll Work With
As part of the People Analytics team, you will modernize HR platforms, capabilities, and engagement, automate and digitize core HR processes and operations, and enable greater efficiency. You will collaborate with the global people team and colleagues across BCG to manage the life cycle of all BCG employees.

The People Management Team (PMT) comprises several centers of expertise, including HR Operations, People Analytics, Career Development, Learning & Development, Talent Acquisition & Branding, Compensation, and Mobility. Our centers of expertise work together to build out new teams and capabilities by sourcing, acquiring, and retaining the best, diverse talent for BCG's Global Services Business. We develop talent and capabilities while enhancing managers' effectiveness and building affiliation and engagement in our new global offices. The PMT also harmonizes process efficiencies, automation, and global standardization. Through analytics and digitalization, we are always looking to expand our PMT capabilities and coverage.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
Posted 2 weeks ago
0 years
4 - 6 Lacs
Gurgaon
On-site
About Us
KlearNow.AI is on a mission to futurize global trade. Our patented AI and machine learning platform digitizes and contextualizes unstructured trade documents to unlock real-time shipment visibility, drive smart analytics, and provide critical business intelligence, without the hassle of complex integrations. We empower supply chains to move faster, work smarter, and make data-driven decisions with confidence. With operations in the U.S., Canada, U.K., Spain, and the Netherlands, and aggressive growth plans underway, we are scaling a global platform for the future of logistics. We achieve our goals by assembling a team of the best talent. As we expand, it is crucial to maintain and strengthen our culture, which places a high value on our people and teams. Our collective growth and triumphs are intrinsically linked to the success and well-being of every team member.

OUR VISION
To futurize global trade, empowering people and optimizing processes with AI-powered clarity.

YOUR MISSION
We're building a team of bold thinkers, problem solvers, and storytellers. As part of our high-energy, inclusive workplace, you'll challenge the status quo of traditional supply chains and help shape a more transparent, intelligent, and efficient world of trade. Whether you're a product innovator, logistics expert, or marketing storyteller, your work at KlearNow.AI will make a measurable impact.

Why KlearNow.AI
- Global Impact: Be part of a platform live in five countries and expanding rapidly.
- Fast-Growing SaaS Company: Work in an agile environment with enterprise backing.
- Cutting-Edge Tech: AI-powered customs clearance, freight visibility, document automation, and drayage intelligence, all in one.
- People-First Culture: We invest in our team's growth and well-being.
- Make Your Mark: Shape the future of trade with your ideas and energy.

About the Platform
KlearNow.AI digitizes and contextualizes unstructured trade documents to create shipment visibility, business intelligence, and advanced analytics for supply chain stakeholders. It provides unparalleled transparency and insights, empowering businesses to operate efficiently. We futurize supply chains with AI/ML-powered collaborative digital platforms created by ingesting the required trade documentation, without the pain of complex integrations. As part of a diverse, high-energy workplace, you will challenge the status quo of supply chain operations with your knack for engaging clients and sharing great stories. KlearNow is operational and a certified Customs Business provider in the US, Canada, UK, Spain, and the Netherlands, with plans to grow into many more markets in the near future.

Business Analyst - Data Science & Business Intelligence
Location: India
Employment Type: Full-time

The Role:
Join our Data & Analytics team as a Business Analyst, where you'll transform data from our modern data warehouse into actionable business insights and strategic recommendations. You'll work with advanced analytics tools and techniques to create compelling reports, dashboards, and predictive models that drive data-driven decision making across the organization.

Key Responsibilities:
- Analyze data from cloud data warehouses (such as Amazon Redshift) to identify business trends and opportunities (see the short example after this posting)
- Create interactive dashboards and reports using Business Intelligence platforms (such as ThoughtSpot or Power BI)
- Develop statistical models and perform predictive analytics using tools such as Python or R
- Collaborate with stakeholders to understand business requirements and translate them into analytical solutions
- Design and implement KPIs, metrics, and performance indicators for various business functions
- Conduct ad-hoc analysis to support strategic business decisions and initiatives
- Present findings and recommendations to leadership through compelling data visualizations
- Monitor and troubleshoot existing reports and dashboards to ensure accuracy and performance
- Ensure data quality and consistency in all analytical outputs and reporting
- Support business teams with self-service analytics training and best practices

Required Qualifications:
- Strong analytical and problem-solving skills with business acumen
- Experience with Business Intelligence tools and dashboard creation
- Proficiency in data analysis using programming languages (such as Python or R) or advanced Excel
- Experience querying cloud data warehouses and relational databases
- Strong data visualization and storytelling capabilities
- Experience with statistical analysis and basic predictive modeling

Preferred Qualifications:
- Experience with advanced BI platforms (such as ThoughtSpot) is a significant advantage
- Machine learning and advanced statistical modeling experience
- Experience with modern analytics tools and frameworks
- Advanced data visualization and presentation skills
- Experience with business process optimization and data-driven strategy

Join our vibrant and forward-thinking team at KlearNow.AI as we continue to push the boundaries of AI/ML technology. We offer a competitive salary, flexible work arrangements, and ample opportunities for professional growth. We are committed to diversity, equality, and inclusion. If you are passionate about shaping the future of logistics and supply chain and making a difference, we invite you to apply.
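As a rough illustration of the warehouse-analysis work described above, the sketch below pulls a weekly KPI from a Redshift cluster into pandas. The cluster endpoint, credentials, and the shipments table are hypothetical, and the same query could equally back a ThoughtSpot or Power BI dashboard.

```python
# Illustrative sketch only: querying a Redshift warehouse for a weekly KPI and
# loading the result into pandas for ad-hoc analysis or dashboard prototyping.
# Connection details and the shipments table/columns are hypothetical.
import pandas as pd
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analyst",
    password="***",
)

kpi_sql = """
    SELECT date_trunc('week', shipped_at) AS week,
           COUNT(*)                       AS shipments,
           AVG(transit_days)              AS avg_transit_days
    FROM shipments
    WHERE shipped_at >= dateadd(month, -3, current_date)
    GROUP BY 1
    ORDER BY 1;
"""

weekly_kpis = pd.read_sql(kpi_sql, conn)  # works with a DBAPI connection (pandas may warn)
print(weekly_kpis.head())
conn.close()
```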
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview
As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics such as revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
- Be an active contributor to code development in projects and services.
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products (a short data-quality sketch follows this posting).
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
- Implement best practices around systems integration, security, performance, and data management.
- Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape.
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
- Develop and optimize procedures to productionalize data science models.
- Define and manage SLAs for data products and processes running in production.
- Support large-scale experimentation done by data scientists.
- Prototype new approaches and build solutions at scale.
- Research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer.
- Create and audit reusable packages or libraries.

Qualifications
- 6+ years of overall technology experience, including at least 4 years of hands-on software development, data engineering, and systems architecture.
- 4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages such as Python, PySpark, and Scala.
- 2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services. Azure certification is a plus.
- Experience integrating multi-cloud services with on-premises technologies.
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience with data profiling and data quality tools such as Apache Griffin, Deequ, and Great Expectations.
- Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
- Experience running and scaling applications on cloud infrastructure and containerized services such as Kubernetes.
- Experience with version control systems such as GitHub and with deployment and CI tools.
- Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
- Experience with statistical/ML techniques is a plus.
- Experience building solutions in the retail or supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
- BA/BS in Computer Science, Math, Physics, or another technical field.
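The data-quality responsibilities listed above are often implemented with libraries such as Deequ or Great Expectations; the sketch below shows the same idea hand-rolled in plain PySpark. The dataset path and the specific column rules are assumptions for illustration, not PepsiCo specifics.

```python
# A minimal sketch of pipeline-level data quality checks in PySpark, in the spirit
# of tools like Deequ or Great Expectations but written by hand for illustration.
# The dataset path and column rules are invented for the example.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("/mnt/datalake/curated/shipments/")

checks = {
    "non_empty": df.count() > 0,
    "no_null_keys": df.filter(F.col("shipment_id").isNull()).count() == 0,
    "no_negative_quantities": df.filter(F.col("quantity") < 0).count() == 0,
    "has_event_dates": df.agg(F.max("event_date")).first()[0] is not None,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In a real pipeline this would raise an alert through the monitoring framework
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```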
Posted 2 weeks ago
0 years
0 Lacs
Ahmedabad
On-site
ROLES & RESPONSIBILITIES:
- Work with business and IT partners to understand the business and data requirements.
- Acquire data from primary or secondary data sources and perform data profiling (a short example follows below).
- Interpret data, analyze results, and provide quick data analysis.
- Identify, analyze, and interpret trends or patterns in complex datasets.
- Build bus matrices, proofs of concept, source mapping documents, and the raw source data model.
- Locate and define new process improvement opportunities.

MANDATORY SKILLS:
- Strong knowledge of and experience with Excel, databases (Redshift, Oracle, etc.), and programming languages (Python, R).
- Ability to write SQL queries to perform data profiling and data analysis, and to present the insights to the business.
- Exposure to data warehousing and data modelling concepts.
- Strong exposure to the IT project lifecycle.
- Finance/Life Science domain experience.
- BI tool knowledge is an added advantage.
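A quick, hedged example of the data-profiling step mentioned above, using pandas; the source file and columns are placeholders rather than anything specific to this role.

```python
# First-pass data profiling with pandas: row counts, null rates, distinct values,
# and basic distributions for numeric columns. The input file is a placeholder.
import pandas as pd

df = pd.read_csv("claims_extract.csv")

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "non_null": df.notna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
print(profile)

# Basic distribution statistics for the numeric columns only
print(df.select_dtypes(include="number").describe().T)
```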
Posted 2 weeks ago
7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Title: Manager/Sr Manager - ETL - PySpark
Requisition ID:
Job Location: Pune

Job Summary:
This role will be responsible for developing and maintaining data models to support data warehouse and reporting requirements. It requires a strong background in data engineering, excellent leadership capabilities, and the ability to drive projects to successful completion.

Job Responsibilities:
- Working experience in building data lake and DWH architecture using the Databricks platform.
- Engage with the client to participate in requirement gathering, status updates, and UAT, and be the key partner in the overall engagement.
- Participate in ETL design, using any Python framework, for new or changing mappings and workflows, and prepare technical specifications with the team.
- Craft ETL mappings, mapplets, workflows, and worklets using Informatica PowerCenter.
- Write complex SQL queries with performance tuning and optimization.
- Handle tasks independently and lead the team.
- Take responsibility for unit testing, integration testing, and UAT as and when required.
- Coordinate with cross-functional teams to ensure project objectives are met.
- Collaborate with data architects and engineers to design and implement data models.
- Manage projects in a fast-paced agile ecosystem and ensure quality deliverables within stringent timelines.
- Take responsibility for risk management, maintaining the risk documentation and mitigation plans.
- Drive continuous improvement in a Lean/Agile environment, implementing DevOps delivery approaches encompassing CI/CD, build automation, and deployments.
- Communication & Logical Thinking: Demonstrate strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment; effectively present and defend team viewpoints while securing buy-in from both technical and client stakeholders.
- Handle Client Relationship: Manage client relationships and client expectations independently, deliver results back to the client independently, and communicate with excellent clarity.

Job Requirements:
- 7+ years of working experience in ETL and data warehousing.
- Advanced knowledge of PySpark/Python and the pandas and NumPy frameworks.
- Minimum 4 years of extensive experience in the design, build, and deployment of Spark/PySpark for data integration.
- Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations (see the illustrative sketch after this posting).
- Create Spark jobs for data transformation and aggregation.
- Spark query tuning and performance optimization; good understanding of different file formats (ORC, Parquet, Avro) and compression techniques to optimize queries and processing.
- Deep understanding of distributed systems (e.g., the CAP theorem, partitioning, replication, consistency, and consensus).
- Experience in modular programming and robust programming methodologies.
- ETL knowledge, with ETL development done using a PySpark/Python framework.
- Advanced SQL knowledge.
- Ability to perform multiple tasks in a continually changing environment.
- Prior work with Redshift, Synapse, or Snowflake is preferable.
- Good understanding of and experience with SDLC phases such as requirements specification, analysis, design, implementation, testing, deployment, and maintenance.

Qualification: BE/B.Tech/M.Tech/MBA

Must-have Skills:
- Expertise in the pharma commercial domain
- Proficiency in ETL using PySpark
- Strong experience in data warehousing

Skills that give you an edge:
- Experience in AWS or Azure cloud and its service offerings
- Excellent interpersonal/communication skills (both oral and written), with the ability to communicate at various levels with clarity and precision

We will provide (Employee Value Proposition):
- An inclusive environment that encourages diverse perspectives and ideas
- Challenging and unique opportunities to contribute to the success of a transforming organization
- Opportunity to work on technical challenges that may have impact across geographies
- Vast opportunities for self-development: the online Axtria Institute, knowledge-sharing opportunities globally, and learning opportunities through external certifications
- Sponsored Tech Talks & Hackathons
- Possibility of relocating to any Axtria office for short- and long-term projects

Benefit package:
- Health benefits
- Retirement benefits
- Paid time off
- Flexible benefits
- Hybrid / full-time office / remote work

Axtria is an equal-opportunity employer that values diversity and inclusiveness in the workplace.

Who we are: Axtria 14 years journey | Axtria, Great Place to Work | Life at Axtria | Axtria Diversity
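The sketch referenced in the PySpark requirement above: a small, illustrative enrichment job that joins a large fact dataset with a small reference dataset via a broadcast join and writes columnar output. Paths and column names are invented for the example, not taken from this posting.

```python
# Hedged PySpark enrichment sketch: read transactional data, enrich it with a small
# reference dataset via a broadcast join, aggregate, and write Parquet output.
# All paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("sales_enrichment").getOrCreate()

sales = spark.read.parquet("s3://example-lake/raw/sales/")        # large fact data
products = spark.read.parquet("s3://example-lake/ref/products/")  # small dimension

enriched = (
    sales
    .join(broadcast(products), on="product_id", how="left")  # avoid a shuffle-heavy join
    .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
)

# Aggregate and persist in a columnar format (Parquet here; ORC/Avro work similarly)
(
    enriched.groupBy("region", "product_category")
    .agg(F.sum("net_amount").alias("total_net_amount"))
    .write.mode("overwrite")
    .parquet("s3://example-lake/curated/sales_by_region/")
)
```

Broadcasting the small dimension table avoids shuffling the large fact data, which is the kind of query-tuning decision the posting's performance-optimization requirement alludes to.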
Posted 2 weeks ago
4.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About VerbaFlo
VerbaFlo is an innovative AI-driven company focused on revolutionizing real estate with AI. As we expand our operations and scale new heights, we are looking for a detail-oriented and strategic Finance Manager to join our dynamic team and manage the financial health of the organization.

Role Overview
We are seeking a highly skilled Business Analyst with hands-on experience in Metabase to help unlock actionable insights and build a data-first culture across the organization.

Key Responsibilities
- Work with stakeholders to understand reporting needs and convert them into intuitive dashboards and visualizations using Metabase.
- Write and optimize SQL queries to extract, transform, and analyze data across multiple sources.
- Develop and maintain key business reports, KPI dashboards, and self-serve analytics in Metabase.
- Perform ad-hoc analysis to support business decisions across product, marketing, operations, and sales.
- Identify data gaps, inconsistencies, and opportunities to improve data reliability and availability.
- Collaborate with data engineering or dev teams to ensure data pipelines and schemas support reporting needs.
- Present findings in a clear and actionable manner to both technical and non-technical stakeholders.

Qualifications & Experience

Core Requirements:
- 4-5 years of experience as a Business/Data Analyst or in a similar role.
- Proven experience building and maintaining dashboards in Metabase.
- Proficiency in SQL and a strong understanding of relational databases.
- Strong business acumen with the ability to frame problems and tell stories with data.
- Experience working with cross-functional teams in a fast-paced environment.
- Familiarity with tools like Excel/Sheets, Google Analytics, or product analytics platforms is a plus.
- Ability to effectively communicate complex ideas to technical and non-technical stakeholders while thriving in cross-functional teams.

Bonus Points:
- Experience with version control (Git), basic scripting (Python), or other BI tools (Looker, Power BI).
- Understanding of data modeling or experience working with data warehouses like BigQuery, Redshift, etc.
- Exposure to product analytics and customer funnel metrics.

Why VerbaFlo?
- Be part of an innovative, fast-growing AI startup.
- Work with a talented and diverse team in a collaborative environment.
- Competitive compensation and benefits.
- Opportunities for career growth and development.

If you're passionate about AI and looking for an exciting opportunity to contribute to a rapidly growing company, we'd love to hear from you!
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At bpost group, operational excellence is driven by smart data and scalable systems. As a leading logistics and e-commerce player in Belgium and beyond, we rely on data to balance network capacity, optimize yield, and ensure efficient service delivery across every step of our value chain.

We are looking for an Operational Data Engineer to strengthen our Yield and Capacity Management team. In this hands-on role, you will design, develop, and maintain the data infrastructure that powers real-time decision-making and performance tracking across our operational landscape. Your work will directly support the forecasting models, pricing intelligence, and capacity planning tools that are critical to both day-to-day efficiency and long-term profitability. If you thrive in high-impact environments, have a deep understanding of data engineering in operational contexts, and want to help shape the future of logistics through data, we want to hear from you.

Role Summary
We are seeking a highly skilled and detail-oriented Data Engineer specializing in operational reporting and dashboarding. The ideal candidate will have 5-8 years of experience in designing, developing, and maintaining data pipelines and visual analytics solutions that empower decision-making. This role requires a solid foundation in data modeling, ETL development, and BI tools, along with the ability to work cross-functionally to deliver high-impact reporting solutions.

Key Responsibilities

Data Pipeline Development and Maintenance
- Design, build, and optimize robust ETL pipelines to support operational reporting requirements
- Ensure data quality, consistency, and integrity across sources and reporting outputs
- Automate data ingestion from various internal and external systems

Reporting and Dashboarding
- Develop and maintain dashboards and reports in BI tools (e.g., Power BI, Tableau, Looker)
- Collaborate with business stakeholders to translate requirements into effective visualizations
- Optimize dashboard performance and user experience through best practices

Data Modeling and Architecture
- Create logical and physical data models that support scalable reporting solutions
- Participate in the design and implementation of data marts and operational data stores
- Work closely with data architects to align with the enterprise data strategy

Cross-Functional Collaboration
- Partner with analysts, product managers, and operations teams to define reporting KPIs
- Ensure consistent definitions and calculations across different business units
- Support ad hoc analytical requests and provide technical insights when needed

Governance and Best Practices
- Implement and advocate for data governance practices, including data cataloging and lineage
- Define and enforce reporting standards and data documentation
- Participate in peer code and dashboard reviews

Qualifications

Experience:
- 5-8 years of experience in data engineering or business intelligence engineering roles
- Proven track record in building scalable reporting systems and maintaining dashboards for operational use

Technical Skills:
- Strong SQL skills: able to write complex queries and understand database structures across various SQL dialects (e.g., Oracle, MySQL, PostgreSQL)
- Strong experience with Python and modern ETL frameworks (e.g., dbt, Apache Airflow)
- Understanding of data orchestration concepts and experience with Airflow or similar tools such as Prefect or Dagster (a minimal orchestration sketch follows this posting)
- Proficiency in at least one BI tool (Power BI, Tableau, or Looker) or a similar technology for dashboard and report development
- Knowledge of cloud data platforms (AWS Redshift, Google BigQuery, Databricks, Snowflake, or Azure Synapse)
- Familiarity with version control and CI/CD pipelines for data
- Exposure to, or an understanding of, streaming data concepts, ideally with Kafka

Soft Skills:
- Excellent communication and stakeholder management skills
- Strong problem-solving capabilities and attention to detail
- Ability to manage multiple projects and meet tight deadlines

Preferred Skills
- Experience with real-time data processing frameworks (e.g., Kafka, Spark Streaming)
- Exposure to data observability and monitoring tools
- Understanding of data privacy and compliance requirements (e.g., GDPR, HIPAA)
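The orchestration sketch referenced above: a minimal Apache Airflow DAG wiring an ingest step, a transform step, and a dashboard-refresh step. The task bodies are stubs, and the DAG id, schedule, and task names are assumptions chosen for illustration rather than details of bpost's actual pipelines.

```python
# Minimal Airflow 2.x DAG sketch for an operational-reporting pipeline:
# ingest -> transform -> refresh dashboard dataset. Bodies are stubs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    """Pull daily operational data from source systems (stub)."""
    ...


def transform():
    """Apply cleansing/modeling logic, e.g. by triggering dbt or SQL transforms (stub)."""
    ...


def refresh_dashboard():
    """Signal the BI layer that curated tables were updated (stub)."""
    ...


with DAG(
    dag_id="operational_reporting_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    refresh_task = PythonOperator(task_id="refresh_dashboard", python_callable=refresh_dashboard)

    ingest_task >> transform_task >> refresh_task
```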
Posted 2 weeks ago
130.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Manager, Scientific Data Engineering

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Join a team that is passionate about using data, analytics, and insights to drive decision-making and create custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to our other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we must leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centers.

Role Overview
- Design, develop, and maintain data pipelines to extract data from various sources and populate a data lake and data warehouse.
- Work closely with data scientists, analysts, and business teams to understand data requirements and deliver solutions aligned with business goals.
- Build and maintain platforms that support data ingestion, transformation, and orchestration across various internal and external data sources.
- Use data orchestration, logging, and monitoring tools to build resilient pipelines.
- Automate data flows and pipeline monitoring to ensure the scalability, performance, and resilience of the platform.
- Monitor, troubleshoot, and resolve issues related to the data integration platform, ensuring uptime and reliability.
- Maintain thorough documentation for integration processes, configurations, and code to ensure easy onboarding for new team members and future scalability.
- Develop pipelines to ingest data into cloud data warehouses.
- Establish, modify, and maintain data structures and associated components.
- Create and deliver standard reports in accordance with stakeholder needs and agreed standards.
- Work within a matrix organizational structure, reporting to both the functional manager and the project manager.
- Participate in project planning, execution, and delivery, ensuring alignment with both functional and project goals.

What You Should Have
- Bachelor's degree in Information Technology, Computer Science, or any technology stream.
- 3+ years of experience developing data pipelines and data infrastructure, ideally within a drug development or life sciences context.
- Demonstrated expertise in delivering large-scale information management technology solutions encompassing data integration and self-service analytics enablement.
- Experience with software/data engineering practices (including versioning, release management, deployment of datasets, agile methods, and related software tools).
- Ability to design, build, and unit test applications on the Spark framework in Python.
- Ability to build PySpark-based applications for both batch and streaming requirements, which requires in-depth knowledge of Databricks and/or Hadoop (a small illustrative sketch follows this posting).
- Experience working with storage frameworks such as Delta Lake or Iceberg.
- Experience working with MPP data warehouses such as Redshift.
- Cloud-native experience, ideally AWS certified.
- Strong working knowledge of at least one reporting/insight-generation technology.
- Good interpersonal and communication skills (verbal and written).
- Proven record of delivering high-quality results.
- Product- and customer-centric approach.
- Innovative thinking and an experimental mindset.

Mandatory Skills (by category)
- Foundational Data Concepts: SQL (intermediate/advanced), Python (intermediate)
- Cloud Fundamentals (AWS focus): AWS Console, IAM roles, regions, cloud computing concepts, AWS S3
- Data Processing & Transformation: Apache Spark (concepts and usage); Databricks (platform usage), Unity Catalog, Delta Lake
- ETL & Orchestration: AWS Glue (ETL, Catalog), Lambda; Apache Airflow (DAGs and orchestration) or another orchestration tool; dbt (Data Build Tool); Matillion (or a similar ETL tool)
- Data Storage & Querying: Amazon Redshift / Azure Synapse; Trino or equivalent; AWS Athena / query federation
- Data Quality & Governance: data quality concepts and implementation; data observability concepts; Collibra or an equivalent tool
- Real-time / Streaming: Apache Kafka (concepts and usage)
- DevOps & Automation: CI/CD concepts and pipelines (GitHub Actions / Jenkins / Azure DevOps)

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today.

#HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific.
Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 08/20/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R353508
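A small sketch of the PySpark/Delta Lake pattern referenced in this posting's skills list: a batch job that cleans raw events and appends them to a Delta table. It assumes a Spark session with Delta support (a Databricks runtime, or the delta-spark package configured on the session); the paths and column names are placeholders.

```python
# Hedged PySpark batch sketch writing to Delta Lake. Assumes Delta support is
# available on the Spark session; all paths and columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_batch").getOrCreate()

events = spark.read.json("s3://example-bucket/landing/events/")  # raw semi-structured data

curated = (
    events
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("ingest_date", F.current_date())
    .dropDuplicates(["event_id"])
)

# Append to a Delta table partitioned by ingestion date
(
    curated.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("s3://example-bucket/delta/events/")
)
```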
Posted 2 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Data Engineer
Work Mode: Hybrid (3 days from office, only 4 hours onsite per day)
Location: Gurgaon

About the Role
BayOne is looking for a skilled Data Engineer to join our dynamic team in Gurgaon. This hybrid role offers flexibility, with just 4 hours per day required in-office, 3 days a week. If you're passionate about building scalable data solutions using Azure and Databricks and thrive in a fast-paced environment, we'd love to hear from you.

Key Responsibilities
- Design and build scalable data pipelines and data lake/warehouse solutions on Azure and Databricks.
- Work extensively with SQL, schema design, and dimensional data modeling.
- Develop and maintain ETL/ELT processes using tools such as ADF, Talend, and Informatica.
- Leverage Azure Synapse, Azure SQL, Snowflake, Redshift, or BigQuery to manage and optimize data storage and retrieval.
- Utilize Spark, PySpark, and Spark SQL for big data processing.
- Collaborate cross-functionally to gather requirements, design solutions, and implement best practices in data engineering.

Required Qualifications
- Minimum 5 years of experience in data engineering, data warehousing, or data lake technologies.
- Strong experience on the Azure cloud platform (preferred over others).
- Proven expertise in SQL, data modeling, and data warehouse architecture.
- Hands-on experience with Databricks and Spark, and proficient programming in PySpark/Spark SQL.
- Experience with ETL/ELT tools such as Azure Data Factory (ADF), Talend, or Informatica.
- Strong communication skills and the ability to thrive in a fast-paced, dynamic environment.
- Self-motivated, independent learner with a proactive mindset.

Nice-to-Have Skills
- Knowledge of Azure Event Hub, IoT Hub, Stream Analytics, Cosmos DB, and Azure Analysis Services.
- Familiarity with SAP ECC, S/4HANA, or HANA data sources.
- Intermediate skills in Power BI, Azure DevOps, CI/CD pipelines, and cloud migration strategies.

About BayOne
BayOne is a 12-year-old software consulting company headquartered in Pleasanton, California. We specialize in Talent Solutions, helping clients build diverse and high-performing teams. Our mission is to #MakeTechPurple by driving diversity in tech while delivering cutting-edge solutions across:
- Project & Program Management
- Cloud & IT Infrastructure
- Big Data & Analytics
- Software & Quality Engineering
- User Experience Design

Explore more: Company Website | LinkedIn | Glassdoor Reviews

Join us to shape the future of data-driven decision-making while working in a flexible and collaborative environment.
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
Remote
What would a typical day at your work be like?
- You will lead and manage the delivery of projects and be responsible for the delivery of project and team goals.
- Build and support data ingestion and processing pipelines. This entails extracting, loading, and transforming data from a wide variety of sources using the latest data frameworks and technologies.
- Design, build, test, and maintain machine learning infrastructure and frameworks to empower data scientists to rapidly iterate on model development (a simple example of scoring a model inside a pipeline follows this posting).
- Own and lead client engagement and communication on technical projects. Define project scopes and track project progress and delivery.
- Plan and execute the project architecture and allocate work to the team.
- Keep up to date with advances in big data technologies and run pilots to design the data architecture to scale with increased data volume.
- Partner with software engineering teams to drive completion of multi-functional projects.

What do we expect?
- Minimum 6 years of overall experience in data engineering, including 2+ years leading a team as a team lead and handling project management.
- Experience working with a global team and remote clients.
- Hands-on experience in building data pipelines on various infrastructures.
- Knowledge of statistical and machine learning techniques, with hands-on experience integrating machine learning into data pipelines.
- Ability to work hands-on with the team's data engineers on the design and development of solutions using the relevant big data technologies and data warehouse concepts; strong knowledge of advanced SQL, data warehousing concepts, and DataMart design.
- Strong experience with modern data platform components such as Spark and Python.
- Experience setting up and maintaining data warehouses (Google BigQuery, Redshift, Snowflake) and data lakes (GCS, AWS S3, etc.) for an organization.
- Experience building data pipelines with AWS Glue, Azure Data Factory, and Google Dataflow.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra/MongoDB.
- Strong problem-solving and communication skills.
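As a rough illustration of integrating machine learning into a data pipeline (the example referenced above), the snippet below scores a curated feature table with a pre-trained scikit-learn classifier and lands the predictions for a downstream mart. All paths, column names, and the model artifact are hypothetical.

```python
# Hedged sketch of an ML scoring step inside a batch data pipeline:
# load curated features, score with a pre-trained classifier, persist predictions.
import joblib
import pandas as pd

# Curated feature table produced by the upstream pipeline (placeholder path)
features = pd.read_parquet("curated/customer_features.parquet")

# Pre-trained model artifact produced by the data science team (assumed to
# be a scikit-learn classifier that supports predict_proba)
model = joblib.load("models/churn_model.joblib")

feature_cols = ["tenure_months", "monthly_orders", "avg_basket_value"]
features["churn_score"] = model.predict_proba(features[feature_cols])[:, 1]

# Land the scores where the warehouse / DataMart load can pick them up
features[["customer_id", "churn_score"]].to_parquet(
    "curated/churn_scores.parquet", index=False
)
```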
Posted 2 weeks ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: IT Project Manager/Architect for Data Platform
Experience: 10+ years
Location: Hyderabad
Notice Period: 15 days or less

Job Description
As an IT Project Manager/Architect for Data Platform & Monitoring within Global Operations and Supply Chain IT, your primary responsibility is to lead the architecture, technical implementation, and overall management of the data platform and monitoring program. This is achieved in close collaboration with internal teams and key stakeholders to ensure successful delivery. The role is critical to the planning and execution of a strategic program that includes two core components:
- Developing a centralized data platform to consolidate manufacturing systems data across all sites.
- Implementing robust observability and monitoring capabilities for global manufacturing systems and applications, aimed at ensuring uptime through effective alerting, logging, and visibility mechanisms.

Success in this role demands strong coordination and communication skills, with the ability to work seamlessly across cross-functional teams, including project managers, business stakeholders, IT teams, and external partners, to ensure alignment with organizational objectives, timelines, and delivery standards.

We believe that when people from different cultures, genders, and points of view come together, innovation is the result, and everyone wins. We are committed to creating an inclusive culture where you can thrive. Our unwavering commitment to inclusion, diversity, and equity (ID&E) means zero barriers to opportunity and a culture where all employees belong, are respected, and feel valued for who they are and the life experiences they contribute. We know equity starts beyond our workplace, and we must play a role in addressing systemic inequities in our communities if we hope to have a long-term sustainable impact. Anchored in our Mission, we continue to drive ID&E forward both to enhance the well-being of employees and to accelerate innovation that brings our lifesaving technologies to more people in more places around the world.

Bring your talents to an industry leader in medical technology and healthcare solutions; we're a market leader and growing every day. You can be proud to be part of technologies that are rooted in our long history of mission-driven innovation. You will be empowered to shape your own career, and we encourage and support your growth with the training, mentorship, and guidance you need to own your future success. Together, we can transform healthcare. Join us for a career in IT that changes lives. We are committed to fostering a diverse and inclusive culture; check out the accomplishments of our Women in IT group!

CAREERS THAT CHANGE LIVES
As an IT Project Manager/Architect for Data Platform & Monitoring within Global Operations and Supply Chain IT, your primary responsibility is to lead the architecture, technical implementation, and overall management of the data platform and monitoring program. Responsibilities may include the following, and other duties may be assigned, all essential to successfully delivering improvements in technology capabilities, operational efficiency, financial management, and business continuity.
- Develop a comprehensive project plan outlining tasks, timelines, resources, and milestones for manufacturing IT systems implementation.
- Manage a team of 10-15 in the Global Operations Supply Chain organization, in the core manufacturing and supply chain digital platform domain.
- Define the project scope, goals, and objectives, ensuring alignment with organizational strategy.
- Identify potential risks and develop mitigation plans to ensure successful project execution.
- Lead a diverse cross-functional project team encompassing IT professionals, process engineers, production units, and external consultants.
- Establish a collaborative environment conducive to effective communication and harmonious coordination among team members.
- Work closely with business stakeholders to gather and document functional and technical requirements for the IT systems implementation solution.
- Lead the implementation of manufacturing IT systems, drawing on extensive experience in large-scale program management, manufacturing IT platforms, MES platforms, SAP, and team leadership.
- Provide updates to the leadership team.
- Experience implementing enterprise data platforms (e.g., Snowflake, Redshift, Synapse), including data integration activities such as data ingestion and transformation (ETL/ELT), and ensuring a robust, scalable data architecture.
- (Good to have) Experience implementing application and system monitoring solutions using tools like Dynatrace and SolarWinds to ensure observability and reliability; any monitoring experience is helpful.
- Lead and coordinate cross-functional teams and stakeholders to gather business and technical requirements, translating them into a clear, actionable three-year data platform roadmap.
- Proven experience in effective team management, including coordination with external vendors and partners to ensure timely delivery, alignment with technical goals, and quality outcomes.
- Demonstrated ability to manage project budgets, including financial tracking, forecasting, and monthly reporting to ensure alignment with organizational goals and governance standards.

MUST HAVE (Minimum Qualifications)
- Bachelor's degree (required); advanced degree preferred.
- Minimum 10 years of relevant experience in IT project or program management roles.
- 4+ years of team management experience with teams of 10+ members.
- Prior experience in regulated or validated industries is a strong plus.
- Strong documentation, organizational, and communication skills.
- Familiarity with project management tools (e.g., Excel, Planisware, JIRA, Confluence, MS Project, Smartsheet).
- Ability to understand the customer's business problem, need, or opportunity and to design a solution that completely and correctly addresses it without unnecessary enhancements.
- Proven ability to work as a team player, delivering quality results within defined timelines.
- Understanding of application lifecycle processes and system integration concepts.
- Ability to thrive in a fast-paced, team-oriented environment.

SKILLS NEEDED
- Strong background in IT project management, especially in manufacturing or supply chain domains
- Experience leading multi-function, cross-team collaboration between IT and business
- Experience managing program timelines, risks, status, and escalations in a timely manner
- Ability to understand and work within established processes and tools
- Solid understanding of the SDLC, as well as good knowledge of Agile/Waterfall/Hybrid project management principles and practices
- Experience with project management tools such as DevOps
- Strong knowledge of MS PowerPoint, MS Excel, and MS Project
- Experience managing project costing, budget forecasting, and resource management
- Working knowledge of manufacturing IT systems such as ERP, MES, etc.
Posted 2 weeks ago
4.0 years
0 Lacs
India
Remote
Job Title: Senior Product Analyst at Careem (fully remote)

Company Details
Careem is building the Everything App for the greater Middle East, making it easier than ever to move around, order food and groceries, manage payments, and more. Careem is led by a powerful purpose: to simplify and improve the lives of people and build an awesome organisation that inspires. Since 2012, Careem has created earnings for over 2.5 million Captains, simplified the lives of over 70 million customers, and built a platform for the region's best talent to thrive and for entrepreneurs to scale their businesses. Careem operates in over 70 cities across 10 countries, from Morocco to Pakistan.

About the team
The Careem Analytics team's mission is to build and track the full, lively picture of Careem's businesses and experiences, uphold the experience bar, provide actionable insights, formulate problems, and contribute to solving them. As part of this team, you will be a core team member fulfilling this mission. You will work alongside the region's top analyst talent, leveraging modern analysis and visualization tools to solve the region's day-to-day problems.

Job Roles & Responsibilities
- Act as the first point of contact to answer all business data queries
- Develop effective reporting solutions by applying engineering best practices and various reporting tools
- Participate in the continuous improvement of these reporting solutions
- Drive and support your designated business unit by converting complex data and findings into understandable tables, graphs, and written reports
- Present appropriate analysis and commentary to technical and non-technical audiences
- Gain subject matter expertise, help define appropriate key metrics for the business unit, and discover untapped areas for business improvement
- Provide concrete data-driven insights
- Test and communicate new features to users
- Run regular data integrity audits
- Devise and evaluate methods for collecting data, such as surveys, questionnaires, and opinion polls
- Gather data about consumers, competitors, and market conditions

Requirements
- 4+ years of demonstrated experience working in an analytical role
- 3+ years of demonstrated experience with business intelligence and visualization tools, creating management dashboards
- Strong analytical skills and a passion for working with large sets of data
- Passionate about learning new technologies and working on a product of massive scale and impact
- Expert at writing SQL queries against large amounts of data
- Self-starter with excellent communication and organizational skills
- Ability to get hands-on in a complex operational environment
- Process-oriented and a logical thinker with good attention to detail
- Working knowledge of reporting tools such as Tableau, MicroStrategy, or Looker
- Working knowledge of Python, R, Spark, and Hive
- Experience in ETL/ELT is a plus
- Experience working with dimensional data and data lakes is a plus
- Experience working with MPP databases like Redshift is a plus

Hiring Process
2-3 rounds of interviews with the hiring team
Posted 2 weeks ago
7.0 years
0 Lacs
Greater Chennai Area
On-site
Redefine the future of customer experiences. One conversation at a time.

We're changing the game with a first-of-its-kind, conversation-centric platform that unifies team collaboration and customer experience in one place. Powered by AI, built by amazing humans. Our culture is forward-thinking, customer-obsessed, and built on an unwavering belief that connection fuels business and life; connections to our customers with our signature Amazing Service®, to our products and services, and most importantly, to each other. Since 2008, 100,000+ companies and 1M+ users have relied on Nextiva for customer and team communication. If you're ready to collaborate and create with amazing people, let your personality shine, and be on the frontlines of helping businesses deliver amazing experiences, you're in the right place.

Build Amazing - Deliver Amazing - Live Amazing - Be Amazing

We're looking for an experienced Engineering Manager to lead the backend and data platform teams building the next-generation product. You will be responsible for leading the development of Java-based services, ETL pipelines, and data infrastructure that power mission-critical features like scheduling, labor forecasting, time tracking, and analytics. You'll collaborate closely with product, data science, and infrastructure teams to ensure our systems are scalable, reliable, and data-driven, enabling our customers to optimize workforce operations in real time.

Key Responsibilities
- Lead a team of backend and data engineers responsible for:
  - Building and maintaining Java microservices (Spring Boot) for WFM features.
  - Designing and scaling ETL pipelines, data ingestion, and data lake components.
  - Supporting reporting, analytics, and forecasting models with high-quality datasets.
- Define and evolve the architecture for data processing, streaming, and batch workloads using tools like Apache Kafka, Airflow, AWS Glue, or Spark.
- Collaborate with Product Managers and Data Analysts to turn business requirements into scalable data solutions.
- Drive engineering best practices in CI/CD, code quality, observability, and data governance.
- Mentor engineers, foster a strong team culture, and support career growth through coaching and feedback.
- Work cross-functionally with QA, DevOps, and InfoSec to ensure compliance, scalability, and performance.

Required Qualifications
- 7+ years of backend software engineering experience, with at least 3 years in engineering leadership roles.
- Strong hands-on experience with Java (Spring Boot) and microservice architecture.
- Proven experience managing ETL workflows, data pipelines, and distributed data processing.
- Knowledge of relational and analytical databases (e.g., PostgreSQL, Redshift, Snowflake).
- Experience with event streaming platforms (Kafka, Kinesis, or similar).
- Cloud-native development experience with AWS, GCP, or Azure.
- Familiarity with data warehousing, schema evolution, and data quality best practices.
- Solid understanding of Agile development methodologies and team management.

Preferred Qualifications
- Experience with observability tools like Prometheus, Grafana, or Datadog.
- Exposure to ML/forecasting models for labor planning is a plus.

Nextiva DNA (Core Competencies)
Nextiva's most successful team members share common traits and behaviors:
- Drives Results: Action-oriented with a passion for solving problems. They bring clarity and simplicity to ambiguous situations, challenge the status quo, and ask what can be done differently. They lead and drive change, celebrating success to build more success.
- Critical Thinker: Understands the "why" and identifies key drivers, learning from the past. They are fact-based and data-driven, forward-thinking, and see problems a few steps ahead. They provide options, recommendations, and actions, understanding risks and dependencies.
- Right Attitude: They are team-oriented, collaborative, competitive, and hate losing. They are resilient, able to bounce back from setbacks, zoom in and out, and get in the trenches to help solve important problems. They cultivate a culture of service, learning, support, and respect, caring for customers and teams.

Total Rewards
Our Total Rewards offerings are designed to allow our employees to take care of themselves and their families so they can be their best, in and out of the office. Our compensation packages are tailored to each role and candidate's qualifications. We consider a wide range of factors, including skills, experience, training, and certifications, when determining compensation. We aim to offer competitive salaries or wages that reflect the value you bring to our team. Depending on the position, compensation may include base salary and/or hourly wages, incentives, or bonuses.
- Medical 🩺 - Medical insurance coverage is available for employees, their spouse, and up to two dependent children with a limit of 500,000 INR, as well as their parents or in-laws for up to 300,000 INR. This comprehensive coverage ensures that essential healthcare needs are met for the entire family unit, providing peace of mind and security in times of medical necessity.
- Group Term & Group Personal Accident Insurance 💼 - Provides insurance coverage against the risk of death or injury during the policy period sustained due to an accident caused by violent, visible, and external means. Coverage type: employee only. Sum insured: 3 times annual CTC, with a minimum cap of INR 10,00,000. Free cover limit: 1.5 crore.
- Work-Life Balance ⚖️ - 15 days of privilege leave per calendar year, 6 days of paid sick leave per calendar year, and 6 days of casual leave per calendar year. Paid 26 weeks of maternity leave, 1 week of paternity leave, a day off on your birthday, and paid holidays.
- Financial Security 💰 - Provident Fund & Gratuity.
- Wellness 🤸 - Employee Assistance Program and comprehensive wellness initiatives.
- Growth 🌱 - Access to ongoing learning and development opportunities and career advancement.

At Nextiva, we're committed to supporting our employees' health, well-being, and professional growth. Join us and build a rewarding career! Established in 2008 and headquartered in Scottsdale, Arizona, Nextiva secured $200M from Goldman Sachs in late 2021, valuing the company at $2.7B. To see what's going on at Nextiva, check us out on Instagram, Instagram (MX), YouTube, LinkedIn, and the Nextiva blog.
Posted 2 weeks ago
7.0 - 10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Data Engineer
Description
We are seeking a skilled Data Engineer with 7-10 years of experience to join our dynamic team in India. The ideal candidate will have a strong background in designing and optimizing data pipelines, as well as a passion for working with large datasets to drive business insights.
Location: Trivandrum, Kochi, Bangalore
Responsibilities
Design, build, and maintain scalable data pipelines and architecture.
Develop and optimize ETL processes for data ingestion and transformation.
Collaborate with data scientists and analysts to understand data requirements and deliver solutions.
Implement data quality checks and monitor data integrity.
Utilize cloud-based data technologies and services for data storage and processing.
Ensure compliance with data governance and security policies.
Skills and Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Proficiency in SQL and experience with database technologies such as MySQL, PostgreSQL, or Oracle.
Strong knowledge of programming languages such as Python, Java, or Scala.
Experience with big data technologies like Hadoop, Spark, or Kafka.
Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
Understanding of data warehousing concepts and tools (e.g., Redshift, Snowflake).
Experience with data modeling and data architecture design.
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
India
Remote
Position - AWS Data Engineer
Experience Range: 7 to 11 Years
Location: Remote
Shift Timings: 12 PM to 9 PM
Primary Skills: Python, PySpark, SQL, AWS
Responsibilities
Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality.
AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge Scheduler, etc.
Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options.
Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services.
Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources.
Data Quality: Implement data quality frameworks and best practices to ensure data accuracy, completeness, and consistency.
Cloud Optimization: Optimize data engineering solutions for performance, cost-efficiency, and scalability on the AWS cloud.
Team Leadership: Mentor and guide data engineers, ensuring they adhere to best practices and meet project deadlines.
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
6-7 years of experience in data engineering roles, with a focus on AWS cloud platforms.
Strong understanding of data warehousing and data lake concepts.
Proficiency in SQL and at least one programming language (Python/PySpark).
Good to have: experience with big data technologies like Hadoop, Spark, and Kafka.
Knowledge of data modeling and data quality best practices.
Excellent problem-solving, analytical, and communication skills.
Ability to work independently and as part of a team.
Preferred Qualifications
Certifications such as AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Data.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As an Informatica IDMC Developer at Coforge, your primary responsibility will be to design, develop, and maintain resilient ETL pipelines using Informatica Intelligent Data Management Cloud (IDMC/IICS). You will work closely with data architects, analysts, and business stakeholders to understand data requirements and integrate data from various sources, including databases, APIs, and flat files. Your role will involve optimizing data workflows for performance, scalability, and reliability while monitoring and troubleshooting ETL jobs to address data quality issues.
In addition, you will be expected to implement data governance and security best practices, ensuring compliance and confidentiality. Maintaining detailed documentation of data flows, transformations, and architecture will be essential. Active participation in code reviews and contributing to continuous improvement initiatives are also part of your responsibilities.
To excel in this role, you must have substantial hands-on experience with Informatica IDMC (IICS) and cloud-based ETL tools. Proficiency in SQL and prior experience with relational databases like Oracle, SQL Server, and PostgreSQL is necessary. Familiarity with cloud platforms such as AWS, Azure, or GCP and with data warehousing concepts and tools like Snowflake, Redshift, or BigQuery is highly desirable. Strong problem-solving skills and effective communication abilities are key attributes that will contribute to your success in this position.
Preferred qualifications for this role include experience with CI/CD pipelines and version control systems, knowledge of data modeling and metadata management, and certifications in Informatica or cloud platforms, which would be considered advantageous.
If you have 5 to 8 years of relevant experience and possess the required skills and qualifications, we encourage you to apply for this Informatica IDMC Developer position based in Greater Noida. Kindly send your CV to Gaurav.2.Kumar@coforge.com.
Posted 2 weeks ago
5.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Company Description
Sufalam Technologies, based in Ahmedabad, India, is an IT services and solutions company known for integrating and optimizing complex technologies and data to enhance business results. We bring together real-world business experience with deep technology expertise, service delivery tools, and proven methodologies to help clients achieve their strategic objectives. Our services include Custom Application Development, Web Application and Development, BPO, and much more. Our expertise in different vertical industry domains and a wide range of software tools has ensured a consistent track record of delivering top-notch IT Services globally.
Role Description
This is a full-time on-site role for an AWS Data Engineer located in Ahmedabad. The AWS Data Engineer will be responsible for designing and implementing data engineering solutions, developing data models, managing Extract, Transform, Load (ETL) processes, and ensuring the efficient operation of data warehousing solutions. Additionally, the engineer will contribute to data analytics activities to support business decision-making and strategic goals.
Key Responsibilities
Design and implement scalable and secure ETL/ELT pipelines on AWS for processing financial data.
Build automated data reconciliation systems to ensure data integrity and accuracy across multiple financial sources (e.g., bank statements, internal ledgers, ERP, payment gateways).
Collaborate closely with Finance, Data Science, and Product teams to understand reconciliation needs and ensure timely data delivery.
Implement monitoring and alerting for pipeline health and data quality.
Maintain detailed documentation on data flows, data models, and reconciliation logic.
Ensure compliance with financial data handling and audit standards.
Must-Have Skills
5-6 years of experience in data engineering, with a strong focus on AWS data services.
Hands-on experience with AWS Glue, Lambda, S3, Redshift, Athena, Step Functions, and AWS Lake Formation and IAM for secure data governance.
Solid understanding of data reconciliation processes in the finance domain (e.g., matching transactions, resolving mismatches, variance analysis).
Strong SQL skills and experience with data warehousing and data lakes.
Experience with Python or PySpark for data transformation.
Knowledge of financial accounting principles or experience working with financial datasets (AR, AP, General Ledger, etc.).
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Engineering
Experience: Manager
Primary Address: Bangalore, Karnataka
Overview
Voyager (94001), India, Bangalore, Karnataka
Manager, Data Engineering
Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you'll have the opportunity to be on the forefront of driving a major transformation within Capital One.
What You'll Do:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies
Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems
Utilize programming languages like Java, Scala, Python and open-source RDBMS and NoSQL databases, and cloud-based data warehousing services such as Redshift and Snowflake
Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community
Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment
Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance
Basic Qualifications:
Bachelor's Degree
At least 4 years of experience in application development (Internship experience does not apply)
At least 2 years of experience in big data technologies
At least 1 year of experience with cloud computing (AWS, Microsoft Azure, Google Cloud)
At least 2 years of people management experience
Preferred Qualifications:
7+ years of experience in application development including Python, SQL, Scala, or Java
4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
4+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
4+ years of experience working on real-time data and streaming applications
4+ years of experience with NoSQL implementation (Mongo, Cassandra)
4+ years of data warehousing experience (Redshift or Snowflake)
4+ years of experience with UNIX/Linux including basic commands and shell scripting
2+ years of experience with Agile engineering practices
At this time, Capital One will not sponsor a new applicant for employment authorization for this position.
No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace.
Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.
For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com.
Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site.
Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
Posted 2 weeks ago
10.0 - 15.0 years
0 Lacs
Delhi
On-site
Wingify is looking for a Senior Data Architect to join its team in Delhi. As a Senior Data Architect, you will be responsible for leading and mentoring a team of data engineers, optimizing scalable data infrastructure, driving data governance frameworks, collaborating with cross-functional teams, and ensuring data security, compliance, and quality. Your role will involve optimizing data processing workflows, fostering a culture of innovation and technical excellence, and aligning technical strategy with business objectives.
To be successful in this role, you should have at least 10 years of experience in software/data engineering, with a minimum of 3 years in a leadership position. You should possess expertise in backend development using programming languages like Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS. Proficiency in SQL, Python, and Scala for data processing and analytics is essential, along with a strong understanding of cloud platforms such as AWS, GCP, or Azure and their data services.
Additionally, you should have experience with big data technologies like Spark, Hadoop, Kafka, and distributed computing frameworks, as well as hands-on experience with data warehousing solutions like Snowflake, Redshift, or BigQuery. Deep knowledge of data governance, security, and compliance, along with familiarity with NoSQL databases and automation/DevOps tools, is required. Strong leadership, communication, and stakeholder management skills are crucial for this role.
Preferred qualifications include experience in machine learning infrastructure or MLOps, exposure to real-time data processing and analytics, and interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture. Prior experience in a SaaS or high-growth tech company would be advantageous.
Please note that candidates must have a minimum of 10 years of experience to be eligible for this role. Graduation from Tier-1 colleges, such as IIT, is preferred. Candidates from B2B product companies with high data traffic are encouraged to apply, while those who do not meet these criteria are kindly requested not to apply.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
This role is based in Noida, Gurugram, Indore, Pune, or Bangalore and requires 2-7 years of experience; candidates should be currently serving their notice period or able to join immediately.
The role requires 2-6 years of hands-on experience with Big Data technologies like PySpark (DataFrame and SparkSQL), Hadoop, and Hive. Additionally, you should have good experience with Python and Bash scripts, a solid understanding of SQL and data warehouse concepts, and strong analytical, problem-solving, data analysis, and research skills. You should also demonstrate the ability to think creatively and independently, along with excellent communication, presentation, and interpersonal skills.
It would be beneficial if you have hands-on experience with cloud-platform Big Data technologies such as IAM, Glue, EMR, Redshift, S3, and Kinesis. Experience in orchestration with Airflow or any job scheduler, as well as experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations, would be considered a plus.
Posted 2 weeks ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. In infrastructure engineering at PwC, you will focus on designing and implementing robust and scalable technology infrastructure solutions for clients. Your work will involve network architecture, server management, and cloud computing experience.
Data Modeler Job Description:
Looking for candidates with a strong background in data modeling, metadata management, and data system optimization. You will be responsible for analyzing business needs, developing long-term data models, and ensuring the efficiency and consistency of our data systems.
Key areas of expertise include:
Analyze and translate business needs into long-term solution data models.
Evaluate existing data systems and recommend improvements.
Define rules to translate and transform data across data models.
Work with the development team to create conceptual data models and data flows.
Develop best practices for data coding to ensure consistency within the system.
Review modifications of existing systems for cross-compatibility.
Implement data strategies and develop physical data models.
Update and optimize local and metadata models.
Utilize canonical data modeling techniques to enhance data system efficiency.
Evaluate implemented data systems for variances, discrepancies, and efficiency.
Troubleshoot and optimize data systems to ensure optimal performance.
Strong expertise in relational and dimensional modeling (OLTP, OLAP).
Experience with data modeling tools (Erwin, ER/Studio, Visio, PowerDesigner).
Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL).
Knowledge of NoSQL databases (MongoDB, Cassandra) and their data structures.
Experience working with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI).
Familiarity with ETL processes, data integration, and data governance frameworks.
Strong analytical, problem-solving, and communication skills.
Qualifications:
Bachelor's degree in Engineering or a related field.
3 to 5 years of experience in data modeling or a related field.
4+ years of hands-on experience with dimensional and relational data modeling.
Expert knowledge of metadata management and related tools.
Proficiency with data modeling tools such as Erwin, Power Designer, or Lucid.
Knowledge of transactional databases and data warehouses.
Preferred Skills:
Experience in cloud-based data solutions (AWS, Azure, GCP).
Knowledge of big data technologies (Hadoop, Spark, Kafka).
Understanding of graph databases and real-time data processing.
Certifications in data management, modeling, or cloud data engineering.
Excellent communication and presentation skills.
Strong interpersonal skills to collaborate effectively with various teams.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Maharashtra
On-site
As the Technical Lead of Data Engineering at Assent, you will collaborate with various stakeholders including Product Managers, Product Designers, and Engineering team members to identify opportunities and evaluate the feasibility of solutions. Your role will involve offering technical guidance, influencing decision-making, and aligning data engineering initiatives with business objectives as part of Assent's roadmap development. You will be responsible for driving the technical strategy, overseeing team execution, and implementing process improvements to construct resilient and scalable data systems. In addition, you will lead data engineering efforts, mentor a growing team, and establish robust and scalable data infrastructure.
Key Requirements & Responsibilities:
Lead the technical execution of data engineering projects to ensure high-quality and timely delivery, covering discovery, delivery, and adoption stages.
Collaborate with Architecture team members to design and implement scalable, high-performance data pipelines and infrastructure.
Provide technical guidance to the team, ensuring adherence to best practices in data engineering, performance optimization, and system reliability.
Work cross-functionally with various teams such as Product Managers, Software Development, Analysts, and AI/ML teams to define and implement data initiatives.
Partner with the team manager to plan and prioritize work, striking a balance between short-term deliverables and long-term technical enhancements.
Keep abreast of emerging technologies and methodologies, advocating for their adoption to accelerate the team's objectives.
Ensure compliance with corporate security policies and follow the established guidelines and procedures of Assent.
Qualifications: Your Knowledge, Skills and Abilities:
Possess 10+ years of experience in data engineering, software development, or related fields.
Proficient in cloud data platforms, particularly AWS.
Expertise in modern data technologies like Spark, Airflow, dbt, Snowflake, Redshift, or similar.
Deep understanding of distributed systems and data pipeline design, with specialization in ETL/ELT processes, data warehousing, and real-time streaming.
Strong programming skills in Python, SQL, Scala, or similar languages.
Experience with infrastructure-as-code tools like Terraform and CloudFormation, and knowledge of DevOps best practices.
Ability to influence technical direction and promote best practices across teams.
Excellent communication and leadership skills, with a focus on fostering collaboration and technical excellence.
A learning mindset, continuously exploring new technologies and best practices.
Experience in security, compliance, and governance related to data systems is a plus.
This is not an exhaustive list of duties, and responsibilities may be modified or added as needed to meet business requirements.
Life at Assent:
At Assent, we are dedicated to cultivating an inclusive environment where team members feel valued, respected, and heard. Our diversity, equity, and inclusion practices are guided by our Diversity and Inclusion Working Group and Employee Resource Groups (ERGs), ensuring that team members from diverse backgrounds are recruited, retained, and provided opportunities to contribute to business success.
If you need assistance or accommodation during any stage of the interview and selection process, please reach out to talent@assent.com, and we will be happy to assist you.
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description:
Proficiency in building highly scalable ETL and streaming-based data pipelines using Google Cloud Platform (GCP) services and products like BigQuery, Cloud
Proficiency in large-scale data platforms and data processing systems such as Google BigQuery, Amazon Redshift, Azure Data Lake
Excellent Python, PySpark and SQL development and debugging skills; exposure to other big data frameworks like Hadoop Hive would be an added advantage
Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g. RabbitMQ and Pub/Sub)
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Work with data and analytics experts to strive for greater functionality in our data systems.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Keep our data separated and secure across national boundaries through multiple data centres and Azure/AWS
Skills:
Advanced working SQL knowledge and experience working with relational databases, query authoring and optimizing (SQL), as well as working familiarity with a variety of databases.
Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured
Skills: Cloud Bigtable, AI/ML solutions, Compute Engine, Cloud
Profile:
ClicFlyer provides SAAS Business Intelligence and Analytics Solutions to Retail Chains and Brands across the Middle East and leading Global FMCG Companies. From data integration and standardization to tailor-fit action plans, we take care of all the large and complex data to provide the retailer with customer Business Intelligence and Analytical centric insights. Our services are available in UAE, KSA, Bahrain, Qatar, Kuwait, Oman, Jordan, Egypt, S. Africa & Indonesia. (ref:hirist.tech)
Posted 2 weeks ago