10.0 - 15.0 years
55 - 60 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Position Overview
We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.

Key Responsibilities

Data Architecture & Modeling
- Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management
- Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment)
- Create and maintain data lineage documentation and data dictionaries for healthcare datasets
- Establish data modeling standards and best practices across the organization

Technical Leadership
- Lead data warehousing initiatives using modern platforms such as Databricks or traditional ETL tools such as Informatica
- Architect scalable data solutions that handle large volumes of healthcare transactional data
- Collaborate with data engineers to optimize data pipelines and ensure data quality

Healthcare Domain Expertise
- Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI)
- Design data models that support analytical, reporting, and AI/ML needs
- Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations
- Partner with business stakeholders to translate healthcare business requirements into technical data solutions

Data Governance & Quality
- Implement data governance frameworks specific to healthcare data privacy and security requirements
- Establish data quality monitoring and validation processes for critical health plan metrics
- Lead efforts to standardize healthcare data definitions across multiple systems and data sources.
Required Qualifications

Technical Skills
- 10+ years of experience in data modeling, with at least 4 years focused on healthcare/health plan data
- Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches
- Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing
- Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks)
- Proficiency with data modeling tools (Hackolade, ERwin, or similar)

Healthcare Industry Knowledge
- Deep understanding of health plan data structures, including claims, eligibility, provider data, and pharmacy data
- Experience with healthcare data standards and medical coding systems
- Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment)
- Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI)

Leadership & Communication
- Proven track record of leading data modeling projects in complex healthcare environments
- Strong analytical and problem-solving skills, with the ability to work with ambiguous requirements
- Excellent communication skills, with the ability to explain technical concepts to business stakeholders
- Experience mentoring team members and establishing technical standards.

Preferred Qualifications
- Experience with Medicare Advantage, Medicaid, or Commercial health plan operations
- Cloud platform certifications (AWS, Azure, or GCP)
- Experience with real-time data streaming and modern data lake architectures
- Knowledge of machine learning applications in healthcare analytics
- Previous experience in a lead or architect role within healthcare organizations.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
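As an illustration of the dimensional modeling this role calls for, here is a minimal, hypothetical claims star schema sketched in Python with the standard-library sqlite3 module. All table names, column names, and sample values are invented for the example and are not taken from any actual health plan model.

```python
import sqlite3

# Hypothetical star schema: one fact table keyed to two dimensions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_member  (member_id INTEGER PRIMARY KEY, plan_type TEXT);
CREATE TABLE dim_provider(provider_id INTEGER PRIMARY KEY, specialty TEXT);
CREATE TABLE fact_claim  (
    claim_id    INTEGER PRIMARY KEY,
    member_id   INTEGER REFERENCES dim_member(member_id),
    provider_id INTEGER REFERENCES dim_provider(provider_id),
    icd10_code  TEXT,
    paid_amount REAL
);
""")
conn.executemany("INSERT INTO dim_member VALUES (?, ?)",
                 [(1, "HMO"), (2, "PPO")])
conn.executemany("INSERT INTO dim_provider VALUES (?, ?)",
                 [(10, "Cardiology")])
conn.executemany("INSERT INTO fact_claim VALUES (?, ?, ?, ?, ?)",
                 [(100, 1, 10, "I10", 250.0),
                  (101, 2, 10, "I10", 150.0)])

def paid_by_plan(conn):
    # Typical analytical query: one join from the fact to a dimension,
    # then an aggregate over a dimension attribute.
    rows = conn.execute("""
        SELECT m.plan_type, SUM(f.paid_amount)
        FROM fact_claim f JOIN dim_member m USING (member_id)
        GROUP BY m.plan_type ORDER BY m.plan_type
    """).fetchall()
    return dict(rows)
```

The point of the star shape is that analytical queries like `paid_by_plan` stay one join away from any dimension attribute, which is what makes HEDIS- or MLR-style reporting queries tractable at scale.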
Posted 1 month ago
12.0 years
0 Lacs
Hyderābād
On-site
Job Description:
We are hiring a Senior Data Engineer with deep expertise in Azure Databricks, Azure Data Lake, and Azure Synapse Analytics to join our high-performing team. The ideal candidate will have a proven track record in designing, building, and optimizing big data pipelines and architectures while leveraging their technical proficiency in cloud-based data engineering. This role requires a strategic thinker who can bridge the gap between raw data and actionable insights, enabling data-driven decision-making for large-scale enterprise initiatives. A strong foundation in distributed computing, ETL frameworks, and advanced data modeling is crucial. The individual will work closely with data architects, analysts, and business teams to deliver scalable and efficient data solutions.

Key Responsibilities:

Data Engineering & Architecture:
- Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark.
- Build and manage scalable data ingestion frameworks for batch and real-time data processing.
- Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads.
- Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.

Cloud-Based Data Solutions:
- Architect and implement modern data lakehouses, combining the best of data lakes and data warehouses.
- Leverage Azure services such as Data Factory, Event Hub, and Blob Storage for end-to-end data workflows.
- Ensure security, compliance, and governance of data through Azure Role-Based Access Control (RBAC) and Data Lake ACLs.

ETL/ELT Development:
- Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark.
- Perform data transformations, cleansing, and validation to prepare datasets for analysis.
- Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.
Performance Optimization:
- Optimize Spark jobs and SQL queries for large-scale data processing.
- Implement partitioning, caching, and indexing strategies to improve performance and scalability of big data workloads.
- Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.

Collaboration & Stakeholder Management:
- Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Participate in cross-functional design sessions to translate business needs into technical specifications.
- Provide thought leadership on best practices in data engineering and cloud computing.

Documentation & Knowledge Sharing:
- Create detailed documentation for data workflows, pipelines, and architectural decisions.
- Mentor junior team members and promote a culture of learning and innovation.

Required Qualifications:

Experience:
- 12+ years of experience in data engineering, big data, or cloud-based data solutions.
- Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.

Technical Skills:
- Strong hands-on experience with Apache Spark and distributed data processing frameworks.
- Advanced proficiency in Python and SQL for data manipulation and pipeline development.
- Deep understanding of data modeling for OLAP, OLTP, and dimensional data models.
- Experience with ETL/ELT tools such as Azure Data Factory or Informatica.
- Familiarity with Azure DevOps for CI/CD pipelines and version control.

Big Data Ecosystem:
- Familiarity with Delta Lake for managing big data in Azure.
- Experience with streaming data frameworks such as Kafka, Event Hub, or Spark Streaming.

Cloud Expertise:
- Strong understanding of Azure cloud architecture, including storage, compute, and networking.
- Knowledge of Azure security best practices, such as encryption and key management.

Preferred Skills (Nice to Have):
- Experience with machine learning pipelines and frameworks such as MLflow or Azure Machine Learning.
- Knowledge of data visualization tools such as Power BI for creating dashboards and reports.
- Familiarity with Terraform or ARM templates for infrastructure as code (IaC).
- Exposure to NoSQL databases such as Cosmos DB or MongoDB.
- Experience with data governance tools such as Azure Purview.

Weekly Hours: 40
Time Type: Regular
Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City- Adm: Argus Building, Sattva, Knowledge City

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
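The partitioning strategy called out under Performance Optimization can be illustrated in miniature with plain Python. This is only a sketch of the hash-partitioning idea that Spark applies at cluster scale, with invented sample records; the helper name and record shape are hypothetical.

```python
from collections import defaultdict

def hash_partition(records, key, num_partitions):
    """Assign each record to a partition by hashing its key: the same
    idea Spark uses to co-locate rows for joins and aggregations."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[hash(rec[key]) % num_partitions].append(rec)
    return dict(partitions)

events = [{"member": "a", "v": 1}, {"member": "b", "v": 2},
          {"member": "a", "v": 3}]
parts = hash_partition(events, "member", 4)
# Every record sharing a key lands in the same partition, so a per-key
# aggregation can run partition-locally, without a cross-partition shuffle.
```

The payoff in a real Spark job is the same as in this toy: choosing a partitioning key that matches the join or group-by key eliminates the shuffle, which is usually the dominant cost in large-scale workloads.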
Posted 1 month ago
0 years
6 - 8 Lacs
Calcutta
On-site
Join our Team

About this opportunity:
We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization.

What you will do:
- Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads.
- Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring.
- Design and implement cloud-based data lake and data warehouse architectures.
- Ensure high data quality, governance, security, and observability across data platforms.
- Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka.
- Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow.

The skills you bring:
- Languages: Python, SQL, Bash
- Data Tools: Apache Spark, Airflow, Kafka, dbt, Pandas
- Cloud Platforms: AWS (preferred), Azure, or GCP
- Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB)
- DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins)
- Data Modeling: OLAP/OLTP, Star/Snowflake schema, Data Vault

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do.
We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Kolkata
Req ID: 768921
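As a toy illustration of the ETL/ELT pipelines this role centers on, here is a self-contained Python sketch (standard library only) of an ELT flow: extract raw rows, load them unchanged, then run the transform as SQL inside the target store. The data, table name, and function names are invented for the example.

```python
import csv
import io
import sqlite3

raw_csv = "user,amount\nalice,10\nbob,5\nalice,7\n"  # stand-in for a source extract

def extract(text):
    # E: parse the source into records without reshaping them
    return list(csv.DictReader(io.StringIO(text)))

def load(conn, rows):
    # L: land the raw rows as-is in the warehouse
    conn.execute("CREATE TABLE IF NOT EXISTS raw_events (user TEXT, amount INTEGER)")
    conn.executemany("INSERT INTO raw_events VALUES (:user, :amount)", rows)

def transform(conn):
    # T: in ELT, the transform runs inside the store as SQL
    return dict(conn.execute(
        "SELECT user, SUM(amount) FROM raw_events GROUP BY user ORDER BY user"))

conn = sqlite3.connect(":memory:")
load(conn, extract(raw_csv))
totals = transform(conn)
```

The ordering (load before transform) is what distinguishes ELT from classic ETL: raw data stays queryable in the warehouse, and transformations can be re-run or versioned (the pattern tools like dbt build on).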
Posted 1 month ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Location: Remote
Duration: Long Term
Employment Type: Full-Time
Dual Employment Policy: Strictly not allowed
Client Environment: International (fluent English required)
Experience Required: Minimum 6+ years in Data Architecture

Role Summary:
We are seeking a visionary Snowflake Architect to lead the design and implementation of scalable, secure, and high-performance data architecture solutions. The ideal candidate will drive enterprise data strategy, enable advanced analytics, and serve as the go-to expert on Snowflake and Matillion across business units.

Key Responsibilities:
- Architect end-to-end data platforms using Snowflake, ensuring scalability, performance, and security.
- Design, build, and optimize complex data models and data pipelines to support BI and analytics.
- Translate evolving business requirements into technical architectures and data strategies.
- Collaborate with cross-functional teams across analytics, engineering, and product functions.
- Champion best practices in data governance, ELT/ETL design, and platform monitoring.
- Lead performance tuning, cost optimization, and data lifecycle strategies.
- Act as a technical mentor and thought leader for Snowflake and Matillion implementations.

Minimum Qualifications:
- 5+ years of experience as a Data Architect or similar senior role.
- Deep expertise in Snowflake, with Advanced Snowflake Architect Certification.
- Proficiency in Matillion, with Matillion certification.
- Strong understanding of data modeling concepts (OLAP/OLTP, Star/Snowflake schema, etc.).
- Hands-on experience with data analytics and reporting systems.
- Excellent communication and client-facing presentation skills.
- Strong command of English (spoken and written); international stakeholder collaboration required.
Technology & Skills:
- Primary Technology: Snowflake (expert level)
- ETL/ELT Tool: Matillion (certified proficiency)
- Data Skills: Data Modeling, Data Analytics, Data Engineering
- Soft Skills: Communication, Presentation, Stakeholder Management
- Tools & Languages (Good to Have): Python, SQL, dbt, AWS/Azure/GCP, Git, CI/CD

Apply now or refer someone you know: vinita.kumari@appsierra.in
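The star vs. snowflake schema distinction in the qualifications can be shown concretely: in a snowflake schema the dimensions themselves are normalized, so queries take an extra join. A minimal sketch using Python's standard-library sqlite3; all table names and data are invented, and this has nothing to do with the Snowflake platform itself beyond the shared terminology.

```python
import sqlite3

# Snowflake schema: dim_product does not carry category_name directly;
# it references a normalized dim_category table instead.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_product  (product_id INTEGER PRIMARY KEY,
                           category_id INTEGER REFERENCES dim_category(category_id));
CREATE TABLE fact_sales   (product_id INTEGER, qty INTEGER);
INSERT INTO dim_category VALUES (1, 'widgets');
INSERT INTO dim_product  VALUES (7, 1);
INSERT INTO fact_sales   VALUES (7, 3), (7, 4);
""")

# Compared with a star schema, the rollup needs one extra join
# (fact -> product -> category) to reach the category name.
qty_by_category = dict(conn.execute("""
    SELECT c.name, SUM(f.qty)
    FROM fact_sales f
    JOIN dim_product  p USING (product_id)
    JOIN dim_category c USING (category_id)
    GROUP BY c.name
"""))
```

The trade-off an architect weighs: snowflaking reduces dimension redundancy and update anomalies, at the cost of deeper join paths in every analytical query.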
Posted 1 month ago
7.0 - 9.0 years
14 - 18 Lacs
Mumbai, Navi Mumbai
Work from Office
"> Job ID: 40851 Global Finance Analyst Power BI - Analysis & Insight Lloyd s Register Location: - Mumbai, India What we re looking for Convert financial data into informative visual reports and dashboards that help inform decision making What we offer you The opportunity to work for an organization that has a strong sense of purpose, is value driven and helps colleagues to develop professionally and personally through our range of people development programmes. A Full-time permanent role. The role Build automated reports and dashboards with the help of Power BI and other reporting tools. Extract data from various sources to transform raw data into meaningful insights to support Senior leadership teams, Executive Leadership Teams and the FP&A leads. Develop models/reports, delivering the desired data visualisation and Business analytics results to support decision making. Support FP&A ad hoc analysis What you bring Qualified accountant (ACA or CIMA) and currently operating at a senior finance level in a global organisation Able to perform at the highest levels whilst also demonstrating the ability to be hands on when required. The appointee will measure their success by results and will have the resilience and maturity to manage internal relationships in an organisation going through rapid change. Experience of international multi-site and multi-currency organisations Experience in handling data preparation - collection (from various sources), organising, cleaning data to extract valuable Insights. Data modelling experience and understanding of different technologies such as OLAP, statistical analysis, computer science algorithms, databases etc Knowledge & Experience working with Business Intelligence tools and systems like SAP, Power BI, Tableau, etc. preferably complimented by associated skills such as SQL, Power Query, DAX, Python, R etc. 
- Experience of international multi-site commercial/operational activity.
- Ability to drill down and visualize data in the best possible way using charts, reports, or dashboards generated using Power BI.
- Ability to understand and assess complex and sometimes unfamiliar situations, visualise solutions, see them through to resolution, and work effectively within a matrix organisation.
- Ability to work successfully within a Finance Shared Service Centre model.
- Good attention to detail, with a keen eye for errors and flaws in the data to help LR work with the cleanest, most accurate data.
- Strong communication skills.

You are someone who:
- Is keen to take accountability and ownership for delivering customer needs.
- Can self-manage and prioritize tasks towards achieving goals.
- Is effective at solving problems, troubleshooting, and making timely decisions.
- Is flexible and eager to take initiative.
- Communicates in a structured way and can present technical ideas in user-friendly language.
- Displays a team spirit, particularly in a multicultural environment.
- Responds positively to learning opportunities and is comfortable stepping out of their comfort zone.

About us
We are a leading international technical professional service provider and a leader in classification, compliance, and consultancy services to the marine and offshore industry; a trusted advisor to our customers, helping to design, construct, and operate their assets to the highest levels of safety and performance. We are shaping the industry's future through the development of novel and innovative technology for the next generation of assets, while continuing to deliver solutions for our customers every day.

Lloyd's Register is wholly owned by the Lloyd's Register Foundation, a politically and financially independent global charity that aims to engineer a safer world through promoting safety and education.
For a thriving ocean economy, Lloyd's Register colleagues and Lloyd's Register Foundation work together to fund research, foster industry collaboration, and develop action-oriented solutions to make the world a safer place.

Want to apply?
Here at Lloyd's Register, we care, we share, and we do the right thing in every situation. It's ingrained in our culture and everything we do. We are committed, and continually strive, to lead with our values that empower and enable an inclusive environment conducive to your growth, development, and engagement. It doesn't matter who you are, what you have experienced, how you identify, how old you are, where you are from, what your beliefs are, or how your brain or body works; the diversity of our colleagues is fundamental to our futures and the changes we can make together. Our inclusive culture allows us to connect together authentically and to be courageous and bold. We don't just talk about our differences, we celebrate them!

We are committed to making all stages of our recruitment process accessible to all candidates. Please let us know if you need any assistance or reasonable adjustments throughout your application and we will do everything we possibly can to support you.

If you don't tick every box in these ads, please don't rule yourself out. We focus on hiring people who share our goal of working together for a safer, sustainable, thriving ocean economy. We care, we share, we do the right thing.

If you have further questions about this role, please contact us at careers@lr.org and we will respond to you as soon as possible.

Diversity and Inclusion at Lloyd's Register:
Together we are one Lloyd's Register, committed to developing an inclusive and safe workplace that embraces and celebrates diversity. We strive to ensure that all applicants to LR experience equality of opportunity and fair treatment, because we believe it is the right thing to do. We hope you do too.
As a Disability Confident Committed Employer, we have committed to:
- ensuring our recruitment process is inclusive and accessible
- communicating and promoting vacancies
- offering an interview to disabled people who meet the minimum criteria for the job
- anticipating and providing reasonable adjustments as required
- supporting any existing employee who acquires a disability or long-term health condition, enabling them to stay in work
- at least one activity that will make a difference for disabled people

Find out more about Disability Confident at: www.gov.uk/disability-confident

Copyright Lloyd's Register 2024. All rights reserved.

The Lloyd's Register Group comprises charities and non-charitable companies, with the latter supporting the charities in their main goal of enhancing the safety of life and property, at sea, on land, and in the air, for the benefit of the public and the environment ("Group entities").
Posted 1 month ago
10.0 - 15.0 years
50 - 75 Lacs
Chennai
Work from Office
About Team:
Walmart's Enterprise Business Services (EBS) is a powerhouse of several exceptional teams delivering world-class technology solutions and services, making a profound impact at every level of Walmart. As a key part of Walmart Global Tech, our teams set the bar for operational excellence and leverage emerging technology to support millions of customers, associates, and stakeholders worldwide. Each time an associate turns on their laptop, a customer makes a purchase, a new supplier is onboarded, the company closes the books, physical and legal risk is avoided, or we pay our associates consistently and accurately, that is EBS. Joining EBS means embarking on a journey of limitless growth, relentless innovation, and the chance to set new industry standards that shape the future of Walmart.

What you'll do:
- Guide the team in architectural decisions and best practices for building scalable applications.
- Drive design, development, implementation, and documentation.
- Build, test, and deploy cutting-edge solutions at scale, impacting associates of Walmart worldwide.
- Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.
- Engage with Product Management and Business to drive the agenda, set your priorities, and deliver awesome products.
- Drive the success of the implementation by applying technical skills to design and build enhanced processes and technical solutions in support of strategic initiatives.
- Work closely with the Architects and cross-functional teams, following established practices for the delivery of solutions meeting QCD (Quality, Cost & Delivery) within the established architectural guidelines.
- Work with senior leadership to chart out the future roadmap of the products.
- Participate in hiring and build teams, enabling them to be high-performing agile teams.
- Interact closely with business owners and technical teams, both within India and across the globe, on requirements.

What you'll bring:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field, with a minimum of 10+ years of experience in software design, development, and automated deployments.
- Hands-on experience building Java-based backend systems; experience working on cloud-based solutions is a must.
- Proficiency in Java, Spring Boot, Kafka, and Spark.
- Prior experience delivering highly scalable, large-scale data processing Java applications.
- Strong in high- and low-level system design.
- Experience designing data-intensive applications in an open stack.
- A good understanding of CS fundamentals, microservices, data structures, algorithms, and problem solving.
- Experience with CI/CD development environments/tools including, but not limited to, Git, Maven, and Jenkins.
- Strong at writing modular and testable code and test cases (unit, functional, and integration) using frameworks such as JUnit, Mockito, and MockMvc.
- Experience with microservices architecture.
- A good understanding of distributed concepts, common design principles, design patterns, and cloud-native development concepts.
- Hands-on experience with Spring Boot, concurrency, garbage collection, RESTful services, data caching services, and ORM tools.
- Experience working with relational databases and writing complex OLAP, OLTP, and SQL queries.
- Ability to propose multiple alternatives for development frameworks, libraries, and tools.
- Experience working with NoSQL databases such as Cosmos DB.
- Experience working with caching technology such as Redis, Memcached, or other related systems.
- Experience with event-based systems such as Kafka.
- Experience using monitoring and alerting tools such as Prometheus and Splunk, with excellent debugging and troubleshooting skills.
- Exposure to containerization tools such as Docker, Helm, and Kubernetes.
- Knowledge of public cloud platforms such as Azure and GCP will be an added advantage.
- An understanding of mainframe databases will be an added advantage.

About Walmart Global Tech
Flexible, hybrid work. Benefits. Belonging.

Equal Opportunity Employer
Walmart, Inc., is an Equal Opportunities Employer - By Choice. We believe we are best equipped to help our associates, customers, and the communities we serve live better when we really know them. That means understanding, respecting, and valuing unique styles, experiences, identities, ideas, and opinions, while being inclusive of all people.

Minimum Qualifications:
Option 1: Bachelor's degree in computer science, computer engineering, computer information systems, software engineering, or related area and 4 years' experience in software engineering or related area.
Option 2: 6 years' experience in software engineering or related area.

Preferred Qualifications:
Master's degree in Computer Science, Computer Engineering, Computer Information Systems, Software Engineering, or related area and 2 years' experience in software engineering or related area.

Primary Location:
Rmz Millenia Business Park, No 143, Campus 1B (1st-6th Floor), Dr. MGR Road (North Veeranam Salai), Perungudi, India
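Kafka itself is a distributed commit log. As a rough, in-memory illustration of the event-based pattern mentioned in the requirements (producers append to a topic log; each consumer group tracks its own read offset), here is a hypothetical Python sketch. The class and method names are invented and this is not Kafka's actual API.

```python
from collections import defaultdict

class MiniBroker:
    """Toy in-memory broker illustrating the Kafka-style pattern:
    the topic is an append-only log, and consumption just advances
    a per-group offset, so messages are never destroyed on read."""
    def __init__(self):
        self.topics = defaultdict(list)     # topic -> append-only log
        self.offsets = defaultdict(int)     # (topic, group) -> next offset

    def produce(self, topic, message):
        self.topics[topic].append(message)

    def consume(self, topic, group):
        off = self.offsets[(topic, group)]
        log = self.topics[topic]
        batch = log[off:]                   # everything this group hasn't seen
        self.offsets[(topic, group)] = len(log)
        return batch

broker = MiniBroker()
broker.produce("orders", {"id": 1})
broker.produce("orders", {"id": 2})
first = broker.consume("orders", "billing")   # both messages
second = broker.consume("orders", "billing")  # nothing new yet
```

Because the log is retained and offsets are per-group, a second group (say, "analytics") would independently read the same two messages from offset zero, which is the property that makes event-based systems good for fan-out.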
Posted 1 month ago
5.0 - 10.0 years
8 - 12 Lacs
Hyderabad, Pune
Work from Office
Sr Power BI Developer

Overall experience:
- 5+ years of experience in the MSBI product suite (Power BI and DAX).
- 5+ years of experience in data preparation and BI projects: understanding business requirements in a BI context and understanding the data model to transform raw data into meaningful insights in Power BI; has worked in SSIS.
- Experience in requirement analysis, design, and prototyping.
- Experience in building enterprise models using Power BI Desktop.
- Strong understanding of the Power BI application.
- Architect and develop data models, OLAP cubes, and reports utilizing Power BI, applying best practices to the development lifecycle.
- Documentation of source-to-target mappings, data dictionaries, and database design.
- Identify areas of improvement to optimize data flows.
- Good exposure to DAX queries in Power BI Desktop.
- Creation of Power BI dashboards, reports, and KPI scorecards; transforming manual reports; supporting Power BI dashboard deployment.
- Strong visualization, transformation, data analysis, and formatting skills.
- Connecting to data sources, importing data, and transforming data for business intelligence.
- Experience in publishing and scheduling Power BI reports.
- Installation and administration of Microsoft SQL Server.
- Support business development efforts (proposals and client presentations).
- Knowledge of EBS modules such as Finance, HCM, and Procurement will be an added advantage.
- Ability to thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success.
- Excellent leadership and interpersonal skills.
- Eager to contribute in a team-oriented environment.
- Strong prioritization and multi-tasking skills, with a track record of meeting deadlines.
- Ability to be creative and analytical in a problem-solving environment.
- Effective verbal and written communication skills.
- Adaptable to new environments, people, technologies, and processes.
- Ability to manage ambiguity and solve undefined problems.
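The OLAP cubes mentioned in the requirements pre-aggregate a measure over every combination of dimension values, including "all values" rollups. A rough Python sketch of that rollup idea follows, with invented sales rows; this is a conceptual illustration, not how Power BI implements its tabular models internally.

```python
from collections import defaultdict
from itertools import product

rows = [
    {"region": "East", "year": 2023, "sales": 100},
    {"region": "East", "year": 2024, "sales": 120},
    {"region": "West", "year": 2024, "sales": 80},
]

def cube(rows, dims, measure):
    """Materialize every rollup level: each row contributes to all
    combinations of (its dimension value, 'ALL') across the dims."""
    out = defaultdict(int)
    for r in rows:
        for key in product(*[(r[d], "ALL") for d in dims]):
            out[key] += r[measure]
    return dict(out)

c = cube(rows, ["region", "year"], "sales")
# c[("East", "ALL")] is the East total across years; c[("ALL", "ALL")]
# is the grand total, answerable without rescanning the base rows.
```

This is the core trade-off behind cubes and aggregation tables alike: pre-computing 2^n rollup levels trades storage and refresh time for instant slice-and-dice queries.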
Posted 1 month ago
3.0 - 7.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Hiring for Python programming and PySpark - Pan India
- Strong hands-on experience in Python programming and PySpark.
- Experience using AWS services (Redshift, Glue, EMR, S3, and Lambda).
- Experience working with Apache Spark and the Hadoop ecosystem.
- Experience in writing and optimizing SQL for data manipulation.
- Good exposure to scheduling tools; Airflow is preferable.
- Must have data warehouse experience with AWS Redshift or Hive.
- Experience in implementing security measures for data protection.
- Expertise in building and testing complex data pipelines for ETL processes (batch and near real time).
- Readable documentation of all the components being developed.
- Knowledge of database technologies for OLTP and OLAP workloads.
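The "batch and near real time" pipelines in the list above commonly rely on an incremental, high-water-mark pattern: each run picks up only records newer than the last mark, then advances the mark. A minimal Python sketch, with invented record shapes and function names:

```python
def incremental_batch(source, watermark):
    """Return only records newer than the watermark, plus the new
    watermark to persist for the next run (the high-water-mark pattern)."""
    batch = [r for r in source if r["ts"] > watermark]
    new_mark = max((r["ts"] for r in batch), default=watermark)
    return batch, new_mark

events = [{"ts": 1, "v": "a"}, {"ts": 2, "v": "b"}, {"ts": 3, "v": "c"}]

batch1, mark = incremental_batch(events, watermark=0)  # first run: all three
events.append({"ts": 4, "v": "d"})                     # new data arrives
batch2, mark = incremental_batch(events, mark)         # next run: only ts=4
```

In a real pipeline the watermark would be persisted (for instance in a control table), and an orchestrator such as Airflow would invoke each run on a schedule; shrinking the schedule interval is what moves the same pattern from batch toward near real time.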
Posted 1 month ago
6.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Job Role:
A Data Modeler designs and creates data structures to support business processes and analytics, ensuring data integrity and efficiency. They translate business requirements into technical data models, focusing on accuracy, scalability, and consistency.

Responsibilities:
- Designing and developing data models: creating conceptual, logical, and physical models to represent data in a structured way.
- Translating business needs: working with stakeholders to understand business requirements and translate them into actionable data structures.
- Ensuring data integrity: implementing data validation rules and constraints to maintain the accuracy and reliability of data.
- Optimizing data models: optimizing models for performance, scalability, and usability, ensuring data can be efficiently stored and retrieved.
- Collaborating with other teams: working with database administrators, data engineers, and business analysts to ensure data models align with business needs and technical requirements.
- Documenting data models: providing clear documentation of data structures and relationships, including entity-relationship diagrams and metadata.

Skills:
- Data modeling techniques: knowledge of various data modeling approaches, including normalization, denormalization, and dimensional modeling.
- Database technologies: understanding of relational databases, NoSQL databases, and other database systems.
- SQL: proficiency in writing SQL queries for database management and data manipulation.
- Data modeling tools: familiarity with tools such as PowerDesigner, ERwin, or Visio.
- Communication and collaboration: strong communication skills to work effectively with diverse teams and stakeholders.
- Problem-solving: ability to identify and resolve data model performance issues and ensure data accuracy.
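The normalization vs. denormalization choice listed under skills can be made concrete with a tiny sketch: splitting repeated customer attributes out of an order table, 3NF-style. The data and function name are invented for the example.

```python
# Denormalized: each order row repeats the customer's attributes.
denorm = [
    {"order_id": 1, "customer": "Acme", "city": "Pune", "total": 50},
    {"order_id": 2, "customer": "Acme", "city": "Pune", "total": 75},
]

def normalize(rows):
    """Split repeated customer attributes into their own table,
    leaving orders with just a foreign key (the customer name here)."""
    customers, orders = {}, []
    for r in rows:
        key = r["customer"]
        customers.setdefault(key, {"customer": key, "city": r["city"]})
        orders.append({"order_id": r["order_id"], "customer": key,
                       "total": r["total"]})
    return list(customers.values()), orders

custs, orders = normalize(denorm)
# One customer row instead of two: an update to Acme's city now
# touches a single place, eliminating the update anomaly.
```

Denormalizing is the reverse move, made deliberately for read-heavy analytical workloads where the join cost matters more than the update anomaly; a data modeler's job is largely deciding which side of that trade-off each table sits on.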
Posted 1 month ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Join our Team About this opportunity: We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization. What you will do: Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads. Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring. Design and implement cloud-based data lake and data warehouse architectures. Ensure high data quality, governance, security, and observability across data platforms. Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka. Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow. The skills you bring: Languages: Python, SQL, Bash Data Tools: Apache Spark, Airflow, Kafka, dbt, Pandas Cloud Platforms: AWS (preferred), Azure, or GCP Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB) DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins) Data Modeling: OLAP/OLTP, Star/Snowflake schema, Data Vault Why join Ericsson? At Ericsson, you´ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what´s possible. To build solutions never seen before to some of the world’s toughest problems. You´ll be challenged, but you won’t be alone. You´ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson, that's why we champion it in everything we do. 
We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Kolkata Req ID: 768921
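The batch workflow orchestration this Ericsson role describes (dependency-ordered ETL stages, normally expressed as an Airflow DAG) can be sketched in plain Python. The stage names, sample rows, and logic below are invented for illustration, not Ericsson's actual pipeline; the standard library's `graphlib` stands in for Airflow's scheduler.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical three-stage batch pipeline. Airflow would express the same
# dependencies as a DAG of operators; here a topological sort orders the stages.
results = {}

def extract():
    # stand-in for reading from a source system
    results["raw"] = [{"id": 1, "amount": "10"}, {"id": 2, "amount": "bad"}]

def transform():
    cleaned = []
    for row in results["raw"]:
        try:
            cleaned.append({"id": row["id"], "amount": int(row["amount"])})
        except ValueError:
            pass  # drop malformed rows; a real pipeline would quarantine them
    results["clean"] = cleaned

def load():
    # stand-in for writing to a warehouse table
    results["total"] = sum(r["amount"] for r in results["clean"])

tasks = {"extract": extract, "transform": transform, "load": load}
deps = {"transform": {"extract"}, "load": {"transform"}}  # load <- transform <- extract

for name in TopologicalSorter(deps).static_order():
    tasks[name]()
```

The same shape scales to real orchestration: each function becomes an Airflow task, and the `deps` mapping becomes the DAG's edges.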
Posted 1 month ago
8.0 - 18.0 years
14 - 18 Lacs
Hyderabad
Work from Office
Career Category: Information Systems

Job Description

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do

Let's do this. Let's change the world. Role Description: We are seeking an experienced Senior Manager, Data Engineering to lead and scale a strong team of data engineers. This role blends technical depth with strategic oversight and people leadership. The ideal candidate will oversee the execution of data engineering initiatives, collaborate with business analysts and multi-functional teams, manage resource capacity, and ensure delivery aligned to business priorities. In addition to technical competence, the candidate will be adept at managing agile operations and driving continuous improvement.

Roles & Responsibilities: Possesses strong rapid prototyping skills and can quickly translate concepts into working code. Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies. Design, develop, and implement robust data architectures and platforms to support business objectives. 
Oversee the development and optimization of data pipelines and data integration solutions. Establish and maintain data governance policies and standards to ensure data quality, security, and compliance. Architect and manage cloud-based data solutions, leveraging AWS or other preferred platforms. Lead and motivate a strong data engineering team to deliver exceptional results. Identify, analyze, and resolve complex data-related challenges. Collaborate closely with business collaborators to understand data requirements and translate them into technical solutions. Stay abreast of emerging data technologies and explore opportunities for innovation. Lead and manage a team of data engineers, ensuring appropriate workload distribution, goal alignment, and performance management. Work closely with business analysts and product collaborators to prioritize and align engineering output with business objectives.

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications: Master's degree and 8 to 10 years of experience in computer science and engineering preferred (other engineering fields are considered), OR Bachelor's degree and 10 to 14 years of experience in computer science and engineering preferred (other engineering fields are considered), OR Diploma and 14 to 18 years of experience in computer science and engineering preferred (other engineering fields are considered). Demonstrated proficiency in using cloud platforms (AWS, Azure, GCP) for data engineering solutions. Strong understanding of cloud architecture principles and cost optimization strategies. Proficient in Python, PySpark, and SQL. Hands-on experience with big data ETL performance tuning. Proven ability to lead and develop strong data engineering teams. Strong problem-solving, analytical, and critical thinking skills to address complex data challenges. Strong communication skills for collaborating with business and technical teams alike. 
Preferred Qualifications: Experienced with data modeling and performance tuning for both OLAP and OLTP databases. Experienced with Apache Spark and Apache Airflow. Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Experienced with AWS, GCP, or Azure cloud services.

Professional Certification: AWS Certified Data Engineer preferred. Databricks certification preferred.

Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. 
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
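The "big data ETL performance tuning" this Amgen role calls for often comes down to join strategy: when one side of a join is small, broadcasting it to every worker as a lookup table avoids shuffling the large side. A minimal plain-Python sketch of that idea, with invented table contents (in PySpark the same hint would be `broadcast(dim_df)`):

```python
# Conceptual sketch of a Spark broadcast join. The large "fact" side is
# streamed; the small "dimension" side is held in memory as a dict, the
# way Spark ships a broadcast table to each executor.
facts = [("c1", 100), ("c2", 250), ("c1", 50)]   # large fact rows (customer, amount)
dims = {"c1": "Gold", "c2": "Silver"}            # small dimension, "broadcast" as a dict

# hash lookup per fact row: no shuffle of the large side is needed
joined = [(cust, amount, dims[cust]) for cust, amount in facts if cust in dims]
```

The dict lookup is O(1) per fact row, which is why broadcast joins beat shuffle joins whenever the dimension fits in executor memory.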
Posted 1 month ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior – Palantir

Job Overview: Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience who displays strong analytical, problem-solving, programming, business-KPI understanding, and communication skills. They should be self-learners and detail-oriented team members who can consistently meet deadlines and work independently as needed. Candidates must be able to multi-task and work with a diverse group of stakeholders across the healthcare/life science/pharmaceutical domains.

Responsibilities and Duties

Technical: Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry. Detailed understanding and hands-on knowledge of Palantir solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder, etc.). Analysing data within Palantir to extract insights for easy interpretation and exploratory data analysis (e.g., Contour). Querying and programming skills: using programming languages, queries, or scripts (e.g., Python, SQL) to interact with the data and perform analyses. Understanding relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles. Distributed frameworks with automation: using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) together with external libraries (e.g., Pandas, NumPy) to automate processes and workflows within Palantir. API integration: integrating Palantir with other systems and applications using APIs for seamless data flow. 
Understanding of integration analysis, specification, and solution design for different scenarios (e.g., batch/real-time flow, incremental load, etc.). Optimize data pipelines and fine-tune Foundry configurations to enhance system performance and efficiency. Unit testing, issue identification, debugging and troubleshooting, and end-user documentation. Strong experience with data warehousing, data engineering, and data modelling problem statements. Knowledge of security principles, ensuring data privacy and security while working with sensitive information. Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics.

Non-Technical: Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business needs and innovation in data processes and solutions. Ensure compliance with policies for data privacy, security, and regulatory requirements. Provide training and support to end-users to maximize the effective use of Palantir Foundry. Self-driven learning of technologies adopted to meet organizational requirements. Work as part of a team or individually as an engineer in a highly collaborative fashion.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
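The Spark RDD-style operations this EY role names (map, filter, reduce over pipeline data) can be sketched with Python's built-ins and `functools`. The claim records below are invented; each line is annotated with the Spark call it mirrors:

```python
from functools import reduce

# Minimal RDD-style pipeline over invented healthcare-claim records,
# mirroring the Spark API calls (filter, map, reduce) the role describes.
records = [{"patient": "p1", "cost": 120.0},
           {"patient": "p2", "cost": -5.0},   # invalid row, filtered out
           {"patient": "p3", "cost": 300.0}]

valid = filter(lambda r: r["cost"] > 0, records)   # like rdd.filter(...)
costs = map(lambda r: r["cost"], valid)            # like rdd.map(...)
total = reduce(lambda a, b: a + b, costs, 0.0)     # like rdd.reduce(...)
```

In PySpark the same chain would run distributed and lazily; the transformation logic is identical.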
Posted 1 month ago
6.0 years
0 Lacs
Telangana
On-site
Overview: We are hiring a Senior Data Engineer (6 to 10 years) with deep expertise in Azure Databricks, Azure Data Lake, and Azure Synapse Analytics to join our high-performing team. The ideal candidate will have a proven track record in designing, building, and optimizing big data pipelines and architectures while leveraging their technical proficiency in cloud-based data engineering. This role requires a strategic thinker who can bridge the gap between raw data and actionable insights, enabling data-driven decision-making for large-scale enterprise initiatives. A strong foundation in distributed computing, ETL frameworks, and advanced data modeling is crucial. The individual will work closely with data architects, analysts, and business teams to deliver scalable and efficient data solutions.

Work Location: Hyderabad, Bangalore and Chennai
Work Mode: Work from Office (5 days)
Notice Period: Immediate to 15 days

Key Responsibilities:

Data Engineering & Architecture: Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark. Build and manage scalable data ingestion frameworks for batch and real-time data processing. Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads. Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.

Cloud-Based Data Solutions: Architect and implement modern data lakehouses combining the best of data lakes and data warehouses. Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows. Ensure security, compliance, and governance of data through Azure Role-Based Access Control (RBAC) and Data Lake ACLs.

ETL/ELT Development: Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark. 
Perform data transformations, cleansing, and validation to prepare datasets for analysis. Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.

Performance Optimization: Optimize Spark jobs and SQL queries for large-scale data processing. Implement partitioning, caching, and indexing strategies to improve performance and scalability of big data workloads. Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.

Collaboration & Stakeholder Management: Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions. Participate in cross-functional design sessions to translate business needs into technical specifications. Provide thought leadership on best practices in data engineering and cloud computing.

Documentation & Knowledge Sharing: Create detailed documentation for data workflows, pipelines, and architectural decisions. Mentor junior team members and promote a culture of learning and innovation.

Required Qualifications:

Experience: 7+ years of experience in data engineering, big data, or cloud-based data solutions. Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.

Technical Skills: Strong hands-on experience with Apache Spark and distributed data processing frameworks. Advanced proficiency in Python and SQL for data manipulation and pipeline development. Deep understanding of data modeling for OLAP, OLTP, and dimensional data models. Experience with ETL/ELT tools like Azure Data Factory or Informatica. Familiarity with Azure DevOps for CI/CD pipelines and version control.

Big Data Ecosystem: Familiarity with Delta Lake for managing big data in Azure. Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming.

Cloud Expertise: Strong understanding of Azure cloud architecture, including storage, compute, and networking. 
Knowledge of Azure security best practices, such as encryption and key management.

Preferred Skills (Nice to Have): Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning. Knowledge of data visualization tools such as Power BI for creating dashboards and reports. Familiarity with Terraform or ARM templates for infrastructure as code (IaC). Exposure to NoSQL databases like Cosmos DB or MongoDB. Experience with data governance tools like Azure Purview.
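The partitioning strategy this role emphasizes rests on one idea: group rows by a partition key on write so queries can prune to only the partitions they need. A toy in-memory sketch with invented dates and keys (in Databricks this would be `df.write.partitionBy("date")` over Parquet or Delta files):

```python
from collections import defaultdict

# Toy illustration of date-partitioned storage and partition pruning.
# Each dict entry stands in for a directory like .../date=2024-01-01/part-0.parquet.
rows = [("2024-01-01", "a", 10), ("2024-01-01", "b", 20), ("2024-01-02", "c", 30)]

partitions = defaultdict(list)
for day, key, value in rows:
    partitions[day].append((key, value))   # write side: group by partition key

def read_day(day):
    # read side: prune to a single partition instead of scanning every row
    return partitions.get(day, [])
```

On real object storage this pruning is what turns a full-table scan into a read of one small directory, which is where most of the cost and latency savings come from.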
Posted 1 month ago
12.0 years
0 Lacs
Hyderābād
On-site
Welcome to Warner Bros. Discovery… the stuff dreams are made of. Who We Are… When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands, are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what’s next… From brilliant creatives, to technology trailblazers, across the globe, WBD offers career defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best selves. Here you are supported, here you are celebrated, here you can thrive. Manager, Analytics Engineering – Hyderabad, India. About Warner Bros. Discovery: Warner Bros. Discovery, a premier global media and entertainment company, offers audiences the world's most differentiated and complete portfolio of content, brands and franchises across television, film, streaming and gaming. The new company combines Warner Media’s premium entertainment, sports and news assets with Discovery's leading non-fiction and international entertainment and sports businesses. For more information, please visit www.wbd.com. Role Overview As a Manager-Analytics Engineering at Warner Bros. Discovery, you will play a pivotal role in driving the data and analytics strategy across various business data domains, including marketing, commerce, finance, customer, and content. Reporting to the VP of Content Lifecycle Analytics, you will lead a team of skilled data and analytics engineers to build and maintain state-of-the-art analytics solutions. Your work will support Warner Bros. Discovery's mission to deliver innovative and data-driven insights that empower global & regional teams across the company. 
In this role, you will understand data relationships across domains, contribute to the design of reusable semantic models, and effectively communicate data findings and insights to non-technical stakeholders through storytelling, presentations, and reports. Collaboration is key, as you will work closely with cross-functional teams, including data scientists, data engineers, business analysts, and domain experts, to understand business needs and align data efforts with organizational goals. Staying updated with the latest analytics tools, platforms, and technologies will be essential to your success. You will lead by example and teach best practices by demonstrating your own technical competency with these languages, tools, and technology platforms. You will also be responsible for ensuring data privacy, governance, and cloud cost management. As a leader, you will promote a culture of experimentation and data-driven innovation, inspiring and motivating your team through internal and external presentations and other speaking opportunities. You will also play a key role in hiring, mentoring, and coaching engineers, helping to build an analytics & engineering team that prioritizes empathy, diversity, and inclusion. Responsibilities: Lead and mentor a team of analytics engineers, ensuring productivity, focus, and motivation in a dynamic environment. Design, review and develop analytical solutions by integrating data from multiple sources, including databases, APIs, and other sources. Build and maintain data pipelines to generate insights and support various business functions. Implement data validation and quality checks to ensure data integrity. Perform exploratory data analysis (EDA) to understand data distributions and relationships. Utilize analytical tools and techniques to uncover correlations, trends, variations, and outliers to gain a comprehensive understanding of the data your team works with. 
Employ data mining techniques to identify patterns or leverage data visualization to turn data into easy-to-understand visual formats like charts and graphs. Communicate data findings and insights to non-technical stakeholders through storytelling, presentations, and reports. Collaborate with cross-functional teams, including data scientists, data engineers, business analysts, and domain experts, to understand business needs and align data efforts with organizational goals with a focus on addressing customer pain points. Stay updated with the latest analytics tools, platforms, and technologies, such as Python, Spark, and Looker. Give and receive feedback to and from leadership, peers, and direct reports to promote positive development and growth. Deliver facts and decisions with empathy and transparency. Ensure data privacy, governance, and cost management. Promote a culture of experimentation and data-driven innovation. Requirements: Bachelor’s degree in computer science or a similar discipline. 12+ years of experience in data engineering, data science and analytics engineering. 2+ years of experience in engineering management, leading teams of data and analytics engineers. Expertise in analytical tools and frameworks, such as Looker, Tableau, or Power BI. Experience in AI-driven data analytics with cloud platforms, preferably AWS and Databricks. Proficiency in data modelling using OLAP databases, such as Snowflake and Databricks. Strong programming skills in SQL, Python, and Python-based data manipulation and visualization libraries. Experience with orchestration frameworks, such as Airflow. Familiarity with big data frameworks, such as Spark, and ML libraries, such as scikit-learn. Excellent data analytical and communication skills. Ability to work in a fast-paced, high-pressure, agile environment. Strong interpersonal, communication, and presentation skills. Ability to learn and teach new languages and frameworks. 
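One standard technique for the "variations and outliers" work in the exploratory analysis described above is the interquartile-range (IQR) rule: flag values beyond 1.5 IQR of the quartiles. A stdlib-only sketch with made-up sample values:

```python
import statistics

# IQR-based outlier check, a common first pass in EDA.
# The sample values are invented; 95 is the planted anomaly.
data = [10, 12, 11, 13, 12, 11, 95]

q1, _, q3 = statistics.quantiles(data, n=4)   # quartiles (default "exclusive" method)
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in data if x < low or x > high]
```

The 1.5 multiplier is a convention, not a law; teams often tune it (or switch to z-scores or robust MAD-based bounds) depending on how heavy-tailed the metric is.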
How We Get Things Done… This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview. Championing Inclusion at WBD Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
Posted 1 month ago
7.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction

A career in IBM Software means you’ll be part of a team that transforms our customer’s challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world’s leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career. We are seeking a skilled back-end developer to join our IBM Software team. As part of our team, you will be responsible for developing and maintaining high-quality software products, working with a variety of technologies and programming languages. IBM’s product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

IBM Planning Analytics® is an enterprise financial planning software platform used by a significant number of Global 500 companies. IBM Planning Analytics® provides a real-time approach to consolidating, viewing, and editing enormous volumes of multidimensional data. At the heart of the IBM Planning Analytics solution is TM1® Server, a patented, 64-bit, in-memory functional database server that can perform real-time complex calculations and aggregations over massive data spaces while allowing concurrent data editing. The IBM TM1 Server development team is a dynamic and forward-thinking team, and we are looking for a Senior Software Developer with significant experience in designing and developing enterprise-scale software products to join us.

Your Role and Responsibilities

You, the ideal candidate, are expected to have strong technical, critical thinking and communication skills. You are creative and are not afraid of bringing forward ideas and running with them. 
If you are already product focused, are excited by new technological developments that help users solve their problems, and enjoy teamwork with people across the globe, then you will be at home with our team. As a key member of our dynamic team, you will play a vital role in crafting exceptional software experiences. Your responsibilities will encompass the design and implementation of innovative features, fine-tuning and sustaining existing code for optimal performance, and guaranteeing top-notch quality through rigorous testing and debugging. Collaboration is at the heart of what we do, and you’ll be working closely with fellow developers, designers, and product managers to ensure our software aligns seamlessly with user expectations.

Preferred Education: Master's Degree

Required Technical and Professional Expertise: 7+ years of developing high-performance, highly scalable C/C++ applications. Multi-threaded programming, high-performance data structures and algorithms. Experience developing and debugging software across multiple platforms including Microsoft Windows and Linux. Experience with Agile software development.

Preferred Technical and Professional Experience: Degree in Computer Science, Engineering, or equivalent professional experience. In addition to the required skills, knowledge of MDX, OLAP technologies, and multidimensional modeling is a plus.
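The multidimensional aggregation a TM1-style engine performs can be illustrated at toy scale: cells keyed by dimension coordinates, with a consolidation (rollup) summing over one dimension. This is a conceptual sketch only, in Python rather than the C/C++ of the actual server, and all cell values and dimension names are invented:

```python
# A toy in-memory "cube": cells keyed by (product, region, month).
# A real OLAP engine does this over billions of cells with concurrent edits;
# the consolidation logic, at its core, is the same rollup.
cells = {
    ("widget", "emea", "jan"): 100,
    ("widget", "apac", "jan"): 150,
    ("gadget", "emea", "jan"): 70,
}

def rollup_region(product, month):
    # consolidate all region children into a total for (product, month)
    return sum(v for (p, _region, m), v in cells.items()
               if p == product and m == month)
```

Production engines avoid this linear scan with indexed storage per dimension and cached consolidations; the sketch shows only what "aggregation over a multidimensional space" means.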
Posted 1 month ago
3.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior – Palantir

Job Overview: Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience who displays strong analytical, problem-solving, programming, business-KPI understanding, and communication skills. They should be self-learners and detail-oriented team members who can consistently meet deadlines and work independently as needed. Candidates must be able to multi-task and work with a diverse group of stakeholders across the healthcare/life science/pharmaceutical domains.

Responsibilities and Duties

Technical: Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry. Detailed understanding and hands-on knowledge of Palantir solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder, etc.). Analysing data within Palantir to extract insights for easy interpretation and exploratory data analysis (e.g., Contour). Querying and programming skills: using programming languages, queries, or scripts (e.g., Python, SQL) to interact with the data and perform analyses. Understanding relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles. Distributed frameworks with automation: using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) together with external libraries (e.g., Pandas, NumPy) to automate processes and workflows within Palantir. API integration: integrating Palantir with other systems and applications using APIs for seamless data flow. 
Understanding of integration analysis, specification, and solution design for different scenarios (e.g., batch/real-time flow, incremental load, etc.). Optimize data pipelines and fine-tune Foundry configurations to enhance system performance and efficiency. Unit testing, issue identification, debugging and troubleshooting, and end-user documentation. Strong experience with data warehousing, data engineering, and data modelling problem statements. Knowledge of security principles, ensuring data privacy and security while working with sensitive information. Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics.

Non-Technical: Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business needs and innovation in data processes and solutions. Ensure compliance with policies for data privacy, security, and regulatory requirements. Provide training and support to end-users to maximize the effective use of Palantir Foundry. Self-driven learning of technologies adopted to meet organizational requirements. Work as part of a team or individually as an engineer in a highly collaborative fashion.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
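The "incremental load" scenario this posting mentions is commonly implemented with a high-water mark: each run picks up only rows newer than the last processed timestamp, then advances the mark. A minimal sketch with invented source rows and an in-memory watermark (a real pipeline would persist it between runs):

```python
# Watermark-based incremental load. ISO-8601 date strings compare
# correctly as plain strings, so no datetime parsing is needed here.
source = [(1, "2024-01-01"), (2, "2024-01-02"), (3, "2024-01-03")]
watermark = "2024-01-01"   # high-water mark saved from the previous run

# extract only rows strictly newer than the watermark
new_rows = [row for row in source if row[1] > watermark]

if new_rows:
    # advance the mark only after a successful load, so a failed run retries
    watermark = max(ts for _id, ts in new_rows)
```

The strict `>` comparison assumes timestamps are unique per row; with coarser timestamps, pipelines typically use `>=` plus deduplication to avoid dropping late rows that share the watermark value.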
Posted 1 month ago
0 years
6 - 8 Lacs
Calcutta
On-site
Kolkata, West Bengal, India | Job ID 768921

Join our Team

About this opportunity: We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization.

What you will do: Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads. Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring. Design and implement cloud-based data lake and data warehouse architectures. Ensure high data quality, governance, security, and observability across data platforms. Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka. Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow.

The skills you bring:
Languages: Python, SQL, Bash
Data Tools: Apache Spark, Airflow, Kafka, dbt, Pandas
Cloud Platforms: AWS (preferred), Azure, or GCP
Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB)
DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins)
Data Modeling: OLAP/OLTP, Star/Snowflake schema, Data Vault

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?
Posted 1 month ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The BI/Dashboard Developer works closely with cross-functional teams to transform raw data into meaningful insights through dashboards, reports, and visualizations. This role contributes to ongoing analytics initiatives, optimizes the performance of reporting tools, and supports scalable data strategies. It involves both development and maintenance of data systems in a fast-paced, project-driven environment, with a focus on integrating real-time data, BIM platforms, and advanced visualization tools.

Collaborate with project stakeholders to gather and understand business requirements and translate them into technical specifications. Design, develop, and deploy end-to-end BI solutions including reports, dashboards, and data models. Maintain, monitor, and support data analytics platforms (e.g., Power BI, MicroStrategy). Work closely with BI developers and data engineers to build scalable and optimized data storage tools. Conduct unit testing and troubleshooting to ensure system performance, data integrity, and optimal load times. Evaluate and improve existing BI systems and propose innovative enhancements. Develop and execute advanced database queries and conduct in-depth analysis. Create meaningful data visualizations and analytical reports for project and business use. Maintain up-to-date technical documentation and a data dictionary for internal and external use.

Minimum Requirements: Bachelor’s or Master’s degree in Engineering, Computer Science, IT, or equivalent experience. 3+ years of hands-on experience in business intelligence, data analytics, or IT management. Advanced expertise in Power BI (DAX, Power Query, Dataflows, etc.). Experience with Python, C#, and React is preferred. Proficiency in one or more of the following: SQL, R, Python, or SAS. In-depth knowledge of the Microsoft BI Stack (SSIS, SSRS), ETL frameworks, and data warehouse design. Familiarity with Oracle BI, SQL Server, OLAP systems, and data modelling. 
Fluent in English with strong communication and collaboration skills. Flexible to work in rotational shifts. Job Location: Hyderabad
Preferred Skills
Experience with BIM tools or BIM Server and knowledge of BIMQL.
Experience working with real-time dashboards, WebSocket integrations, or Digital Twin platforms.
Familiarity with messaging systems such as RabbitMQ, MQTT, or ZeroMQ.
Exposure to Docker, AWS deployments, or CI/CD pipelines.
Understanding of JavaScript-based visualization tools such as D3.js, SVG, or TypeScript.
Microsoft certifications in BI tools or ETL technologies (e.g., SSIS, Azure Data Factory).
Exposure to construction tech, 3D data analytics, or Unreal Engine visualizations.
Involvement in technical blogging, documentation, or knowledge-sharing communities.
What We Offer
Projects that integrate data analytics, construction tech, and real-time visualizations.
Collaboration with Unreal/Digital Twin development teams.
A flexible and innovative work environment.
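The "advanced database queries" responsibility above typically means aggregate queries that feed dashboard tiles. A minimal sketch using Python's stdlib sqlite3 module (the table, columns, and figures are invented for illustration; production work would target SQL Server or Oracle BI as listed in the requirements):

```python
import sqlite3

# Hypothetical schema: table and column names are illustrative,
# not taken from the posting.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE project_hours (project TEXT, month TEXT, hours REAL)")
conn.executemany(
    "INSERT INTO project_hours VALUES (?, ?, ?)",
    [("Alpha", "2024-01", 120.0), ("Alpha", "2024-02", 80.0),
     ("Beta", "2024-01", 60.0), ("Beta", "2024-02", 90.0)],
)

# The kind of aggregate a dashboard tile is built on:
# total hours per project, sorted descending.
rows = conn.execute(
    "SELECT project, SUM(hours) AS total FROM project_hours "
    "GROUP BY project ORDER BY total DESC"
).fetchall()
print(rows)  # [('Alpha', 200.0), ('Beta', 150.0)]
```

The same GROUP BY shape scales up to the SSIS/SSRS and OLAP work the posting describes; only the engine changes.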
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary: We are seeking a seasoned and visionary Head of Database Administration (HOD - DBA) to lead and manage our database architecture and administration function. The ideal candidate will bring deep technical expertise in database operations, replication, disaster recovery, performance tuning, and big data integration, along with strong leadership capabilities to guide a growing DBA team in a complex, high-performance environment.
Key Responsibilities:
• Implement and oversee replication, sharding, and backup drills.
• Develop and maintain disaster recovery plans with regular testing.
• Optimize performance through indexing, query tuning, and resource allocation.
• Perform real-time monitoring and health checks for all databases.
• Create, review, and optimize complex NoSQL queries.
• Lead database migration projects with minimal downtime.
• Administer databases on Linux environments and AWS cloud (RDS, EC2, S3, etc.).
• Use Python scripting for automation and custom DBA tools.
• Manage integration with Big Data systems, Data Lakes, Data Marts, and Data Warehouses.
• Design and manage database architecture for OLTP and OLAP systems.
• Collaborate with DevOps, engineering, and analytics teams.
Qualifications:
• Strong experience with replication, sharding, backup & disaster recovery.
• Expertise in database performance tuning, architecture, and query optimization.
• Proficiency in MongoDB, PostgreSQL, MySQL, or similar databases.
• Hands-on with Python scripting for automation.
• Experience in Linux-based systems and AWS services.
• Solid understanding of OLTP, OLAP, Data Lakes, Data Marts, and Data Warehouses.
• Strong analytical, debugging, and leadership skills.
Preferred:
• Experience with NoSQL and Big Data tools (Hadoop, Spark, Kafka).
• Certifications in AWS, Linux, or leading database technologies.
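The "backup drills" responsibility above comes down to one discipline: take the backup, then verify it before declaring success. A toy sketch against in-memory SQLite using Python's stdlib backup API (real drills would target MongoDB/PostgreSQL replicas and AWS snapshots; the table and figures here are made up):

```python
import sqlite3

# Illustrative only: a backup "drill" against an in-memory SQLite database.
# Production drills involve far more, but the verify-after-backup step
# is the same idea.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
src.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 250.0)])
src.commit()

# Take the backup, then verify row counts match before declaring success.
dst = sqlite3.connect(":memory:")
src.backup(dst)
src_count = src.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
dst_count = dst.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
assert src_count == dst_count == 2
```

An unverified backup is the classic disaster-recovery failure mode; the drill exists to catch it before an outage does.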
Posted 1 month ago
0.0 years
0 Lacs
Kolkata, West Bengal
On-site
Kolkata, West Bengal, India
Job ID 768921
Join our Team
About this opportunity: We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization.
What you will do:
Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads.
Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring.
Design and implement cloud-based data lake and data warehouse architectures.
Ensure high data quality, governance, security, and observability across data platforms.
Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka.
Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow.
The skills you bring:
Languages: Python, SQL, Bash
Data Tools: Apache Spark, Airflow, Kafka, dbt, Pandas
Cloud Platforms: AWS (preferred), Azure, or GCP
Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB)
DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins)
Data Modeling: OLAP/OLTP, Star/Snowflake schema, Data Vault
Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?
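The ETL/ELT pipeline work described above always has the same extract-transform-load shape, whatever the scale. A deliberately tiny pure-Python sketch of that shape, including a data-quality step (real pipelines in this role would run on Spark with Airflow orchestration; the CSV data and column names are invented):

```python
import csv
import io

# A minimal batch ETL sketch. The stage separation is the point;
# all names and data here are illustrative.
RAW = "user_id,country,minutes\n1,IN,30\n2,SE,45\n2,SE,45\n3,IN,bad\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Data-quality step: drop duplicates and rows that fail validation.
    seen, clean = set(), []
    for r in rows:
        key = tuple(r.values())
        if key in seen or not r["minutes"].isdigit():
            continue
        seen.add(key)
        clean.append({**r, "minutes": int(r["minutes"])})
    return clean

def load(rows):
    # Aggregate into a tiny "warehouse" table: minutes per country.
    out = {}
    for r in rows:
        out[r["country"]] = out.get(r["country"], 0) + r["minutes"]
    return out

warehouse = load(transform(extract(RAW)))
print(warehouse)  # {'IN': 30, 'SE': 45}
```

Keeping each stage a pure function over its input is what makes the same design testable locally and portable to Spark later.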
Posted 1 month ago
5.0 - 10.0 years
40 - 45 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Notice: Immediate joiners only.
Design, develop, and maintain SQL Server Analysis Services (SSAS) models.
Create and manage OLAP cubes to support business intelligence reporting.
Develop and implement multidimensional and tabular data models.
Optimize the performance of SSAS solutions for efficient query processing.
Integrate data from various sources into SQL Server databases and SSAS models.
Knowledge of AWS S3 and SQL Server PolyBase is preferred.
Location: Bangalore, Pune, Gurgaon, Noida, Hyderabad
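For readers unfamiliar with what an OLAP cube buys you: it pre-aggregates a measure across every combination of dimensions, so a query at any grouping level is a cheap lookup. A toy Python sketch of that idea (SSAS does this at scale with far more machinery; the fact rows and dimension names are invented):

```python
from collections import defaultdict
from itertools import combinations

# Invented fact table with two dimensions and one measure.
facts = [
    {"region": "North", "product": "A", "sales": 10},
    {"region": "North", "product": "B", "sales": 5},
    {"region": "South", "product": "A", "sales": 7},
]
dims = ("region", "product")

# Pre-aggregate the measure at every grouping level,
# including the grand total (the empty grouping).
cube = defaultdict(int)
for row in facts:
    for r in range(len(dims) + 1):
        for group in combinations(dims, r):
            key = tuple((d, row[d]) for d in group)
            cube[key] += row["sales"]

print(cube[()])                      # grand total: 22
print(cube[(("region", "North"),)])  # North total: 15
```

Multidimensional SSAS models materialize exactly this kind of rollup lattice, which is why slicing and dicing stays fast at query time.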
Posted 1 month ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Welcome to Warner Bros. Discovery… the stuff dreams are made of. Who We Are… When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what’s next… From brilliant creatives to technology trailblazers across the globe, WBD offers career-defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best selves. Here you are supported, here you are celebrated, here you can thrive.
Manager, Analytics Engineering – Hyderabad, India.
About Warner Bros. Discovery
Warner Bros. Discovery, a premier global media and entertainment company, offers audiences the world's most differentiated and complete portfolio of content, brands and franchises across television, film, streaming and gaming. The new company combines WarnerMedia’s premium entertainment, sports and news assets with Discovery's leading non-fiction and international entertainment and sports businesses. For more information, please visit www.wbd.com.
Role Overview
As a Manager, Analytics Engineering at Warner Bros. Discovery, you will play a pivotal role in driving the data and analytics strategy across various business data domains, including marketing, commerce, finance, customer, and content. Reporting to the VP of Content Lifecycle Analytics, you will lead a team of skilled data and analytics engineers to build and maintain state-of-the-art analytics solutions. Your work will support Warner Bros. Discovery's mission to deliver innovative and data-driven insights that empower global & regional teams across the company.
In this role, you will understand data relationships across domains, contribute to the design of reusable semantic models, and effectively communicate data findings and insights to non-technical stakeholders through storytelling, presentations, and reports. Collaboration is key, as you will work closely with cross-functional teams, including data scientists, data engineers, business analysts, and domain experts, to understand business needs and align data efforts with organizational goals. Staying updated with the latest analytics tools, platforms, and technologies will be essential to your success. You will lead by example and teach best practices by demonstrating your own technical competency with these languages, tools, and technology platforms. You will also be responsible for ensuring data privacy, governance, and cloud cost management. As a leader, you will promote a culture of experimentation and data-driven innovation, inspiring and motivating your team through internal and external presentations and other speaking opportunities. You will also play a key role in hiring, mentoring, and coaching engineers, helping to build an analytics & engineering team that prioritizes empathy, diversity, and inclusion. Responsibilities Lead and mentor a team of analytics engineers, ensuring productivity, focus, and motivation in a dynamic environment. Design, review and develop analytical solutions by integrating data from multiple sources, including databases, APIs, and other sources. Build and maintain data pipelines to generate insights and support various business functions. Implement data validation and quality checks to ensure data integrity. Perform exploratory data analysis (EDA) to understand data distributions and relationships. Utilize analytical tools and techniques to uncover correlations, trends, variations, and outliers to gain a comprehensive understanding of the data your team works with. 
Employ data mining techniques to identify patterns or leverage data visualization to turn data into easy-to-understand visual formats like charts and graphs. Communicate data findings and insights to non-technical stakeholders through storytelling, presentations, and reports. Collaborate with cross-functional teams, including data scientists, data engineers, business analysts, and domain experts, to understand business needs and align data efforts with organizational goals with a focus on addressing customer pain points. Stay updated with the latest analytics tools, platforms, and technologies, such as Python, Spark, and Looker. Give and receive feedback to and from leadership, peers, and direct reports to promote positive development and growth. Deliver facts and decisions with empathy and transparency. Ensure data privacy, governance, and cost management. Promote a culture of experimentation and data-driven innovation. Requirements Bachelor’s degree in computer science or a similar discipline. 12+ years of experience in data engineering, data science and analytics engineering. 2+ years of experience in engineering management, leading teams of data and analytics engineers. Expertise in analytical tools and frameworks, such as Looker, Tableau, or Power BI. Experience in AI-driven data analytics with cloud platforms, preferably AWS and Databricks. Proficiency in data modelling using OLAP databases, such as Snowflake and Databricks. Strong programming skills in SQL, Python, and Python-based data manipulation and visualization libraries. Experience with orchestration frameworks, such as Airflow. Familiarity with big data frameworks, such as Spark, and ML libraries, such as scikit-learn. Excellent data analytical and communication skills. Ability to work in a fast-paced, high-pressure, agile environment. Strong interpersonal, communication, and presentation skills. Ability to learn and teach new languages and frameworks. 
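The exploratory analysis described above (distributions, variations, outliers) can start as simply as a z-score screen. A toy sketch with Python's stdlib statistics module (the data and the 2-sigma threshold are illustrative choices, not WBD practice):

```python
import statistics

# Invented session data: one suspicious value among typical ones.
watch_minutes = [42, 45, 44, 43, 46, 41, 44, 180]

# Flag points more than 2 standard deviations from the mean.
mean = statistics.fmean(watch_minutes)
stdev = statistics.stdev(watch_minutes)
outliers = [x for x in watch_minutes if abs(x - mean) / stdev > 2]
print(outliers)  # [180]
```

In practice an analytics engineer would run this over a warehouse table with Spark or SQL, but deciding what counts as an outlier is the same judgment call either way.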
How We Get Things Done… This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview. Championing Inclusion at WBD Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
Posted 1 month ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary... What you'll do...
About Team: Walmart’s Enterprise Business Services (EBS) is a powerhouse of several exceptional teams delivering world-class technology solutions and services making a profound impact at every level of Walmart. As a key part of Walmart Global Tech, our teams set the bar for operational excellence and leverage emerging technology to support millions of customers, associates, and stakeholders worldwide. Each time an associate turns on their laptop, a customer makes a purchase, a new supplier is onboarded, the company closes the books, physical and legal risk is avoided, or we pay our associates consistently and accurately, that is EBS. Joining EBS means embarking on a journey of limitless growth, relentless innovation, and the chance to set new industry standards that shape the future of Walmart.
What you'll do:
Guide the team in architectural decisions and best practices for building scalable applications.
Drive design, development, implementation and documentation.
Build, test and deploy cutting-edge solutions at scale, impacting associates of Walmart worldwide.
Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.
Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome products.
Drive the success of the implementation by applying technical skills to design and build enhanced processes and technical solutions in support of strategic initiatives.
Work closely with the Architects and cross-functional teams and follow established practices for the delivery of solutions meeting QCD (Quality, Cost & Delivery) within the established architectural guidelines.
Work with senior leadership to chart out the future roadmap of the products.
Participate in hiring and build teams, enabling them to be high-performing agile teams.
Interact closely on requirements with business owners and technical teams, both within India and across the globe.
What you'll bring:
Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field, with 10+ years of experience in software design, development and automated deployments.
Hands-on experience building Java-based backend systems and working in cloud-based solutions is a must.
Proficiency in Java, Spring Boot, Kafka and Spark.
Prior experience delivering highly scalable, large-scale data processing Java applications.
Strong high- and low-level system design skills; experience designing data-intensive applications in an open stack.
A good understanding of CS fundamentals, microservices, data structures, algorithms and problem solving.
Experience with CI/CD development environments/tools including, but not limited to, Git, Maven and Jenkins.
Strong at writing modular and testable code and test cases (unit, functional and integration) using frameworks like JUnit, Mockito, and MockMvc.
Experience with microservices architecture and a good understanding of distributed concepts, common design principles, design patterns and cloud-native development concepts.
Hands-on experience with Spring Boot, concurrency, garbage collection, RESTful services, data caching services and ORM tools.
Experience working with relational databases and writing complex OLAP, OLTP and SQL queries.
Ability to propose multiple alternatives for development frameworks, libraries, and tools.
Experience working with NoSQL databases like Cosmos DB.
Experience with caching technologies like Redis, Memcached or other related systems.
Experience with event-based systems like Kafka.
Experience utilizing monitoring and alerting tools like Prometheus, Splunk, and other related systems, with excellent debugging and troubleshooting skills.
Exposure to containerization tools like Docker, Helm and Kubernetes.
Knowledge of public cloud platforms like Azure and GCP will be an added advantage, as will an understanding of mainframe databases.
About Walmart Global Tech
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That’s what we do at Walmart Global Tech. We’re a team of software engineers, data scientists, cybersecurity experts and service professionals within the world’s leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail.
Flexible, hybrid work
We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.
Benefits
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.
Belonging
We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual.
Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is—and feels—included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we’re able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.
Equal Opportunity Employer
Walmart, Inc., is an Equal Opportunities Employer – By Choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. That means understanding, respecting and valuing unique styles, experiences, identities, ideas and opinions – while being inclusive of all people.
Minimum Qualifications...
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Option 1: Bachelor's degree in computer science, computer engineering, computer information systems, software engineering, or related area and 4 years’ experience in software engineering or related area.
Option 2: 6 years’ experience in software engineering or related area.
Preferred Qualifications...
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
Master’s degree in Computer Science, Computer Engineering, Computer Information Systems, Software Engineering, or related area and 2 years' experience in software engineering or related area.
Primary Location...
RMZ Millenia Business Park, No 143, Campus 1B (1st-6th Floor), Dr. MGR Road (North Veeranam Salai), Perungudi, India R-2212352
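The posting above asks for experience with data caching services such as Redis or Memcached. The core idea those services provide, values that expire after a time-to-live, can be sketched in a few lines (Python rather than Java for brevity; the class and keys are invented, and real deployments would use an external cache, not an in-process dict):

```python
import time

# Tiny in-process TTL cache sketch of the expiring-values idea.
class TTLCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for deterministic testing
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires = item
        if self.clock() >= expires:
            del self._store[key]    # lazily evict on read
            return default
        return value

# Deterministic demo with a fake clock instead of sleeping.
now = [0.0]
cache = TTLCache(ttl_seconds=5, clock=lambda: now[0])
cache.set("sku:42", {"price": 199})
hit = cache.get("sku:42")
now[0] = 6.0                        # advance past the TTL
miss = cache.get("sku:42")
print(hit, miss)  # {'price': 199} None
```

Injecting the clock is the design choice worth copying: it makes expiry behavior testable without real waits, which matters for the kind of test coverage (JUnit/Mockito on the Java side) the posting emphasizes.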
Posted 1 month ago
10.0 - 14.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Skill required: Delivery - Actionable Insights Designation: I&F Decision Sci Practitioner Assoc Mgr Qualifications: Any Graduation Years of Experience: 10 to 14 years
About Accenture
Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com
What would you do?
Data & AI: We are seeking a highly skilled Insights Consultant in Marketing Analytics to join our team. The ideal candidate will have strong marketing domain knowledge, expertise in OLAP and SQL, and a knack for deriving insights and storytelling. This role involves performing pre- and post-campaign analysis, analyzing campaign effectiveness, and providing recurring insights and analysis across various marketing channels.
What are we looking for?
Insights Consultant, Marketing Analytics
End-to-End Analysis and Insights
Benchmarking and Success Metrics
Analytics Measurement Plans
Proven experience in marketing analytics, with a strong understanding of OLAP and SQL
Expertise in deriving insights and storytelling from data
Strong analytical and problem-solving skills
Excellent communication and collaboration skills
Ability to provide actionable insights and prescriptive recommendations
Roles and Responsibilities:
In this role you are required to analyze and solve moderately complex problems. You will typically create new solutions, leveraging and, where needed, adapting existing methods and procedures. The role requires an understanding of the strategic direction set by senior management as it relates to team goals. Primary upward interaction is with your direct supervisor or team leads; you will generally interact with peers and/or management levels at a client and/or within Accenture. You should require minimal guidance when determining methods and procedures on new assignments. Decisions often impact your own team and occasionally impact other teams. You would manage small-to-medium-sized teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
Qualification: Any Graduation
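Pre- and post-campaign analysis of the kind described above usually starts with a conversion-rate lift calculation. A minimal sketch (all figures invented; a real analysis would pull these counts with SQL from an OLAP store and add significance testing):

```python
# Invented pre/post campaign windows.
pre = {"visitors": 20000, "conversions": 400}    # baseline window
post = {"visitors": 22000, "conversions": 550}   # campaign window

# Conversion rate per window, then relative lift.
pre_rate = pre["conversions"] / pre["visitors"]      # 0.02
post_rate = post["conversions"] / post["visitors"]   # 0.025
lift_pct = (post_rate - pre_rate) / pre_rate * 100

print(f"{pre_rate:.3f} -> {post_rate:.3f} ({lift_pct:.0f}% lift)")
```

The storytelling part of the role is then explaining why the lift happened and whether it clears the success metrics agreed in the measurement plan.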
Posted 1 month ago
6.0 years
0 Lacs
Telangana, India
On-site
Overview
We are hiring a Senior Data Engineer (6 to 10 years) with deep expertise in Azure Databricks, Azure Data Lake, and Azure Synapse Analytics to join our high-performing team. The ideal candidate will have a proven track record in designing, building, and optimizing big data pipelines and architectures while leveraging their technical proficiency in cloud-based data engineering. This role requires a strategic thinker who can bridge the gap between raw data and actionable insights, enabling data-driven decision-making for large-scale enterprise initiatives. A strong foundation in distributed computing, ETL frameworks, and advanced data modeling is crucial. The individual will work closely with data architects, analysts, and business teams to deliver scalable and efficient data solutions.
Work Location: Hyderabad, Bangalore and Chennai
Work Mode: Work from Office (5 days)
Notice Period: Immediate to 15 days
Key Responsibilities:
Data Engineering & Architecture: Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark. Build and manage scalable data ingestion frameworks for batch and real-time data processing. Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads. Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.
Cloud-Based Data Solutions: Architect and implement modern data lakehouses combining the best of data lakes and data warehouses. Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows. Ensure security, compliance, and governance of data through Azure Role-Based Access Control (RBAC) and Data Lake ACLs.
ETL/ELT Development: Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark.
Perform data transformations, cleansing, and validation to prepare datasets for analysis. Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.
Performance Optimization: Optimize Spark jobs and SQL queries for large-scale data processing. Implement partitioning, caching, and indexing strategies to improve performance and scalability of big data workloads. Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.
Collaboration & Stakeholder Management: Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions. Participate in cross-functional design sessions to translate business needs into technical specifications. Provide thought leadership on best practices in data engineering and cloud computing.
Documentation & Knowledge Sharing: Create detailed documentation for data workflows, pipelines, and architectural decisions. Mentor junior team members and promote a culture of learning and innovation.
Required Qualifications:
Experience: 7+ years of experience in data engineering, big data, or cloud-based data solutions. Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.
Technical Skills: Strong hands-on experience with Apache Spark and distributed data processing frameworks. Advanced proficiency in Python and SQL for data manipulation and pipeline development. Deep understanding of data modeling for OLAP, OLTP, and dimensional data models. Experience with ETL/ELT tools like Azure Data Factory or Informatica. Familiarity with Azure DevOps for CI/CD pipelines and version control.
Big Data Ecosystem: Familiarity with Delta Lake for managing big data in Azure. Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming.
Cloud Expertise: Strong understanding of Azure cloud architecture, including storage, compute, and networking.
Knowledge of Azure security best practices, such as encryption and key management.
Preferred Skills (Nice to Have):
Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning. Knowledge of data visualization tools such as Power BI for creating dashboards and reports. Familiarity with Terraform or ARM templates for infrastructure as code (IaC). Exposure to NoSQL databases like Cosmos DB or MongoDB. Experience with data governance tools like Azure Purview.
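The "transformations, cleansing, and validation" step this posting describes is often a table of per-column rules applied to every record, splitting input into clean and quarantined sets. A plain-Python sketch (in Databricks this would typically be Spark DataFrame logic; the rules and records here are invented):

```python
# Invented per-column validation rules.
RULES = {
    "event_id": lambda v: isinstance(v, int) and v > 0,
    "country": lambda v: isinstance(v, str) and len(v) == 2,
}

def validate(records, rules):
    # Split records into clean rows and quarantined rows with reasons.
    good, bad = [], []
    for rec in records:
        failed = [col for col, ok in rules.items() if not ok(rec.get(col))]
        (bad if failed else good).append((rec, failed))
    return [r for r, _ in good], bad

records = [
    {"event_id": 1, "country": "IN"},
    {"event_id": -5, "country": "IN"},    # fails event_id rule
    {"event_id": 2, "country": "India"},  # fails country rule
]
good, bad = validate(records, RULES)
print(len(good), len(bad))  # 1 2
```

Quarantining failures with their reasons, rather than silently dropping them, is what makes the pipeline's data-quality behavior observable downstream.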
Posted 1 month ago