5.0 years
1 - 9 Lacs
Gurgaon
On-site
Job Description: Senior Data Developer I
Location: Gurugram, India
Employment Type: Full-Time
Experience Level: Mid to Senior-Level
Department: Data & Analytics / IT

Job Summary: We are seeking an experienced Data Developer with expertise in Microsoft Fabric, Azure Synapse Analytics, and Databricks, and strong SQL development skills. The ideal candidate will work on end-to-end data solutions supporting analytics initiatives across clinical, regulatory, and commercial domains in the Life Sciences industry. Familiarity with Azure DevOps and relevant certifications such as DP-700 and the Databricks Certified Data Engineer Associate/Professional are preferred. Power BI knowledge is highly preferable to support integrated analytics and reporting.

Key Responsibilities:
Design, develop, and maintain scalable and secure data pipelines using Microsoft Fabric, Azure Synapse Analytics, and Azure Databricks to support critical business processes.
Develop curated datasets for clinical, regulatory, and commercial analytics using SQL and PySpark.
Create and support dashboards and reports using Power BI (highly preferred).
Collaborate with cross-functional stakeholders to understand data needs and translate them into technical solutions.
Work closely with ERP teams such as Salesforce.com and SAP S/4HANA to integrate and transform business-critical data into analytics-ready formats.
Partner with Data Scientists to enable advanced analytics and machine learning initiatives by providing clean, reliable, and well-structured data.
Ensure data quality, lineage, and documentation in accordance with GxP, 21 CFR Part 11, and industry best practices.
Use Azure DevOps to manage code repositories, track tasks, and support agile delivery processes.
Monitor, troubleshoot, and optimize data workflows for reliability and performance.
Contribute to the design of scalable, compliant data models and architecture.

Required Qualifications:
Bachelor's or Master's degree in Computer Science.
5+ years of experience in data development or data engineering roles.
Hands-on experience with:
Microsoft Fabric (Lakehouse, Pipelines, Dataflows)
Azure Synapse Analytics (Dedicated/Serverless SQL Pools, Pipelines)
Azure Data Factory and Apache Spark
Azure Databricks (Notebooks, Delta Lake, Unity Catalog)
SQL (complex queries, optimization, transformation logic)
Familiarity with Azure DevOps (Repos, Pipelines, Boards).
Understanding of data governance, security, and compliance in the Life Sciences domain.

Certifications (Preferred):
Microsoft Certified: DP-700 – Fabric Analytics Engineer Associate
Databricks Certified Data Engineer Associate or Professional

Preferred Skills:
Strong knowledge of Power BI (highly preferred)
Familiarity with HIPAA, GxP, and 21 CFR Part 11 compliance
Experience working with ERP data from Salesforce.com and SAP S/4HANA
Exposure to clinical trial, regulatory submission, or quality management data
Good understanding of AI and ML concepts
Experience working with APIs
Excellent communication skills and the ability to collaborate across global teams

Location - Gurugram
Mode - Hybrid
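To make the curated-dataset responsibility above concrete, here is a minimal, hedged PySpark sketch of the kind of work described, on a Fabric/Databricks-style lakehouse. All schema, table, and column names (raw_clinical.visits, curated.visit_summary, etc.) are hypothetical placeholders, not taken from the posting.

```python
# Hedged sketch: building a curated analytics dataset with PySpark.
# Table and column names are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curated-clinical").getOrCreate()

raw = spark.read.table("raw_clinical.visits")  # assumed Delta source table

curated = (
    raw.filter(F.col("visit_date").isNotNull())          # basic quality gate
       .withColumn("visit_year", F.year("visit_date"))
       .groupBy("site_id", "visit_year")
       .agg(F.countDistinct("patient_id").alias("patients"),
            F.count("*").alias("visits"))
)

# Delta is the default table format on Fabric and Databricks lakehouses.
curated.write.format("delta").mode("overwrite").saveAsTable("curated.visit_summary")
```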
Posted 1 day ago
7.0 years
21 Lacs
Gurgaon
On-site
Job Title: Data Engineer
Location: Gurgaon (Onsite)
Experience: 7+ Years
Employment Type: Contract (6 months)

Job Description: We are seeking a highly experienced Data Engineer with a strong background in building scalable data solutions using Azure/AWS Databricks, Scala/Python, and Big Data technologies. The ideal candidate should have a solid understanding of data pipeline design, optimization, and cloud-based deployments.

Key Responsibilities:
Design and build data pipelines and architectures on Azure or AWS
Optimize Spark queries and Databricks workloads
Manage structured/unstructured data using best practices
Implement scalable ETL processes with tools like Airflow, Kafka, and Flume
Collaborate with cross-functional teams to understand and deliver data solutions

Required Skills:
Azure/AWS Databricks
Python / Scala / PySpark
SQL, RDBMS
Hive / HBase / Impala / Parquet
Kafka, Flume, Sqoop, Airflow
Strong troubleshooting and performance tuning in Spark

Qualifications:
Bachelor's degree in IT, Computer Science, Software Engineering, or a related field
Minimum 7 years of experience in Data Engineering/Analytics

Apply Now if you're looking to join a dynamic team working with cutting-edge data technologies!
Job Type: Contractual / Temporary
Contract length: 6 months
Pay: From ₹180,000.00 per month
Work Location: In person
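Two of the most common levers behind "Optimize Spark queries and Databricks workloads" are partition pruning and broadcast joins. The sketch below is illustrative only; the paths, partition column, and join key are assumptions.

```python
# Hedged sketch of two routine Spark optimizations: partition pruning
# and a broadcast join. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spark-tuning-demo").getOrCreate()

# Partition pruning: filtering on the partition column lets Spark read
# only the matching directories instead of the full dataset.
events = (spark.read.parquet("/data/events")        # partitioned by event_date
               .filter(F.col("event_date") == "2025-01-01"))

dim = spark.read.parquet("/data/dim_customer")      # small lookup table

# Broadcast join: ships the small table to every executor, avoiding a
# shuffle of the large fact table.
joined = events.join(F.broadcast(dim), "customer_id")
joined.groupBy("segment").count().show()
```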
Posted 1 day ago
12.0 years
0 Lacs
India
On-site
Solution Architect - Data Engineering
Starting: ASAP
Duration: Long Term
Key Skills: Databricks Solution Architect, Spark, Unity Catalog, CI/CD, Terraform, Azure, Python SDK for Databricks

Skills:
12+ years' experience in data engineering, data platforms & analytics
Completed Data Engineering Professional certification & required classes
Minimum 8 projects delivered with hands-on development experience on Databricks
Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with deep expertise in at least one
Deep experience with distributed computing on Spark, with knowledge of Spark runtime internals
Familiarity with CI/CD for production deployments
Working knowledge of MLOps
Current knowledge across the breadth of Databricks product and platform features
Familiarity with optimizations for performance and scalability
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview
Leading AI-driven Global Supply Chain Solutions Software Product Company and one of Glassdoor's "Best Places to Work". Seeking an astute individual who has a strong technical foundation, the ability to be hands-on with the broader engineering team as part of the development/deployment cycle, and deep knowledge of industry best practices, along with Data Science and Machine Learning experience and the ability to apply it while working with both the platform and product teams.

Scope
Our machine learning platform ingests data in real time, processes information from millions of retail items to serve deep learning models, and produces billions of predictions on a daily basis. The Blue Yonder Data Science and Machine Learning team works closely with sales, product, and engineering teams to design and implement the next generation of retail solutions. Data Science team members are tasked with turning both small, sparse data and massive data into actionable insights with measurable improvements to the customer bottom line.

Our Current Technical Environment
Software: Python 3.*
Frameworks/Others: TensorFlow, PyTorch, BigQuery/Snowflake, Apache Beam, Kubeflow, Apache Flink/Dataflow, Kubernetes, Kafka, Pub/Sub, TFX, Apache Spark, and Flask.
Application Architecture: Scalable, resilient, reactive, event-driven, secure multi-tenant microservices architecture.
Cloud: Azure

What We Are Looking For
Bachelor's Degree in Computer Science or related fields; graduate degree preferred.
Solid understanding of data science and deep learning foundations.
Proficient in Python programming with a solid understanding of data structures.
Experience working with most of the following frameworks and libraries: Pandas, NumPy, Keras, TensorFlow, Jupyter, Matplotlib, etc.
Expertise in any database query language, SQL preferred.
Familiarity with Big Data tech such as Snowflake, Apache Beam/Spark/Flink, Databricks, etc.
Solid experience with any of the major cloud platforms, preferably Azure and/or GCP (Google Cloud Platform).
Reasonable knowledge of modern software development tools and respective best practices, such as Git, Jenkins, Docker, Jira, etc.
Familiarity with deep learning, NLP, reinforcement learning, combinatorial optimization, etc.
Provable experience guiding junior data scientists in official or unofficial settings.
Desired knowledge of Kafka, Redis, Cassandra, etc.

What You Will Do
As a Senior Data Scientist, you serve as a specialist who supports the team with the following responsibilities.
Independently, or alongside junior scientists, implement machine learning models by:
Procuring data from platform, client, and public data sources.
Implementing data enrichment and cleansing routines.
Implementing features, preparing modelling data sets, feature selection, etc.
Evaluating candidate models, selecting, and reporting on test performance of the final one.
Ensuring proper runtime deployment of models.
Implementing runtime monitoring of model inputs and performance in order to ensure continued model stability.
Work with product, sales, and engineering teams to help shape the final solution.
Use data to understand patterns, come up with and test hypotheses; iterate.
Help prepare sales materials, estimate hardware requirements, etc.
Attend client meetings, online and onsite, to discuss new and current functionality.

Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers.
Does your heart beat like ours? Find out here: Core Values All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 1 day ago
0 years
4 - 4 Lacs
Chennai
On-site
10-15 yrs experience in managing and leading data initiatives for the healthcare sector
Exposure to technical solutions for data platforms built on Azure/Databricks
Program Management Office lead, coordinating, planning, and reporting progress and milestones across the workstreams of the program
Responsible for working with IHH on governance meetings and collating risks and mitigation mechanisms
Oversee overall project delivery, including resourcing and delivery to schedule, and provide content inputs / steering guidance to the delivery team from a business and experience perspective
Address escalated risks and mitigations

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 day ago
3.0 - 5.0 years
3 - 8 Lacs
Chennai
On-site
3 - 5 Years
5 Openings
Bangalore, Chennai, Kochi, Trivandrum

Role description
Role Proficiency: Independently develops error-free code with high-quality validation of applications, guides other developers, and assists Lead 1 – Software Engineering.

Outcomes:
Understand and provide input to the application/feature/component designs; develop the same in accordance with user stories/requirements.
Code, debug, test, document, and communicate product/component/features at development stages.
Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components.
Optimise efficiency, cost, and quality by identifying opportunities for automation/process improvements and agile delivery models.
Mentor Developer 1 – Software Engineering and Developer 2 – Software Engineering to effectively perform in their roles.
Identify problem patterns and improve the technical design of the application/system.
Proactively identify issues/defects/flaws in module/requirement implementation.
Assist Lead 1 – Software Engineering on technical design; review activities and begin demonstrating Lead 1 capabilities in making technical decisions.

Measures of Outcomes:
Adherence to engineering process and standards (coding standards)
Adherence to schedule / timelines
Adherence to SLAs where applicable
Number of defects post delivery
Number of non-compliance issues
Reduction of recurrence of known defects
Quick turnaround of production bugs
Meeting the defined productivity standards for the project
Number of reusable components created
Completion of applicable technical/domain certifications
Completion of all mandatory training requirements

Outputs Expected:
Code: Develop code independently for the above.
Configure: Implement and monitor the configuration process.
Test: Create and review unit test cases, scenarios, and execution.
Domain relevance: Develop features and components with a good understanding of the business problem being addressed for the client.
Manage Project: Manage module-level activities.
Manage Defects: Perform defect RCA and mitigation.
Estimate: Estimate time, effort, and resource dependence for one's own work and others' work, including modules.
Document: Create documentation for own work as well as perform peer review of documentation of others' work.
Manage knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities.
Status Reporting: Report status of tasks assigned; comply with project-related reporting standards/processes.
Release: Execute the release process.
Design: LLD for multiple components.
Mentoring: Mentor juniors on the team. Set FAST goals and provide feedback on mentees' FAST goals.

Skill Examples:
Explain and communicate the design/development to the customer.
Perform and evaluate test results against product specifications.
Develop user interfaces, business software components, and embedded software components.
Manage and guarantee high levels of cohesion and quality.
Use data models.
Estimate effort and resources required for developing/debugging features/components.
Perform and evaluate tests in the customer or target environment.
Team player.
Good written and verbal communication abilities.
Proactively ask for help and offer help.

Knowledge Examples:
Appropriate software programs / modules
Technical designing
Programming languages
DBMS
Operating Systems and software platforms
Integrated development environment (IDE)
Agile methods
Knowledge of customer domain and sub-domain where the problem is solved

Additional Comments: Design, develop, and optimize large-scale data pipelines using Azure Databricks (Apache Spark). Build and maintain ETL/ELT workflows and batch/streaming data pipelines. Collaborate with data analysts, scientists, and business teams to support their data needs. Write efficient PySpark or Scala code for data transformations and performance tuning. Implement CI/CD pipelines for data workflows using Azure DevOps or similar tools. Monitor and troubleshoot data pipelines and jobs in production. Ensure data quality, governance, and security as per organizational standards.

Skills: Databricks, ADB, ETL

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
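The Additional Comments above describe incremental batch/ELT work on Azure Databricks; a common pattern for that is a Delta Lake MERGE upsert. The following is a hedged sketch assuming a Databricks runtime (where the delta package is available); the table names and merge key are hypothetical.

```python
# Hedged sketch of an incremental ELT upsert on Databricks using Delta MERGE.
# "staging.orders_batch", "prod.orders", and "order_id" are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("elt-upsert").getOrCreate()

updates = spark.read.table("staging.orders_batch")   # new batch of rows
target = DeltaTable.forName(spark, "prod.orders")

# MERGE keeps the target current without rewriting the whole table.
(target.alias("t")
       .merge(updates.alias("s"), "t.order_id = s.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```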
Posted 1 day ago
3.0 years
11 - 24 Lacs
Chennai
On-site
Job Description
Data Engineer, Chennai
We're seeking a highly motivated Data Engineer to join our agile, cross-functional team and drive end-to-end data pipeline development in a cloud-native, big data ecosystem. You'll leverage ETL/ELT best practices and data lakehouse paradigms to deliver scalable solutions. Proficiency in SQL, Python, Spark, and modern data orchestration tools (e.g., Airflow) is essential, along with experience in CI/CD, DevOps, and containerized environments like Docker and Kubernetes. This is your opportunity to make an impact in a fast-paced, data-driven culture.

Responsibilities
Responsible for data pipeline development and maintenance.
Contribute to development, maintenance, testing strategy, design discussions, and operations of the team.
Participate in all aspects of agile software development including design, implementation, and deployment.
Responsible for the end-to-end lifecycle of new product features / components.
Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design.
Work with a small, cross-functional team on products and features to drive growth.
Learn new tools, languages, workflows, and philosophies to grow.
Research and suggest new technologies for boosting the product.
Have an impact on product development by making important technical decisions, influencing the system architecture, development practices and more.

Qualifications
Excellent team player with strong communication skills.
B.Sc. in Computer Science or similar.
3-5 years of experience in Data Pipeline development.
3-5 years of experience in PySpark / Databricks.
3-5 years of experience in Python / Airflow.
Knowledge of OOP and design patterns.
Knowledge of server-side technologies such as Java and Spring.
Experience with Docker containers, Kubernetes, and cloud environments.
Expertise in testing methodologies (unit testing, TDD, mocking).
Fluent with large-scale SQL databases.
Good problem-solving and analysis abilities.

Requirements - Advantage
Experience with Azure cloud services.
Experience with Agile development methodologies.
Experience with Git.

Additional Information
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee-Assistance-Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
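Since the NIQ role above calls out unit testing, TDD, and mocking, here is a hedged pytest sketch of that workflow. The function under test (normalize_prices) is a made-up example invented for illustration, not anything from the posting.

```python
# Hedged sketch of the unit-testing/TDD style the posting mentions.
# normalize_prices is a hypothetical function created for this example.
import pytest

def normalize_prices(rows, fx_rate):
    """Convert (sku, price) pairs to a dict of prices in the target currency."""
    if fx_rate <= 0:
        raise ValueError("fx_rate must be positive")
    return {sku: round(price * fx_rate, 2) for sku, price in rows}

def test_normalize_prices_converts_and_rounds():
    result = normalize_prices([("A", 10.0), ("B", 2.5)], 0.9)
    assert result == pytest.approx({"A": 9.0, "B": 2.25})

def test_normalize_prices_rejects_bad_rate():
    # TDD-style edge case: invalid exchange rates should fail loudly.
    with pytest.raises(ValueError):
        normalize_prices([("A", 1.0)], 0)
```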
Posted 1 day ago
2.0 years
3 - 7 Lacs
Chennai
On-site
Program Analyst
Job #: req33911
Organization: World Bank
Sector: General Services
Grade: GE
Term Duration: 2 years 0 months
Recruitment Type: Local Recruitment
Location: Chennai, India
Required Language(s): English
Preferred Language(s):
Closing Date: 8/5/2025 (MM/DD/YYYY) at 11:59pm UTC

Description
Do you want to build a truly worthwhile career? Working at the World Bank Group provides a unique opportunity for you to help our clients solve their greatest development challenges. The World Bank Group is one of the largest sources of funding and knowledge for developing countries; a unique global partnership of five institutions dedicated to ending extreme poverty, increasing shared prosperity and promoting sustainable development. With 189 member countries and more than 120 offices worldwide, we work with public and private sector partners, investing in groundbreaking projects and using data, research, and technology to develop solutions to the most urgent global challenges. For more information, visit www.worldbank.org.

Global Corporate Solutions
Reporting to the Managing Director and World Bank Group Chief Administrative Officer, Global Corporate Solutions (GCS) brings together the functions of Corporate Security, Corporate Real Estate, and Corporate Services.

About the Unit
The Corporate Services (GCSCS) division within GCS provides services to the WBG in the areas of Travel and Visa Services; Food and Conference Services; Staff Services, including Commuter Services, Child Care, and Fitness Center; Mail and Shipping Services; the Art Program; Translation and Interpretation; Customer Service; Design and Publications; Printing; and Interactive Media. GCSCS also provides administrative oversight to the WBG Family Network and 1818 Society and is responsible for setting the policy framework and service standards, and for delivering services through a combination of staff and vendors at WBG headquarters (HQ) in Washington, DC and in Country Offices. To achieve its purpose, GCSCS is structured into three main units: (i) Travel and Client Services (GCSTC), (ii) Business Services (GCSBA), and (iii) Innovation and Client Solutions (GCSIS). GCSIS includes the GCS Service Desk and Processing & Analytics team in Chennai, India.

Job Summary
We are seeking a skilled and motivated Program Analyst to join our team in Chennai, India. Reporting to the Senior Program Manager, GCSIS, this role will support a small but dynamic data analytics team dedicated to supporting GCS and its clients. The ideal candidate will have expertise in analyzing large datasets, transforming complex data, and building insightful dashboards. This role will focus on data analysis, automation, and dashboard development using Power BI, Tableau, Power Automate, and other AI/ML tools. Strong analytical skills, attention to detail, and the ability to effectively communicate findings are essential for success in this position. If you're a data-driven professional with a passion for problem-solving, we'd love to hear from you!

Key Responsibilities
Collaborate with stakeholders to understand reporting and analytical needs, translating business requirements into technical solutions.
Extract, clean, and prepare data from multiple sources for analysis and reporting using Power Query and Tableau Prep Builder.
Ensure data integrity, accuracy, and consistency through effective governance and quality checks.
Analyze large datasets to identify trends, extract insights, and support business decision-making.
Design, develop, and maintain interactive dashboards and reports using Power BI and Tableau.
Present insights to stakeholders through clear and compelling visualizations and reports.
Create and maintain documentation for dashboards, data sources, and automation workflows.
Optimize and streamline reporting processes for efficiency and scalability.
Automate workflows using Power Automate, enhancing efficiency across data-related processes.
Work with Natural Language Processing (NLP) models to analyze unstructured text data.
Build custom business applications using Power Apps.
Apply Generative AI tools to support data analysis, automation, and reporting.
Stay up-to-date with industry trends and best practices in data analytics and business intelligence.

Selection Criteria
Bachelor's degree in Data Science, Computer Science, Business Analytics, Statistics, or a related field.
3+ years of experience in data analysis, reporting, or business intelligence roles.
Proven expertise in building dashboards and reports in Power BI and Tableau.
Proficiency in M Code and DAX for data modeling and calculations.
Advanced Excel skills, including Power Query, Power Pivot, complex formulas, and VBA (preferred).
Hands-on experience with Power Automate or Zapier for workflow automation.
Understanding of Generative AI and its applications in data analysis.
Excellent problem-solving, analytical, and critical-thinking skills.
Meticulous attention to detail and accuracy.
Ability to work independently and take initiative.
High level of personal motivation and eagerness to learn.
Strong organizational skills with the ability to manage multiple tasks and deadlines.
Excellent oral and written communication skills, capable of conveying complex issues concisely.
Willingness to work in a schedule that overlaps with Washington, DC business hours.

Preferred Qualifications
Background in business intelligence, finance, or operations analytics.
Experience with Power Apps.
Experience applying Natural Language Processing (NLP) techniques to analyze unstructured text data (e.g., survey responses, emails, customer reviews).
Familiarity with data warehousing platforms (e.g., Azure, AWS, Databricks, Snowflake).
Proficiency with Python and R for data analysis and modeling.
Knowledge of machine learning and AI-driven analytics.
Prior experience working with cross-functional teams in a corporate setting.

General Competencies
Initiative - Volunteers to undertake tasks that stretch his or her capability.
Flexibility - Demonstrates the ability to adapt plans, tasks and resources to meet objectives and/or work with others.
Analytical Research and Writing - Able to undertake analytical research on topics requested by others. Shares findings with colleagues and other relevant parties.
Client Orientation - Takes personal responsibility and accountability for timely response to client queries, requests or needs, working to remove obstacles that may impede execution or overall success.
Drive for Results - Takes personal ownership and accountability to meet deadlines and achieve agreed-upon results, and has the personal organization to do so.
Teamwork, Collaboration and Inclusion - Collaborates with other team members and colleagues across units and contributes productively to the work and outputs of the team, as well as partners' or stakeholders', demonstrating respect for different points of view.
Growth-mindset and Agile - Proactively action-oriented and outcome-focused.
Proposes and implements strategic and practical adjustments to ensure optimal client service and maximum impact.
Knowledge, Learning and Communication - Actively seeks the knowledge needed to complete assignments and shares knowledge with others, communicating and presenting information in a clear, accurate and organized manner with exceptional attention to detail.
Business Judgment and Analytical Decision Making - Analyzes facts and data to support sound, logical decisions regarding own and others' work.

WBG Culture Attributes:
1. Sense of Urgency - Anticipating and quickly reacting to the needs of internal and external stakeholders.
2. Thoughtful Risk Taking - Taking informed and thoughtful risks and making courageous decisions to push boundaries for greater impact.
3. Empowerment and Accountability - Engaging with others in an empowered and accountable manner for impactful results.

World Bank Group Core Competencies
The World Bank Group offers comprehensive benefits, including a retirement plan; medical, life and disability insurance; and paid leave, including parental leave, as well as reasonable accommodations for individuals with disabilities. We are proud to be an equal opportunity and inclusive employer with a dedicated and committed workforce, and do not discriminate based on gender, gender identity, religion, race, ethnicity, sexual orientation, or disability. Learn more about working at the World Bank and IFC, including our values and inspiring stories.
Posted 1 day ago
5.0 - 7.0 years
4 - 7 Lacs
Chennai
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for working cross-functionally to collect data and develop models to determine trends utilizing a variety of data sources. Retrieves, analyzes and summarizes business, operations, employee, customer and/or economic data in order to develop business intelligence, optimize effectiveness, predict business outcomes, and support decision-making. Involved with numerous key business decisions by conducting the analyses that inform our business strategy. This may include: impact measurement of new products or features via normalization techniques, optimization of business processes through robust A/B testing, clustering or segmentation of customers to identify opportunities for differentiated treatment, deep-dive analyses to understand drivers of key business trends, identification of customer sentiment drivers through natural language processing (NLP) of verbatim responses to Net Promoter System (NPS) surveys, and development of frameworks to drive upsell strategy for existing customers by balancing business priorities with customer activity. Has in-depth experience, knowledge and skills in own discipline. Usually determines own work priorities. Acts as a resource for colleagues with less experience.

Job Description
Core Responsibilities
Work with business leaders and stakeholders to understand data and analysis needs and develop technical requirements.
Analyzes large, complex data to determine actionable business insights using self-service analytics and reporting tools.
Combines data as needed from disparate data sources to complete analysis from multiple sources.
Identifies key business drivers and insights by conducting exploratory data analysis and hypothesis testing.
Develops forecasting models to predict key business metrics.
Analyzes the results of campaigns, offers or initiatives to measure their effectiveness and identifies opportunities for improvement.
Communicates findings clearly and concisely through narrative-driven presentations and effective data visualizations to Company executives and decision-makers.
Stays current with emerging trends in analytics, statistics, and machine learning and applies them to business challenges.

Mandatory Skills: SQL, Tableau, good storytelling capabilities
Nice-to-have skills: PPT creation, Databricks, Spark, LLMs

Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications. Comcast is proud to be an equal opportunity workplace.
We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education
Bachelor's Degree
While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience
5-7 Years
Posted 1 day ago
8.0 years
4 - 6 Lacs
Noida
On-site
Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.

Analytics at Innovaccer
Our analytics team is dedicated to weaving analytics and data science magic across our products. They are the owners and custodians of the intelligence behind our products. With their expertise and innovative approach, they play a crucial role in building various analytical models (including descriptive, predictive, and prescriptive) to help our end-users make smart decisions. Their focus on continuous improvement and cutting-edge methodologies ensures that they're always creating market-leading solutions that propel our products to new heights of success.

About the Role
Data is the foundation of our innovation. We are seeking a Manager, Data Science with expertise in NLP and Generative AI to lead the development of cutting-edge AI-driven solutions in healthcare. This role requires a deep understanding of healthcare data and the ability to design and implement advanced language models that extract insights, automate workflows, and enhance clinical decision-making. We're looking for a visionary leader who can define and build the next generation of AI-driven tools, leveraging LLMs, deep learning, and predictive analytics to personalize care based on patients' clinical and behavioral history. If you're passionate about pushing the boundaries of AI in healthcare, we'd love to hear from you!

A Day in the Life
Team Leadership & Development: Build, mentor, and manage a team of data scientists and machine learning engineers. Foster a culture of collaboration, innovation, and technical excellence.
Roadmap Execution: Define and execute on the quarterly AI/ML roadmap, setting clear goals, priorities, and deliverables for the team. Work with business leaders and customers to understand their pain points and build large-scale solutions for them. Define technical architecture to productize Innovaccer's machine-learning algorithms and take them to market through partnerships with different organizations. Work with our data platform and applications teams to help them successfully integrate data science capabilities or algorithms into their products/workflows.
Project & Stakeholder Management: Work closely with cross-functional teams, including product managers, engineers, and business leaders, to align AI/ML initiatives with company objectives.

What You Need
Master's in Computer Science, Computer Engineering or other relevant fields (PhD preferred)
8+ years of experience in Data Science (healthcare experience will be a plus)
Strong experience with deep learning techniques to build NLP/computer vision models as well as state-of-the-art GenAI pipelines. Demonstrable experience deploying deep learning models in production at scale with iterative improvements; this requires hands-on expertise with at least one deep learning framework such as PyTorch or TensorFlow.
Strong hands-on experience in building GenAI applications: building LLM-based workflows along with optimization techniques; knowledge of implementing agentic workflows is a plus.
Keen interest in research; stays updated with key advancements in AI and ML in the industry. Patents/publications in any area of AI/ML are a great add-on.
Hands-on experience with at least one ML platform among Databricks, Azure ML, and SageMaker
Strong written and spoken communication skills

We offer competitive benefits to set you up for success in and outside of work.
Here’s What We Offer Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days. Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition. Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered. Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury. Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only Creche Facility for children* : Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices Where and how we work Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer : Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
Posted 1 day ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary
Lead the vision, roadmap, architecture and platform engineering development for the enterprise-wide Analytics AWS Platform on the cloud, covering capabilities across Analytics, Data Engineering, Data Management, Platform Architecture and Security.

About The Role
Your responsibilities include, but are not limited to:
Lead the MVP or Tech Spike along with the Platform Engineer and/or Domain / Solution Architect of business-facing function teams
Support the platform engineering team on any technical issue during platform delivery or implementation of new services
Drive discussions with other architects and/or vendors on new solution evolution, cost optimization and performance improvements
Create, maintain and evangelize Points of View and Opinions on emerging trends in life sciences, the pharma industry and relevant technologies for data and analytics
Detailed design and implementation of capabilities in line with Tech Spike outcomes
Perform assessments of use cases onboarding onto the platforms to ensure alignment to platform governance, frameworks, and principles
Proactively engage with Technology & Platform partners to jointly strive for innovation and engage in Design Partnerships
Ensure that the delivery teams follow best practices, including peer reviews, formal documentation, and sign-off by business
Ensure on-time, within-budget, compliant, secure, and quality delivery of the portfolio
Continuously improve to automate capabilities, simplify the landscape and reduce cost
Ensure adherence to ISRM, Legal, ethics and other compliance policies and procedures in defining architecture standards, patterns, and platform solutions

Minimum Requirements
12+ years of IT experience in a highly qualified function such as IT Leader, in the area of Data and Analytics, with strong exposure to Platform Architecture and Engineering
10+ years of Analytics / Big Data experience; proven skills and experience in solution design in a highly qualified technical function and a global matrix organization
Proven track record of broad industry experience and excellent understanding of complex enterprise IT landscapes and relationships, as well as driving business transformations
Experience in AWS services (S3), Databricks, Snowflake, integration and related technologies
Experience with code management tools, CI/CD, and automated testing is required
Specialization in the Pharma domain is required, along with an understanding of usage across the end-to-end enterprise value chain
Demonstrated strong interpersonal skills, accountability, written and verbal communication skills, and time management aligned with Novartis Values & Behaviors; deep technical expertise and understanding of the business processes and systems
Experience in managing vendor and stakeholder expectations and driving techno-functional discussions; understanding of demand-to-sustain and project financial processes
A proven track record of managing high-performing, culturally diverse global information technology teams in matrix organizations
Customer orientation: proven ability to communicate with various stakeholders, internal and external, and align with all levels of IT and business stakeholders
Excellent negotiation skills, experience with agile and DevOps methodology, and the capability to think strategically

Why consider Novartis? Our purpose is to reimagine medicine to improve and extend people's lives and our vision is to become the most valued and trusted medicines company in the world. How can we achieve this? With our people.
It is our associates who drive us each day to reach our ambitions. Be a part of this mission and join us! Learn more here: https://www.novartis.com/about/strategy/people-and-culture

Commitment To Diversity And Inclusion
Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.

Join our Novartis Network: If this role is not suitable to your experience or career goals but you wish to stay connected to hear more about Novartis and our career opportunities, join the Novartis Network here: https://talentnetwork.novartis.com/network

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives. Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred. Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy. Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes. Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture. Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution. Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps. Responsibilities Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance. Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making. Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams. Support data operations and sustainment activities, including testing and monitoring processes for global products and projects. Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams. Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs. Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams. Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams. Support the development and automation of operational policies and procedures, improving efficiency and resilience. Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. 
Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery. Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals. Utilize technical expertise in cloud and data operations to support service reliability and scalability. Qualifications 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. Experience in a lead or senior support role, with a focus on DataOps execution and delivery. Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences. Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational efficiency. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements. Understanding of operational excellence in complex, high-availability data environments. Ability to collaborate across teams, building strong relationships with business and IT stakeholders. Basic understanding of data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results. Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
Posted 1 day ago
8.0 years
0 Lacs
India
Remote
Azure Data Engineer
Location: Remote
Shift: 6am - 3pm US Central time zone

Job Summary: We are seeking a highly skilled Data Engineer with strong experience in PostgreSQL and SQL Server, as well as hands-on expertise in Azure Data Factory (ADF) and Databricks. The ideal candidate will be responsible for building scalable data pipelines, optimizing database performance, and designing robust data models and schemas to support enterprise data initiatives.

Key Responsibilities:
Design and develop robust ETL/ELT pipelines using Azure Data Factory and Databricks
Develop and optimize complex SQL queries and functions in PostgreSQL
Develop and optimize complex SQL queries in SQL Server
Perform performance tuning and query optimization for PostgreSQL
Design and implement data models and schema structures aligned with business and analytical needs
Collaborate with data architects, analysts, and business stakeholders to understand data requirements
Ensure data quality, integrity, and security across all data platforms
Monitor and troubleshoot data pipeline issues and implement proactive solutions
Participate in code reviews, sprint planning, and agile ceremonies

Required Skills & Qualifications:
8+ years of experience in data engineering or a related field
Strong expertise in PostgreSQL and SQL Server development, performance tuning, and schema design
Experience in data migration from SQL Server to PostgreSQL
Hands-on experience with Azure Data Factory (ADF) and Databricks
Proficiency in SQL, Python, or Scala for data processing
Experience with data modeling techniques (e.g., star/snowflake schemas, normalization)
Familiarity with CI/CD pipelines, version control (Git), and agile methodologies
Excellent problem-solving and communication skills

If interested, share your resume at aditya.dhumal@leanitcorp.com
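The SQL Server-to-PostgreSQL migration work described above is often done with Spark JDBC reads and writes from a Databricks job. The sketch below is a hedged illustration only: the hostnames, databases, credentials, and table names are placeholders, and a production job would pull secrets from a vault rather than hard-coding them.

```python
# Hedged sketch: moving a table from SQL Server to PostgreSQL via Spark JDBC.
# All connection details and table names are hypothetical placeholders;
# the appropriate JDBC drivers must be on the cluster's classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mssql-to-postgres").getOrCreate()

src = (spark.read.format("jdbc")
       .option("url", "jdbc:sqlserver://src-host:1433;databaseName=sales")
       .option("dbtable", "dbo.orders")
       .option("user", "reader")
       .option("password", "***")          # use a secret scope in practice
       .load())

(src.write.format("jdbc")
    .option("url", "jdbc:postgresql://dst-host:5432/analytics")
    .option("dbtable", "public.orders")
    .option("user", "writer")
    .option("password", "***")
    .mode("append")
    .save())
```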
Posted 1 day ago
7.0 years
0 Lacs
Calcutta
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Analyses current business practices, processes, and procedures as well as identifying future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
· Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics.
· Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
· Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL.
· Maintain best practice standards for the development of cloud-based data warehouse solutioning, including naming standards.
· Designing and implementing highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
· Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
· Working with other members of the project team to support delivery of additional project components (API interfaces).
· Evaluating the performance and applicability of multiple tools against customer requirements.
· Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.
· Integrate Databricks with other technologies (ingestion tools, visualization tools).
· Proven experience working as a data engineer.
· Highly proficient in using the Spark framework (Python and/or Scala).
· Extensive knowledge of Data Warehousing concepts, strategies, and methodologies.
· Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
· Experience in designing and hands-on development of cloud-based analytics solutions.
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
· Designing and building of data pipelines using API ingestion and streaming ingestion methods.
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
· Thorough understanding of Azure Cloud Infrastructure offerings.
· Strong experience in common data warehouse modelling principles, including Kimball.
· Working knowledge of Python is desirable.
· Experience developing security models.
· Databricks & Azure Big Data Architecture Certification would be a plus.
· Must be team-oriented with strong collaboration, prioritization, and adaptability skills.

Mandatory skill sets: Azure Databricks
Preferred skill sets: Azure Databricks
Years of experience required: 7-10 years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering

Required Skills: Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
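Among the pipeline styles listed above, streaming ingestion is the least obvious to picture; one common shape on Azure Databricks is Kafka into a Delta table via Spark Structured Streaming. A hedged sketch follows, with the broker address, topic, and paths as assumptions.

```python
# Hedged sketch: streaming ingestion from Kafka into a Delta table with
# Spark Structured Streaming. Broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .select(F.col("value").cast("string").alias("payload"),
                  F.col("timestamp")))

# The checkpoint directory gives the stream restartable, exactly-once sinks.
(stream.writeStream.format("delta")
       .option("checkpointLocation", "/chk/events")
       .outputMode("append")
       .start("/delta/events"))
```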
Posted 1 day ago
12.0 years
0 Lacs
India
Remote
Job Title: Product Manager – Content Development & Management
Location: Bangalore (Hybrid/Remote options available)
Experience Required: 12+ Years (preferably in EdTech, Higher Education, or Technical Training)
Job Type: Full-Time
About the Role: We are looking for a seasoned Product Manager to lead the development and management of technical learning content across our AI, Data, and Software certification programs. You will be responsible for building high-quality curriculum and managing a team of Subject Matter Experts (SMEs), instructional designers, and content developers. This role requires strong technical depth, instructional design sensibility, and leadership skills to deliver content that meets both academic and industry standards.
Key Responsibilities:
End-to-End Content Management: Own the full lifecycle of content products, from concept to delivery, across AI, Data Science, Software Engineering, and emerging tech areas.
Curriculum Design: Develop and structure modular, scalable course content aligned with certification standards and market demand.
Project Leadership: Manage timelines, quality assurance, and team output for multiple concurrent content projects.
Team Management: Lead and mentor SMEs, trainers, editors, and technical writers to maintain consistency and excellence in output.
Hands-On Learning Development: Guide creation of hands-on labs, real-time projects, assessments, and case studies.
Content Review & QA: Conduct quality checks to ensure accuracy, relevance, and pedagogical effectiveness of content.
Collaboration: Work with Product, Marketing, Tech, and Academic teams to align content with platform features and learner outcomes.
Technology Integration: Oversee LMS deployments and content integration with tools like Azure Synapse, Databricks, Spark, Kafka, and Power BI.
Required Qualifications:
Minimum 12 years of experience in EdTech, technical training, or curriculum development roles.
Strong domain expertise in: Data Science, Machine Learning, Deep Learning; Programming: Python, Java, C/C++; Azure Data Engineering tools: Synapse, Databricks, Snowflake, Kafka, Spark.
Experience leading technical teams or SME groups.
Proven track record of designing and delivering academic/industry-focused content and training programs.
Excellent communication and stakeholder management skills.
Preferred Qualifications:
Ph.D./M.Tech in Computer Science, IT, or related fields (PhD submission/ongoing is acceptable).
Experience working with academic institutions and EdTech platforms.
Knowledge of instructional design principles and outcome-based learning.
Familiarity with tools like Power BI, Tableau, and LMS platforms.
Published research papers in AI/ML or EdTech fields (optional but valued).
What We Offer:
An opportunity to shape the learning experiences of thousands globally.
Freedom to innovate and create impactful educational content.
A collaborative environment with a passionate team.
Competitive salary and performance-based bonuses.
Flexible work arrangements and growth opportunities.
How to Apply: Send your resume and a portfolio (if applicable) to [insert your application email]. Subject: Application for Product Manager – Content Development
Posted 1 day ago
2.0 years
3 - 10 Lacs
India
Remote
Job Title - Sr. Data Engineer
Experience - 2+ Years
Location - Indore (onsite)
Industry - IT
Job Type - Full-time
Roles and Responsibilities-
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow (see the sketch after this posting).
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.
Skills and Knowledge-
1. Core Skills:
● Proficient in Python (libraries: Pandas, NumPy) and SQL.
● Knowledge of data modeling techniques, including Entity-Relationship (ER) diagrams, dimensional modeling, and data normalization.
● Familiarity with ETL processes and tools such as Azure Data Factory (ADF) and SSIS (SQL Server Integration Services).
2. Cloud Expertise:
● AWS services: Glue, Redshift, Lambda, EKS, RDS, Athena
● Azure services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL
● Snowflake
3. Big Data and Workflow Automation:
● Hands-on experience with big data technologies like Hadoop, Spark, and Kafka.
● Experience with workflow automation tools like Apache Airflow (or similar).
Qualifications and Requirements-
● Education: Bachelor’s degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
● Experience: Freshers with a strong understanding, internships, and relevant academic projects are welcome; 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
● Other Skills: Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders, and the ability to work in a dynamic, research-oriented team with concurrent projects.
Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025
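By way of a hedged illustration of responsibility 7 above, here is a minimal Apache Airflow DAG chaining an ingest task into a transform task. The DAG id, schedule, and task bodies are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4+ (older releases use `schedule_interval`).

```python
# Minimal Airflow DAG sketch: two placeholder tasks run daily, ingest -> transform.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # Placeholder: pull source data into the lake
    print("ingesting source data")

def transform():
    # Placeholder: clean the data and load it into the warehouse
    print("transforming and loading data")

with DAG(
    dag_id="daily_sales_pipeline",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    ingest_task >> transform_task  # transform runs only after ingest succeeds
```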
Posted 1 day ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
The Analytics Lead is a key role within the Enterprise Data team. We are looking for an expert Power BI lead with deep data visualization experience and excellent capability in DAX, SQL and data modelling techniques. This is a unique opportunity to be involved in delivering leading-edge business analytics using the latest cutting-edge BI tools, such as cloud-based databases, self-service analytics and leading visualisation tools, enabling the company’s aim to become a fully digital organisation.
Job Description:
Responsibilities
Lead and manage a team of Power BI Developers, providing guidance, direction, and support in their day-to-day activities
Define and design data visualization models and solutions within the Microsoft Azure ecosystem, including Power BI, Azure Synapse Analytics, Microsoft Fabric and Azure Machine Learning
Develop strategies for analytics, reporting and governance to ensure scalability, reliability, and security
Collaborate with business stakeholders to define their analytics and reporting strategies
Ensure alignment of solutions with organizational goals, compliance requirements, and technology trends
Act as a subject matter expert (SME) in analytics services, mentoring senior and junior Power BI Developers
Evaluate emerging technologies and analytical capabilities
Provide guidance on cost optimization, performance tuning, and best practices in Azure cloud environments
Stakeholder Collaboration: Partner with business stakeholders, product managers, and data scientists to understand business objectives and translate them into technical solutions. Work with DevOps, engineering, and operations teams to implement CI/CD pipelines and ensure smooth deployment of analytical solutions.
Governance and Security: Define and implement policies for data governance, quality, and security, ensuring compliance with GDPR, HIPAA, or other relevant standards. Optimize solutions for data privacy, resilience, and disaster recovery.
Qualifications
Required Skills and Experience
Technical Expertise: Proficient in Power BI and related technologies, including Microsoft Fabric, Azure SQL Database, Azure Synapse, Databricks and other visualization tools. Hands-on experience with Power BI, machine learning and AI services in Azure. Excellent data visualization skills and experience.
Professional Experience: 12+ years of experience in Power BI development, with demonstrable experience designing high-quality models and dashboards using Power BI, transforming raw data into meaningful insights. 8+ years of experience using Power BI Desktop, DAX, Tabular Editor and related tools. 5+ years of experience with Power BI Premium capacity administration. 5+ years of SQL development experience. Comprehensive understanding of data modelling, administration, and visualization. Good knowledge and understanding of data warehousing concepts, Azure cloud databases, and the ETL (Extract, Transform, Load) framework.
Leadership and Communication: Exceptional ability to communicate technical concepts to non-technical stakeholders and align teams on strategic goals. Experience in leading cross-functional teams and managing multiple concurrent projects.
Certifications (Preferred): Relevant certifications in Power BI, machine learning, AI, or enterprise architecture.
Key Competencies
Expertise in data visualization tools such as Power BI or Tableau.
Expertise in creating semantic models for reporting.
Familiarity with the Microsoft Fabric technologies, including OneLake, Lakehouse and Data Factory.
Strong understanding of data governance, compliance, and security frameworks.
Familiarity with DevOps and Infrastructure as Code (IaC) tools such as Bicep or Azure Resource Manager (ARM) templates.
Proven ability to drive innovation in data strategy and cloud solutions.
A deep understanding of business intelligence workflows and the ability to align technical solutions with them.
Strong database design skills, including an understanding of both normalised-form and dimensional-form databases.
In-depth knowledge and experience of data-warehousing strategies and techniques, e.g., Kimball data warehousing (a brief sketch follows this posting).
Experience with cloud-based data integration tools such as Azure Data Factory.
Experience with Azure DevOps or Jira is a plus.
Experience working with finance data is highly desirable.
Familiarity with agile development techniques and objectives.
Location: DGS India - Mumbai - Thane Ashar IT Park
Brand: Dentsu
Time Type: Full time
Contract Type: Permanent
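As a brief, hedged sketch of the Kimball-style warehousing work referenced above: building a conformed dimension with a surrogate key in PySpark. The staging and warehouse table names and columns are hypothetical, and the full-rebuild surrogate key shown would need proper key management for an incremental SCD Type 2 load.

```python
# Sketch: derive a Kimball-style customer dimension with a surrogate key.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("dim-customer-build").getOrCreate()

source = spark.table("staging.customers")  # hypothetical staging table

dim_customer = (source
    .select("customer_code", "customer_name", "segment", "country")
    .dropDuplicates(["customer_code"])
    # Surrogate key via row_number is acceptable for a full rebuild; an
    # incremental SCD2 load would manage keys and effective dates instead.
    .withColumn("customer_sk",
                F.row_number().over(Window.orderBy("customer_code"))))

dim_customer.write.format("delta").mode("overwrite").saveAsTable("dw.dim_customer")
```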
Posted 1 day ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: Data Analyst
Location: Pune / PAN India
Role Overview
We are looking for two senior Data Analysts to support the RSA Pricing Sophistication Program by performing data profiling, mapping, and quality analysis on complex insurance datasets.
Key Responsibilities
· Analyze and map source data from multiple insurance systems (policy, claims, quotes, pricing).
· Perform data profiling and data quality (DQ) checks using SQL, Python, and Azure tools (see the sketch after this posting).
· Collaborate with Data Designers to translate business requirements into technical mappings.
· Document data lineage, transformation logic, and quality rules.
· Support the development of DQ dashboards and reconciliation reports.
· Participate in sprint planning and agile ceremonies to ensure timely delivery of data artifacts.
Required Skills
· 6+ years of experience in data analysis and profiling.
· Strong SQL and Python skills; experience with Azure Data Factory and Databricks preferred.
· Familiarity with data quality frameworks and tools.
· Experience in insurance domain data is highly desirable.
· Strong analytical and documentation skills.
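To make the profiling and DQ checks above concrete, a minimal sketch in PySpark follows; the `raw.policy` table, its columns, and the three rules are hypothetical placeholders, not the programme's actual rules.

```python
# Sketch: per-column null profile plus a few example data-quality rules.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("policy-dq-profile").getOrCreate()
policies = spark.table("raw.policy")  # hypothetical source table

# Profile: count of nulls in every column
profile = policies.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}_nulls")
     for c in policies.columns]
)
profile.show()

# Example DQ rules: mandatory key, non-negative premium, coherent date range
dq_failures = policies.filter(
    F.col("policy_id").isNull()
    | (F.col("premium") < 0)
    | (F.col("start_date") > F.col("end_date"))
)
print(f"rows failing DQ rules: {dq_failures.count()}")
```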
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Preferred Education: Master's Degree
Required Technical And Professional Expertise
The ability to be a team player
The ability and skill to train other people in procedural and technical topics
Strong communication and collaboration skills
Able to write complex SQL queries (see the sketch after this posting)
Experience with Azure Databricks
Preferred Technical And Professional Experience
Exposure to automation
Collaboration with a group of like-minded Automation Engineers and Manual Testers
Proven interpersonal communication and technical writing skills
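For a hedged example of the "complex SQL" skill above, run from an Azure Databricks notebook (where a `spark` session is predefined): a window-function query picking each customer's latest order. The schema and table are hypothetical.

```python
# Sketch: window-function SQL executed through Spark in a Databricks notebook.
result = spark.sql("""
    WITH ranked AS (
        SELECT customer_id,
               order_id,
               amount,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY order_date DESC) AS rn
        FROM sales.orders          -- hypothetical table
    )
    SELECT customer_id, order_id, amount
    FROM ranked
    WHERE rn = 1                   -- latest order per customer
""")
result.show()
```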
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
India
Remote
Mandatory skills - Azure Databricks, Data Factory, PySpark, SQL
Experience - 5 to 8 years
Location - Remote
Key Responsibilities:
Design and build data pipelines and ETL/ELT workflows using Azure Databricks and Azure Data Factory
Ingest, clean, transform, and process large datasets from diverse sources (structured and unstructured)
Implement Delta Lake solutions and optimize Spark jobs for performance and reliability (see the sketch below)
Integrate Azure Databricks with other Azure services, including Data Lake Storage, Synapse Analytics, and Event Hubs
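A minimal, hedged sketch of the Delta Lake work listed above: an upsert (MERGE) from a landing folder into a curated Delta table, followed by compaction. The paths and join key are hypothetical, and `OPTIMIZE ... ZORDER` assumes a Databricks or recent Delta Lake runtime.

```python
# Sketch: Delta Lake upsert plus file compaction.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

target = DeltaTable.forPath(spark, "/mnt/curated/customers")  # hypothetical path
updates = spark.read.format("parquet").load("/mnt/landing/customers_delta/")

(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Compact small files and co-locate rows on the join key
spark.sql("OPTIMIZE delta.`/mnt/curated/customers` ZORDER BY (customer_id)")
```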
Posted 1 day ago
2.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Job Title: Azure Data Engineer
Experience: 2-5 Years
About the Company: EY is a leading global professional services firm offering a broad range of services in assurance, tax, transaction, and advisory services. We’re looking for candidates with strong technology and data understanding in the big data engineering space, with proven delivery capability.
Your Key Responsibilities
Develop and deploy Azure Databricks solutions in a cloud environment using Azure Cloud services
ETL design, development, and deployment to cloud services
Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams
Design and optimize model code for faster execution
Skills And Attributes For Success
3 to 5 years of experience in developing data ingestion, data processing and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions
Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc.
Hands-on experience with Spark
Hands-on programming experience in Python/Scala
Well versed in DevOps and CI/CD deployments
Must have hands-on experience in SQL and procedural SQL languages
Strong analytical skills and enjoys solving complex technical problems
To qualify for the role, you must have
Working experience in an Agile-based delivery methodology (preferable)
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
Excellent debugging and optimization skills
Experience in enterprise-grade solution implementations and in converting business problems/challenges to technical solutions considering security, performance, scalability, etc.
Excellent communicator (written and verbal, formal and informal)
Participation in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
Client management skills
Education: BS/MS degree in Computer Science, Engineering, or a related subject is required.
EY is committed to providing equal opportunities to all candidates. We welcome and encourage applications from candidates with diverse experiences and backgrounds.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 1 day ago
4.0 years
15 - 30 Lacs
Gurugram, Haryana, India
Remote
Experience: 4.00+ years
Salary: INR 1500000-3000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (payroll and compliance to be managed by NuStudio.AI)
(*Note: This is a requirement for one of Uplers' clients - an AI-first, API-powered Data Platform)
What do you need for this opportunity?
Must-have skills: Databricks, dbt, Delta Lake, Spark, Unity Catalog, AI, Airflow, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, Dataflow, Cloud Storage, Cloud Functions), PySpark, AWS, Hadoop
An AI-first, API-powered Data Platform is looking for: We’re scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems.
As a Data Engineer, you’ll:
Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) (a hedged sketch follows this posting)
Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
Work across batch + real-time architectures that feed LLMs and AI/ML systems
Own feature engineering pipelines that power production models and intelligent agents
Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions
Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them
Why Us?
Building production-grade data & AI solutions
Your pipelines directly impact mission-critical and client-facing interactions
Lean team, no red tape: build, own, ship
Remote-first with an async culture that respects your time
Competitive comp and benefits
Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances to get shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
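As a hedged illustration of the GCP streaming-ingestion work this role involves, the sketch below bridges a Pub/Sub subscription into BigQuery with the Google Cloud client libraries; a production pipeline would more likely use Dataflow or Spark, as the posting suggests. The project, subscription, and table names are hypothetical placeholders.

```python
# Sketch: pull messages from Pub/Sub and stream each row into BigQuery.
import json
from concurrent.futures import TimeoutError

from google.cloud import bigquery, pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("example-project", "events-sub")  # hypothetical
bq = bigquery.Client()
table_id = "example-project.analytics.events"  # hypothetical

def handle(message):
    row = json.loads(message.data.decode("utf-8"))
    errors = bq.insert_rows_json(table_id, [row])  # streaming insert
    if not errors:
        message.ack()  # only ack once the row is safely in BigQuery

streaming_pull = subscriber.subscribe(subscription, callback=handle)
try:
    streaming_pull.result(timeout=60)  # pull for one minute in this demo
except TimeoutError:
    streaming_pull.cancel()
    streaming_pull.result()
```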
Posted 1 day ago
2.0 - 5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Job Title: Azure Data Engineer
Experience: 2-5 Years
About the Company: EY is a leading global professional services firm offering a broad range of services in assurance, tax, transaction, and advisory services. We’re looking for candidates with strong technology and data understanding in the big data engineering space, with proven delivery capability.
Your Key Responsibilities
Develop and deploy Azure Databricks solutions in a cloud environment using Azure Cloud services
ETL design, development, and deployment to cloud services
Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams
Design and optimize model code for faster execution
Skills And Attributes For Success
3 to 5 years of experience in developing data ingestion, data processing and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions
Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc.
Hands-on experience with Spark
Hands-on programming experience in Python/Scala
Well versed in DevOps and CI/CD deployments
Must have hands-on experience in SQL and procedural SQL languages
Strong analytical skills and enjoys solving complex technical problems
To qualify for the role, you must have
Working experience in an Agile-based delivery methodology (preferable)
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
Excellent debugging and optimization skills
Experience in enterprise-grade solution implementations and in converting business problems/challenges to technical solutions considering security, performance, scalability, etc.
Excellent communicator (written and verbal, formal and informal)
Participation in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
Client management skills
Education: BS/MS degree in Computer Science, Engineering, or a related subject is required.
EY is committed to providing equal opportunities to all candidates. We welcome and encourage applications from candidates with diverse experiences and backgrounds.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 1 day ago
4.0 years
15 - 30 Lacs
Cuttack, Odisha, India
Remote
Experience: 4.00+ years
Salary: INR 1500000-3000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (payroll and compliance to be managed by NuStudio.AI)
(*Note: This is a requirement for one of Uplers' clients - an AI-first, API-powered Data Platform)
What do you need for this opportunity?
Must-have skills: Databricks, dbt, Delta Lake, Spark, Unity Catalog, AI, Airflow, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, Dataflow, Cloud Storage, Cloud Functions), PySpark, AWS, Hadoop
An AI-first, API-powered Data Platform is looking for: We’re scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems.
As a Data Engineer, you’ll:
Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions)
Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
Work across batch + real-time architectures that feed LLMs and AI/ML systems
Own feature engineering pipelines that power production models and intelligent agents
Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions
Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them
Why Us?
Building production-grade data & AI solutions
Your pipelines directly impact mission-critical and client-facing interactions
Lean team, no red tape: build, own, ship
Remote-first with an async culture that respects your time
Competitive comp and benefits
Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances to get shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
4.0 years
15 - 30 Lacs
Bhubaneswar, Odisha, India
Remote
Experience: 4.00+ years
Salary: INR 1500000-3000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (payroll and compliance to be managed by NuStudio.AI)
(*Note: This is a requirement for one of Uplers' clients - an AI-first, API-powered Data Platform)
What do you need for this opportunity?
Must-have skills: Databricks, dbt, Delta Lake, Spark, Unity Catalog, AI, Airflow, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, Dataflow, Cloud Storage, Cloud Functions), PySpark, AWS, Hadoop
An AI-first, API-powered Data Platform is looking for: We’re scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems.
As a Data Engineer, you’ll:
Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions)
Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
Work across batch + real-time architectures that feed LLMs and AI/ML systems
Own feature engineering pipelines that power production models and intelligent agents
Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions
Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them
Why Us?
Building production-grade data & AI solutions
Your pipelines directly impact mission-critical and client-facing interactions
Lean team, no red tape: build, own, ship
Remote-first with an async culture that respects your time
Competitive comp and benefits
Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances to get shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago