12.0 years
0 Lacs
Noida, Uttar Pradesh
Remote
Principal Engineering Manager - Data Engineering
Noida, Uttar Pradesh, India
Date posted: Jun 10, 2025
Job number: 1827114
Work site: Up to 50% work from home
Travel: 0-25%
Role type: People Manager
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers to levels they cannot achieve anywhere else. This is a world of more possibilities, more innovation, and more openness in a cloud-enabled world.

The Business & Industry Copilots group is a rapidly growing organization responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solutions, and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications, and this organization is at the heart of how business applications are designed and delivered.

This is an exciting time to join our Customer Experience (CXP) group and work on something highly strategic to Microsoft. The goal of CXP Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to drive AI transformation across the Marketing, Sales, Services, and Support organizations within Microsoft. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value of Azure and Power Platform, we ensure our solutions are robust and efficient. Our organization's implementation acts as reference architecture for large companies and helps drive product capabilities. If the opportunity to collaborate with a diverse engineering team on enabling end-to-end business scenarios using cutting-edge technologies, and to solve challenging problems for large-scale 24x7 business SaaS applications, excites you, please come and talk to us!

We are hiring a passionate Principal Software Engineering Manager to lead a team of highly motivated and talented software developers building highly scalable data platforms and delivering services and experiences that empower Microsoft's customer, seller, and partner ecosystem to be successful. This is a unique opportunity to use your leadership skills and experience in building core technologies that will directly affect the future of Microsoft on the cloud. In this position, you will be part of a fun-loving, diverse team that seeks challenges, loves learning, and values teamwork. You will collaborate with team members and partners to build high-quality, innovative data platforms with full-stack data solutions using the latest technologies in a dynamic and agile environment, and you will have opportunities to anticipate the team's future technical needs and provide technical leadership that keeps raising the bar for our competition. We use industry-standard technology: C#, JavaScript/TypeScript, HTML5, ETL/ELT, data warehousing, and/or Business Intelligence development.

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
12+ years of experience building high-scale enterprise Business Intelligence and data engineering solutions.
3+ years of management experience leading a high-performance engineering team.
Proficient in designing and developing distributed systems on cloud platforms.
Must be able to plan work and work to a plan, adapting as necessary in a rapidly evolving environment.
Experience using a variety of data stores, including ETL/ELT pipelines, data warehouses, RDBMS, in-memory caches, and document databases.
Experience using ML, anomaly detection, predictive analysis, and exploratory data analysis.
A strong understanding of the value of data, data exploration, and the benefits of a data-driven organizational culture.
Strong communication skills and proficiency with executive communications.
Demonstrated ability to effectively lead and operate in a cross-functional, global organization.

Preferred Qualifications:
Prior experience as an engineering site leader is a strong plus.
Proven success in recruiting and scaling engineering organizations effectively.
Demonstrated ability to provide technical leadership to teams, with experience managing large-scale data engineering projects.
Hands-on experience working with large data sets using tools such as SQL, Databricks, PySpark SQL, Synapse, Azure Data Factory, or similar technologies.
Expertise in one or more of the following areas: AI and Machine Learning.
Experience with Business Intelligence or data visualization tools, particularly Power BI, is highly beneficial.
#BICJobs

Responsibilities
As a leader of the engineering team, you will be responsible for the following:
Build and lead a world-class data engineering team that is passionate about technology and obsessed with customer needs.
Champion data-driven decisions for feature identification, prioritization, and delivery.
Manage multiple projects, including timelines, customer interaction, feature tradeoffs, etc.
Deliver on an ambitious product and services roadmap, including building new services on top of the vast amounts of data collected by our batch and near-real-time data engines.
Design and architect internet-scale, reliable services.
Leverage machine learning (ML) model knowledge to select appropriate solutions for business objectives.
Communicate effectively and build relationships with our partner teams and stakeholders.
Help shape our long-term architecture and technology choices across the full client and services stack.
Understand the talent needs of the team and help recruit new talent.
Mentor and grow other engineers to improve efficiency and productivity.
Experiment with and recommend new technologies that simplify or improve the tech stack.
Work to help build an inclusive working environment.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
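By way of illustration, a minimal sketch of the extract-transform-load shape this posting describes, in plain Python (the file, table, and columns are hypothetical; a production pipeline would target a distributed warehouse rather than SQLite):

```python
import csv
import sqlite3

# Hypothetical warehouse table; stands in for Synapse, Databricks, etc.
conn = sqlite3.connect("warehouse.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS fact_orders ("
    "order_id TEXT PRIMARY KEY, region TEXT, amount REAL)"
)

# Extract from a flat file, transform (normalize region, cast amount), load.
with open("orders.csv", newline="") as f:
    for row in csv.DictReader(f):
        conn.execute(
            "INSERT OR REPLACE INTO fact_orders VALUES (?, ?, ?)",
            (row["order_id"], row["region"].strip().upper(), float(row["amount"])),
        )
conn.commit()
conn.close()
```

The same extract, transform, load structure scales up to the batch and near-real-time engines named in the posting.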
Posted 1 week ago
0.0 - 12.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
Indore, Madhya Pradesh, India; Noida, Uttar Pradesh, India; Bangalore, Karnataka, India

Qualification:
Primary Tool & Expertise: Power BI and semantic modelling.
Key Project Focus: Leading the migration of legacy reporting systems (Cognos or MicroStrategy) to Power BI solutions.
Core Responsibilities:
Build and develop optimized semantic models, metrics, and complex reports and dashboards.
Work closely with business analysts and BI teams to help business teams drive improvement in key business metrics and customer experience.
Be responsible for timely, quality, and successful deliveries.
Share knowledge and experience within the team and with other groups in the organization.
Lead teams of BI engineers: translate designed solutions to them, review their work, and provide guidance.
Manage client communications and deliverables.

Skills Required: Power BI, Semantic Modelling, DAX, Power Query, Power BI Service, Data Warehousing, Data Modeling, Data Visualization, SQL

Role:
Core BI Skills: Power BI (semantic modelling, DAX, Power Query, Power BI Service), data warehousing, data modeling, data visualization, SQL.
Data Warehouse, Database & Querying: Strong skills in databases (Oracle / MySQL / DB2 / Postgres) and expertise in writing SQL queries. Experience with cloud-based data intelligence platforms (Databricks / Snowflake). Strong understanding of data warehousing and data modelling concepts and principles. Strong skills and experience in creating semantic models in Power BI or similar tools.
Additional BI & Data Skills (Good to Have): Certifications in Power BI and any data platform. Experience in other tools like MicroStrategy and Cognos. Proven experience in migrating existing BI solutions to Power BI or other modern BI platforms. Experience with the broader Power Platform (Power Automate, Power Apps) to create integrated solutions. Knowledge and experience of Power BI admin features such as versioning, usage reports, capacity planning, and creation of deployment pipelines. Sound knowledge of various forms of data analysis and presentation methodologies. Experience in formal project management methodologies. Exposure to multiple BI tools is desirable. Experience with generative BI implementation. Working knowledge of scripting languages like Perl, Shell, or Python is desirable. Exposure to one of the cloud providers: AWS / Azure / GCP.
Soft Skills & Business Acumen: Exposure to multiple business domains (e.g., insurance, reinsurance, retail, BFSI, healthcare, telecom) is desirable. Exposure to the complete SDLC. Out-of-the-box thinker, not limited to the work done in past projects. Capable of working as an individual contributor and within a team. Good communication, problem-solving, and interpersonal skills. Self-starter and resourceful, skilled in identifying and mitigating risks.

Experience: 10 to 12 years
Job Reference Number: 13099
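To illustrate the semantic-modelling work above, a minimal sketch of shaping a flat legacy extract into star-schema tables before importing them into a Power BI model (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical flat extract from a legacy Cognos/MicroStrategy report.
flat = pd.read_csv("policy_extract.csv")

# Dimension table: one row per customer, deduplicated from the extract.
dim_customer = (
    flat[["customer_id", "customer_name", "segment"]]
    .drop_duplicates()
    .reset_index(drop=True)
)

# Fact table: keys and measures only, ready for a star-schema semantic model.
fact_premium = flat[["customer_id", "policy_id", "premium", "written_date"]]

# The report layer (e.g. a DAX SUM measure) would aggregate the measure column.
print(fact_premium.groupby("customer_id")["premium"].sum().head())
```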
Posted 1 week ago
0.0 years
0 Lacs
Gurugram, Haryana
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Data Engineer III
Expedia Group's CTO Enablement team is looking for a highly motivated Data Engineer III to lead the design, delivery, and stewardship of business-critical data infrastructure that powers our Capitalization program and Business Operations functions. This role is at the intersection of finance, strategy, and engineering, where data precision and operational rigor directly support the company's financial integrity and execution effectiveness. You will collaborate with stakeholders across Finance, BizOps, and Technology to build scalable data solutions that ensure capitalization accuracy, enable deep operational analytics, and streamline financial and business reporting at scale.

What you will do:
Design, build, and maintain high-scale data pipelines and transformation logic to support CapEx/OpEx classification, capitalization tracking, and operational data modeling.
Deliver clean, well-documented, governed datasets that drive finance reporting, strategic planning, and key operational dashboards.
Partner with cross-functional teams (Finance, Engineering, Strategy) to translate business and compliance requirements into technical solutions.
Lead the development of data models and ETL processes to support performance monitoring, workforce utilization, project financials, and business KPIs.
Establish and enforce data quality, lineage, and access control standards to ensure trust in business-critical data.
Proactively identify and resolve data reliability issues related to financial close processes, budget tracking, and capitalization rules.
Serve as a technical advisor to BizOps and Finance stakeholders, recommending improvements in tooling, architecture, and process automation.
Mentor other engineers and contribute to the growth of a high-performance data team culture.

Who you are:
6+ years of experience in data engineering, analytics engineering, or data infrastructure roles with a focus on operational and financial data.
Expertise in SQL and Python, and experience with data pipeline orchestration tools such as Airflow, dbt, or equivalent.
Strong understanding of cloud-based data platforms (e.g., Snowflake, BigQuery, Redshift, or Databricks).
Deep familiarity with capitalization standards, the CapEx/OpEx distinction, and operational reporting in a tech-driven environment.
Demonstrated ability to build scalable, reliable ETL/ELT workflows that serve diverse analytical and reporting needs.
Experience working cross-functionally in complex organizations with multiple stakeholder groups.
Passion for operational excellence, data governance, and driving actionable business insights from data.
Preferred qualifications:
Experience supporting BizOps, FP&A, or Product Finance teams with data tooling and reporting.
Familiarity with BI platforms like Looker, Power BI, or Tableau.
Exposure to agile delivery frameworks and enterprise-level operational rhythms.

Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.

We are proud to be named as a Best Place to Work on Glassdoor in 2024 and to be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs.

Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

India - Haryana - Gurgaon | Technology | Full-Time Regular | 06/10/2025 | ID # R-95367
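As a sketch of the orchestration work this role describes, a minimal Airflow DAG for a daily CapEx/OpEx classification step (assumes Airflow 2.4+; the DAG name, rule, and sample row are hypothetical, and a real rule set would come from Finance and run against the warehouse):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def classify_spend():
    # Hypothetical rule: hours logged to capitalizable development work are
    # tagged CapEx; everything else is OpEx.
    rows = [{"project": "platform-build", "capitalizable": True, "hours": 8}]
    return [
        {**r, "bucket": "CapEx" if r["capitalizable"] else "OpEx"} for r in rows
    ]


with DAG(
    dag_id="capex_opex_classification",   # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="classify_spend", python_callable=classify_spend)
```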
Posted 1 week ago
0.0 years
0 Lacs
Gurugram, Haryana
On-site
202408432
Gurugram, Haryana, India; Thane, Maharashtra, India

Description
The Role:
Partner with other architecture resources to lead the end-to-end architecture of the health and benefits data platform using Azure services, ensuring scalability, flexibility, and reliability.
Develop a broad understanding of the data lake architecture, including the impact of changes on the whole system, the onboarding of clients, and the security implications of the solution.
Design new, or improve upon existing, architecture across the data ingestion, storage, transformation, and consumption layers.
Define data models, schemas, and database structures optimized for H&B use cases, including claims, census, placement, broking, and finance sources.
Design solutions for seamless integration of diverse health and benefits data sources.
Implement data governance and security best practices in compliance with industry standards and regulations using Microsoft Purview.
Evaluate the data lake architecture to understand how technical decisions may impact business outcomes, and suggest new solutions/technologies that better align with the Health and Benefits data strategy.
Draw on internal and external practices to establish data lake architecture best practices and standards within the team, and ensure that they are shared and understood.
Continuously develop technical knowledge and be recognised as a key resource across the global team.
Collaborate with other specialists and/or technical experts to ensure the H&B Data Platform is delivering to the highest possible standards and that solutions support stakeholder needs and business requirements.
Initiate practices that will increase code quality, performance, and security.
Develop recommendations for continuous improvement initiatives, applying deep subject matter knowledge to provide guidance at all levels on the potential implications of changes.
Build the team's technical expertise/capabilities/skills through the delivery of regular feedback, knowledge sharing, and coaching.
Demonstrate high learning adaptability and an understanding of the implications of technical issues on business requirements and/or operations.
Analyze existing data designs and suggest improvements that promote performance, stability, and interoperability.
Work with product management and business subject matter experts to translate business requirements into good data lake design.
Maintain the governance model on the data lake architecture through training, design reviews, code reviews, and progress reviews.
Participate in the development of data lake architecture and roadmaps in support of business strategies.
Communicate with key stakeholders and development teams on technical solutions.
Present and advocate for proposals by way of high-level solutions to end users and/or stakeholders.

The Requirement:
Candidates must have significant experience in a technology-related discipline, such as IT or Engineering; a Bachelor's/College Degree in these areas is beneficial.
Strong experience in databases, tools, and methodologies.
Strong skills across a broad range of database technologies, including Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Azure SQL Database, Azure Data Lake Storage, and other Azure services. Working knowledge of Microsoft Fabric is preferred.
Data analysis, data modeling, data integration, data warehousing, and database design.
Experience with database performance evaluation and remediation.
Ability to develop strategies for data acquisition, archive recovery, and implementation.
Ability to design and develop databases, data warehouses, and multidimensional databases.
Experience in data governance, including Microsoft Purview, Azure Data Catalogue, Azure Data Share, and other Azure tools.
Familiarity with legal risks related to data usage and rights.
Experience in data security, including Azure Key Vault, Azure Data Encryption, Azure Data Masking, Azure Data Anonymization, and Azure Active Directory.
Ability to develop database strategies for flexible, high-performance reporting and business intelligence.
Experience using data modeling tools and methodology.
Experience working within an Agile Scrum development life cycle, across varying levels of Agile maturity.
Experience working with geographically distributed scrum teams.
Excellent verbal and writing skills, including the ability to research, design, and write new documentation, as well as to maintain and improve existing material.

Technical competencies:
Subject Matter Expertise
Developing expertise: You strengthen your depth and/or breadth of subject matter knowledge and skills across multiple areas. You define the expertise required in your area based on emerging technologies and industry practices, and you build the team's capability accordingly.
Applying expertise: You apply subject matter knowledge and skills across multiple areas to assess the impact of complex issues and implement long-term solutions. You foster innovation, using subject matter knowledge to enhance tools, practices, and processes for the team.
Solution Development
Systems thinking: You lead and foster collaboration across H&B Data Platform Technology to develop solutions to complex issues. You apply a whole-systems approach to evaluating impact, and take ownership for ensuring links between structure, people, and processes are made.
Focusing on quality: You instill a quality mindset in the team and ensure the appropriate methods, processes, and standards are in place for teams to deliver quality solutions. You create and deliver improvement initiatives.
Technical Communication
Simplifying complexity: You develop tools, aids, and/or original content to support the delivery and/or understanding of complex information. You guide others on best practice.

Qualifications
Candidates must have significant experience in a technology-related discipline, such as IT or Engineering; a Bachelor's/College Degree in these areas is beneficial.
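To illustrate the ingestion-to-consumption layering this role describes, a minimal PySpark sketch of a bronze-to-silver claims transform (the storage paths and columns are hypothetical; a real layout would follow the platform's standards):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hb_claims_bronze_to_silver").getOrCreate()

# Hypothetical ADLS Gen2 paths for the raw (bronze) and cleansed (silver) zones.
bronze = spark.read.format("delta").load(
    "abfss://bronze@hblake.dfs.core.windows.net/claims")

# Cleanse and conform: dedupe, enforce types, drop rows missing required keys.
silver = (
    bronze.dropDuplicates(["claim_id"])
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(18,2)"))
    .filter(F.col("claim_date").isNotNull())
)

silver.write.format("delta").mode("overwrite").save(
    "abfss://silver@hblake.dfs.core.windows.net/claims")
```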
Posted 1 week ago
0.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Role: Senior Analyst
Experience: 3 to 5 years
Location: Bangalore

Job Description and Responsibilities:
Focus on ML model load testing and the creation of E2E test cases.
Evaluate models' scalability and latency by running suites of metrics under different RPS, and create and automate test cases for individual models, ensuring a smooth rollout of the models.
Enhance monitoring of model scalability, and handle incidents of increased error rates.
Collaborate with machine learning engineers, backend engineers, and QA test engineers from cross-functional teams.

Skills Required:
Databricks; MLflow; Seldon; Kubeflow; Tecton; Jenkins; AWS services; at least one of the programming languages among Java, Python, and Scala; ML load testing; job monitoring; evaluating scalability and latency of models; good communication skills; experience with production-level models; worked on models with high data volumes and low latency.

Job Snapshot
Updated Date: 10-06-2025
Job ID: J_3721
Location: Bengaluru, Karnataka, India
Experience: 3 - 5 Years
Employee Type: Permanent
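A minimal sketch of the kind of load test described above, assuming a hypothetical HTTP inference endpoint and the requests library (the thread count only crudely approximates a sustained RPS target):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

import requests

URL = "http://model-service.internal/predict"   # hypothetical endpoint
PAYLOAD = {"features": [0.1, 0.2, 0.3]}         # hypothetical model input


def one_call():
    t0 = time.perf_counter()
    resp = requests.post(URL, json=PAYLOAD, timeout=5)
    return time.perf_counter() - t0, resp.status_code


# 500 calls through 50 workers: a rough stand-in for a fixed-RPS generator.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(lambda _: one_call(), range(500)))

latencies = sorted(t for t, _ in results)
errors = sum(1 for _, code in results if code >= 500)
p95 = quantiles(latencies, n=20)[18]  # 95th percentile cut point
print(f"p95 latency: {p95 * 1000:.1f} ms, errors: {errors}/{len(results)}")
```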
Posted 1 week ago
0.0 - 16.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Software Development / Engineering
Main location: India, Karnataka, Bangalore
Position ID: J0625-0079
Employment Type: Full Time

Position Description:
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Position: Manage Consulting Expert - AI Architect
Experience: 13-16 years
Shift Timing: General Shift
Location: Bangalore
Education Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 13 years of relevant experience.

We are looking for an experienced and visionary AI Architect with a strong engineering background and hands-on implementation experience to lead the development and deployment of AI-powered solutions. The ideal candidate will have 13-16 years of experience in software and AI systems design, including extensive exposure to large language models (LLMs), vector databases, and modern AI frameworks such as LangChain. This role requires a balance of strategic architectural planning and tactical engineering execution, working across teams to bring intelligent applications to life.

Your future duties and responsibilities:
Design robust, scalable architectures for AI/ML systems, including LLM-based and generative AI solutions.
Lead the implementation of AI features and services in enterprise-grade products with clear, maintainable code.
Develop solutions using LangChain, orchestration frameworks, and vector database technologies.
Collaborate with product managers, data scientists, ML engineers, and business stakeholders to gather requirements and translate them into technical designs.
Guide teams on best practices for AI system integration, deployment, and monitoring.
Define and implement architecture governance, patterns, and reusable frameworks for AI applications.
Stay current with emerging AI trends, tools, and methodologies to continuously enhance architecture strategy.
Oversee development of Proof-of-Concepts (PoCs) and Minimum Viable Products (MVPs) to validate innovative ideas.
Ensure systems are secure, scalable, and high-performing in production environments.
Mentor junior engineers and architects to build strong AI and engineering capabilities within the team.

Required qualifications to be successful in this role:
Must-have skills:
13-16 years of overall experience in software development, with at least 5+ years in AI/ML system architecture and delivery.
Proven expertise in developing and deploying AI/ML models in production environments.
Deep knowledge of LLMs, LangChain, prompt engineering, RAG (retrieval-augmented generation), and vector search.
Strong programming and system design skills with a solid engineering foundation.
Exceptional ability to communicate complex concepts clearly to technical and non-technical stakeholders.
Experience with Agile methodologies and cross-functional team leadership.
Programming Languages: Python, Java, Scala, SQL
AI/ML Frameworks: LangChain, TensorFlow, PyTorch, Scikit-learn, Hugging Face Transformers
Data Processing: Apache Spark, Kafka, Pandas, Dask
Vector Stores & Retrieval Systems: FAISS, Pinecone, Weaviate, Chroma
Cloud Platforms: AWS (SageMaker, Lambda), Azure (ML Studio, OpenAI), Google Cloud AI
MLOps & DevOps: Docker, Kubernetes, MLflow, Kubeflow, Airflow, CI/CD tools (GitHub Actions, Jenkins)
Databases: PostgreSQL, MongoDB, Redis, BigQuery, Snowflake
Tools & Platforms: Databricks, Jupyter Notebooks, Git, Terraform

Good-to-have skills:
Solution engineering and implementation experience in AI projects.

Skills: AWS Machine Learning, English, GitHub, Python, Jenkins, Kubernetes, Prometheus, Snowflake

What you can expect from us:
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because...
You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction.
Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.
You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons.
Come join our team, one of the largest IT and business consulting services firms in the world.
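As a toy illustration of the retrieval step in a RAG system (this is not LangChain itself; the embedding function is a deterministic random placeholder standing in for a real sentence-embedding model, so the similarity scores are not meaningful):

```python
import numpy as np

# Toy corpus; a production system would hold embeddings in FAISS,
# Pinecone, Weaviate, or Chroma rather than a NumPy array.
docs = [
    "Q3 revenue grew 12% year over year.",
    "The public API rate limit is 100 requests per second.",
    "Passwords can be reset from the account settings page.",
]


def embed(text: str) -> np.ndarray:
    # Placeholder: a seeded random unit vector per text. Swap in a real
    # embedding model to get meaningful similarity.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)


index = np.stack([embed(d) for d in docs])


def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)          # cosine similarity on unit vectors
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]


# The retrieved passages would be injected into the LLM prompt (the
# "retrieval-augmented" part of RAG) before generation.
print(retrieve("What is the API rate limit?"))
```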
Posted 1 week ago
0.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for designing, building and overseeing the deployment and operation of technology architecture, solutions and software to capture, manage, store and utilize structured and unstructured data from internal and external sources. Establishes and builds processes and structures based on business and technical requirements to channel data from multiple inputs, route appropriately and store using any combination of distributed (cloud) structures, local databases, and other applicable storage forms as required. Develops technical tools and programming that leverage artificial intelligence, machine learning and big-data techniques to cleanse, organize and transform data and to maintain, defend and update data structures and integrity on an automated basis. Creates and establishes design standards and assurance processes for software, systems and applications development to ensure compatibility and operability of data connections, flows and storage requirements. Reviews internal and external business and product requirements for data operations and activity and suggests changes and upgrades to systems and storage to accommodate ongoing needs. Works with data modelers/analysts to understand the business problems they are trying to solve, then creates or augments data assets to feed their analysis. Integrates knowledge of business and functional priorities. Acts as a key contributor in a complex and crucial environment. May lead teams or projects and share expertise.

Job Description
Position: Data Engineer 4
Experience: 8 years to 11.5 years
Job Location: Chennai, Tamil Nadu

Requirements
Databases: Deep knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra, Couchbase). (MUST)
Big Data Technologies: Experience with Spark, Kafka, and other big data ecosystem tools. (NICE TO HAVE)
Cloud Platforms: Experience with cloud services such as AWS, Azure, or Google Cloud Platform, with a particular focus on data engineering services. (NICE TO HAVE)
Version Control: Experience with version control systems like Git. (MUST)
CI/CD: Knowledge of CI/CD pipelines for automating development and deployment processes. (MUST)
Proficiency in Elasticsearch and experience managing large-scale clusters. (MUST)
Hands-on experience with containerization technologies like Docker and Kubernetes. (MUST: Docker)
Strong programming skills in scripting languages such as Python, Bash, or similar. (NICE TO HAVE)

Key Responsibilities
Design, develop, and maintain scalable data pipelines and infrastructure.
Ensure compliance with security regulations and implement advanced security measures to protect company data.
Implement and manage CI/CD pipelines for data applications.
Work with containerization technologies (Docker, Kubernetes) to deploy and manage data services.
Optimize and manage Elasticsearch clusters for log ingestion, along with tools such as Logstash, Fluentd, and Promtail used to forward logs to an Elasticsearch instance or other log ingestion tools (Loki + Grafana).
Collaborate with other departments (e.g., Data Science, IT, DevOps) to integrate data solutions with existing business systems.
Optimize the performance of data pipelines and resolve data integrity and quality issues.
Document data processes and architectures to ensure transparency and facilitate maintenance.
Monitor industry trends and adopt best practices to continuously improve our data engineering solutions.

Core Responsibilities
Develops data structures and pipelines aligned to established standards and guidelines to organize, collect, standardize and transform data that helps generate insights and address reporting needs.
Focuses on ensuring data quality during ingest and processing as well as the final load to the target tables.
Creates standard ingestion frameworks for structured and unstructured data, as well as checking and reporting on the quality of the data being processed.
Creates standard methods for end users / downstream applications to consume data, including but not limited to database views, extracts, and Application Programming Interfaces.
Develops and maintains information systems (e.g., data warehouses, data lakes), including data access Application Programming Interfaces.
Participates in the implementation of solutions via data architecture, data engineering, or data manipulation on both on-prem platforms like Kubernetes and Teradata and cloud platforms like Databricks.
Determines the appropriate storage platform across different on-prem (MinIO and Teradata) and cloud (AWS S3, Redshift) options depending on the privacy, access and sensitivity requirements.
Understands the data lineage from source to the final semantic layer, along with the transformation rules applied, to enable faster troubleshooting and impact analysis during changes.
Collaborates with technology and platform management partners to optimize data sourcing and processing rules to ensure appropriate data quality as well as process optimization.
Creates and establishes design standards and assurance processes for software, systems and applications development to ensure compatibility and operability of data connections, flows and storage requirements.
Reviews internal and external business and product requirements for data operations and activity and suggests changes and upgrades to systems and storage to accommodate ongoing needs.
Develops strategies for data acquisition, archive recovery, and database implementation.
Manages data migrations/conversions and troubleshoots data processing issues.
Understands data sensitivity and customer data privacy rules and regulations, and applies them consistently in all Information Lifecycle Management activities.
Identifies and reacts to system notifications and logs to ensure quality standards for databases and applications.
Solves abstract problems beyond a single development language or situation by reusing data files and flags already set.
Solves critical issues and shares knowledge, such as trends, aggregates, and quantity/volume, regarding specific data sources.
Consistent exercise of independent judgment and discretion in matters of significance.
Regular, consistent and punctual attendance. Must be able to work nights and weekends, and variable schedule(s) as necessary.
Other duties and responsibilities as assigned.

Employees at all levels are expected to:
Understand our Operating Principles; make them the guidelines for how you do your job.
Own the customer experience - think and act in ways that put our customers first, give them seamless digital options at every touchpoint, and make them promoters of our products and services.
Know your stuff - be enthusiastic learners, users and advocates of our game-changing technology, products and services, especially our digital tools and experiences.
Win as a team - make big things happen by working together and being open to new ideas.
Be an active part of the Net Promoter System - a way of working that brings more employee and customer feedback into the company - by joining huddles, making call backs and helping us elevate opportunities to do better for our customers.
Drive results and growth.
Respect and promote inclusion & diversity.
Do what's right for each other, our customers, investors and our communities.

Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools, all personalized to meet the needs of your reality, to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education
Bachelor's Degree
While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience
7-10 Years
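A minimal sketch of the log-ingestion path this role manages, using the official Elasticsearch Python client (v8 API; the cluster URL and log fields are hypothetical):

```python
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster endpoint

# One structured log event, as a forwarder (Logstash/Fluentd/Promtail) might send.
event = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "service": "checkout",
    "level": "ERROR",
    "message": "payment gateway timeout",
}

# Daily indices keep shard sizes bounded and make retention policies simple.
index_name = f"app-logs-{datetime.now(timezone.utc):%Y.%m.%d}"
es.index(index=index_name, document=event)

# A quick error-rate check an on-call engineer might run across all days.
resp = es.count(index="app-logs-*", query={"term": {"level": "ERROR"}})
print("error events:", resp["count"])
```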
Posted 1 week ago
0.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Overview
Our analysts transform data into meaningful insights that drive strategic decision making. They analyze trends, interpret data, and discover opportunities. Working cross-functionally, they craft narratives from the numbers, directly contributing to our success. Their work influences key business decisions and shapes the direction of Comcast.

Success Profile
What makes a successful Data Engineer 4 at Comcast? Check out these top traits and explore role-specific skills in the job description below: Good Listener, Problem Solver, Organized, Collaborative, Perceptive, Analytical.

Benefits
We're proud to offer comprehensive benefits to help support you physically, financially and emotionally through the big milestones and in your everyday life.
Paid Time Off: We know how important it can be to spend time away from work to relax, recover from illness, or take time to care for others' needs.
Physical Wellbeing: We offer a range of benefits and support programs to ensure that you and your loved ones get the care you need.
Financial Wellbeing: These benefits give you personalized support designed entirely around your unique needs today and for the future.
Emotional Wellbeing: No matter how you're feeling or what you're dealing with, there are benefits to help when you need it, in the way that works for you.
Life Events + Family Support: Benefits that support you no matter where you are in life's journey.

Data Engineer 4
Location: Chennai, India
Req ID: R412866
Job Type: Full Time
Category: Analytics
Date posted: 06/10/2025
This requisition carries the same job summary, requirements, and responsibilities as the Data Engineer 4 posting above.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Skillsets required:
Application and API development
SQL data modeling and automation
Experience working with GIS map services and spatial databases
Experience creating GIS map services
Data and application architecture
Handling of legacy data

Platforms:
Databricks
Snowflake
ESRI ArcGIS / ArcSDE
New GenAI app being developed

Tasks that will need to be done:
Combining and integrating spatial databases from different sources to be used with the new GenAI application
Building map services with associated metadata to support questions from geoscience users
Setting up the necessary update cycles for databases and map services to ensure evergreen results
Helping construct APIs for these databases and map services to structure the best possible workflows for users
Assisting with data and application architecture
Helping with handling legacy data, such as links to existing applications, databases, and services
Ensuring that IT requirements are met as the project is built, including integration, data tiers, access control, and status monitoring
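For flavor, a small sketch of querying an ArcGIS map service's REST endpoint from Python (the service URL, layer, and field names are hypothetical):

```python
import requests

# Hypothetical ArcGIS Server map service; layer 0 holds well locations.
SERVICE = ("https://gis.example.com/arcgis/rest/services/"
           "Geoscience/Wells/MapServer/0/query")

params = {
    "where": "DEPTH_M > 3000",        # attribute filter on the layer
    "outFields": "WELL_ID,DEPTH_M",   # columns to return
    "f": "json",                      # standard ArcGIS REST output format
}

features = requests.get(SERVICE, params=params, timeout=30).json()["features"]
for feat in features[:5]:
    print(feat["attributes"])
```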
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Need a Databricks resource with Azure cloud experience.
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have skills: Experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.
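A minimal sketch of the kind of Databricks pipeline step this role covers (the mount path and table names are hypothetical; in a Databricks notebook the session already exists, so getOrCreate simply returns it):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Ingest raw JSON landed by an upstream process (hypothetical mount path).
raw = spark.read.json("/mnt/raw/orders/")

# Transform: drop rows missing the business key, stamp the ingest date.
clean = (
    raw.filter(F.col("order_id").isNotNull())
    .withColumn("ingest_date", F.current_date())
)

# Simple data-quality gate before publishing to the curated zone.
total, kept = raw.count(), clean.count()
assert total == 0 or (total - kept) / total < 0.01, "too many invalid rows"

clean.write.mode("append").format("delta").saveAsTable("curated.orders")
```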
Posted 1 week ago
3.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Need a Databricks resource with Azure cloud experience.
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have skills: Experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Software Engineer - Microsoft Fabric
Job Date: Jun 9, 2025
Job Requisition Id: 61230
Location: Hyderabad, TG, IN / Indore, MP, IN / Pune, MH, IN

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future.

We are looking forward to hiring Microsoft Fabric professionals in the following areas:

Experience: 3-5 Years

Job Description
Experience in Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL, and ETL.
Create pipelines, datasets, dataflows, and integration runtimes; monitor pipelines and trigger runs.
Extract, transform, and load data from source systems, processing the data in Azure Databricks.
Create SQL scripts to perform complex queries.
Create Synapse pipelines to migrate data from Gen2 to Azure SQL.
Build data migration pipelines to the Azure cloud (Azure SQL).
Perform database migration from on-prem SQL Server to an Azure dev environment using Azure DMS and Data Migration Assistant.
Experience in using Azure Data Catalog.
Experience in big data batch processing solutions, interactive processing solutions, and real-time processing solutions.
Certifications are good to have.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale.
Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and an ethical corporate culture
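As referenced above, a minimal sketch of moving data from ADLS Gen2 into Azure SQL from a Databricks notebook, where `spark` and `dbutils` are predefined. The storage account, server, table, and secret names are all hypothetical.

```python
# Read parquet data from a hypothetical ADLS Gen2 container
df = spark.read.parquet("abfss://raw@examplestorage.dfs.core.windows.net/sales/")

jdbc_url = (
    "jdbc:sqlserver://example-server.database.windows.net:1433;"
    "database=salesdb;encrypt=true"
)

# Append into an Azure SQL staging table; credentials come from a secret scope
(df.write.format("jdbc")
   .option("url", jdbc_url)
   .option("dbtable", "dbo.sales_staging")
   .option("user", dbutils.secrets.get("kv-scope", "sql-user"))
   .option("password", dbutils.secrets.get("kv-scope", "sql-password"))
   .mode("append")
   .save())
```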
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description
The Data Engineer supports, develops, and maintains a data and analytics platform to efficiently process, store, and make data available to analysts and other consumers. This role collaborates with Business and IT teams to understand requirements and best leverage technologies for agile data delivery at scale.
Note: Even though the role is categorized as Remote, it will follow a hybrid work model.
Key Responsibilities:
- Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured).
- Develop and operate large-scale data storage and processing solutions using cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, DynamoDB).
- Ensure data quality and integrity through continuous monitoring and troubleshooting.
- Implement data governance processes, managing metadata, access, and data retention.
- Develop scalable, efficient, and quality data pipelines with monitoring and alert mechanisms.
- Design and implement physical data models and storage architectures based on best practices.
- Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- Participate in testing and troubleshooting of data pipelines.
- Utilize agile development practices such as DevOps, Scrum, and Kanban for continuous improvement in data-driven applications.
Qualifications, Skills, and Experience:
Must-Have:
- 2-3 years of experience in data engineering with expertise in Azure Databricks and Scala/Python.
- Hands-on experience with Spark (Scala/PySpark) and SQL.
- Strong understanding of Spark Streaming, Spark internals, and query optimization.
- Proficiency in Azure cloud services.
- Agile development experience.
- Experience in unit testing of ETL pipelines (see the sketch after this posting).
- Expertise in creating ETL pipelines integrating ML models.
- Knowledge of big data storage strategies (optimization and performance).
- Strong problem-solving skills.
- Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse.
- Exposure to agile software development methodologies.
- Quick learner with adaptability to new technologies.
Nice-to-Have:
- Understanding of the ML lifecycle.
- Exposure to big data open-source technologies.
- Experience with clustered compute cloud-based implementations.
- Familiarity with developing applications requiring large file movement in cloud environments.
- Experience in building analytical solutions.
- Exposure to IoT technology.
Competencies:
- System Requirements Engineering: Translates stakeholder needs into verifiable requirements.
- Collaborates: Builds partnerships and works collaboratively with others.
- Communicates Effectively: Develops and delivers clear communications for various audiences.
- Customer Focus: Builds strong customer relationships and delivers customer-centric solutions.
- Decision Quality: Makes timely and informed decisions to drive progress.
- Data Extraction: Performs ETL activities from various sources using appropriate tools and technologies.
- Programming: Writes and tests computer code using industry standards, tools, and automation.
- Quality Assurance Metrics: Applies measurement science to assess solution effectiveness.
- Solution Documentation: Documents and communicates solutions to enable knowledge transfer.
- Solution Validation Testing: Ensures configuration changes meet design and customer requirements.
- Data Quality: Identifies and corrects data flaws to support governance and decision-making.
- Problem Solving: Uses systematic analysis to identify and resolve issues effectively.
- Values Differences: Recognizes and values diverse perspectives and cultures.
Qualifications, Education, Licenses, and Certifications:
- College, university, or equivalent degree in a relevant technical discipline, or equivalent experience, required.
- This position may require licensing for compliance with export controls or sanctions regulations.
Work Schedule: Work primarily with stakeholders in the US, requiring a 2-3 hour overlap during EST hours as needed.
Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2411641
Relocation Package: No
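A minimal sketch of the unit testing of ETL pipelines this role asks for, using pytest with a local SparkSession. The transform under test is a hypothetical stand-in, defined inline so the example is self-contained.

```python
import pytest
from pyspark.sql import DataFrame, SparkSession, functions as F

def add_net_amount(df: DataFrame) -> DataFrame:
    """Hypothetical transform under test: net = gross * (1 - tax_rate)."""
    return df.withColumn("net", F.col("gross") * (1 - F.col("tax_rate")))

@pytest.fixture(scope="session")
def spark():
    # Small local session so tests run without a cluster
    return SparkSession.builder.master("local[2]").appName("etl-tests").getOrCreate()

def test_add_net_amount(spark):
    src = spark.createDataFrame([(1, 100.0, 0.10)], ["id", "gross", "tax_rate"])
    out = add_net_amount(src).collect()[0]
    assert out["net"] == pytest.approx(90.0)
```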
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: PySpark, Microsoft Azure Databricks
Minimum experience required: 5 years
Educational qualification: 15 years of full-time education
Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop innovative solutions and streamline processes.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the development and implementation of new applications.
- Conduct code reviews and ensure coding standards are met.
- Stay updated on industry trends and best practices.
Professional & Technical Skills:
- Must-have: proficiency in the Databricks Unified Data Analytics Platform.
- Good-to-have: experience with PySpark.
- Strong understanding of data engineering concepts.
- Experience in building and optimizing data pipelines (see the maintenance sketch below).
- Knowledge of cloud platforms such as Microsoft Azure.
- Familiarity with data governance and security practices.
Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
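As a taste of the pipeline optimization work mentioned above, a small sketch of routine Delta Lake table maintenance on Databricks. The table name is assumed; OPTIMIZE, ZORDER, and VACUUM are Databricks/Delta-specific commands, and `spark` is predefined in a notebook.

```python
# Compact small files and co-locate rows by a frequent filter column
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id)")

# Remove stale data files older than the 7-day retention window
spark.sql("VACUUM sales.orders RETAIN 168 HOURS")
```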
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
As part of the Astellas commitment to delivering value for our patients, our organization is currently undergoing transformation to achieve this critical goal. This is an opportunity to work on digital transformation and make a real impact within a company dedicated to improving lives. DigitalX, our new information technology function, is spearheading this value-driven transformation across Astellas. We are looking for people who excel in embracing change, manage technical challenges, and have exceptional communication skills. This position is based in Bengaluru and will require some on-site work.
Purpose and Scope:
As a Junior Data Engineer, you will play a crucial role in assisting in the design, build, and maintenance of our data infrastructure, focusing on BI and DWH capabilities. Working with the Senior Data Engineer, your foundational expertise in BI, Databricks, PySpark, SQL, Talend, and other related technologies will be instrumental in driving data-driven decision-making across the organization. You will play a pivotal role in building, maintaining, and enhancing our systems across the organization. This is a fantastic global opportunity to use your proven agile delivery skills across a diverse range of initiatives, utilize your development skills, and contribute to the continuous improvement and delivery of critical IT solutions.
Essential Job Responsibilities:
- Collaborate with FoundationX engineers to design and maintain scalable data systems.
- Assist in building robust infrastructure using technologies like Power BI, Qlik (or alternatives), Databricks, PySpark, and SQL.
- Contribute to ensuring system reliability by incorporating accurate business-driving data.
- Gain experience in BI engineering through hands-on projects.
- Data Modelling and Integration: Collaborate with cross-functional teams to analyse requirements and create technical designs, data models, and migration strategies. Design, build, and maintain physical databases, dimensional data models, and ETL processes specific to pharmaceutical data.
- Cloud Expertise: Evaluate and influence the selection of cloud-based technologies such as Azure, AWS, or Google Cloud. Implement data warehousing solutions in a cloud environment, ensuring scalability and security.
- BI Expertise: Leverage and create Power BI, Qlik, or equivalent technology for data visualization, dashboards, and self-service analytics.
- Data Pipeline Development: Design, build, and optimize data pipelines using Databricks and PySpark. Ensure data quality, reliability, and scalability (a small data-quality sketch follows at the end of this posting).
- Application Transition: Support the migration of internal applications to Databricks-based (or equivalent) solutions. Collaborate with application teams to ensure a seamless transition.
- Mentorship and Leadership: Lead and mentor junior data engineers. Share best practices, provide technical guidance, and foster a culture of continuous learning.
- Data Strategy Contribution: Contribute to the organization’s data strategy by identifying opportunities for data-driven insights and improvements.
- Participate in smaller, focused mission teams to deliver value-driven solutions aligned to our global and bold move priority initiatives and beyond.
- Design, develop, and implement robust and scalable data analytics using modern technologies.
- Collaborate with cross-functional teams and practices across the organisation, including Commercial, Manufacturing, Medical, DataX, and GrowthX, and support other X (transformation) Hubs and Practices as appropriate, to understand user needs and translate them into technical solutions.
- Provide technical support to internal users, troubleshooting complex issues and restoring system uptime as quickly as possible.
- Champion continuous improvement initiatives, identifying opportunities to optimise the performance, security, and maintainability of existing data and platform architecture and other technology investments.
- Participate in the continuous delivery pipeline, adhering to DevOps best practices for version control, automation, and deployment, and ensuring effective management of the FoundationX backlog.
- Leverage your knowledge of data engineering principles to integrate with existing data pipelines and explore new possibilities for data utilization.
- Stay up to date on the latest trends and technologies in data engineering and cloud platforms.
Qualifications:
Required
- Bachelor's degree in computer science, information technology, or a related field (master’s preferred), or equivalent experience.
- 1-3+ years of experience in data engineering, with a strong understanding of BI technologies, PySpark, and SQL, and of building and optimizing data pipelines.
- 1-3+ years of experience with data engineering and integration tools (e.g., Databricks, Change Data Capture).
- 1-3+ years of experience utilizing cloud platforms (AWS, Azure, GCP). A deeper understanding of, or certification in, AWS and Azure is considered a plus.
- Experience with relational and non-relational databases.
- Any relevant cloud-based integration certification at foundational level or above (any Qlik or BI certification, AWS Certified DevOps Engineer, AWS Certified Developer, any Microsoft Certified Azure qualification, or a relevant certification such as RESTful APIs, CDMP, MDM, DBA, SQL, SAP, TOGAF, API, CISSP, or VCP).
- Experience in MuleSoft (the Anypoint platform, its components, and designing and managing API-led connectivity solutions).
- Experience in AWS (environment, services, and tools) and developing code in at least one high-level programming language.
- Experience with continuous integration and continuous delivery (CI/CD) methodologies and tools.
- Experience with Azure services related to computing, networking, storage, and security.
- Understanding of cloud integration patterns and Azure integration services such as Logic Apps, Service Bus, and API Management.
Preferred
- Subject Matter Expertise: a strong understanding of data architecture, engineering, operations, and reporting within the Life Sciences/Pharma industry across Commercial, Manufacturing, and Medical domains. Other complex and highly regulated industry experience will be considered across diverse areas like Commercial, Manufacturing, and Medical.
- Data Analysis and Automation Skills: proficient in identifying, standardizing, and automating critical reporting metrics and modelling tools.
- Analytical Thinking: demonstrated ability to lead ad hoc analyses, identify performance gaps, and foster a culture of continuous improvement.
- Technical Proficiency: strong coding skills in SQL, R, and/or Python, coupled with expertise in machine learning techniques, statistical analysis, and data visualization.
- Agile Champion: adherence to DevOps principles and a proven track record with CI/CD pipelines for continuous delivery.
Working Environment:
At Astellas we recognize the importance of work/life balance, and we are proud to offer a hybrid working solution allowing time to connect with colleagues at the office with the flexibility to also work from home. We believe this will optimize the most productive work environment for all employees to succeed and deliver.
Hybrid work from certain locations may be permitted in accordance with Astellas’ Responsible Flexibility Guidelines.
Category: FoundationX
Astellas is committed to equality of opportunity in all aspects of employment. EOE including Disability/Protected Veterans.
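A minimal sketch of the kind of data-quality gate referenced above: fail fast when a key column exceeds a null-rate threshold. The table and column names are hypothetical, and `spark` is assumed to come from a Databricks notebook.

```python
from pyspark.sql import functions as F

def null_rate(df, column: str) -> float:
    """Fraction of rows where the given column is null."""
    total = df.count()
    if total == 0:
        return 0.0
    return df.filter(F.col(column).isNull()).count() / total

df = spark.read.table("clinical.site_enrollment")  # hypothetical table
if null_rate(df, "site_id") > 0.01:
    raise ValueError("site_id null rate above 1%, halting the pipeline")
```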
Posted 1 week ago
8.0 - 13.0 years
20 - 35 Lacs
Hyderabad
Remote
Databricks Administrator (Azure/AWS) | Remote | 6+ Years
Job Description: We are seeking an experienced Databricks Administrator with 6+ years of expertise in managing and optimizing Databricks environments. The ideal candidate should have hands-on experience with Azure/AWS Databricks, cluster management, security configurations, and performance optimization. This role requires close collaboration with data engineering and analytics teams to ensure smooth operations and scalability.
Key Responsibilities:
- Deploy, configure, and manage Databricks workspaces, clusters, and jobs (see the sketch below).
- Monitor and optimize Databricks performance, auto-scaling, and cost management.
- Implement security best practices, including role-based access control (RBAC) and encryption.
- Manage Databricks integration with cloud storage (Azure Data Lake, S3, etc.) and other data services.
- Automate infrastructure provisioning and management using Terraform, ARM templates, or CloudFormation.
- Troubleshoot Databricks runtime issues, job failures, and performance bottlenecks.
- Support CI/CD pipelines for Databricks workloads and notebooks.
- Collaborate with data engineering teams to enhance ETL pipelines and data processing workflows.
- Ensure compliance with data governance policies and regulatory requirements.
- Maintain and upgrade Databricks versions and libraries as needed.
Required Skills & Qualifications:
- 6+ years of experience as a Databricks Administrator or in a similar role.
- Strong knowledge of Azure/AWS Databricks and cloud computing platforms.
- Hands-on experience with Databricks clusters, notebooks, libraries, and job scheduling.
- Expertise in Spark optimization, data caching, and performance tuning.
- Proficiency in Python, Scala, or SQL for data processing.
- Experience with Terraform, ARM templates, or CloudFormation for infrastructure automation.
- Familiarity with Git, DevOps, and CI/CD pipelines.
- Strong problem-solving skills and ability to troubleshoot Databricks-related issues.
- Excellent communication and stakeholder management skills.
Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Associate/Professional).
- Experience in Delta Lake, Unity Catalog, and MLflow.
- Knowledge of Kubernetes, Docker, and containerized workloads.
- Experience with big data ecosystems (Hadoop, Apache Airflow, Kafka, etc.).
Email: Hrushikesh.akkala@numerictech.com
Phone/WhatsApp: 9700111702
For immediate response and further opportunities, connect with me on LinkedIn: https://www.linkedin.com/in/hrushikesh-a-74a32126a/
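For illustration, a minimal sketch of one administrative task listed above: creating a cluster through the Databricks REST API (POST /api/2.0/clusters/create). The workspace URL and the node and runtime values are hypothetical, and the token is read from the environment rather than hard-coded.

```python
import os
import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace
token = os.environ["DATABRICKS_TOKEN"]  # personal access token from a secret store

resp = requests.post(
    f"{host}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "cluster_name": "shared-etl",
        "spark_version": "14.3.x-scala2.12",   # assumed runtime version
        "node_type_id": "Standard_DS3_v2",     # assumed Azure node type
        "autoscale": {"min_workers": 2, "max_workers": 8},
        "autotermination_minutes": 30,         # shut down idle clusters to control cost
    },
    timeout=30,
)
resp.raise_for_status()
print("created cluster:", resp.json()["cluster_id"])
```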
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Kyndryl Data Science
Bengaluru, Karnataka, India; Chennai, Tamil Nadu, India
Posted on Jun 9, 2025
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
As a GCP Data Engineer at Kyndryl, you will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment using GCP data services. You will collaborate with global architects and business teams to design and deploy innovative solutions, supporting data analytics, automation, and transformation needs.
Responsibilities:
- Design, develop, and maintain scalable data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage (a small BigQuery sketch follows at the end of this posting).
- Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements.
- Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs.
- Design data models suitable for both transactional and big data environments, supporting Machine Learning workflows.
- Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services.
- Develop and maintain Python/PySpark code for data processing, and integrate with GCP services for seamless data operations.
- Develop and optimize SQL queries for data analysis and reporting.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Implement data governance and security best practices within GCP.
- Perform data quality checks and validation to ensure accuracy and consistency.
- Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines.
- Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management.
- Provide technical support and guidance to junior data engineers and other team members.
- Participate in code reviews and contribute to continuous improvement of data engineering practices.
- Implement best practices for cost management and resource utilization within GCP.
If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development.
You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.
Required Technical and Professional Experience:
- Bachelor’s or master’s degree in computer science, engineering, or a related field, with over 8 years of experience in data engineering.
- More than 3 years of experience with the GCP data ecosystem.
- Hands-on experience and strong proficiency in GCP components such as Dataflow, Dataproc, BigQuery, Cloud Functions, Composer, and Data Fusion.
- Excellent command of SQL, with the ability to write complex queries and perform advanced data transformation.
- Strong programming skills in PySpark and/or Python, specifically for building cloud-native data pipelines.
- Familiarity with GCP tools such as Looker, Airflow DAGs, Data Studio, and App Maker.
- Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP.
- Knowledge of data governance, security, and compliance best practices.
- Experience with private and public cloud architectures, their pros and cons, and migration considerations.
- Excellent problem-solving, analytical, and critical thinking skills.
- Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail.
- Communication skills: able to communicate with both technical and non-technical audiences and to derive technical requirements with stakeholders.
- Ability to work independently and in agile teams.
Preferred Technical and Professional Experience:
- GCP Data Engineer Certification is highly preferred.
- Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
- Experience working as a Data Engineer and/or in cloud modernization.
- Knowledge of Databricks or Snowflake for data analytics.
- Experience with NoSQL databases.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Familiarity with BI dashboards and Google Data Studio is a plus.
Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
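As referenced above, a minimal sketch of the BigQuery side of this role, using the official google-cloud-bigquery client. The project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-analytics")  # hypothetical project

# Aggregate the last 30 days of orders by region
query = """
    SELECT region, SUM(amount) AS total_amount
    FROM `example-analytics.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY region
    ORDER BY total_amount DESC
"""

for row in client.query(query).result():
    print(row.region, row.total_amount)
```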
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Category: Data Engineer
Job Type: Hybrid
Job Location: Bangalore
Job Experience: 4-7 years
We are seeking a Data Engineer to join our growing team. The Data Engineer will be responsible for designing, developing, and maintaining our ETL pipelines and managing our database systems. The ideal candidate should have a strong background in SQL, database design, and ETL processes.
Key Responsibilities:
- Analyse the different source systems, profile data, and understand, document, and fix data quality issues.
- Gather requirements and business process knowledge to transform the data in a way that is geared towards the needs of end users.
- Write complex SQL to extract and format source data for ETL/data pipelines.
- Design, implement, and maintain systems that collect and analyze business intelligence data.
- Design and architect an analytical data store or cluster for the enterprise, and implement data pipelines that extract, transform, and load data into an information product that helps the organization reach strategic goals.
- Create design documents, source-to-target mapping documents, and any supporting documents needed for deployment and migration.
- Design, develop, and test ETL/data pipelines.
- Design and build the metadata-based frameworks needed for data pipelines (see the sketch below).
- Write unit test cases, execute unit testing, and document unit test results.
- Manage and maintain the database, warehouse, and cluster, along with other dependent infrastructure.
- Expertise in managing and optimizing Spark clusters, along with other implementations of Spark.
- Strong programming skills in Python and PySpark.
- Strong proficiency in SQL and experience with relational databases (PostgreSQL, MySQL, Oracle, etc.) and NoSQL databases (MongoDB, Cassandra, DynamoDB).
- Knowledge of data modelling techniques such as star/snowflake schemas, data vault, etc.
- Knowledge of semantic modelling.
- Strong problem-solving skills.
- Honed business acumen, with a capacity for straddling between macro business strategy and micro, tangible data and AI products.
- Perform data cleaning, transformation, and validation to ensure accuracy and consistency across various data sources.
Preferred technologies: Azure, Databricks
Eligibility Criteria:
Education: B.E/B.Tech in any specialization, BCA, M.Tech in any specialization, or MCA.
Primary Skills:
- SQL
- Databricks
- Any one cloud platform (AWS, Azure, or GCP)
- Python, PySpark
Management Skills: Ability to handle assigned tasks and projects simultaneously in an organized and timely manner.
Soft Skills: Good verbal and written communication skills, attention to detail, and a positive, confident attitude.
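A minimal sketch of the metadata-based framework idea referenced above: each pipeline is described as configuration, and one generic runner executes them all. Paths and table names are illustrative, and `spark` is assumed to come from a Databricks notebook.

```python
# Pipeline definitions live as metadata, so adding a feed means adding a dict,
# not writing new code. All names here are hypothetical.
PIPELINES = [
    {"source": "/mnt/raw/customers", "target": "analytics.dim_customer", "keys": ["customer_id"]},
    {"source": "/mnt/raw/orders",    "target": "analytics.fact_orders",  "keys": ["order_id"]},
]

def run_pipeline(spark, cfg: dict) -> None:
    # Generic runner: read, deduplicate on the configured keys, write to Delta
    df = spark.read.parquet(cfg["source"]).dropDuplicates(cfg["keys"])
    df.write.format("delta").mode("overwrite").saveAsTable(cfg["target"])

for cfg in PIPELINES:
    run_pipeline(spark, cfg)
```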
Posted 1 week ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Purpose
Over 15 years, we have become a premier global provider of multi-cloud management, cloud-native application development solutions, and strategic end-to-end digital transformation services. Headquartered in Canada and with regional headquarters in the U.S. and the United Kingdom, Centrilogic delivers smart, streamlined solutions to clients worldwide.
We are looking for a passionate and experienced Data Engineer to work with our other 70 Software, Data, and DevOps engineers to guide and assist our clients’ data modernization journeys. Our team works with companies with ambitious missions – clients who are creating new, innovative products, often in uncharted markets. We work as embedded members and leaders of our clients' development and data teams. We bring experienced senior engineers, leading-edge technologies and mindsets, and creative thinking. We show our clients how to move to the modern frameworks of data infrastructures and processing, and we help them reach their full potential with the power of data.
In this role, you'll be the day-to-day primary point of contact with our clients to modernize their data infrastructures, architecture, and pipelines.
Principal Responsibilities:
- Consulting clients on cloud-first strategies for core, bet-the-company data initiatives
- Providing thought leadership on both process and technical matters
- Becoming a real champion and trusted advisor to our clients on all facets of data engineering
- Designing, developing, deploying, and supporting the modernization and transformation of our clients’ end-to-end data strategy, including infrastructure, collection, transmission, processing, and analytics
- Mentoring and educating clients’ teams to keep them up to speed with the latest approaches, tools, and skills, and setting them up for continued success post-delivery
Required Experience and Skills:
- Must have either the Microsoft Certified Azure Data Engineer Associate or Fabric Data Engineer Associate certification.
- Must have experience working in a consulting or contracting capacity on large data management and modernization programs.
- Experience with SQL Server and data engineering on platforms such as Azure Data Factory, Databricks, Data Lake, and Synapse.
- Strong knowledge of and demonstrated experience with Delta Lake and Lakehouse architecture.
- Strong knowledge of securing Azure environments, including RBAC, Key Vault, and Azure Security Center.
- Strong knowledge of Kafka and Spark, and extensive experience using them in a production environment (a sketch of the streaming pattern follows below).
- Strong, demonstrable experience as a DBA in large-scale MS SQL environments deployed in Azure.
- Strong problem-solving skills, with the ability to get to the root of an issue quickly.
- Strong knowledge of Scala or Python.
- Strong knowledge of Linux administration and networking.
- Scripting skills and Infrastructure as Code (IaC) experience using PowerShell, Bash, and ARM templates.
- Understanding of the security and corporate governance issues associated with cloud-first data architecture, as well as accepted industry solutions.
- Experience in enabling continuous delivery for development teams using scripted cloud provisioning and automated tooling.
- Experience working with an Agile development methodology that is fit for purpose.
- Sound business judgment and demonstrated leadership.
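As referenced above, a minimal sketch of the production Kafka-plus-Spark pattern: streaming events from Kafka into a Delta "bronze" table with checkpointing. The broker, topic, and paths are hypothetical, and `spark` is assumed to come from a Databricks notebook.

```python
from pyspark.sql import functions as F

# Source: a hypothetical Kafka topic of client events
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker-1:9092")
         .option("subscribe", "client-events")
         .load()
)

# Kafka values arrive as bytes; cast to string and keep the event time
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_ts"),
)

# Sink: append to a Delta bronze table; the checkpoint makes the stream restartable
(parsed.writeStream.format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/client-events")
       .outputMode("append")
       .start("/mnt/bronze/client_events"))
```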
Posted 1 week ago
3.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
We’re reinventing the market research industry. Let’s reinvent it together.
At Numerator, we believe tomorrow’s success starts with today’s market intelligence. We empower the world’s leading brands and retailers with unmatched insights into consumer behavior and the influencers that drive it.
The Data Methods Data Science Lead will be expected to independently define and lead all analyses and develop solutions to support key innovation initiatives and methodology development. The individual will have a relatively advanced understanding of our end-to-end processes and will be able to work largely independently on identifying areas for improvement, assessing which enhancements should be made, and working with other teams to implement these changes. There will be a strong focus on using panel data across our services, working via our dedicated data processing platform on Databricks.
Main Duties and Responsibilities
Data Methods / Data Science Primary Role: able to independently drive key initiatives and projects related to Data Methods.
- Project Plans: outline project scope, objectives, timelines, and milestones.
- Data Collection / Preparation: management of data (e.g., cleaning, preprocessing).
- Exploratory Data Analysis: conduct feasibility studies of potential solutions and lead prototyping.
- Model Development: build and validate models using various data science algorithms.
- Model Deployment: design requirements, then test and deploy models in production.
- Reporting and Documentation: document methods used, findings, and recommendations.
- Training and Support: deliver training and provide deployment support.
- Project Review and Evaluation: conduct reviews of project outcomes.
- Able to form views on how new processes ought to be constructed.
- Coordinate work with the on-shore Methods Manager and Leads.
- Able to contribute both to BAU enhancements and to work under the umbrella of a project.
Understanding the Wider Business
- Develops a basic understanding of the Operations functions.
- Develops an understanding of the commercial usage of our data.
- Develops a broader understanding of our direct competitors.
Training & Development
- Takes ownership of self-development and, where available, participates in structured training.
- Gains proficiency in all relevant databases, data interrogation, and reporting tools (for example Databricks, SQL, Python, Excel, etc.).
Communication & Collaboration
- Ability to collaborate with key stakeholders across operations, technology, and product teams.
- Ability to collaborate and communicate with stakeholders upwards and with team members.
- Able to communicate in an appropriate manner (e.g., verbally, presenting, or creating a PowerPoint deck, Word document, or email).
- Adheres to deadlines, escalating where there is a risk of delays.
- Demonstrates and role-models best practice and techniques, including a positive communication style.
- Displays a proactive attitude when working both within and outside of the team.
- Demonstrates clear, direct, and to-the-point communication at Data Methods team meetings.
Issue Management and Best Practice
- Proactive identification and root-cause analysis of Data Methods issues, and development of best-practice solutions to improve the underlying methodology and processes.
- Supports regular methodology review meetings with the on-shore Manager and Leads to establish priorities and future requirements.
- Shares knowledge through the team, either in team meetings or day-to-day with the wider Data Methods team.
- Able to think through complex processes and how to construct and improve them, considering in detail the positive and negative implications of different approaches and how best to test and assess them.
Resource Management
- Organising workload efficiently.
- Adhering to schedules.
- Escalating any risks to deadlines and capacity challenges.
What You'll Bring to Numerator
Education & Experience
- Bachelor's, master's, or doctoral degree.
- 3+ years of experience.
Knowledge: domain expertise in 3-5 of the following: data collection methods, fraud detection methods, data cleaning methods, demographics methods, sampling methods, bias methods, eligibility methods, weighting methods, data aggregation methods, outlier methods, statistical modelling, and metrics/KPI methods. (A small outlier-detection sketch follows at the end of this posting.)
Tools
- Python [or R] (advanced – ability to independently script end-to-end functioning programmes)
- Databricks (intermediate)
- SQL (intermediate)
- Azure DevOps (basic)
- Git (basic)
- Excel (advanced)
- Power BI (intermediate)
Passion and Drive
- Passionate about data quality, integrity, and best practices.
- Passionate about delivering high-quality data to clients on schedule.
Communication
- Ability to communicate complex technical concepts to stakeholders in simple language.
- Good English communication, presentation, interpersonal, and writing skills.
- Good listening skills.
- Good online, virtual, and in-person collaboration skills.
- Comfortable presenting panel and data methods to external audiences.
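As referenced above, a minimal sketch of one of the listed methods areas (outlier methods): flagging panel spend outliers with the interquartile-range rule. The numbers are made up for illustration.

```python
import pandas as pd

def iqr_outliers(values: pd.Series, k: float = 1.5) -> pd.Series:
    """Boolean mask of values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = values.quantile(0.25), values.quantile(0.75)
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

basket_spend = pd.Series([12.0, 14.5, 13.2, 11.8, 240.0, 13.9])
print(basket_spend[iqr_outliers(basket_spend)])  # flags the 240.0 basket
```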
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Senior Specialist - BI Solution Design & Transformation
Bengaluru (Bangalore), India, or Gurgaon, India
AXA XL recognizes that digital, data, and information assets are critical for the business, both in terms of managing risk and enabling new business opportunities. Data and insights should not only be high quality, but also actionable – enabling AXA XL’s executive leadership team to maximize benefits and achieve sustained dynamic advantage.
Our Innovation, Data & Analytics (IDA) function is focused on driving innovation by optimizing how we leverage digital, data, and AI to drive strategy and differentiate ourselves from the competition. As we develop an enterprise-wide data and digital strategy that moves us toward a greater focus on the use of data and strengthens our capabilities, we are seeking a Senior Specialist to join our BI & Reporting function, as part of our BI Solution Design & Transformation team.
In this role, you will be a key technical BI & Reporting lead for all our global Power BI (incl. Power Apps, Power Automate, etc.) solutions, and will lead efforts to enhance, transform, and streamline the BI platform landscape. You will be an ambassador of new and existing BI solutions across all regions and must be comfortable proactively engaging with business users. This will enable the organization to gain the necessary, timely insights to drive business decisions, build our dynamic advantage, and help differentiate in the market through easy-to-use, scalable, cost-effective, secure, and high-performance BI platforms.
You will report to the Division Lead, BI Solution Design & Transformation, based in London, and work within a global team, collaborating day-to-day with teams and business users around the world.
What You’ll Be Doing
What will your essential responsibilities include?
- Be a technical lead and go-to expert in the team on the development of new BI reporting solutions globally, and make sure new products are fit for customer purpose, built to best-practice standards, and efficiently utilize company resources and data assets.
- Work directly with customers in the business to understand requirements, design new solutions, and troubleshoot issues, as well as prioritize work based upon a deep customer understanding.
- Advise the wider team and bring industry best-practice techniques to make sure our BI platforms are well-governed, maintained, secure, and optimized for our customers. This includes AI in BI and the related design, testing, end-to-end rollout, and ongoing continuous improvement.
- Enable the business to self-serve and meet BI demands through the provision and ongoing management of appropriate tools, coupled with leading relevant training for the business.
- Proactively partner with key areas within Global Technology, Transformation & Change Delivery, Group, and other teams, for projects and for business-as-usual deliverables.
- Instill a customer-first culture, prioritizing service for our business stakeholders above all else.
What You Will Bring
We’re looking for someone who has these abilities and skills:
Required Skills and Abilities
- Very high technical proficiency in the use, development, and solution design of Microsoft Power BI (incl. Power Apps, Power Automate, etc.), with experience of usage in a large, global, and complicated organization.
- Expertise in data modelling best practices, data engineering, and data management (incl. SQL), and the ability to articulate these to both technical and non-technical users.
- Ability to communicate effectively and directly with users, peers, senior management, and teams across the globe; manage stakeholders effectively; and navigate a matrixed, virtual, global organization.
- Brings a collaborative spirit, a can-do attitude, and a customer-first mindset to everything they put their mind to.
- Passion for digital, data, and AI, and experience working within a digital- and data-driven organization.
- A minimum of a bachelor’s or master's degree in a relevant discipline.
- Applies in-depth knowledge of business and specialized areas to solve problems and understand integration challenges and long-term impact creatively and strategically.
- A self-starter who can operate independently and lead others in the team in strategic thinking and solution design.
Desired Skills and Abilities
- Use and development skills in AI in BI: Power BI Copilot and connections with LLMs/models; Databricks and Databricks Genie ideal.
- Proficiency in other programming languages, e.g., R, Python, PowerShell, etc.
- Ability to operate and thrive in an agile team working environment (incl. use of Jira and other workflow tools).
- Insurance experience, with an understanding of both financial and non-financial metrics.
Who We Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals, and even some inspirational individuals, we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines, and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com
What We Offer
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That’s why we have made a strategic commitment to attract, develop, advance, and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It’s about helping one another – and our business – to move forward and succeed.
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability, and inclusion, with 20 chapters around the globe
- Robust support for flexible working arrangements
- Enhanced family-friendly leave benefits
- Named to the Diversity Best Practices Index
- Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.
Total Rewards
AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle, and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.
Sustainability
At AXA XL, sustainability is integral to our business strategy.
In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 sustainability strategy, called “Roots of Resilience,” focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.
Our Pillars:
- Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems – the foundation of a sustainable planet and society – are essential to our future. We’re committed to protecting and restoring nature – from mangrove forests to the bees in our backyard – by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
- Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net-zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
- Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
- AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day – the Global Day of Giving.
For more information, please see Sustainability at AXA XL.
The U.S. pay range for this position is USD 106,500 - 186,500. Actual pay will be determined based upon the individual’s skills, experience, and location. We strive for market alignment and internal equity with our colleagues’ pay. At AXA XL, we know how important physical, mental, and financial health are to our employees, which is why we are proud to offer benefits such as a dynamic retirement savings plan, health and wellness programs, and many other benefits. We also believe in fostering our colleagues' development and offer a wide range of learning opportunities for colleagues to hone their professional skills and to position themselves for the next step of their careers. For more details about AXA XL’s benefits offerings, please visit US Benefits at a Glance 2025.
Posted 1 week ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas—Oncology, Inflammation, General Medicine, and Rare Disease—we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Senior Manager Technology – US Commercial Data & Analytics
What You Will Do
Let’s do this. Let’s change the world. In this vital role you will lead the engagement model between Amgen's Technology organization and our global business partners in Commercial Data & Analytics. We seek a technology leader with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders. Are you interested in building a team that consistently delivers business value in an agile model using technologies such as AWS, Databricks, Airflow, and Tableau? Come join our team!
Roles & Responsibilities:
- Establish an effective engagement model to collaborate with senior leaders on the Sales Insights product team within the Commercial Data & Analytics organization, focused on operations within the United States.
- Serve as the technology product owner for an agile product team committed to delivering business value to Commercial stakeholders via data pipeline buildout for sales data.
- Lead and mentor junior team members to deliver on the needs of the business.
- Interact with business clients and technology management to create technology roadmaps, build cases, and drive DevOps to achieve the roadmaps.
- Help mature Agile operating principles through the deployment of creative and consistent practices for user story development, robust testing and quality oversight, and a focus on user experience.
- Connect and transform our vast array of Commercial and other functional data sources, including Sales, Activity, and Digital data, into consumable and user-friendly modes (e.g., dashboards, reports, mobile, etc.) for key decision makers such as executives, brand leads, account managers, and field representatives.
- Become the lead subject matter expert in reporting technology capabilities by researching and implementing new tools and features, and internal and external methodologies.
What We Expect of You
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Master’s degree with 8-10 years of experience in Information Systems, OR
- Bachelor’s degree with 10-14 years of experience in Information Systems, OR
- Diploma with 14-18 years of experience in Information Systems
Must-Have Skills:
- Excellent problem-solving skills and a passion for tackling complex challenges in data and analytics with technology.
- Experience leading data and analytics teams in a Scaled Agile Framework (SAFe).
- Excellent interpersonal skills, strong attention to detail, and the ability to influence based on data and business value.
- Ability to build compelling business cases with accurate cost and effort estimations.
- Experience writing user requirements and acceptance criteria in agile project management systems such as Jira.
- Ability to explain sophisticated technical concepts to non-technical clients.
- Strong understanding of sales and incentive compensation value streams.
Preferred Qualifications:
- Jira Align and Confluence experience.
- Experience with DevOps, continuous integration, and continuous delivery methodology.
- Understanding of software systems strategy, governance, and infrastructure.
- Experience managing product features for PI planning and developing product roadmaps and user journeys.
- Familiarity with low-code/no-code test automation software.
- Technical thought leadership.
Soft Skills:
- Able to work effectively across multiple geographies (primarily India, Portugal, and the United States) under minimal supervision.
- Demonstrated proficiency in written and verbal communication in the English language.
- Skilled in providing oversight and mentoring team members; demonstrated ability to effectively delegate work.
- Intellectual curiosity and the ability to question partners across functions.
- Ability to prioritize successfully based on business value.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully across virtual teams.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
Technical Skills:
- ETL tools: experience with ETL tools such as Databricks, Redshift, or an equivalent cloud-based database (a small pipeline-orchestration sketch follows after this posting).
- Big data, analytics, reporting, data lake, and data integration technologies.
- S3 or an equivalent storage system.
- AWS (or similar cloud-based platforms).
- BI tools (Tableau and Power BI preferred).
What You Can Expect of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
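As referenced above, a minimal sketch of the Airflow piece of this stack: a daily DAG wrapping a sales-data load. The DAG id, schedule, and load step are hypothetical, and the `schedule` argument assumes Airflow 2.4 or later.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_sales_data():
    # Placeholder for the real pipeline step (e.g., triggering a Databricks job)
    print("loading US sales data")

with DAG(
    dag_id="us_sales_daily",
    start_date=datetime(2025, 1, 1),
    schedule="0 6 * * *",  # every day at 06:00
    catchup=False,
) as dag:
    PythonOperator(task_id="load_sales", python_callable=load_sales_data)
```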
Posted 1 week ago
5.0 - 10.0 years
35 - 40 Lacs
Bengaluru
Work from Office
Role & Responsibilities
- Collaborate with cross-functional teams to understand data requirements and design scalable, efficient data processing solutions.
- Develop and maintain data pipelines using PySpark and SQL on the Databricks platform.
- Optimize and tune data processing jobs for performance and reliability.
- Implement automated testing and monitoring processes to ensure data quality and reliability.
- Work closely with data scientists, data analysts, and other stakeholders to understand their data needs and provide effective solutions.
- Troubleshoot and resolve data-related issues, including performance bottlenecks and data quality problems.
- Stay up to date with industry trends and best practices in data engineering and Databricks.
Preferred Candidate Profile
- 5+ years of experience as a Data Engineer, with a focus on Databricks and cloud-based data platforms, including a minimum of 2 years of experience writing unit and end-to-end tests for data pipelines and ETL processes on Databricks.
- Hands-on experience in PySpark programming for data manipulation, transformation, and analysis.
- Strong experience in SQL and writing complex queries for data retrieval and manipulation.
- Experience developing and implementing test cases for data processing pipelines using a test-driven development approach.
- Experience with Docker for containerising and deploying data engineering applications is good to have.
- Experience in the scripting language Python is mandatory.
- Strong knowledge of the Databricks platform and its components, including Databricks notebooks, clusters, and jobs.
- Experience in designing and implementing data models to support analytical and reporting needs will be an added advantage.
- Strong knowledge of Azure Data Factory for data orchestration, ETL workflows, and data integration is good to have.
- Knowledge of cloud-based storage such as Amazon S3 and Azure Blob Storage is good to have.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Strong analytical and problem-solving skills.
- Strong English communication skills, both written and spoken.
- Ability to solve complex technical issues and to understand risks before they materialize.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Date: Jun 7, 2025
Location: Pune, MH, IN
Company: HMH

HMH is a learning technology company committed to delivering connected solutions that engage learners, empower educators and improve student outcomes. As a leading provider of K–12 core curriculum, supplemental and intervention solutions, and professional learning services, HMH partners with educators and school districts to uncover solutions that unlock students' potential and extend teachers' capabilities. HMH serves more than 50 million students and 4 million educators in 150 countries. HMH Technology India Pvt. Ltd. is our technology and innovation arm in India, focused on developing novel products and solutions using cutting-edge technology to better serve our clients globally. HMH aims to help employees grow as people, and not just as professionals. For more information, visit www.hmhco.com

The data architect is responsible for designing, creating, and managing an organization's data architecture. This role is critical in establishing a solid foundation for data management within an organization, ensuring that data is organized, accessible, secure, and aligned with business objectives. The data architect designs data models, warehouses, file systems and databases, and defines how data will be collected and organized.

Responsibilities
- Interprets and delivers impactful strategic plans improving data integration, data quality, and data delivery in support of business initiatives and roadmaps
- Designs the structure and layout of data systems, including databases, warehouses, and lakes
- Selects and designs database management systems that meet the organization's needs by defining data schemas, optimizing data storage, and establishing data access controls and security measures
- Defines and implements the long-term technology strategy and innovation roadmaps across analytics, data engineering, and data platforms
- Designs ETL processes that move data from various sources into the organization's data systems
- Translates high-level business requirements into data models and appropriate metadata, test data, and data quality standards
- Manages senior business stakeholders to secure strong engagement and ensures that project delivery aligns with longer-term strategic roadmaps
- Simplifies the existing data architecture, delivering reusable services and cost-saving opportunities in line with the policies and standards of the company
- Leads and participates in the peer review and quality assurance of project architectural artifacts across the EA group through governance forums
- Defines and manages standards, guidelines, and processes to ensure data quality
- Works with IT teams, business analysts, and data analytics teams to understand data consumers' needs and develop solutions
- Evaluates and recommends emerging technologies for data management, storage, and analytics
- Designs, creates, and implements logical and physical data models for both IT and business solutions to capture the structure, relationships, and constraints of relevant datasets
- Builds and operationalizes complex data solutions, corrects problems, applies transformations, and recommends data cleansing and quality solutions
- Collaborates and communicates effectively with various stakeholders to understand data and business requirements and translate them into data models
- Creates entity-relationship diagrams (ERDs), data flow diagrams, and other visualizations to represent data models
- Collaborates with database administrators and software engineers to implement and maintain data models in databases, data warehouses, and data lakes
- Develops data modeling best practices, and uses these standards to identify and resolve data modeling issues and conflicts
- Conducts performance tuning and optimization of data models for efficient data access and retrieval
- Incorporates core data management competencies, including data governance, data security and data quality

Job Requirements

Education
- A bachelor's degree in computer science, data science, engineering, or a related field

Experience
- At least five years of relevant experience in the design and implementation of data models for enterprise data warehouse initiatives
- Experience leading projects involving data warehousing, data modeling, and data analysis
- Design experience in Azure Databricks, PySpark, and Power BI/Tableau

Skills
- Proficiency in programming languages such as Java, Python, and C/C++
- Proficiency with data science languages and tools such as SQL, R, SAS, or Excel
- Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS, Azure, GCP), real-time data distribution (Kafka, Dataflow), and modern data warehouse tools (Snowflake, Databricks)
- Experience with database technologies such as SQL, NoSQL, Oracle, Hadoop, or Teradata
- Understanding of entity-relationship modeling, metadata systems, and data quality tools and techniques
- Ability to think strategically and relate architectural decisions and recommendations to business needs and client culture
- Ability to assess traditional and modern data architecture components based on business needs
- Experience with business intelligence tools and technologies such as ETL, Power BI, and Tableau
- Ability to regularly learn and adopt new technology, especially in the ML/AI realm
- Strong analytical and problem-solving skills
- Ability to synthesize and clearly communicate large volumes of complex information to senior management with varying levels of technical understanding
- Ability to collaborate and excel in complex, cross-functional teams involving data scientists, business analysts, and stakeholders
- Ability to guide solution design and architecture to meet business needs
- Expert knowledge of data modeling concepts, methodologies, and best practices
- Proficiency in data modeling tools such as Erwin or ER/Studio
- Knowledge of relational databases and database design principles
- Familiarity with dimensional modeling and data warehousing concepts (a minimal sketch follows this posting)
- Strong SQL skills for data querying, manipulation, and optimization, and knowledge of other data science languages, including JavaScript, Python, and R
- Ability to collaborate with cross-functional teams and stakeholders to gather requirements and align on data models
- Excellent analytical and problem-solving skills to identify and resolve data modeling issues
- Strong communication and documentation skills to effectively convey complex data modeling concepts to technical and business stakeholders

HMH Technology Private Limited is an Equal Opportunity Employer and considers applicants for all positions without regard to race, colour, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. We are committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit https://careers.hmhco.com/.
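The dimensional modeling and SQL skills the posting calls for lend themselves to a concrete illustration. Below is a minimal, hypothetical star-schema sketch in PySpark (chosen because the role lists Azure Databricks and PySpark): a small date dimension, a sales fact table, and a star-join rollup in Spark SQL. All table and column names are invented for the example.

```python
# A minimal star-schema sketch in PySpark: one fact table joined to a
# date dimension. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

# Dimension: one row per calendar date, keyed by a surrogate date_key.
dim_date = spark.createDataFrame(
    [(20250601, "2025-06-01", "Q2"), (20250602, "2025-06-02", "Q2")],
    ["date_key", "calendar_date", "quarter"],
)

# Fact: one row per order, carrying foreign keys and additive measures.
fact_sales = spark.createDataFrame(
    [(1, 20250601, 3, 29.97), (2, 20250602, 1, 9.99)],
    ["order_id", "date_key", "quantity", "revenue"],
)

dim_date.createOrReplaceTempView("dim_date")
fact_sales.createOrReplaceTempView("fact_sales")

# A typical star-join query: roll additive measures up a dimension attribute.
spark.sql("""
    SELECT d.quarter, SUM(f.revenue) AS total_revenue
    FROM fact_sales f
    JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.quarter
""").show()
```

The same pattern scales to real warehouses: fact tables hold foreign keys and additive measures, while dimensions hold the descriptive attributes that queries group and filter by.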
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Data Engineer – Azure

This is a hands-on data platform engineering role that places significant emphasis on consultative data engineering engagements with a wide range of customer stakeholders: business owners, business analytics, data engineering teams, application development, end users, and management teams.

You Will:
- Design and build resilient and efficient data pipelines for batch and real-time streaming (a minimal streaming sketch appears at the end of this posting)
- Collaborate with product managers, software engineers, data analysts, and data scientists to build scalable and data-driven platforms and tools
- Provide technical product expertise, advise on deployment architectures, and handle in-depth technical questions around data infrastructure, PaaS services, design patterns, and implementation approaches
- Collaborate with enterprise architects, data architects, ETL developers and engineers, data scientists, and information designers to lead the identification and definition of required data structures, formats, pipelines, metadata, and workload orchestration capabilities
- Address aspects such as data privacy and security, data ingestion and processing, data storage and compute, analytical and operational consumption, data modeling, data virtualization, self-service data preparation and analytics, AI enablement, and API integrations
- Execute projects with an Agile mindset
- Build software frameworks to solve data problems at scale

Technical Requirements:
- 3+ years of data engineering experience leading implementations of large-scale lakehouses on Databricks, Snowflake, or Synapse; prior experience using dbt and Power BI is a plus
- Extensive experience with Azure data services (Databricks, Synapse, ADF) and related Azure infrastructure services such as firewall, storage, and key vault
- Strong programming and scripting experience using SQL, Python, and Spark
- Knowledge of software configuration management environments and tools such as JIRA, Git, Jenkins, TFS, Shell, PowerShell, and Bitbucket
- Experience with Agile development methods in data-oriented projects

Other Requirements:
- Highly motivated self-starter and team player with demonstrated success in prior roles
- Track record of success working through technical challenges within enterprise organizations
- Ability to prioritize deals, training, and initiatives through highly effective time management
- Excellent problem-solving, analytical, presentation, and whiteboarding skills
- Track record of success dealing with ambiguity (internal and external) and working collaboratively with other departments and organizations to solve challenging problems
- Strong knowledge of technology and industry trends that affect data analytics decisions for enterprise organizations
- Certifications in Azure data engineering and related technologies

"Remote postings are limited to candidates residing within the country specified in the posting location"

About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world's leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work, year after year, according to Fortune, Forbes and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers and deliver the future.
More on Rackspace Technology
Though we're all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
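Since the role centers on batch and real-time streaming pipelines on Azure Databricks, a minimal PySpark Structured Streaming sketch may help make that expectation concrete. The built-in "rate" source stands in for a real feed such as Kafka or Event Hubs, and the console sink stands in for a Delta table; both substitutions are assumptions made to keep the example self-contained.

```python
# A minimal Structured Streaming sketch: unbounded source, windowed
# aggregation with a watermark, and a streaming sink.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Source: the "rate" source emits (timestamp, value) rows continuously;
# in production this would typically be Kafka or Event Hubs.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Transform: count events per 10-second window; the watermark bounds
# how long state is kept for late-arriving data.
counts = (
    events
    .withWatermark("timestamp", "30 seconds")
    .groupBy(F.window("timestamp", "10 seconds"))
    .count()
)

# Sink: print updated window counts; in production this would usually
# be a Delta table with checkpointing for exactly-once recovery.
query = (
    counts.writeStream
    .outputMode("update")
    .format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```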
Posted 1 week ago
Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.
The average salary range for Databricks professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
In the field of Databricks, a typical career path may include:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
In addition to Databricks expertise, other skills that are often expected or helpful include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools
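For a sense of how several of these skills combine in day-to-day Databricks work, here is a small, hypothetical batch example mixing the PySpark DataFrame API with Spark SQL; the input path and column names are placeholders, not a real dataset.

```python
# A small batch example combining Spark, Python, and SQL skills.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("skills-sketch").getOrCreate()

# Ingest: read a (hypothetical) CSV of orders with a header row.
orders = spark.read.option("header", "true").csv("/data/orders.csv")

# Clean: cast the measure column and drop non-positive amounts.
orders = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

# Analyze: the same aggregation in the DataFrame API and in Spark SQL.
orders.groupBy("country").agg(F.sum("amount").alias("total")).show()

orders.createOrReplaceTempView("orders")
spark.sql("SELECT country, SUM(amount) AS total FROM orders GROUP BY country").show()
```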
As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!