3.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities:
Develop and execute test scripts to validate data pipelines, transformations, and integrations.
Formulate and maintain test strategies (smoke, performance, functional, and regression testing) to ensure data processing and ETL jobs meet requirements.
Collaborate with development teams to assess changes in data workflows and update test cases to preserve data integrity.
Design and run tests for data validation, storage, and retrieval using Azure services such as Data Lake, Synapse, and Data Factory, adhering to industry standards.
Continuously enhance automated tests as new features are developed, ensuring timely delivery per defined quality standards.
Participate in data reconciliation and verify Data Quality frameworks to maintain data accuracy, completeness, and consistency across the platform.
Share knowledge and best practices by collaborating with business analysts and technology teams to document testing processes and findings.
Communicate testing progress effectively with stakeholders, highlighting issues or blockers and ensuring alignment with business objectives.
Maintain a comprehensive understanding of the Azure Data Lake platform's data landscape to ensure thorough testing coverage.

Skills & Experience:
3-6 years of QA experience with a strong focus on Big Data testing, particularly in Data Lake environments on Azure's cloud platform.
Proficient in Azure Data Factory, Azure Synapse Analytics, and Databricks for big data processing and scaled data quality checks.
Proficiency in SQL, capable of writing and optimizing both simple and complex queries for data validation and testing purposes.
Proficient in PySpark, with experience in data manipulation and transformation, and a demonstrated ability to write and execute test scripts for data processing and validation.
Hands-on experience with functional and system integration testing in big data environments, ensuring seamless data flow and accuracy across multiple systems.
Ability to design and execute test cases in a behaviour-driven development environment.
Fluency in Agile methodologies, with active participation in Scrum ceremonies and a strong understanding of Agile principles.
Familiarity with tools like Jira, including experience with X-Ray or Zephyr for test case and defect management.
Proven experience working on high-traffic, large-scale software products, ensuring data quality, reliability, and performance under demanding conditions.
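As a purely illustrative aside (not part of the posting): a minimal PySpark reconciliation check of the kind this role describes might look like the sketch below. The table names, the `amount` column, and the strict-equality checks are assumptions, not details from the listing.

```python
# Hedged sketch: source-vs-target reconciliation in PySpark.
# All table and column names here are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-reconciliation").getOrCreate()

source = spark.table("raw.orders")      # assumed landing table
target = spark.table("curated.orders")  # assumed post-ETL table

# Row-count parity: a cheap smoke test before deeper column-level checks.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row-count mismatch: {src_count} vs {tgt_count}"

# Aggregate parity on an assumed numeric measure.
src_sum = source.agg(F.sum("amount")).first()[0]
tgt_sum = target.agg(F.sum("amount")).first()[0]
assert src_sum == tgt_sum, f"Amount totals diverge: {src_sum} vs {tgt_sum}"
```

In practice, checks like these would run inside a scheduled automated test suite rather than ad hoc.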
Posted 6 days ago
8.0 years
0 Lacs
India
Remote
Who We Are
At Twilio, we're shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work and strong culture of connection and global inclusion means that no matter your location, you're part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we're acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See Yourself at Twilio
Join the team as our next Staff Backend Engineer on Twilio's Segment Engineering teams.

About the Job
As a Staff Backend Engineer on the Twilio Segment Engineering team, you'll help us build and scale systems that support the leading Customer Data Platform (CDP) in a rapidly evolving and competitive market. Our products process billions of data points per hour, enabling customers to orchestrate and activate their data efficiently and flexibly. Segment provides a best-in-class data infrastructure and orchestration platform that supports a wide range of customer use cases, from identity resolution to real-time audience segmentation. As an engineer on this team, you will be responsible for designing, developing, and optimizing backend services that power data pipelines, APIs, and event-driven architectures. If you thrive in fast-moving environments, enjoy working on scalable systems, and are passionate about building high-performance backend services, this role is for you.

Responsibilities
In this role, you'll:
Design, develop, and maintain backend services that power Twilio Segment's high-scale data platform.
Build scalable, high-performance APIs and data pipelines to support customer data orchestration.
Improve the reliability, scalability, and efficiency of Segment's backend systems.
Collaborate with cross-functional teams, including product, design, and infrastructure, to deliver customer-focused solutions.
Drive best practices in software engineering, including code reviews, testing, and deployment processes.
Ensure high operational excellence by monitoring, troubleshooting, and maintaining always-on cloud services.
Contribute to architectural discussions and technical roadmaps that align with Twilio's CXaaS vision and Segment's strategic initiatives.

Qualifications
Not all applicants will have skills that match a job description exactly. Twilio values diverse experiences in other industries, and we encourage everyone who meets the required qualifications to apply. While "desired" qualifications make for a strong candidate, we encourage applicants with alternative experiences to also apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required:
8+ years of experience writing production-grade backend code in a modern programming language (e.g., Golang, Python, Java, Scala, or similar).
Strong fundamentals and experience in building fault-tolerant distributed systems, event-driven architectures, and database design.
Experience working with AWS cloud-based infrastructure.
Well-versed in designing and building high-scale, low-latency APIs.
Solid grasp of Linux systems and networking concepts.
Strong debugging and troubleshooting skills for complex distributed applications.
Experience shipping services and products following the CI/CD development paradigm.
Effective communication skills and the ability to collaborate in a fast-paced team environment.
Comfortable with ambiguity and problem-solving in a rapidly growing company.

Desired:
Experience working with event streaming technologies (Kafka, Pulsar, or similar).
Experience with database technologies like PostgreSQL, DynamoDB, or Databricks SQL.
Familiarity with containerization and orchestration tools (Docker, Kubernetes).
Background in building multi-tenant SaaS platforms at scale.
Experience working with observability tools such as Prometheus, Grafana, or Datadog.
Experience working in a geographically distributed team.

Location
This role will be remote and based in India (Karnataka, Tamil Nadu, Telangana, Maharashtra, Delhi).

Travel
We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.

What We Offer
Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values, something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions.

Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.
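For orientation only: the desired skills above mention event streaming technologies such as Kafka. A minimal consumer loop using the confluent-kafka Python client is sketched below; the broker address, group id, and topic name are invented placeholders, not Twilio's.

```python
# Minimal Kafka consumer loop (sketch). Broker, group id, and topic
# are hypothetical; error handling is reduced to the essentials.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed broker
    "group.id": "segment-events-demo",      # assumed consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["customer-events"])     # assumed topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # Process one event; a real service would deserialize and route here.
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
```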
Posted 6 days ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description
The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in their area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal and external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here: to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Senior Process Manager Roles and Responsibilities
We are seeking a talented and motivated Data Engineer to join our dynamic team. The ideal candidate will have a deep understanding of data integration processes and experience in developing and managing data pipelines using Python, SQL, and PySpark within Databricks. You will be responsible for designing robust backend solutions, implementing CI/CD processes, and ensuring data quality and consistency.

Data Pipeline Development: Use Databricks features to explore raw datasets and understand their structure. Create and optimize Spark-based workflows. Create end-to-end data processing pipelines, including ingesting raw data, transforming it, and running analyses on the processed data. Create and maintain data pipelines using Python and SQL.
Solution Design and Architecture: Design and architect backend solutions for data integration, ensuring they are robust, scalable, and aligned with business requirements. Implement data processing pipelines using various technologies, including cloud platforms, big data tools, and streaming frameworks.
Automation and Scheduling: Automate data integration processes and schedule jobs on servers to ensure seamless data flow.
Data Quality and Monitoring: Develop and implement data quality checks and monitoring systems to ensure data accuracy and consistency.
CI/CD Implementation: Use Jenkins and Bitbucket to create and maintain metadata and job files. Implement continuous integration and continuous deployment (CI/CD) processes in both development and production environments to deploy data pipelines efficiently.
Collaboration and Documentation: Work effectively with cross-functional teams, including software engineers, data scientists, and DevOps, to ensure successful project delivery. Document data pipelines and architecture to ensure knowledge transfer and maintainability. Participate in stakeholder interviews, workshops, and design reviews to define data models, pipelines, and workflows.

Technical and Functional Skills
Education and Experience: Bachelor's degree with 7+ years of experience, including at least 3 years of hands-on experience in SQL and Python.
Technical Proficiency: Proficiency in writing and optimizing SQL queries in MySQL and SQL Server. Expertise in Python for writing reusable components and enhancing existing ETL scripts. Solid understanding of ETL concepts and data pipeline architecture, including CDC, incremental loads, and slowly changing dimensions (SCDs). Hands-on experience with PySpark. Knowledge of and experience with Databricks is a bonus. Familiarity with data warehousing solutions and ETL processes. Understanding of data architecture and backend solution design.
Cloud and CI/CD Experience: Experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with Jenkins and Bitbucket for CI/CD processes.
Additional Skills: Ability to work independently and manage multiple projects simultaneously.

About Us
At eClerx, we serve some of the largest global companies, including 50 of the Fortune 500, as clients. Our clients call upon us to solve their most complex problems and deliver transformative insights. Across roles and levels, you get the opportunity to build expertise, challenge the status quo, think bolder, and help our clients seize value.

About the Team
eClerx is a global leader in productized services, bringing together people, technology, and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics, and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
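Since the skills list above calls out CDC, incremental loads, and SCDs on Databricks, here is a hedged sketch of an incremental upsert into a Delta table. The table names, key column, and the choice of a Type 1 (overwrite) merge are assumptions for illustration only.

```python
# Sketch of an incremental upsert (SCD Type 1) into a Delta table on
# Databricks. Table and column names are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging.customer_updates")   # assumed CDC batch

target = DeltaTable.forName(spark, "dim.customer")  # assumed dimension table
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # overwrite changed attributes (Type 1)
    .whenNotMatchedInsertAll()   # insert keys not yet in the dimension
    .execute()
)
```

A Type 2 variant would instead close out the matched row (end-date it) and insert a new version, at the cost of a wider merge condition.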
Posted 6 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Overview
Viraaj HR Solutions is dedicated to connecting top talent with forward-thinking companies. Our mission is to provide exceptional talent acquisition services while fostering a culture of trust, integrity, and collaboration. We prioritize our clients' needs and work tirelessly to ensure the ideal candidate-job match. Join us in our commitment to excellence and become part of a dynamic team focused on driving success for individuals and organizations alike.

Role Responsibilities
Design, develop, and implement data pipelines using Azure Data Factory.
Create and maintain data models for structured and unstructured data.
Extract, transform, and load (ETL) data from various sources into data warehouses.
Develop analytical solutions and dashboards using Azure Databricks.
Perform data integration and migration tasks with Azure tools.
Ensure optimal performance and scalability of data solutions.
Collaborate with cross-functional teams to understand data requirements.
Utilize SQL Server for database management and data queries.
Implement data quality checks and ensure data integrity.
Work on data governance and compliance initiatives.
Monitor and troubleshoot data pipeline issues to ensure reliability.
Document data processes and architecture for future reference.
Stay current with industry trends and Azure advancements.
Train and mentor junior data engineers and team members.
Participate in design reviews and provide feedback for process improvements.

Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field.
3+ years of experience in a data engineering role.
Strong expertise in Azure Data Factory and Azure Databricks.
Proficient in SQL for data manipulation and querying.
Experience with data warehousing concepts and practices.
Familiarity with ETL tools and processes.
Knowledge of Python or other programming languages for data processing.
Ability to design scalable cloud architecture.
Experience with data modeling and database design.
Effective communication and collaboration skills.
Strong analytical and problem-solving abilities.
Familiarity with performance tuning and optimization techniques.
Knowledge of data visualization tools is a plus.
Experience with Agile methodologies.
Ability to work independently and manage multiple tasks.
Willingness to learn and adapt to new technologies.

Skills: ETL, Azure Databricks, SQL Server, Azure, data governance, Azure Data Factory, Python, data warehousing, data engineering, data integration, performance tuning, Python scripting, SQL, data modeling, data migration, data visualization, analytical solutions, PySpark, Agile methodologies, data quality checks
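To make the "data quality checks" responsibility above concrete, purely as a sketch: null and duplicate audits on an assumed business key. The table name, key column, and fail-fast policy are all assumptions, not the client's actual setup.

```python
# Illustrative data quality gate in PySpark: null and duplicate audits
# on a hypothetical table and key column.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("warehouse.sales")          # assumed table

# Null check on the business key.
null_keys = df.filter(F.col("order_id").isNull()).count()

# Duplicate check: keys appearing more than once.
dupes = (df.groupBy("order_id").count()
           .filter(F.col("count") > 1)
           .count())

if null_keys or dupes:
    raise ValueError(
        f"Quality gate failed: {null_keys} null keys, {dupes} duplicate keys"
    )
```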
Posted 6 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Gurgaon/Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable, enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained competitive advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics (IDA) team, is focused on driving innovation by optimizing how we leverage data to drive strategy and create a new business model, disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team's efforts towards creating, enhancing, and stabilizing the enterprise data lake through the development of data pipelines. This role requires a team player who can work well with members of other disciplines to deliver data in an efficient and strategic manner.

What You'll Be DOING
What will your essential responsibilities include?
Act as a data engineering expert and partner to Global Technology and data consumers in controlling complexity and cost of the data platform, whilst enabling performance, governance, and maintainability of the estate.
Understand current and future data consumption patterns and architecture at a granular level, and partner with Architects to ensure optimal design of data layers.
Apply best practices in data architecture: for example, the balance between materialization and virtualization, the optimal level of de-normalization, caching and partitioning strategies, the choice of storage and querying technology, and performance tuning.
Lead hands-on research into new technologies, formulating frameworks for assessing new technology against business benefit and implications for data consumers.
Act as a best-practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability, and release, enabling rapid growth in data inventory and utilization of the Data Science Platform.
Design prototypes and work in a fast-paced, iterative solution delivery model.
Design, develop, and maintain ETL pipelines using PySpark in Azure Databricks with Delta tables, and use Harness for the deployment pipeline.
Monitor performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed.
Diagnose system performance issues related to data processing and implement solutions to address them.
Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture.
Maintain integrity and quality across all pipelines and environments.
Understand and follow secure coding practices to make sure code is not vulnerable.
You will report to the Technical Lead.

What You Will BRING
We're looking for someone who has these abilities and skills:

Required Skills and Abilities
Effective communication skills.
Bachelor's degree in computer science, mathematics, statistics, finance, a related technical field, or equivalent work experience.
Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills.
Relevant years of programming experience using Databricks.
Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse, and ADLS).
Solid knowledge of network and firewall concepts.
Solid experience writing, optimizing, and analyzing SQL.
Relevant years of experience with Python.
Ability to break down complex data requirements and architect solutions into achievable targets.
Robust familiarity with Software Development Life Cycle (SDLC) processes and workflow, especially Agile.
Experience using Harness.
Technical lead responsible for both individual and team deliveries.

Desired Skills and Abilities
Worked on big data migration projects.
Worked on performance tuning at both the database and big data platform levels.
Ability to interpret complex data requirements and architect solutions.
Distinctive problem-solving and analytical skills combined with robust business acumen.
Solid understanding of Parquet and Delta file formats.
Effective knowledge of the Azure cloud computing platform.
Familiarity with reporting software; Power BI is a plus.
Familiarity with DBT is a plus.
Passion for data and experience working within a data-driven organization.
You care about what you do, and what we do.

Who WE are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business: property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com.

What we OFFER
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That's why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It's about helping one another, and our business, to move forward and succeed.
Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe.
Robust support for flexible working arrangements.
Enhanced family-friendly leave benefits.
Named to the Diversity Best Practices Index.
Signatory to the UK Women in Finance Charter.
Learn more at axaxl.com/about-us/inclusion-and-diversity.
AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems, the foundation of a sustainable planet and society, are essential to our future. We're committed to protecting and restoring nature, from mangrove forests to the bees in our backyard, by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day, the Global Day of Giving.
For more information, please see axaxl.com/sustainability.
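As a hedged illustration of the Databricks/Delta pipeline work described in this posting's responsibilities: writing a curated Delta table partitioned by load date is one common partitioning strategy. The table names and column names below are assumptions, not AXA XL's actual estate.

```python
# Sketch: append to a curated Delta table partitioned by load date.
# Source/target names and the partition column are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

claims = spark.table("raw.claims")  # assumed source

(
    claims.withColumn("load_date", F.current_date())
    .write.format("delta")
    .mode("append")
    .partitionBy("load_date")   # lets readers prune scans by date
    .saveAsTable("curated.claims")
)
```

Partitioning by the most common filter column trades slightly slower writes for much cheaper downstream reads; the right column depends on actual query patterns.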
Posted 6 days ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Data Analyst 2
We are seeking a highly skilled **Technical Data Analyst** to join our team and play a key role in building a **single source of truth** for our high-volume, direct-to-consumer accounting and financial data warehouse. The ideal candidate will have a strong background in data analysis, SQL, and data transformation, with experience in financial data warehousing and reporting. This role will involve working closely with finance and accounting teams to gather requirements, build dashboards, and transform data to support month-end accounting, tax reporting, and financial forecasting. The financial data warehouse is currently built in **Snowflake** and will be migrated to **Databricks**. The candidate will be responsible for transitioning reporting and transformation processes to Databricks while ensuring data accuracy and consistency.

Key Responsibilities
**Data Analysis & Reporting:**
Build and maintain **month-end accounting and tax dashboards** using SQL and Snowsight in Snowflake.
Transition reporting processes to **Databricks**, creating dashboards and reports to support finance and accounting teams.
Gather requirements from finance and accounting stakeholders to design and deliver actionable insights.
**Data Transformation & Aggregation:**
Develop and implement data transformation pipelines in **Databricks** to aggregate financial data and create **balance sheet look-forward views**.
Ensure data accuracy and consistency during the migration from Snowflake to Databricks.
Collaborate with the data engineering team to optimize data ingestion and transformation processes.
**Data Integration & ERP Collaboration:**
Support the integration of financial data from the data warehouse into **NetSuite ERP** by ensuring data is properly transformed and validated.
Work with cross-functional teams to ensure seamless data flow between systems.
**Data Ingestion & Tools:**
Understand and work with **Fivetran** for data ingestion (no need to be an expert, but familiarity is required).
Troubleshoot and resolve data-related issues in collaboration with the data engineering team.

Qualifications
3+ years of experience as a **Data Analyst** or similar role, preferably in a financial or accounting context.
Strong proficiency in **SQL** and experience with **Snowflake** and **Databricks**.
Experience building dashboards and reports for financial data (e.g., month-end close, tax reporting, balance sheets).
Familiarity with **Fivetran** or similar data ingestion tools.
Understanding of financial data concepts (e.g., general ledger, journals, balance sheets, income statements).
Experience with data transformation and aggregation in a cloud-based environment.
Strong communication skills to collaborate with finance and accounting teams.
Nice-to-have: Experience with **NetSuite ERP** or similar financial systems.
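Purely as an illustration of the month-end reporting work described above: a Spark SQL aggregation on Databricks might look like the sketch below. The journal-table schema (account_id, posting_date, debit, credit) is an assumption, not the employer's actual general-ledger model.

```python
# Hedged sketch: month-end net movement per account via Spark SQL.
# The finance.journal_lines table and its columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

month_end = spark.sql("""
    SELECT account_id,
           date_trunc('month', posting_date) AS period,
           SUM(debit) - SUM(credit)          AS net_movement
    FROM finance.journal_lines
    GROUP BY account_id, date_trunc('month', posting_date)
""")
month_end.write.mode("overwrite").saveAsTable("finance.month_end_balances")
```

A balance sheet look-forward view would layer a running cumulative sum over these period movements.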
Posted 6 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
What We Offer
At Magna, you can expect an engaging and dynamic environment where you can help to develop industry-leading automotive technologies. We invest in our employees, providing them with the support and resources they need to succeed. As a member of our global team, you can expect exciting, varied responsibilities as well as a wide range of development prospects. Because we believe that your career path should be as unique as you are.

Group Summary
Magna is more than one of the world's largest suppliers in the automotive space. We are a mobility technology company built to innovate, with a global, entrepreneurial-minded team. With 65+ years of expertise, our ecosystem of interconnected products combined with our complete vehicle expertise uniquely positions us to advance mobility in an expanded transportation landscape.

Job Responsibilities
The Senior Power BI Developer will be responsible for interpreting business needs and transforming them into powerful Power BI reports and other data-insight products and apps. This includes the design, development, maintenance, integration, and reporting of business systems through cubes, ad-hoc reports, and dashboards, working with trending technologies such as Microsoft Fabric or Databricks. The selected candidate will work closely with international team members from Europe, North America, and Asia.

Major Responsibilities
Collaborate with business analysts and stakeholders to understand data visualization requirements and translate them into effective BI solutions.
Design and develop visually appealing and user-friendly reports, dashboards, and interactive data visualizations to present complex insights in a comprehensible manner.
Leverage proficiency in DAX to create calculated measures, columns, and tables that enhance data analysis capabilities within Power BI models.
Develop and optimize ETL processes using Power Query, SQL, Databricks, and MS Fabric to transform and integrate data from diverse sources, ensuring accuracy and consistency.
Leverage Databricks' or MS Fabric's capabilities, including Apache Spark, Delta Lake, and AutoML/AzureML, to enhance data processing and analytics.
Implement best practices for data modeling, performance optimization, and data governance within Power BI projects.
Work closely with database administrators and data engineers to ensure seamless data flow from source systems to Power BI and maintain data integrity.
Identify and address performance bottlenecks in Power BI reports and dashboards; optimize queries and data models for improved speed and efficiency.
Implement security measures to ensure the confidentiality and integrity of data in Power BI, and ensure compliance with data governance and privacy policies.
Create and maintain documentation for all owned Power BI reports.
Stay up to date with Power BI advancements and industry trends, constantly seek better, more optimized solutions and technologies, and apply newly gained knowledge to Magna's Power BI processes.
Provide training sessions and technical support to end users to foster self-service analytics and maximize Power BI utilization.
Provide support to junior team members.
Collaborate with cross-functional teams to identify opportunities for data-driven insights and contribute to strategic decision-making processes.

Knowledge and Education
Completion of a university degree.

Work Experience
More than 3 years of work-related experience.

Skills and Competencies Required to Perform the Job
More than 3 years of experience in the development of Business Intelligence solutions based on Microsoft Tabular models, including Power BI visualization and complex DAX expressions.
Experience in designing and implementing data models on Tabular, SQL, or Delta Lake based storage solutions.
Solid understanding of ETL processes and Data Warehouse and Lakehouse concepts.
Strong SQL coding skills.
Advanced skills in the Microsoft BI stack (including Analysis Services, Paginated Reports, Power Pivot, and Azure SQL).
Experience with Synapse, Data Flow, AutoML/AzureML, Tabular Editor, or DAX Studio is a plus.
Knowledge of programming languages (Python, C#, or similar) is a big plus.
Self-motivated and self-managed, with a high degree of analytical skill (quick comprehension, abstract thinking, recognizing relationships).
Ability to be a strong team member and communicate effectively.
Ability to prioritize and multi-task, and to reasonably estimate work effort for tasks.
Excellent English language skills (written and verbal).

Working Conditions and Environment
Work in second or third shift (starting at 4:30 PM or later India time).
Regular travel: 10-25% of the time.

Any Additional Information
Excellent spoken and written English is a must. The ability to explain complex issues to a non-technical audience is mandatory. For dedicated and motivated employees, we offer an interesting and diversified job within a dynamic global team, together with individual and functional development in the professional environment of a globally acting business. Fair treatment and a sense of responsibility towards employees are principles of the Magna culture. We strive to offer an inspiring and motivating work environment.

Additional Information
We offer attractive benefits (e.g., a discretionary performance bonus) and a salary in line with market conditions, depending on your skills and experience.

Awareness, Unity, Empowerment
At Magna, we believe that a diverse workforce is critical to our success. That's why we are proud to be an equal opportunity employer. We hire on the basis of experience and qualifications, and in consideration of job requirements, regardless of, in particular, color, ancestry, religion, gender, origin, sexual orientation, age, citizenship, marital status, disability or gender identity. Magna takes the privacy of your personal information seriously. We discourage you from sending applications via email, to comply with GDPR requirements and your local data privacy law.

Worker Type
Regular / Permanent

Group
Magna Corporate
Posted 6 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Role Title: Data Scientist
Location: India
Worker Type: Full-Time Employee (FTE)
Years of Experience: 8+ years
Start Date: Within 2 weeks
Engagement Type: Full-time
Salary Range: Flexible
Remote/Onsite: Hybrid (India-based candidates)

Job Overview:
We are looking for an experienced Data Scientist to join our team and contribute to developing AI-driven data conversion tools. You will work closely with engineers and business stakeholders to build intelligent systems for data mapping, validation, and transformation.

Required Skills and Experience:
• Bachelor's or Master's in Data Science, Computer Science, AI, or a related field
• Strong programming skills in Python and SQL
• Experience with ML frameworks like TensorFlow or PyTorch
• Solid understanding of AI-based data mapping, code generation, and validation
• Familiarity with databases like SQL Server and MongoDB
• Excellent collaboration, problem-solving, and communication skills
• At least 8 years of relevant experience in Data Science
• Open mindset with a willingness to experiment and learn from failures

Preferred Qualifications:
• Experience in the financial services domain
• Certifications in Data Science or AI/ML
• Background in data wrangling, ETL, or master data management
• Exposure to DevOps tools like Jira, Confluence, BitBucket
• Knowledge of cloud and AI/ML tools like Azure Synapse, Azure ML, Cognitive Services, and Databricks
• Prior experience delivering AI solutions for data conversion or transformation
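For illustration of the ML-framework requirement only: a minimal PyTorch training loop on synthetic data is sketched below. The architecture, data, and hyperparameters are invented and have no connection to the role's actual systems.

```python
# Hedged sketch: a tiny binary classifier trained with PyTorch on
# synthetic data, just to show the basic train loop.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(256, 8)                   # synthetic features
y = (X[:, 0] > 0).float().unsqueeze(1)    # synthetic labels

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()   # backpropagate
    opt.step()        # update weights
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```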
Posted 6 days ago
0 years
0 Lacs
India
On-site
Job Title: Automation Engineer - Databricks
Job Type: Full-time, Contractor
Location: Hybrid - Hyderabad | Pune | Delhi

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary:
We are seeking a detail-oriented and innovative Automation Engineer - Databricks to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
Create detailed and effective test plans and test cases based on technical requirements and business specifications.
Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
Document test cases, results, and identified defects; communicate findings clearly to the team.
Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
Hands-on experience with Databricks, data warehouse, and data lake architectures.
Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt Test, or similar.
Proficient in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
Excellent written and verbal communication skills, with the ability to translate technical concepts to diverse audiences.
Bachelor's degree in Computer Science, Information Technology, or a related discipline.
Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
Experience implementing security and data protection measures in data-driven applications.
Ability to integrate user-facing elements with server-side logic for seamless data experiences.
Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
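As a hedged sketch of the kind of automated data test this posting describes: a pytest case asserting referential integrity between two assumed Databricks tables. The catalog, table, and column names are placeholders, not the customer's schema.

```python
# Illustrative pytest data check; runnable on a cluster where the
# assumed lake.fact_sales and lake.dim_product tables exist.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.appName("dq-tests").getOrCreate()

def test_no_orphan_records(spark):
    """Every fact row should join to a dimension row (assumed schema)."""
    orphans = spark.sql("""
        SELECT COUNT(*) AS n
        FROM lake.fact_sales f
        LEFT ANTI JOIN lake.dim_product p ON f.product_id = p.product_id
    """).first()["n"]
    assert orphans == 0, f"{orphans} fact rows have no matching product"
```

In a CI/CD pipeline, a suite of such tests would gate deployment of new pipeline versions.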
Posted 6 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Data Analyst 2
We are seeking a highly skilled **Technical Data Analyst** to join our team and play a key role in building a **single source of truth** for our high-volume, direct-to-consumer accounting and financial data warehouse. The ideal candidate will have a strong background in data analysis, SQL, and data transformation, with experience in financial data warehousing and reporting. This role will involve working closely with finance and accounting teams to gather requirements, build dashboards, and transform data to support month-end accounting, tax reporting, and financial forecasting. The financial data warehouse is currently built in **Snowflake** and will be migrated to **Databricks**. The candidate will be responsible for transitioning reporting and transformation processes to Databricks while ensuring data accuracy and consistency.

Key Responsibilities
**Data Analysis & Reporting:**
Build and maintain **month-end accounting and tax dashboards** using SQL and Snowsight in Snowflake.
Transition reporting processes to **Databricks**, creating dashboards and reports to support finance and accounting teams.
Gather requirements from finance and accounting stakeholders to design and deliver actionable insights.
**Data Transformation & Aggregation:**
Develop and implement data transformation pipelines in **Databricks** to aggregate financial data and create **balance sheet look-forward views**.
Ensure data accuracy and consistency during the migration from Snowflake to Databricks.
Collaborate with the data engineering team to optimize data ingestion and transformation processes.
**Data Integration & ERP Collaboration:**
Support the integration of financial data from the data warehouse into **NetSuite ERP** by ensuring data is properly transformed and validated.
Work with cross-functional teams to ensure seamless data flow between systems.
**Data Ingestion & Tools:**
Understand and work with **Fivetran** for data ingestion (no need to be an expert, but familiarity is required).
Troubleshoot and resolve data-related issues in collaboration with the data engineering team.

Qualifications
3+ years of experience as a **Data Analyst** or similar role, preferably in a financial or accounting context.
Strong proficiency in **SQL** and experience with **Snowflake** and **Databricks**.
Experience building dashboards and reports for financial data (e.g., month-end close, tax reporting, balance sheets).
Familiarity with **Fivetran** or similar data ingestion tools.
Understanding of financial data concepts (e.g., general ledger, journals, balance sheets, income statements).
Experience with data transformation and aggregation in a cloud-based environment.
Strong communication skills to collaborate with finance and accounting teams.
Nice-to-have: Experience with **NetSuite ERP** or similar financial systems.
Posted 6 days ago
0 years
0 Lacs
Greater Nashik Area
On-site
Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Title: Data Scientist
Location: Bangalore
Reporting to: Senior Manager Analytics

Purpose of the role
We are looking for a highly skilled and motivated Data Scientist with 3+ years of professional experience to join our dynamic team. The ideal candidate will excel in data analytics, working with complex datasets, and applying machine learning and deep learning techniques to solve real-world problems. If you are passionate about leveraging data to drive insights and innovation, this is the role for you.

Key tasks & accountabilities
Analyze and interpret complex datasets to uncover actionable insights.
Design, develop, and implement machine learning and deep learning models using tools and frameworks such as Pandas, Scikit-learn, TensorFlow, Keras, and PyTorch.
Collaborate with cross-functional teams to understand business requirements and provide data-driven solutions.
Create and maintain scalable data pipelines and workflows.
Use statistical techniques to test hypotheses and validate models.
Optimize machine learning algorithms for efficiency and scalability.
Communicate insights and model performance to stakeholders via data visualization and presentations.
Stay up to date with the latest advancements in data science, machine learning, and big data technologies.

Qualifications, Experience, Skills
Level of Educational Attainment Required: B.Tech in Computer Science, or a background in Statistics, Economics, or Mathematics.

Previous Work Experience & Skills Required
Data Analytics: Proficiency in statistical analysis and deriving insights from data.
Business Exposure: Experience in building optimization models and marketing mix models.
Machine Learning & Deep Learning Frameworks: Strong working knowledge of libraries like Pandas, Scikit-learn, TensorFlow, Keras, and PyTorch.
Programming: Proficiency in Python and experience using GitHub for version control.
Databases: Expertise in working with structured and unstructured data using databases.
Cloud Platforms: Hands-on experience with Azure infrastructure for data storage, Azure Databricks processing, and deployment.
Data Visualization: Ability to create compelling visualizations using tools like Matplotlib, Seaborn, or Power BI.
Complex Datasets: Exposure to working with large and intricate datasets in various domains.
Version Control: Experience using GitHub for version control, collaboration, and managing repositories effectively.

And above all of this, an undying love for beer!

We dream big to create a future with more cheer.
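Purely illustrative: a minimal scikit-learn workflow matching the libraries named above, trained on synthetic data. Nothing here reflects AB InBev's actual datasets or models; the feature count, model choice, and split are arbitrary.

```python
# Hedged sketch: train/evaluate a classifier with scikit-learn on
# synthetic data, showing the basic fit/predict/score loop.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))                  # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```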
Posted 6 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Lead, Data Engineer

Who is Mastercard?
Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Overview
The Mastercard Services Technology team is looking for a Lead in Data Engineering to drive our mission of unlocking the potential of data assets: consistently innovating, eliminating friction in how we manage, store, and access big data assets, and enforcing standards and principles in the Big Data space across both public cloud and on-premises setups. We are looking for a hands-on, passionate Data Engineer who is not only technically strong in PySpark, cloud platforms, and building modern data architectures, but also deeply committed to learning, growing, and lifting others. This person will play a key role in designing and building scalable data solutions, shaping our engineering culture, and mentoring team members. This is a role for builders and collaborators: engineers who love clean data pipelines, cloud-native design, and helping teammates succeed.

Role
Design and build scalable, cloud-native data platforms using PySpark, Python, and modern data engineering practices.
Mentor and guide other engineers, sharing knowledge, reviewing code, and fostering a culture of curiosity, growth, and continuous improvement.
Create robust, maintainable ETL/ELT pipelines that integrate with diverse systems and serve business-critical use cases.
Lead by example: write high-quality, testable code and participate in architecture and design discussions with a long-term view in mind.
Decompose complex problems into modular, efficient, and scalable components that align with platform and product goals.
Champion best practices in data engineering, including testing, version control, documentation, and performance tuning.
Drive collaboration across teams, working closely with product managers, data scientists, and other engineers to deliver high-impact solutions.
Support data governance and quality efforts, ensuring data lineage, cataloging, and access management are built into the platform.
Continuously learn and apply new technologies, frameworks, and tools to improve team productivity and platform reliability.
Own and optimize cloud infrastructure components related to data engineering workflows, storage, processing, and orchestration.
Participate in architectural discussions, iteration planning, and feature sizing meetings.
Adhere to Agile processes and participate actively in Agile ceremonies.
Demonstrate stakeholder management skills.

All About You
5+ years of hands-on experience in data engineering with strong PySpark and Python skills.
Solid experience designing and implementing data models, pipelines, and batch/stream processing systems.
Proven ability to work with cloud platforms (AWS, Azure, or GCP), especially data-related services like S3, Glue, Data Factory, and Databricks.
Strong foundation in data modeling, database design, and performance optimization.
Understanding of modern data architectures (e.g., lakehouse, medallion) and data lifecycle management.
Comfortable with CI/CD practices, version control (e.g., Git), and automated testing.
Demonstrated ability to mentor and uplift junior engineers; strong communication and collaboration skills.
Bachelor's degree in computer science, engineering, or a related field, or equivalent hands-on experience.
Comfortable working in Agile/Scrum development environments.
Curious, adaptable, and driven by problem-solving and continuous improvement.

Good to Have
Experience integrating heterogeneous systems and building resilient data pipelines across cloud environments.
Familiarity with orchestration tools (e.g., Airflow, dbt, Step Functions).
Exposure to data governance tools and practices (e.g., Lake Formation, Purview, or Atlan).
Experience with containerization and infrastructure automation (e.g., Docker, Terraform).
A master's degree, relevant certifications (e.g., AWS Certified Data Analytics, Azure Data Engineer), or demonstrable contributions to open-source or data engineering communities.
Exposure to machine learning data pipelines or MLOps.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard's security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-251380
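Since the qualifications above mention medallion-style lakehouse architectures, here is a hedged sketch of one bronze-to-silver hop; the layer names, schema, and casts are assumptions for illustration, not Mastercard's platform.

```python
# Sketch of a medallion hop (bronze -> silver): standardize types and
# drop malformed rows. All layer/table/column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.table("bronze.transactions")  # raw, as-ingested
silver = (
    bronze.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .withColumn("txn_ts", F.to_timestamp("txn_ts"))
          .dropna(subset=["transaction_id", "amount"])  # basic quality gate
          .dropDuplicates(["transaction_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.transactions")
```

The gold layer would then aggregate silver tables into consumption-ready, business-level views.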
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: Azure Data Engineer
Location: Pune
Mandatory Skills: Azure Databricks, PySpark
Experience: 5 to 9 years
Notice Period: 0 to 30 days / immediate joiner / serving notice period

Must-have experience:
Strong design and data solutioning skills.
Hands-on PySpark experience with complex transformations and handling large datasets.
Good command of, and hands-on experience in, Python, including the following concepts, packages, and tools:
Object-oriented and functional programming.
NumPy, Pandas, Matplotlib, requests, pytest.
Jupyter, PyCharm, and IDLE.
Conda and virtual environments.
Working experience with Hive, HBase, or similar is a must.

Azure skills:
Working experience in Azure Data Lake, Azure Data Factory, Azure Databricks, and Azure SQL Databases is a must.
Azure DevOps.
Azure AD integration, service principals, pass-through login, etc.
Networking: VNets, private links, service connections, etc.
Integrations: Event Grid, Service Bus, etc.

Database skills:
Experience with any one of Oracle, Postgres, or SQL Server.
Oracle PL/SQL or T-SQL experience.
Data modelling.

Thank you
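As one concrete example of the "complex transformations" this posting asks for: deduplicating to the latest record per key with a window function. The table and column names below are assumptions for illustration.

```python
# Hedged sketch: keep only the newest event per entity using a
# PySpark window function. Names are hypothetical placeholders.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
events = spark.table("raw.events")  # assumed input

w = Window.partitionBy("entity_id").orderBy(F.col("event_ts").desc())
latest = (
    events.withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)   # rank 1 = most recent per entity
          .drop("rn")
)
latest.write.mode("overwrite").saveAsTable("curated.latest_events")
```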
Posted 6 days ago
2.0 - 5.0 years
4 - 7 Lacs
Hyderabad
Work from Office
We are seeking an MDM Associate Data Engineer with 2-5 years of experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills, and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; candidates with only MDM experience are not eligible. The candidate must have data engineering experience with technologies such as SQL, Python, PySpark, Databricks, and AWS, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities:
Analyze and manage customer master data using Reltio or Informatica MDM solutions.
Perform advanced SQL queries and data analysis to validate and ensure master data integrity.
Leverage Python, PySpark, and Databricks for scalable data processing and automation.
Collaborate with business and data engineering teams for continuous improvement in MDM solutions.
Implement data stewardship processes and workflows, including approval and DCR mechanisms.
Utilize AWS cloud services for data storage and compute processes related to MDM.
Contribute to metadata and data modeling activities.
Track and manage data issues using tools such as JIRA, and document processes in Confluence.
Apply Life Sciences/Pharma industry context to ensure data standards and compliance.

Basic Qualifications and Experience:
Master's degree with 1-3 years of experience in Business, Engineering, IT or a related field; OR
Bachelor's degree with 2-5 years of experience in Business, Engineering, IT or a related field; OR
Diploma with 6-8 years of experience in Business, Engineering, IT or a related field.

Functional Skills:
Must-Have Skills:
Advanced SQL expertise and data wrangling.
Strong experience in Python and PySpark for data transformation workflows.
Strong experience with Databricks and AWS architecture.
Knowledge of MDM, data governance, stewardship, and profiling practices.
In addition to the above, candidates with experience on the Informatica or Reltio MDM platforms will be preferred.

Good-to-Have Skills:
Experience with IDQ, data modeling, and approval workflows/DCR.
Background in the Life Sciences/Pharma industries.
Familiarity with project tools like JIRA and Confluence.
Strong grasp of data engineering concepts.

Professional Certifications:
Any ETL certification (e.g., Informatica).
Any data analysis certification (SQL, Python, Databricks).
Any cloud certification (AWS or Azure).

Soft Skills:
Strong analytical abilities to assess and improve master data processes and solutions.
Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
Effective problem-solving skills to address data-related issues and implement scalable solutions.
Ability to work effectively with global, virtual teams.

We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
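To make the data-profiling side of this MDM role concrete, purely as a sketch: flagging candidate duplicate customer records by a normalized match key. The staging table and columns are hypothetical, and real MDM match/merge in Informatica or Reltio is far more sophisticated (fuzzy matching, survivorship rules, steward review queues).

```python
# Illustrative profiling step: group records on a normalized name+email
# key to surface candidate duplicates for steward review. Schema assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
customers = spark.table("mdm.customer_staging")   # assumed staging table

normalized = customers.withColumn(
    "match_key",
    F.concat_ws(
        "|",
        F.lower(F.trim(F.col("full_name"))),
        F.lower(F.trim(F.col("email"))),
    ),
)
dupes = (normalized.groupBy("match_key").count()
                   .filter(F.col("count") > 1))
print(f"{dupes.count()} candidate duplicate groups for steward review")
```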
Posted 6 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Analyst 2

We are seeking a highly skilled Technical Data Analyst to join our team and play a key role in building a single source of truth for our high-volume, direct-to-consumer accounting and financial data warehouse. The ideal candidate will have a strong background in data analysis, SQL, and data transformation, with experience in financial data warehousing and reporting. This role will involve working closely with finance and accounting teams to gather requirements, build dashboards, and transform data to support month-end accounting, tax reporting, and financial forecasting. The financial data warehouse is currently built in Snowflake and will be migrated to Databricks. The candidate will be responsible for transitioning reporting and transformation processes to Databricks while ensuring data accuracy and consistency.

Key Responsibilities

Data Analysis & Reporting: Build and maintain month-end accounting and tax dashboards using SQL and Snowsight in Snowflake. Transition reporting processes to Databricks, creating dashboards and reports to support finance and accounting teams. Gather requirements from finance and accounting stakeholders to design and deliver actionable insights.

Data Transformation & Aggregation: Develop and implement data transformation pipelines in Databricks to aggregate financial data and create balance sheet look-forward views. Ensure data accuracy and consistency during the migration from Snowflake to Databricks. Collaborate with the data engineering team to optimize data ingestion and transformation processes.

Data Integration & ERP Collaboration: Support the integration of financial data from the data warehouse into NetSuite ERP by ensuring data is properly transformed and validated. Work with cross-functional teams to ensure seamless data flow between systems.

Data Ingestion & Tools: Understand and work with Fivetran for data ingestion (no need to be an expert, but familiarity is required). Troubleshoot and resolve data-related issues in collaboration with the data engineering team.

Additional Qualifications: 3+ years of experience as a Data Analyst or similar role, preferably in a financial or accounting context. Strong proficiency in SQL and experience with Snowflake and Databricks. Experience building dashboards and reports for financial data (e.g., month-end close, tax reporting, balance sheets). Familiarity with Fivetran or similar data ingestion tools. Understanding of financial data concepts (e.g., general ledger, journals, balance sheets, income statements). Experience with data transformation and aggregation in a cloud-based environment. Strong communication skills to collaborate with finance and accounting teams. Nice-to-have: Experience with NetSuite ERP or similar financial systems.
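For context only, a minimal sketch of the month-end aggregation work this posting describes, written as Spark SQL so the logic ports from Snowflake to Databricks. The gl_journal_lines schema and the finance target schema are assumptions, not the employer's actual model.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("month-end-close").getOrCreate()

# Hypothetical general-ledger journal lines: account, posting_date, debit, credit.
month_end = spark.sql("""
    SELECT
        date_trunc('MONTH', posting_date) AS period,
        account,
        SUM(debit) - SUM(credit)          AS net_movement,
        SUM(SUM(debit) - SUM(credit)) OVER (
            PARTITION BY account
            ORDER BY date_trunc('MONTH', posting_date)
        )                                 AS running_balance  -- cumulative balance per account
    FROM gl_journal_lines
    GROUP BY date_trunc('MONTH', posting_date), account
""")

# Assumes a 'finance' schema already exists in the metastore.
month_end.write.mode("overwrite").saveAsTable("finance.month_end_balances")
```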
Posted 6 days ago
0 years
0 Lacs
New Delhi, Delhi, India
Remote
Company Description: Muoro.io partners with organizations to build dedicated engineering teams and offshore development centers with top talent curated for specific objectives. The company focuses on addressing talent shortage and managing remote technology teams, particularly in emerging technologies.

Role Description: This is a remote contract role for an Azure Data Engineer at Muoro. The Azure Data Engineer will be responsible for designing and implementing data solutions using Azure services, building and maintaining data pipelines, and optimizing data workflows for efficiency.

Qualifications: Experience with Azure services, including Azure Data Factory, Azure Databricks, and Azure Synapse Analytics. Proficiency in SQL and NoSQL databases. Expertise in data modeling and ETL processes. Strong analytical and problem-solving skills. Experience with data visualization tools like Power BI or Tableau. Bachelor's degree in Computer Science, Engineering, or a related field.
Posted 6 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're Hiring – API Developer (AI/ML Deployment & Cloud Integration)
📍 Location: Hyderabad, India (Telangana & A.P. candidates only)
💼 Type: Full-Time | Permanent | Offshore
🕛 Working Hours: US shift (night shift)

What You'll Work On:
🔹 RESTful & GraphQL APIs for AI/ML services
🔹 AWS (API Gateway, Lambda, EKS, CodePipeline)
🔹 Databricks ML tools (Model Serving, Registry, Unity Catalog)
🔹 Deploying batch & streaming ML pipelines
🔹 Collaborating with cross-functional teams

You Bring:
✅ 6+ years in API development
✅ Hands-on with AWS & Databricks
✅ Docker + Kubernetes experience
✅ Experience in ML model deployment at scale

Interview Process:
1️⃣ Real-world code task (24 hours)
2️⃣ Technical interview
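A hedged sketch of the "RESTful APIs for AI/ML services" line above, using FastAPI as an assumed framework (the posting does not name one) and a hypothetical pre-trained scikit-learn model artifact.

```python
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model artifact

class Features(BaseModel):
    values: list[float]  # flat feature vector, ordering agreed with the model

@app.post("/predict")
def predict(features: Features) -> dict:
    # scikit-learn style prediction over a single row.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

Run locally with, e.g., `uvicorn main:app`; in the stack the posting describes, an endpoint like this would typically sit behind AWS API Gateway or be replaced by Databricks Model Serving.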
Posted 6 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Purpose: Lead client calls and guide clients toward optimized, cloud-native architectures, covering the future state of their data platform, strategic recommendations, and Microsoft Fabric integration.

Desired Skills and Experience: Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science or a related field. 7+ years of experience in data and cloud architecture working with client stakeholders. Azure (AZ) Data Platform expertise: Synapse, Databricks, Azure Data Factory (ADF), Azure SQL (DW/DB), Power BI (PBI). Ability to define modernization roadmaps and target architecture. Strong understanding of data governance best practices for data quality, cataloguing, and lineage. Proven ability to lead client engagements and present complex findings. Excellent communication skills, both written and verbal. Extremely strong organizational and analytical skills with strong attention to detail. Strong track record of excellent results delivered to internal and external clients. Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts. Experience with delivering projects within an agile environment. Experience in project management and team management.

Key responsibilities include: Lead all interviews and workshops to capture current/future needs. Direct the technical review of Azure (AZ) infrastructure (Databricks, Synapse Analytics, Power BI) and critical on-premises (on-prem) systems. Produce architecture designs (Arch. Designs), focusing on refined processing strategies and Microsoft Fabric. Understand and refine the Data Governance (Data Gov.) roadmap, including data cataloguing (Data Cat.), lineage, and quality. Lead project deliverables, ensuring actionable and strategic outputs. Evaluate and ensure quality of deliverables within project timelines. Develop a strong understanding of equity market domain knowledge. Collaborate with domain experts and business stakeholders to understand business rules/logic. Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders. Independently troubleshoot difficult and complex issues in dev, test, UAT and production environments. Responsible for end-to-end delivery of projects, coordination between client and internal offshore teams, and managing client queries. Demonstrate high attention to detail, work in a dynamic environment whilst maintaining high quality standards, and bring a natural aptitude for developing good internal working relationships and a flexible work ethic. Responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT).
Posted 6 days ago
6.0 - 8.0 years
8 - 12 Lacs
Hyderabad
Work from Office
What you will do

In this vital role, we are looking for a creative and technically skilled Senior Software Engineer/AI Engineer – Search to help design and build cutting-edge AI-powered search solutions for the pharmaceutical industry. In this role, you'll develop intelligent systems that surface the most relevant insights from clinical trials, scientific literature, regulatory documents, and internal knowledge assets. Your work will empower researchers, clinicians, and decision-makers with faster, smarter access to the right information.

Design and implement search algorithms using NLP, machine learning, semantic understanding, and deep learning models. Build and fine-tune models for information retrieval, query expansion, document ranking, summarization, and Q&A systems. Support integration of LLMs (e.g., GPT, BERT, BioBERT) for semantic and generative search. Train and evaluate custom models for biomedical named entity recognition, relevance ranking, and similarity search. Build and deploy vector-based search systems using embeddings and vector databases. Work closely with platform engineers to integrate AI models into scalable cloud-based infrastructures (AWS, Azure, GCP). Package and deploy search services using containerization (Docker, Kubernetes) and modern MLOps pipelines. Preprocess and structure unstructured content such as clinical trial reports, research articles, and regulatory documents. Apply knowledge graphs, taxonomies, and ontologies (e.g., MeSH, UMLS, SNOMED) to enhance search results. Build and deploy recommendation system models, utilize AI/ML infrastructure, and contribute to model optimization and data processing. Experience with Generative AI on search engines. Experience integrating Generative AI capabilities and vision models to enrich content quality and user engagement. Experience with Generative AI tasks such as content summarization, deduping, and metadata quality. Researching and developing advanced AI algorithms, including vision models for visual content analysis. Implementing KPI measurement frameworks to evaluate the quality and performance of delivered models, including those utilizing Generative AI. Developing and maintaining deep learning models for data quality checks, visual similarity scoring, and content tagging. Continually researching current and emerging technologies and proposing changes where needed. Implement GenAI solutions, utilize ML infrastructure, and contribute to data preparation, optimization, and performance enhancements.

Basic Qualifications: Degree in computer science & engineering preferred, with 6-8 years of software development experience. 2-4 years of experience building AI/ML models, ideally in search, NLP, or biomedical domains. Proficiency in Python and frameworks such as PyTorch, TensorFlow, and Hugging Face Transformers. Experience with search technologies like Elasticsearch, OpenSearch, or vector search tools. Solid understanding of NLP techniques: embeddings, transformers, entity recognition, text classification. Hands-on experience with various AI models, GCP search engines, and GCP cloud services. Proficient in AI/ML programming languages and technologies: Python, Java, crawlers, JavaScript, SQL/NoSQL, Databricks/RDS, data engineering, S3 buckets, DynamoDB. Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills.

Preferred Qualifications: Experience in AI/ML, Java, Python. Experienced with FastAPI (Python) and GraphQL. Experience with design patterns, data structures, data modelling, and data algorithms. Experienced with AWS/Azure platforms, building and deploying code. Experience with PostgreSQL/MongoDB databases, vector databases for large language models, Databricks or RDS, and S3 buckets. Knowledge of LLMs, generative AI, and their use in enterprise search. Experience with Google Cloud Search and Google Cloud Storage. Experience with popular large language models. Experience with the LangChain or LlamaIndex frameworks for language models. Experience with prompt engineering and model fine-tuning. Knowledge of NLP techniques for text analysis and sentiment analysis. Experience in Agile software development methodologies. Experience in end-to-end testing as part of test-driven development.

Good-to-Have Skills: Willingness to work on full-stack applications. Exposure to MLOps tools like MLflow, Airflow, or SageMaker.

Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, remote teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
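To make the vector-search requirement concrete: an illustrative, self-contained sketch of cosine-similarity retrieval over document embeddings. A production system would hold the vectors in a store such as Elasticsearch or OpenSearch, and real embeddings would come from a model like BioBERT; here random NumPy vectors stand in.

```python
import numpy as np

def cosine_top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> list[int]:
    """Return indices of the k documents most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity per document
    return np.argsort(-scores)[:k].tolist()

# Toy stand-ins: 100 documents with 384-dimensional embeddings.
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 384))
query = rng.normal(size=384)
print(cosine_top_k(query, docs))
```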
Posted 6 days ago
8.0 - 10.0 years
27 - 32 Lacs
Hyderabad
Work from Office
What you will do

In this vital role you will lead the engagement model between Amgen's Technology organization and our global business partners in Commercial Data & Analytics. We seek a technology leader with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders. Are you interested in building a team that consistently delivers business value in an agile model using technologies such as AWS, Databricks, Airflow, and Tableau? Come join our team!

Roles & Responsibilities: Establish an effective engagement model to collaborate with senior leaders on the Sales Insights product team within the Commercial Data & Analytics organization, focused on operations within the United States. Serve as the technology product owner for an agile product team committed to delivering business value to Commercial stakeholders via data pipeline buildout for sales data. Lead and mentor junior team members to deliver on the needs of the business. Interact with business clients and technology management to create technology roadmaps, build business cases, and drive DevOps to achieve the roadmaps. Help to mature Agile operating principles through deployment of creative and consistent practices for user story development, robust testing and quality oversight, and focus on user experience. Ability to connect and understand our vast array of Commercial and other functional data sources (including Sales, Activity, and Digital data, etc.) and translate them into consumable and user-friendly modes (e.g., dashboards, reports, mobile, etc.) for key decision makers such as executives, brand leads, account managers, and field representatives. Become the lead subject matter expert in reporting technology capabilities by researching and implementing new tools and features, and internal and external methodologies.

Basic Qualifications: Master's degree with 8 - 10 years of experience in Information Systems OR Bachelor's degree with 10 - 14 years of experience in Information Systems OR Diploma with 14 - 18 years of experience in Information Systems.

Must-Have Skills: Excellent problem-solving skills and a passion for tackling complex challenges in data and analytics with technology. Experience leading data and analytics teams in a Scaled Agile Framework (SAFe). Excellent interpersonal skills, strong attention to detail, and ability to influence based on data and business value. Ability to build compelling business cases with accurate cost and effort estimations. Experience writing user requirements and acceptance criteria in agile project management systems such as Jira. Ability to explain sophisticated technical concepts to non-technical clients. Strong understanding of sales and incentive compensation value streams.

Preferred Qualifications: Jira Align & Confluence experience. Experience with DevOps, Continuous Integration, and Continuous Delivery methodology. Understanding of software systems strategy, governance, and infrastructure. Experience in managing product features for PI planning and developing product roadmaps and user journeys. Familiarity with low-code/no-code test automation software. Technical thought leadership.

Soft Skills: Able to work effectively across multiple geographies (primarily India, Portugal, and the United States) under minimal supervision. Demonstrated proficiency in written and verbal communication in the English language. Skilled in providing oversight and mentoring team members. Demonstrated ability to effectively delegate work. Intellectual curiosity and the ability to question partners across functions. Ability to prioritize successfully based on business value. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully across virtual teams. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.

Technical Skills: ETL tools: experience with ETL tools such as Databricks, Redshift, or an equivalent cloud-based database. Big Data, analytics, reporting, data lake, and data integration technologies. S3 or an equivalent storage system. AWS or similar cloud-based platforms. BI tools (Tableau and Power BI preferred).
Posted 6 days ago
3.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Summary: Senior Analyst – Data Engineer - Deloitte Technology - Deloitte Support Services India Private Limited. Do you thrive on developing creative and innovative insights to solve complex challenges? Want to work on next-generation, cutting-edge products and services that deliver outstanding value and that are global in vision and scope? Work with premier thought leaders in your field? Work for a world-class organization that provides an exceptional career experience with an inclusive and collaborative culture?

Work you'll do: We are seeking a candidate with extensive experience in designing, delivering, and maintaining cloud solution implementations, specifically on Microsoft Azure. This candidate should also possess strong cross-discipline communication skills, strong analytical aptitude with critical thinking, a solid understanding of how data translates into reporting and dashboarding capabilities, and the tools and platforms that support them.

Responsibilities

Role Specific: Designing a well-structured data model using methodologies (e.g., Kimball or Inmon) that accurately represents the business requirements, ensures data integrity and minimizes redundancies. Developing and implementing data pipelines to extract, transform, and load (ETL) data from various sources into Azure data services. This includes using Azure Data Factory, Azure Databricks, or other tools to orchestrate data workflows and data movement. Build, test and run of data assets tied to tasks and user stories from the Azure DevOps instance of Enterprise Data & Analytics. Bring a level of technical expertise in the Big Data space that contributes to the strategic roadmaps for Enterprise Data Architecture, Global Data Cloud Architecture, and Global Business Intelligence Architecture, as well as contributing to the development of the broader Enterprise Data & Analytics Engineering community. Actively participate in regularly scheduled contact calls to transparently review the status of in-flight projects, priorities of backlog projects, and review adoption of previous deliveries from Enterprise Data & Analytics with the Data Insights team. Handle break fixes and participate in a rotational on-call schedule. On-call includes monitoring of scheduled jobs and ETL pipelines. Actively participate in team meetings to transparently review the status of in-flight projects and their progress. Follow standard practice and frameworks on each project from development, to testing, and then productionizing, each within the appropriate environment laid out by Data Architecture. Challenges self and others to make an impact that matters and helps the team connect their contributions with a broader purpose. Sets expectations for the team, aligns work based on strengths and competencies, and challenges the team to raise the bar while providing support. Extensive knowledge of multiple technologies, tools, and processes to improve the design and architecture of the assigned applications.

Knowledge Sharing / Documentation: Contribute to, produce, and maintain processes, procedures, and operational and architectural documentation. Change control: ensure compliance with processes and adherence to standards and documentation. Work with Deloitte Technology leadership and service teams in reviewing documentation and aligning KPIs to critical steps in our service operations. Active participation in ongoing training within the BI space.

The team: At Deloitte, we're all about collaboration.
And nowhere is this more apparent than among our 2,000-strong internal services team. With our combined specialist skills, we provide all the essential support and advice our client-facing colleagues need, right across the firm. This enables them to focus all of their efforts on delivering the best service possible to their clients. Covering seven distinct areas: Human Resources, Clients & Industries, Finance & Legal, Practice Support Services, Quality & Risk Services, IT Services, and Workplace Services & Real Estate, together we live, breathe and deliver the Deloitte experience.

Location: Hyderabad. Work shift timings: 11 AM to 8 PM.

Qualifications: Bachelor of Engineering / Bachelor of Technology. 3-6 years of broad-based IT experience with technical knowledge of Microsoft SQL Server, Azure SQL Data Warehouse, Azure Data Lake Store, and Azure Data Factory. Demonstrated experience in the Apache framework (Spark, Scala, etc.). Well-versed in SQL and comfortable scripting using Python or a similar language.

First Month Critical Outcomes: Absorb strategic projects from the backlog and complete the related Azure SQL Data Warehouse development work. Inspect existing run-state SQL Server databases and Azure SQL Data Warehouses and identify optimizations for potential development. Deliver new databases assigned as needed. Integration to on-call rotation (first 90 days). Contribute to legacy content and architecture migration to the data lake (first 90 days). Delivery of the first 2 data ingestion pipelines, to include ingestion, QA and automation using Azure Big Data tools (first 90 days). Ability to document all work following standard documentation practices set forth by Data Governance (first 90 days).

How You'll Grow: At Deloitte, we've invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture: Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship: Deloitte is led by a purpose: to make an impact that matters.
This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world. #EAG-Technology

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 304653
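As an aside on the Kimball-style modeling this posting mentions, a minimal hedged sketch of a star schema (one dimension, one fact, and a typical star-join query) in Spark SQL; Delta is assumed as the table format, as is usual on Databricks, and the schema is invented for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

# Dimension: one row per customer, keyed by a surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING
    ) USING DELTA
""")

# Fact: one row per order line, referencing the dimension by surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_orders (
        order_date   DATE,
        customer_key BIGINT,
        quantity     INT,
        amount       DECIMAL(18, 2)
    ) USING DELTA
""")

# A typical star-join query: sales by customer segment.
spark.sql("""
    SELECT d.segment, SUM(f.amount) AS total_sales
    FROM fact_orders f
    JOIN dim_customer d ON f.customer_key = d.customer_key
    GROUP BY d.segment
""").show()
```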
Posted 6 days ago
40.0 years
0 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-216330 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 12, 2025 CATEGORY: Engineering

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description: We are seeking a seasoned and passionate Principal Architect (Enterprise Architect – Data Platform Engineering) in our Data Architecture & Engineering group to drive the architecture, development and implementation of our strategy spanning the Data Fabric, Data Management, and Data Analytics Platform stack. The ideal candidate possesses deep technical expertise and understanding of the data and analytics landscape, current tools and technology trends, and data engineering principles, coupled with strong leadership and data-driven problem-solving skills. As a Principal Architect, you will play a crucial role in building the strategy and driving the implementation of best practices across data and analytics platforms.

Roles & Responsibilities: Must be passionate about Data, Content and AI technologies, with the ability to evaluate and assess new technology and trends in the market quickly, with enterprise architecture in mind. Drive the strategy and implementation of enterprise data platform and technical roadmaps that align with the Amgen Data strategy. Maintain the pulse of current market trends in the data & AI space and be able to quickly perform hands-on experimentation and evaluations. Provide expert guidance and influence management and peers from functional groups with an enterprise mindset and goals. Responsible for the design, development, optimization, delivery and support of the Enterprise Data platform on AWS and Databricks architecture. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Advise and support application teams (product managers, architects, business analysts, and developers) on tools, technology, and methodology related to the design and development of applications that have large data volumes and a variety of data types. Collaborate and align with EARB, Cloud Infrastructure, Security and other technology leaders on Enterprise Data Architecture changes. Ensure scalability, reliability, and performance of data platforms by implementing best practices for architecture, cloud resource optimization, and system tuning. Collaborate with RunOps engineers to continuously increase our ability to push changes into production with as little manual overhead and as much speed as possible.

Basic Qualifications and Experience: Master’s degree with 8 - 10 years of experience in Computer Science, IT or related field OR Bachelor’s degree with 10 - 14 years of experience in Computer Science, IT or related field.

Functional Skills: Must-Have Skills: 8+ years of experience in data architecture and engineering or related roles, with hands-on experience building enterprise data platforms in a cloud environment (AWS, Azure, GCP). 5+ years of experience leading enterprise-scale data platforms and solutions. Expert-level proficiency with Databricks and experience in optimizing data pipelines and workflows in Databricks environments. Deep understanding of distributed computing, data architecture, and performance optimization in cloud-based environments. An enterprise mindset and certifications like TOGAF are a plus. Big Tech or Big Consulting experience is highly preferred. Solid knowledge of data security, governance, and compliance practices in cloud environments. Must have exceptional communication skills to engage and influence architects and leaders in the organization.

Good-to-Have Skills: Experience with GenAI tools in Databricks. Experience with unstructured data architecture and pipelines. Experience working with agile development methodologies such as Scaled Agile.

Professional Certifications: AWS Certified Data Engineer preferred. Databricks certification preferred.

Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 6 days ago
40.0 years
0 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-216678 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 12, 2025 CATEGORY: Engineering

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description: We are seeking a seasoned Principal Architect – Solutions to drive the architecture, development and implementation of data solutions for Amgen functional groups. The ideal candidate is able to work on large-scale data analytics initiatives, engage and work along with Business, Program Management, Data Engineering and Analytic Engineering teams, and be a champion of the enterprise data analytics strategy, data architecture blueprints and architectural guidelines. As a Principal Architect, you will play a crucial role in designing, building, and optimizing data solutions for Amgen functional groups such as R&D, Operations and GCO.

Roles & Responsibilities: Implement and manage large-scale data analytics solutions for Amgen functional groups that align with the Amgen Data strategy. Collaborate with Business, Program Management, Data Engineering and Analytic Engineering teams to deliver data solutions. Responsible for the design, development, optimization, delivery and support of data solutions on AWS and Databricks architecture. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Provide expert guidance and mentorship to team members, fostering a culture of innovation and best practices. Be passionate and hands-on to quickly experiment with new data-related technologies. Define guidelines, standards, strategies, security policies and change management policies to support the Enterprise Data platform. Collaborate and align with EARB, Cloud Infrastructure, Security and other technology leaders on Enterprise Data Architecture changes. Work with different project and application groups to drive growth of the Enterprise Data Platform using effective written/verbal communication skills, and lead demos at different roadmap sessions. Overall management of the Enterprise Data Platform on the AWS environment to ensure that service delivery is cost-effective and business SLAs around uptime, performance and capacity are met. Ensure scalability, reliability, and performance of data platforms by implementing best practices for architecture, cloud resource optimization, and system tuning. Collaborate with RunOps engineers to continuously increase our ability to push changes into production with as little manual overhead and as much speed as possible. Maintain knowledge of market trends and developments in data integration, data management and analytics software/tools. Work as part of a team in a SAFe Agile/Scrum model.

Basic Qualifications and Experience: Master’s degree with 12 - 15 years of experience in Computer Science, IT or related field OR Bachelor’s degree with 14 - 17 years of experience in Computer Science, IT or related field.

Functional Skills: Must-Have Skills: 8+ years of hands-on experience in data integrations, data management and the BI technology stack. Strong experience with one or more data management tools such as AWS data lake, Snowflake or Azure Data Fabric. Expert-level proficiency with Databricks and experience in optimizing data pipelines and workflows in Databricks environments. Strong experience with Python, PySpark, and SQL for building scalable data workflows and pipelines. Experience with Apache Spark, Delta Lake, and other relevant technologies for large-scale data processing. Familiarity with BI tools including Tableau and Power BI. Demonstrated ability to enhance cost-efficiency, scalability, and performance of data solutions. Strong analytical and problem-solving skills to address complex data solutions.

Good-to-Have Skills: Experience in life sciences, tech, or consultative solution architecture roles preferred. Experience working with agile development methodologies such as Scaled Agile.

Professional Certifications: AWS Certified Data Engineer preferred. Databricks certification preferred.

Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 6 days ago
8.0 years
0 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-216648 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 12, 2025 CATEGORY: Engineering

Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
Let’s do this. Let’s change the world. In this vital role you will manage and oversee the development of robust Data Architectures, Frameworks, and Data product Solutions, while mentoring and guiding a small team of data engineers. You will be responsible for leading the development, implementation, and management of enterprise-level data engineering frameworks and solutions that support the organization's data-driven strategic initiatives. You will continuously strive for innovation in the technologies and practices used for data engineering, and build enterprise-scale data frameworks and expert data engineers. This role will closely collaborate with counterparts in the US and EU. You will collaborate with cross-functional teams, including platform, functional IT, and business stakeholders, to ensure that the solutions that are built align with business goals and are scalable, secure, and efficient.

Roles & Responsibilities: Architect and implement scalable, high-performance modern data engineering solutions (applications) that include data analysis, data ingestion, storage, data transformation (data pipelines), and analytics. Evaluate new trends in the data engineering area and build rapid prototypes. Build data solution architectures and frameworks to accelerate data engineering processes. Build frameworks to improve re-usability and reduce the development time and cost of data management & governance. Integrate AI into data engineering practices to bring efficiency through automation. Build best practices in data engineering capability and ensure their adoption across the product teams. Build and nurture strong relationships with stakeholders, emphasizing value-focused engagement and partnership to align data initiatives with broader business goals. Lead and motivate a high-performing data engineering team to deliver exceptional results. Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and best practices. Collaborate with counterparts in the US and EU and work with business functions, functional IT teams, and others to understand their data needs and ensure the solutions meet the requirements. Engage with business stakeholders to understand their needs and priorities, ensuring that the data and analytics solutions built deliver real value and meet business objectives. Drive adoption of the data and analytics solutions by partnering with business stakeholders and functional IT teams in rolling out change management, trainings, communications, etc.

Talent Growth & People Leadership: Lead, mentor, and manage a high-performing team of engineers, fostering an environment that encourages learning, collaboration, and innovation. Focus on nurturing future leaders and providing growth opportunities through coaching, training, and mentorship. Recruitment & Team Expansion: Develop a comprehensive talent strategy that includes recruitment, retention, onboarding, and career development, and build a diverse and inclusive team that drives innovation, aligns with Amgen's culture and values, and delivers business priorities. Organizational Leadership: Work closely with senior leaders within the function and across the Amgen India site to align engineering goals with broader organizational objectives, and demonstrate leadership by contributing to strategic discussions.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek will have these qualifications.

Basic Qualifications: Master’s degree and 8 to 10 years of experience, computer science and engineering preferred (other engineering fields will be considered) OR Bachelor’s degree and 12 to 14 years of experience, computer science and engineering preferred (other engineering fields will be considered) OR Diploma and 16 to 18 years of experience, computer science and engineering preferred (other engineering fields will be considered). 10+ years of experience in data engineering, working in COE development or product building. 5+ years of experience leading enterprise-scale data engineering solution development. Experience building enterprise-scale data lake and data fabric solutions on cloud, leveraging modern approaches like Data Mesh. Demonstrated proficiency in leveraging cloud platforms (AWS, Azure, GCP) for data engineering solutions. Strong understanding of cloud architecture principles and cost optimization strategies. Hands-on experience using Databricks, Snowflake, PySpark, Python, and SQL. Proven ability to lead and develop high-performing data engineering teams. Strong problem-solving, analytical, and critical thinking skills to address complex data challenges.

Preferred Qualifications: Experience in integrating AI with data engineering and building AI-ready data lakes. Prior experience in data modeling, especially star-schema modeling concepts. Familiarity with ontologies, information modeling, and graph databases. Experience working with agile development methodologies such as Scaled Agile. Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications: SAFe for Teams certification (preferred). Databricks certifications. AWS cloud certification.

Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being.
From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. For a career that defies imagination: objects in your future are closer than they appear. Join us at careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 6 days ago
6.0 - 10.0 years
2 - 7 Lacs
Hyderābād
On-site
Country/Region: IN
Requisition ID: 26435
Location: INDIA - HYDERABAD - BIRLASOFT OFFICE
Title: Technical Lead - Data Engineering

Area(s) of responsibility / Job Description: Years of experience: 6 to 10. Experience in performing design, development and deployment using Azure services (Data Factory, Databricks, PySpark, SQL). Develop and maintain scalable data pipelines and build new data-source integrations to support increasing data volume and complexity. Experience in creating technical specification designs and application interface designs. Develop modern data warehouse solutions using the Azure stack (Azure Data Lake, Azure Databricks) and PySpark. Develop batch processing and integration solutions, and process structured and non-structured data. Demonstrated in-depth skills with Azure Databricks, PySpark, and SQL. Collaborate and engage with the BI & analytics and business teams. Minimum 2 years of project experience with Azure Databricks.
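To illustrate the structured and non-structured data requirement above, a hedged sketch of one batch step that flattens semi-structured JSON into a Delta table; the landing path, payload fields, and bronze schema are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-ingest").getOrCreate()

# Semi-structured input: one JSON document per line, schema inferred on read.
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/devices/")

# Flatten the nested payload and stamp the load time before persisting.
flattened = (
    raw
    .select("device_id", F.col("payload.temperature").alias("temperature"), "event_ts")
    .withColumn("ingested_at", F.current_timestamp())
)

# Assumes a 'bronze' schema exists; Delta is the default table format on Databricks.
flattened.write.format("delta").mode("append").saveAsTable("bronze.device_readings")
```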
Posted 6 days ago
Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.
The average salary range for Databricks professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
In the field of Databricks, a typical career path may include:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
In addition to Databricks expertise, other skills that are often expected or helpful include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools
As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!