
3140 Databricks Jobs - Page 17

JobPe aggregates listings for easy access; you apply directly on the original job portal.

7.0 years

0 Lacs

Kochi, Kerala, India

On-site

Greetings from the TCS Recruitment Team!

Role: Databricks Lead / Databricks Solution Architect / Databricks ML Engineer
Years of experience: 7 to 18 years
Walk-in drive location: Kochi
Venue: Tata Consultancy Services, TCS Centre SEZ Unit, Infopark Kochi Phase 1, Infopark Kochi P.O., Kakkanad, Kochi - 682042, Kerala, India
Drive time: 9:00 AM to 1:00 PM
Date: 21-Jun-25

Must have:
- 5+ years of experience in data engineering or related fields
- At least 2-3 years of hands-on experience with Databricks (Apache Spark, Delta Lake, etc.)
- Solid experience with big data technologies such as Hadoop, Spark, Kafka, or similar
- Experience with cloud platforms (AWS, Azure, or GCP) and cloud-native data tools
- Experience with machine learning frameworks and pipelines, particularly in Databricks
- Experience with AI/ML model deployment, MLOps, and ML lifecycle management using Databricks and related tools

Regards,
Sundar V
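As a rough illustration of the Spark-plus-Delta-Lake work this role centres on, here is a minimal PySpark sketch; the paths and column names are hypothetical, and it assumes a Databricks (or other Delta-enabled) runtime:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch: ingest raw CSV, clean it, and persist it as a Delta table.
# Paths and column names are hypothetical; assumes a Delta-enabled Spark runtime.
spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("/mnt/raw/orders/*.csv"))

cleaned = (raw
           .dropDuplicates(["order_id"])                    # de-duplicate on the key
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .filter(F.col("amount") > 0))                    # drop invalid rows

(cleaned.write
 .format("delta")
 .mode("overwrite")
 .save("/mnt/curated/orders"))
```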

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role: Data Engineer with GCP
Experience: 5+ years
Location: Bangalore | Gurugram | Noida | Pune
Notice: Immediate joiners
Mode: Hybrid

Job description:
- Develop and automate Python scripts for data processing and transformation.
- Design, implement, and manage data pipelines to facilitate seamless data integration and flow.
- Utilize GCP services, particularly BigQuery and Cloud Functions, to support data processing needs.
- Create and optimize advanced SQL queries for efficient data retrieval and manipulation in BigQuery.
- Collaborate with cross-functional teams to gather requirements and implement data solutions.
- Work with Apache Spark and Databricks to enhance data processing capabilities.
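For flavour, a minimal sketch of the BigQuery work described above, using the google-cloud-bigquery Python client; the project, dataset, and column names are hypothetical, and credentials are assumed to be configured in the environment:

```python
from google.cloud import bigquery

# Minimal sketch: run a parameterized BigQuery query from Python.
# Project/dataset/column names are hypothetical; assumes
# GOOGLE_APPLICATION_CREDENTIALS (or equivalent auth) is configured.
client = bigquery.Client()

query = """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM `my_project.sales.orders`
    WHERE order_date >= @start_date
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2025-01-01"),
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.customer_id, row.total_spend)
```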

Posted 5 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Welcome to Warner Bros. Discovery… the stuff dreams are made of.

Who We Are…
When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands are the storytellers bringing our characters to life, the creators bringing them to your living rooms, and the dreamers creating what’s next… From brilliant creatives to technology trailblazers across the globe, WBD offers career-defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best self. Here you are supported, here you are celebrated, here you can thrive.

Your New Role
Our Media Platform Operations team is looking for a highly capable DBA Manager to manage and lead a team of DBAs. You will manage a team of professionals who design and develop database systems, and you will provide guidance to the database team on database structures and features. You will create standard procedures to enhance the scalability and performance of the existing database architecture, troubleshoot complex database issues in an accurate and timely manner, and maintain database disaster recovery procedures to ensure continuous availability and speedy recovery. You will develop best practices for performance and operational efficiency, provide regular updates to management on database project status, and stay current with new database technologies, analyzing them for adoption into the existing infrastructure. This role requires a strategic thinker with excellent leadership skills and in-depth technical expertise to ensure the reliability, security, and performance of our data infrastructure.

Your Role Accountabilities

OPERATIONS/PROJECT MANAGEMENT
- Lead, mentor, and manage a team of DBAs, fostering a culture of continuous improvement and professional growth.
- Conduct regular performance reviews, providing feedback and setting goals for team members.
- Coordinate with HR for recruiting, interviewing, and hiring new team members as needed.
- Provide expert-level guidance in database design, performance tuning, backup and recovery strategies, and data modeling.
- Lead troubleshooting efforts for complex database issues and optimize existing systems for better performance and cost efficiency.
- Keep abreast of emerging database technologies and best practices, recommending innovations that can enhance data management capabilities.
- Collaborate with cross-functional teams, including software development, network infrastructure, and security, to align database operations with business needs.
- Serve as the primary point of contact for database-related audits and ensure compliance with data protection regulations.
- Establish and maintain key performance indicators (KPIs) for database operations and provide regular reports to senior management.
- Plan and manage database projects, including migrations, upgrades, and new deployments, ensuring they are completed on time and within budget.
- Allocate resources efficiently to meet project demands and balance team workload.
- Coordinate DBA activities with the infrastructure team to ensure database servers are built according to customer requirements in a timely manner.
- Work effectively with a globally dispersed team.
- Serve as a mentor for Database Administrators and Associate Database Administrators, providing additional support and guidance on problem solving, escalations, and day-to-day work challenges.
- Provide 24/7 support.

STRATEGY
- Develop and execute a strategic roadmap for database management aligned with organizational objectives.
- Oversee the design, deployment, and management of high-availability and disaster recovery solutions.
- Ensure databases are secure, scalable, and meet the performance requirements of applications.
- Assess the skill set of database team members and build a plan to bridge gaps through training or mentoring.
- Collaborate with IT and business leaders to define the architecture and roadmaps for future database projects.
- Lead the team in identifying, proposing, and implementing new and emerging technologies to support ongoing projects and business operations.
- Guide the team in researching, developing, and implementing new technologies to support future projects.
- Take the lead in communicating with internal and external stakeholders.

ANALYTICS
- Implement robust monitoring and alerting mechanisms to quickly detect and resolve database-related issues.
- Review the existing database standards documents and establish new standards for all our database platforms (Oracle, SQL Server, SAP HANA, Db2, MySQL, PostgreSQL, Snowflake, and Databricks).
- Lead the team in developing database structures and features according to organizational needs, helping shape the structure and design of the database.
- Lead the team in creating, reviewing, and maintaining database documentation.
- Lead the team in planning and performing upgrades and re-platforms to align with the company’s vision.
- Create, review, and maintain operational documentation for use by our 24/7 operations team and junior database administrators.

Qualifications & Experiences
- Bachelor's degree in computer science, information systems, or information technology.
- 10+ years of experience in database management and leading a database team.
- Proficiency in SQL and experience with multiple relational and non-relational database systems such as Oracle, SQL Server, MySQL, PostgreSQL, MongoDB, etc.
- Strong understanding of data architecture, data integration, and ETL processes.
- Familiarity with cloud database solutions (e.g., AWS RDS, Azure SQL, Google Cloud Spanner) and hybrid environments.
- Excellent problem-solving skills and ability to work under pressure.
- Exemplary communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
- Very good experience administering the various database platforms listed above (Oracle, SQL Server, SAP HANA, Db2, MySQL, PostgreSQL, Snowflake, and Databricks).
- Experience working with databases hosted both on-prem and in the AWS cloud; good working experience with AWS RDS databases is required.
- Experience automating, scripting, and streamlining processes for efficiency and accuracy using Unix shell scripting and Windows batch files.
- Ability and experience developing processes and procedures to standardize database installations and configuration.
- Extensive experience implementing and maintaining disaster recovery and high availability.
- Ability to work on unusually complex technical problems and provide solutions that are highly innovative and ingenious.
- Ability to provide technical documentation and project plans for technical staff members.
- Excellent communication, presentation, and customer relationship skills.
- Must have the legal right to work in the United States.
- Ability to provide 24/7 support.

Not Required But Preferred Experience
- Public speaking and presentation skills.

How We Get Things Done…
This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview.

Championing Inclusion at WBD
Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Hi {fullName},

There is an opportunity for Azure DevOps (CI/CD pipelines, migration) in Hyderabad, with a walk-in interview at Hyderabad on 21st Jun 25 between 9:30 AM and 12:30 PM. If you are interested, please share the details below to mamidi.p@tcs.com with the subject line "Azure DevOps 21st Jun 25":

Email id:
Contact no:
Total experience:
Preferred location:
Current CTC:
Expected CTC:
Notice period:
Current organization:
Highest qualification (full time):
Highest qualification university:
Any gap in education or employment:
If yes, how many years and reason for gap:
Are you available for a walk-in interview at Hyderabad on 21st Jun 25 (yes/no):

We will send you a mail by tomorrow night if you are shortlisted.

Desired Competencies (Technical/Behavioral Competency)

Must-Have
- Extensive automation experience in Azure DevOps.
- Practical delivery experience in Infrastructure as Code (IaC) using Bicep.
- Automate deployment tasks using scripts (e.g., PowerShell, Bash) within CI/CD pipelines.
- Understand how to integrate the code artifacts of the following Azure data services into CI/CD pipelines: Azure Data Factory, Azure Databricks, Azure Machine Learning, SQL Database, Azure OpenAI.
- Experience in Azure CI (integrating developers' code into a shared repo) and CD (deploying both IaC scripts and code artifacts into Dev, Test, and UAT environments).

Good-to-Have
- Terraform experience is a plus.

Roles & Responsibilities
- Own and deliver DevOps pipelines for multiple, diverse DevOps teams.
- Define, propose, and execute a feature roadmap toward strong, mature DevOps pipelines.
- Lead design discussions and the development of innovative cloud and DevOps solutions.
- Work on Azure cloud for infrastructure automation using cloud-native services.
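As a hedged sketch of the "automate deployment tasks using scripts" bullet, the snippet below drives a Bicep deployment from Python by shelling out to the Azure CLI; the resource-group and file names are hypothetical, and it assumes the pipeline agent is already authenticated (e.g., via a service connection):

```python
import subprocess

# Minimal sketch: deploy a Bicep template from a CI/CD step via the Azure CLI.
# Resource-group and file names are hypothetical; assumes `az login` (or a
# service connection) has already authenticated the agent.
def deploy_bicep(resource_group: str, template_file: str, environment: str) -> None:
    cmd = [
        "az", "deployment", "group", "create",
        "--resource-group", resource_group,
        "--template-file", template_file,
        "--parameters", f"environment={environment}",
    ]
    subprocess.run(cmd, check=True)  # raise if the deployment fails

if __name__ == "__main__":
    # Promote the same template through Dev, Test, and UAT, as the posting describes.
    for env in ("dev", "test", "uat"):
        deploy_bicep(f"rg-dataplatform-{env}", "main.bicep", env)
```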

Posted 5 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
- Establish and implement best practices for DBT workflows, ensuring efficiency, reliability, and maintainability.
- Collaborate with data analysts, engineers, and business teams to align data transformations with business needs.
- Monitor and troubleshoot data pipelines to ensure accuracy and performance.
- Work with Azure-based cloud technologies to support data storage, transformation, and processing.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Strong MS SQL and Azure Databricks experience.
- Implement and manage data models in DBT, keeping data transformations aligned with business requirements.
- Ingest raw, unstructured data into a cloud object store and use DBT to convert it into structured datasets, enabling efficient analysis and reporting.
- Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.

Preferred Technical And Professional Experience
- Establish DBT best practices to improve performance, scalability, and reliability.
- Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Databricks.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as required.
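A minimal sketch of driving DBT from Python using dbt-core's documented programmatic entry point (dbt-core 1.5+); the model selector is hypothetical, and it assumes a profiles.yml configured for the target warehouse (e.g., the dbt-databricks adapter):

```python
from dbt.cli.main import dbtRunner, dbtRunnerResult

# Minimal sketch: invoke `dbt run` programmatically (dbt-core >= 1.5).
# The selector "stg_orders+" is hypothetical; assumes profiles.yml points
# at the target warehouse, e.g. Azure Databricks via dbt-databricks.
dbt = dbtRunner()
res: dbtRunnerResult = dbt.invoke(["run", "--select", "stg_orders+"])

if res.success:
    for r in res.result:
        print(f"{r.node.name}: {r.status}")  # per-model run status
else:
    print("dbt run failed:", res.exception)
```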

Posted 5 days ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

About This Role

About Aladdin Financial Engineering (AFE):
Join a diverse and collaborative team of over 300 modelers and technologists in Aladdin Financial Engineering (AFE) within BlackRock Solutions, the business responsible for the research and development of Aladdin’s financial models. This group is also accountable for analytics production, enhancing the infrastructure platform, and delivering analytics content to portfolio and risk management professionals (both within BlackRock and across the Aladdin client community). The models developed and supported by AFE span a wide array of financial products covering equities, fixed income, commodities, derivatives, and private markets. AFE provides investment insights that range from an analysis of cash flows on a single bond to the overall financial risk associated with an entire portfolio, balance sheet, or enterprise.

Role Description
We are looking for a person to join the Advanced Data Analytics team within AFE Single Security. Advanced Data Analytics is a team of quantitative data and product specialists focused on delivering Single Security data content, governance, product solutions, and a research platform. The team leverages data, cloud, and emerging technologies to build an innovative data platform, with a focus on business and research use cases in the Single Security space. The team uses various statistical and mathematical methodologies to derive insights and generate content that helps develop predictive models, clustering, and classification solutions, and to enable governance. The team works on Mortgage, Structured & Credit Products. The hire will begin with a specialized focus on data and model governance, expanding to derived data and analytics content in the MBS, Structured Products, and Credit space.

Experience
- Experience with Scala
- Knowledge of ETL, data curation, and analytical jobs using a distributed computing framework such as Spark
- Knowledge and experience of working with large enterprise databases like Snowflake and Cassandra, and cloud-managed services like Dataproc and Databricks
- Knowledge of financial instruments like corporate bonds, derivatives, etc.
- Knowledge of regression methodologies
- Aptitude for designing and building tools for data governance
- Python knowledge is a plus

Qualifications
- Bachelor's/Master's in Computer Science, with a major in Math, Econ, or a related field
- 7+ years of relevant experience

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our Hybrid Work Model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment, the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.

For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
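As a toy illustration of the "regression methodologies" item in the experience list above, a minimal Python sketch on synthetic data; the variable names and coefficients are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Toy sketch: ordinary least squares on synthetic data, standing in for the
# regression methodologies the posting names. All variables are hypothetical.
rng = np.random.default_rng(0)
spread = rng.normal(size=500)      # e.g., a bond-spread factor
duration = rng.normal(size=500)    # e.g., duration exposure
returns = 0.5 * spread - 1.2 * duration + rng.normal(scale=0.1, size=500)

X = sm.add_constant(np.column_stack([spread, duration]))  # intercept + factors
model = sm.OLS(returns, X).fit()
print(model.summary())
```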

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description Summary
Designs, develops, tests, debugs, and implements more complex operating systems components, software tools, and utilities with full competency. Coordinates with users to determine requirements. Reviews systems under development and related documentation. Makes more complex modifications to existing software to fit specialized needs and configurations, and maintains program libraries and technical documentation. May coordinate activities of the project team and assist in monitoring project schedules and costs.

Essential Duties And Responsibilities
- Lead and manage the configuration, maintenance, and support of a portfolio of AI models and related products.
- Manage model delivery to the production deployment team and coordinate model production deployments.
- Analyze complex data requirements, understand exploratory data analysis, and design solutions that meet business needs.
- Analyze data profiles, transformation, quality, and security with the dev team to build and enhance data pipelines while maintaining proper quality and control around the data sets.
- Work closely with cross-functional teams, including business analysts, data engineers, and domain experts.
- Understand business requirements and translate them into technical solutions.
- Understand and review the business use cases for data pipelines for the Data Lake, including ingestion, transformation, and storage in the Lakehouse.
- Present architecture and solutions to executive-level stakeholders.

Minimum Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Minimum of 5 years' experience building data pipelines for both structured and unstructured data.
- At least 2 years' experience in Azure data pipeline development.
- Preferably 3 or more years' experience with Hadoop, Azure Databricks, Stream Analytics, Event Hubs, Kafka, and Flink.
- Strong proficiency in Python and SQL.
- Experience with big data technologies (Spark, Hadoop, Kafka).
- Familiarity with ML frameworks (TensorFlow, PyTorch, scikit-learn).
- Knowledge of model serving technologies (TensorFlow Serving, MLflow, KubeFlow) is a plus.
- Experience with one of the cloud platforms (Azure preferred) and their data services; understanding of ML services will get preference.
- Understanding of containerization and orchestration (Docker, Kubernetes).
- Experience with data versioning and ML experiment tracking is a great addition.
- Knowledge of distributed computing principles.
- Familiarity with DevOps practices and CI/CD pipelines.

Preferred Qualifications
- Bachelor's degree in Computer Science or equivalent work experience.
- Experience with Agile/Scrum methodology.
- Experience with the tax and accounting domain a plus.
- Azure Data Scientist certification a plus.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
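A minimal sketch of the MLflow-based model tracking and delivery this posting mentions; the experiment and model names are hypothetical, and it assumes a reachable MLflow tracking server with a model registry (e.g., Databricks):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Minimal sketch: train, track, and register a model so a deployment team
# can pick it up from the registry. Names are hypothetical; assumes an
# MLflow tracking server with a model registry is configured.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

mlflow.set_experiment("tax-doc-classifier")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(
        model,
        "model",
        registered_model_name="tax_doc_classifier",  # registry handoff point
    )
```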

Posted 5 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Lead Data Engineer – C12 / Assistant Vice President (India)

The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience
- First Class degree in Engineering/Technology (4-year graduate course)
- 8 to 12 years' experience implementing data-intensive solutions using agile methodologies
- Experience with relational databases, using SQL for data querying, transformation, and manipulation
- Experience modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience with cloud-native technologies and patterns
- A passion for learning new technologies and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
- An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have)
- ETL: Hands-on experience building data pipelines; proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
- Big Data: Experience with ‘big data’ platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data Warehousing & Database Management: Expertise in data warehousing concepts and in relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: Proficiency in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: Exposure to concepts and enablers: CI/CD platforms, version control, automated quality control management
- Data Governance: A strong grasp of principles and practice, including data quality, security, privacy, and compliance

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs and the ability to tune them for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of the underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: Experience using a job scheduler, e.g., Autosys; exposure to business intelligence tools, e.g., Tableau, Power BI

Certification on any one or more of the above topics would be an advantage.

------------------------------------------------------
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
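As a small illustration of the file formats named under "Technical Skills (Valuable)", a PyArrow sketch that writes and reads a Parquet table; the schema and values are hypothetical:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Minimal sketch: write and read a columnar dataset in Parquet with PyArrow.
# Schema and values are hypothetical, illustrating the columnar file formats
# (Parquet/Avro/Iceberg/Delta) the posting lists.
table = pa.table({
    "trade_id": pa.array([101, 102, 103], type=pa.int64()),
    "symbol": pa.array(["AAPL", "MSFT", "GOOG"]),
    "qty": pa.array([50, 25, 10], type=pa.int32()),
})

pq.write_table(table, "trades.parquet", compression="snappy")

# Column pruning: read back only the columns a query needs.
loaded = pq.read_table("trades.parquet", columns=["symbol", "qty"])
print(loaded.to_pydict())
```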

Posted 5 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Team
Rubrik is on a mission to secure the world’s data, and our Information Technology team is committed to supporting this mission. As part of the newly founded IT AI team, you’ll be pivotal in driving AI-powered transformation, enabling smarter automation, data-driven insights, and scalable solutions that empower Rubrik’s mission.

About The Role
We are seeking an experienced GenAI Engineer to join our Data Engineering team, with a focus on building AI agents and workflows. You will integrate data sources and build MCP clients/servers to support the development and deployment of LLM-based agents and bots. The role involves close collaboration with business teams, data scientists, and fellow data engineers to ensure smooth data integration and flow, enabling the Data Engineering team to leverage GenAI tools for advanced data solutions.

What You’ll Do
- Design and develop data integrations through MCP protocols or traditional data extraction mechanisms.
- Leverage Snowflake Cortex, Gemini Agentspace, or similar tools to build scalable and efficient data solutions for AI workloads, enabling the Data Engineering team to generate high-quality data products from unstructured and structured data (a sketch of a Cortex call follows this list).
- Design and develop scalable data pipelines for GenAI model training and deployment, utilizing tools like Snowflake Cortex and Databricks LLM tooling (Mosaic AI, RAG, Model Serving).
- Ensure data quality, integrity, and scalability for large-scale AI workloads, supporting the development of GenAI models.
- Collaborate with business teams, data engineers, and application developers to deliver products that streamline business processes or drive top-line growth and bottom-line improvements.
- Integrate data pipelines with existing infrastructure, enabling seamless data flow and analytics.
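A minimal sketch of one of the tools named above, calling Snowflake Cortex's COMPLETE function from Python; the connection parameters, model choice, and prompt are hypothetical, and it assumes Cortex is enabled on the account:

```python
import snowflake.connector

# Minimal sketch: call Snowflake Cortex COMPLETE from Python.
# Connection parameters, model name, and prompt are hypothetical;
# assumes Cortex LLM functions are available on the account/region.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ANALYTICS_WH", database="DEMO", schema="PUBLIC",
)
cur = conn.cursor()
cur.execute(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE(%s, %s)",
    ("mistral-large", "Summarize this ticket: customer cannot restore backups."),
)
print(cur.fetchone()[0])
cur.close()
conn.close()
```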
Experience You’ll Need
- 1+ years of experience building AI agents or leveraging Snowflake Cortex, Gemini Agentspace, or similar open-source tooling
- 3+ years of experience in data engineering, with a focus on AI/ML workloads
- 5+ years of experience working in data analytics with either Snowflake or Databricks
- Strong programming skills in languages like Python, Java, or Scala
- Knowledge of data storage solutions (e.g., Snowflake, Databricks) and data APIs
- Experience with cloud configuration and data governance
- Strong problem-solving skills and the ability to work in a fast-paced environment
- Experience with large language models (LLMs), such as transformer-based models, and frameworks like LangChain or similar

Preferred Qualifications
- Building AI agents and agentic workflows
- Experience leveraging MCP and Agent2Agent protocols
- Knowledge of generative models and their applications in data engineering
- Experience with data governance and security best practices for GenAI workloads
- Experience with Agile development methodologies and collaboration tools (e.g., Jira, GitHub)

Join Us in Securing the World's Data
Rubrik (NYSE: RBRK) is on a mission to secure the world’s data. With Zero Trust Data Security™, we help organizations achieve business resilience against cyberattacks, malicious insiders, and operational disruptions. Rubrik Security Cloud, powered by machine learning, secures data across enterprise, cloud, and SaaS applications. We help organizations uphold data integrity, deliver data availability that withstands adverse conditions, continuously monitor data risks and threats, and restore businesses with their data when infrastructure is attacked.

Linkedin | X (formerly Twitter) | Instagram | Rubrik.com

Inclusion @ Rubrik
At Rubrik, we are dedicated to fostering a culture where people from all backgrounds are valued, feel they belong, and believe they can succeed. Our commitment to inclusion is at the heart of our mission to secure the world’s data. Our goal is to hire and promote the best talent, regardless of background. We continually review our hiring practices to ensure fairness and strive to create an environment where every employee has equal access to opportunities for growth and excellence. We believe in empowering everyone to bring their authentic selves to work and achieve their fullest potential.

Our inclusion strategy focuses on three core areas of our business and culture:
- Our Company: We are committed to building a merit-based organization that offers equal access to growth and success for all employees globally. Your potential is limitless here.
- Our Culture: We strive to create an inclusive atmosphere where individuals from all backgrounds feel a strong sense of belonging, can thrive, and do their best work. Your contributions help us innovate and break boundaries.
- Our Communities: We are dedicated to expanding our engagement with the communities we operate in, creating opportunities for underrepresented talent and driving greater innovation for our clients. Your impact extends beyond Rubrik, contributing to safer and stronger communities.

Equal Opportunity Employer/Veterans/Disabled
Rubrik is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status, and will not be discriminated against on the basis of disability. Rubrik provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Rubrik complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

Federal law requires employers to provide reasonable accommodation to qualified individuals with disabilities. Please contact us at hr@rubrik.com if you require a reasonable accommodation to apply for a job or to perform your job. Examples of reasonable accommodation include making a change to the application process or work procedures, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment.

EEO IS THE LAW
NOTIFICATION OF EMPLOYEE RIGHTS UNDER FEDERAL LABOR LAWS

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

On-site

About Oportun
Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working At Oportun
Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Position Overview
As a Sr. Data Engineer at Oportun, you will be a key member of our team, responsible for designing, developing, and maintaining sophisticated software and data platforms in support of the engineering group's charter. Your mastery of a technical domain enables you to take on business problems and solve them with technical solutions. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. In this role you will have the opportunity to lead the technology effort, from technical requirements gathering to final successful delivery of the product, for large initiatives (cross-functional, multi-month projects).

Responsibilities

Data Architecture and Design
- Lead the design and implementation of scalable, efficient, and robust data architectures to meet business needs and analytical requirements.
- Collaborate with stakeholders to understand data requirements, build subject matter expertise, and define optimal data models and structures.

Data Pipeline Development and Optimization
- Design and develop data pipelines, ETL processes, and data integration solutions for ingesting, processing, and transforming large volumes of structured and unstructured data.
- Optimize data pipelines for performance, reliability, and scalability.

Database Management and Optimization
- Oversee the management and maintenance of databases, data warehouses, and data lakes to ensure high performance, data integrity, and security.
- Implement and manage ETL processes for efficient data loading and retrieval.

Data Quality and Governance
- Establish and enforce data quality standards, validation rules, and data governance practices to ensure data accuracy, consistency, and compliance with regulations.
- Drive initiatives to improve data quality and documentation of data assets.

Mentorship and Leadership
- Provide technical leadership and mentorship to junior team members, assisting in their skill development and growth.
- Lead and participate in code reviews, ensuring best practices and high-quality code.

Collaboration and Stakeholder Management
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and deliver solutions that meet them.
- Communicate effectively with non-technical stakeholders to translate technical concepts into actionable insights and business value.

Performance Monitoring and Optimization
- Implement monitoring systems and practices to track data pipeline performance, identify bottlenecks, and optimize for improved efficiency and scalability.

Common Requirements
- You have a strong understanding of a business or system domain, with sufficient knowledge and expertise around the relevant metrics and trends.
- You collaborate closely with product managers, designers, and fellow engineers to understand business needs and translate them into effective solutions.
- You provide technical leadership and expertise, guiding the team in making sound architectural decisions and solving challenging technical problems.
- Your solutions anticipate scale, reliability, monitoring, integration, and extensibility.
- You conduct code reviews and provide constructive feedback to ensure code quality, performance, and maintainability.
- You mentor and coach junior engineers, fostering a culture of continuous learning, growth, and technical excellence within the team.
- You play a significant role in the ongoing evolution and refinement of the tools and applications used by the team, and you drive adoption of new practices within your team.
- You take ownership of customer issues, including initial troubleshooting, identification of root cause, and issue escalation or resolution, while maintaining the overall reliability and performance of our systems.
- You set the benchmark for responsiveness, ownership, and overall accountability of engineering systems.
- You independently drive and lead multiple features, contribute to large projects, and lead smaller projects; you can orchestrate work that spans multiple engineers within your team and keep all relevant stakeholders informed.
- You keep your lead/EM informed about your work and that of the team so they can share it with stakeholders, including escalation of issues.

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management.
- Proficiency in programming languages like Python/PySpark, and Java or Scala.
- Expertise in big data technologies such as Hadoop, Spark, Kafka, etc.
- In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MariaDB, NoSQL databases).
- Experience and expertise in building complex end-to-end data pipelines.
- Experience with orchestration and designing job schedules using CI/CD and workflow tools like Jenkins, Airflow, or Databricks (see the sketch after this list).
- Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.).
- Ability to mentor junior team members.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse).
- Strong leadership, problem-solving, and decision-making skills.
- Excellent communication and collaboration abilities.
- Familiarity or certification in Databricks is a plus.
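The orchestration bullet above refers to this sketch: a minimal Airflow 2.x DAG with two dependent tasks; the task logic and names are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Minimal sketch: a daily orchestration DAG of the kind the posting's
# "designing job schedules" bullet refers to. Task bodies and names are
# hypothetical; assumes Airflow 2.4+.
def extract():
    print("pull raw files from object storage")

def transform():
    print("clean and model the data")

with DAG(
    dag_id="daily_member_metrics",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # transform runs only after extract succeeds
```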
We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).

Posted 5 days ago

Apply

25.0 years

4 - 7 Lacs

Cochin

On-site

Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential.

Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Overview
In this vital role you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team.

Responsibilities
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Deliver data pipeline projects from development to deployment, managing timelines and risks.
- Ensure data quality and integrity through meticulous testing and monitoring.
- Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions.
- Work closely with the product team and key collaborators to understand data requirements.
- Adhere to data engineering industry standards and best practices.
- Experience developing in an Agile development environment, and comfort with Agile terminology and ceremonies.
- Familiarity with code versioning using Git and code migration tools; familiarity with JIRA.
- Stay up to date with the latest data technologies and trends.

What we expect of you

Basic Qualifications:
- Doctorate degree, OR Master's degree and 4 to 6 years of Information Systems experience, OR Bachelor's degree and 6 to 8 years of Information Systems experience, OR Diploma and 10 to 12 years of Information Systems experience.
- Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP).
- Proficiency in Python, PySpark, and SQL.
- Development knowledge of Databricks.
- Good analytical and problem-solving skills to address sophisticated data challenges.

Preferred Qualifications:
- Experience with data modeling.
- Experience working with ETL orchestration technologies.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Familiarity with SQL/NoSQL databases.

Soft Skills:
- Skilled in breaking down problems, documenting problem statements, and estimating efforts.
- Effective communication and interpersonal skills to collaborate with multi-functional teams.
- Excellent analytical and problem-solving skills.
- Strong verbal and written communication skills.
- Ability to work successfully with global teams.
- High degree of initiative and self-motivation.
- Team-oriented, with a focus on achieving team goals.

Compensation
Estimated Pay Range:
Exact compensation and offers of employment are dependent on circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.

Our Commitment to Diversity & Inclusion
At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees.

We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
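As a hedged illustration of the "ensure data quality and integrity through meticulous testing" responsibility, a minimal PySpark sketch of simple quality gates; the table name and rules are hypothetical, and it assumes a Databricks/Spark session:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch: simple data-quality gates before publishing a dataset.
# Table name and rules are hypothetical; assumes a Spark/Databricks session.
spark = SparkSession.builder.getOrCreate()
df = spark.table("curated.customers")

total = df.count()
null_ids = df.filter(F.col("customer_id").isNull()).count()
dupes = total - df.dropDuplicates(["customer_id"]).count()

# Fail the pipeline loudly rather than publish bad data downstream.
assert null_ids == 0, f"{null_ids} rows with null customer_id"
assert dupes == 0, f"{dupes} duplicate customer_id rows"
print(f"quality checks passed on {total} rows")
```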

Posted 5 days ago

Apply

130.0 years

6 - 9 Lacs

Hyderābād

On-site

Job Description
Manager, Senior Data Engineer

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview
Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Responsibilities
- Designs, builds, and maintains data pipeline architecture: ingests, processes, and publishes data for consumption.
- Batch-processes collected data and formats it in an optimized way to make it analysis-ready.
- Ensures best-practice sharing across the organization.
- Enables delivery of data-analytics projects.
- Develops deep knowledge of the company's supported technology; understands the complexity of and dependencies between multiple teams and platforms (people, technologies).
- Communicates intensively with other platforms/competencies to stay abreast of new trends and methodologies being implemented or considered within the company ecosystem.
- Understands the business needs and priorities of customers and stakeholders and helps build solutions that support our business goals.
- Establishes and manages close relationships with customers/stakeholders.
- Maintains an overview of the data engineering market in order to explore new ways of delivering pipelines and increasing their value and contribution.
- Builds a "community of practice", leveraging experience from delivering complex analytics projects.
- Is accountable for ensuring that the team delivers solutions with high quality standards, timeliness, compliance, and excellent user experience.
- Contributes to innovative experiments, specifically to idea generation, idea incubation, and/or experimentation, identifying tangible and measurable criteria.

Qualifications:
- Bachelor’s degree in Computer Science, Data Science, Information Technology, Engineering, or a related field.
- 3+ years of experience as a Data Engineer or in a similar role, with a strong portfolio of data projects.
- 3+ years of SQL experience, with the ability to write and optimize queries for large datasets.
- 1+ years of experience and proficiency in Python for data manipulation, automation, and pipeline development.
- Experience with Databricks, including creating notebooks and utilizing Spark for big data processing.
- Strong experience with a data warehousing solution (such as Snowflake), including schema design and performance optimization.
- Experience with data governance and quality management tools, particularly Collibra DQ.
- Strong analytical and problem-solving skills, with attention to detail.

Who we are:
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What we look for:
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us, and start making your impact today.

#HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business, Business Intelligence (BI), Business Management, Contractor Management, Cost Reduction, Database Administration, Database Optimization, Data Engineering, Data Flows, Data Infrastructure, Data Management, Data Modeling, Data Optimization, Data Quality, Data Visualization, Design Applications, Information Management, Management Process, Operating Cost Reduction, Productivity Improvements, Project Engineering, Social Collaboration, Software Development, Software Development Life Cycle (SDLC)

Preferred Skills:

Job Posting End Date: 08/20/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.

Requisition ID: R350684
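As a rough sketch of the "formats data in an optimized way to make it analysis-ready" responsibility above, a minimal PySpark batch job that date-partitions raw events into a columnar layout; the paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch: batch-format raw JSON events into an analysis-ready,
# date-partitioned columnar layout. Paths and columns are hypothetical;
# assumes a Spark/Databricks runtime.
spark = SparkSession.builder.getOrCreate()

events = spark.read.json("/mnt/raw/events/2025-06-*.json")

(events
 .withColumn("event_date", F.to_date("event_ts"))  # derive the partition key
 .repartition("event_date")                        # one file group per date
 .write
 .partitionBy("event_date")
 .mode("append")
 .parquet("/mnt/analytics/events"))
```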

Posted 5 days ago

Apply

130.0 years

6 - 9 Lacs

Hyderābād

On-site

Job Description

Manager, Senior Data Engineer

The Opportunity

Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centers.

Role Overview

Our IT team operates as a business partner proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver the services and solutions that help everyone to be more productive and enable innovation. The candidate will work with a globally diverse set of teams, including SAP Basis, Security, ABAP, SAP functional team members, the Infrastructure team, and other IT process partners, providing support for existing and new initiatives. The candidate will work closely with and advise the SAP Technical Architect on architectural topics and new applications/technologies to be integrated. The candidate will lead some cross-functional projects, will be relied upon to answer complex questions, and will assist with program-wide initiatives. Our organization is on a transformation journey, and we envision using newer SAP technologies and infrastructure as part of this transformation; the candidate must have exposure to these new technologies. It is expected that the candidate will be able to both lead technical initiatives and be hands-on.

Responsibilities

Designs, builds, and maintains data pipeline architecture - ingests, processes, and publishes data for consumption.
Processes collected data in batches and formats it in an optimized way to make it analysis-ready.
Ensures best-practice sharing across the organization.
Enables delivery of data-analytics projects.
Develops deep knowledge of the company's supported technology; understands the complexity of and dependencies between multiple teams and platforms (people, technologies).
Communicates intensively with other platforms/competencies to comprehend new trends and methodologies being implemented or considered within the company ecosystem.
Understands the customer's and stakeholders' business needs and priorities and helps build solutions that support our business goals.
Establishes and manages close relationships with customers and stakeholders.
Maintains an overview of developments in the data engineering market to explore new ways of delivering pipelines that increase their value and contribution.
Builds a "community of practice", leveraging experience from delivering complex analytics projects.
Is accountable for ensuring that the team delivers solutions with high quality standards, timeliness, compliance, and excellent user experience.
Contributes to innovative experiments, specifically to idea generation, idea incubation, and/or experimentation, identifying tangible and measurable criteria.

Qualifications:

Bachelor’s degree in Computer Science, Data Science, Information Technology, Engineering, or a related field.
5+ years of experience as a Data Engineer or in a similar role, with a strong portfolio of data projects.
3+ years of experience with SQL, with the ability to write and optimize queries for large datasets.
1+ years of experience and proficiency in Python for data manipulation, automation, and pipeline development.
Experience with Databricks, including creating notebooks and utilizing Spark for big data processing.
Strong experience with data warehousing solutions (such as Snowflake), including schema design and performance optimization.
Experience with data governance and quality management tools, particularly Collibra DQ.
Strong analytical and problem-solving skills, with attention to detail.

Who we are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What we look for: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 07/22/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R350689

Posted 5 days ago

Apply

3.0 - 6.0 years

6 - 9 Lacs

Hyderābād

On-site


Senior Analyst – Data Engineer - Deloitte Technology - Deloitte Support Services India Private Limited

Do you thrive on developing creative and innovative insights to solve complex challenges? Want to work on next-generation, cutting-edge products and services that deliver outstanding value and that are global in vision and scope? Work with premier thought leaders in your field? Work for a world-class organization that provides an exceptional career experience with an inclusive and collaborative culture?

Work you’ll do

We are seeking a candidate with extensive experience in designing, delivering, and maintaining implementations of solutions in the cloud, specifically Microsoft Azure. This candidate should also possess strong cross-discipline communication skills, strong analytical aptitude with critical thinking, a solid understanding of how data translates into reporting and dashboarding capabilities, and of the tools and platforms that support them.

Responsibilities

Role Specific

Design a well-structured data model using established methodologies (e.g., Kimball or Inmon) that accurately represents the business requirements, ensures data integrity, and minimizes redundancy.
Develop and implement data pipelines to extract, transform, and load (ETL) data from various sources into Azure data services, using Azure Data Factory, Azure Databricks, or other tools to orchestrate data workflows and data movement.
Build, test, and run data assets tied to tasks and user stories from the Azure DevOps instance of Enterprise Data & Analytics.
Bring a level of technical expertise in the Big Data space that contributes to the strategic roadmaps for Enterprise Data Architecture, Global Data Cloud Architecture, and Global Business Intelligence Architecture, as well as to the development of the broader Enterprise Data & Analytics Engineering community.
Actively participate in regularly scheduled contact calls to transparently review the status of in-flight projects and the priorities of backlog projects, and to review adoption of previous deliveries from Enterprise Data & Analytics with the Data Insights team.
Handle break fixes and participate in a rotational on-call schedule; on-call includes monitoring of scheduled jobs and ETL pipelines.
Actively participate in team meetings to transparently review the status of in-flight projects and their progress.
Follow standard practice and frameworks on each project from development, to testing, to productionizing, each within the appropriate environment laid out by Data Architecture.
Challenge self and others to make an impact that matters and help the team connect their contributions with the broader purpose.
Set expectations for the team, align the work based on strengths and competencies, and challenge team members to raise the bar while providing support.
Apply extensive knowledge of multiple technologies, tools, and processes to improve the design and architecture of the assigned applications.

Knowledge Sharing / Documentation

Contribute to, produce, and maintain processes, procedures, and operational and architectural documentation.
Change Control - ensure compliance with processes and adherence to standards and documentation.
Work with Deloitte Technology leadership and service teams in reviewing documentation and aligning KPIs to critical steps in our service operations.
Active participation in ongoing training within the BI space.

The team

At Deloitte, we’re all about collaboration. And nowhere is this more apparent than among our 2,000-strong internal services team. With our combined specialist skills, we provide all the essential support and advice our client-facing colleagues need, right across the firm. This enables them to focus all of their efforts on delivering the best service possible to their clients. Covering seven distinct areas: Human Resources, Clients & Industries, Finance & Legal, Practice Support Services, Quality & Risk Services, IT Services, and Workplace Services & Real Estate, together we live, breathe and deliver the Deloitte experience.

Location: Hyderabad
Work shift timings: 11 AM to 8 PM

Qualifications

Bachelor of Engineering / Bachelor of Technology
3-6 years of broad-based IT experience with technical knowledge of Microsoft SQL Server, Azure SQL Data Warehouse, Azure Data Lake Store, and Azure Data Factory
Demonstrated experience with Apache frameworks (Spark, Scala, etc.)
Well versed in SQL and comfortable scripting in Python or a similar language

First Month Critical Outcomes:

Absorb strategic projects from the backlog and complete the related Azure SQL Data Warehouse development work.
Inspect existing run-state SQL Server databases and Azure SQL Data Warehouses and identify optimizations for potential development.
Deliver new databases assigned as needed.
Integration into the on-call rotation (first 90 days).
Contribute to legacy content and architecture migration to the data lake (first 90 days).
Delivery of the first 2 data ingestion pipelines, including ingestion, QA, and automation using Azure Big Data tools (first 90 days).
Ability to document all work following standard documentation practices set forth by Data Governance (first 90 days).

How you’ll grow

At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte’s culture

Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship

Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world.

#EAG-Technology

Recruiting tips

From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development

From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 304653

Posted 5 days ago

Apply

7.0 years

0 Lacs

Gurgaon

On-site


Job Purpose

Lead client calls and guide clients toward optimized, cloud-native architectures, defining the future state of their data platform, making strategic recommendations, and planning Microsoft Fabric integration.

Desired Skills and Experience

Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science, or a related field.
7+ years of experience in data and cloud architecture, working with client stakeholders.
Azure data platform expertise: Synapse, Databricks, Azure Data Factory (ADF), Azure SQL (DW/DB), Power BI (PBI).
Ability to define modernization roadmaps and target architecture.
Strong understanding of data governance best practices for data quality, cataloguing, and lineage.
Proven ability to lead client engagements and present complex findings.
Excellent communication skills, both written and verbal.
Extremely strong organizational and analytical skills with strong attention to detail.
Strong track record of excellent results delivered to internal and external clients.
Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts.
Experience with delivering projects within an agile environment.
Experience in project management and team management.

Key responsibilities include:

Lead all interviews and workshops to capture current and future needs.
Direct the technical review of Azure infrastructure (Databricks, Synapse Analytics, Power BI) and critical on-premises systems.
Produce architecture designs focusing on refined processing strategies and Microsoft Fabric.
Understand and refine the data governance roadmap, including data cataloguing, lineage, and quality.
Lead project deliverables, ensuring actionable and strategic outputs.
Evaluate and ensure quality of deliverables within project timelines.
Develop a strong understanding of equity market domain knowledge.
Collaborate with domain experts and business stakeholders to understand business rules and logic.
Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders.
Independently troubleshoot difficult and complex issues in dev, test, UAT, and production environments.
Take responsibility for end-to-end delivery of projects, coordination between the client and internal offshore teams, and management of client queries.
Demonstrate high attention to detail, work in a dynamic environment whilst maintaining high quality standards, and show a natural aptitude for developing good internal working relationships and a flexible work ethic.
Take responsibility for quality checks and adhere to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT).

Posted 5 days ago

Apply

8.0 years

0 Lacs

Pune

On-site


Company Description

About Hitachi Solutions India Pvt Ltd: Hitachi Solutions, Ltd., headquartered in Tokyo, Japan, is a core member of the Information & Telecommunication Systems Company of Hitachi Group and a recognized leader in delivering proven business and IT strategies and solutions to companies across many industries. The company provides value-driven services throughout the IT life cycle, from systems planning to systems integration, operation, and maintenance. Hitachi Solutions delivers products and services of superior value to customers worldwide through key subsidiaries in the United States, Europe, China, and India. The flagship company in the Hitachi Group's information and communication system solutions business, Hitachi Solutions also offers solutions for social innovation such as smart cities.

Our Competitive Edge

We work together in a dynamic and rewarding work environment. We have an experienced leadership team, excellent technology and product expertise, and strong relationships with a broad base of customers and partners. We offer a competitive compensation and benefits package, regular performance reviews, performance bonuses, and regular trainings.

What is it like working here? We pride ourselves on being industry leaders and providing an enjoyable work environment where our people can grow personally and professionally. Hitachi is the place people can develop skills they’re excited about. The following are our commitments to employees:

We recognize our profitability and project success comes from our team—great people doing great things. As such, we pursue profitable growth and expanded opportunities for our team.
We offer challenging and diverse work across multiple industries and reward creativity and entrepreneurial innovation.
We respect, encourage, and support each individual's need to continually learn and grow personally and professionally. We are committed to fostering our people.
We listen. Every employee has something important to say that can contribute to enriching our environment.
We compensate fairly. And while employees might come for the paycheck, they stay for the people. Our people are the reason we are exceptional. This is something we never forget.

Job Description

Power BI Architects are experts in data modeling and analysis and are responsible for developing high-quality datasets and visually stunning reports. They design and develop data models that effectively support business requirements, ensuring the accuracy and reliability of the data presented in the dashboards and reports. They possess proficiency in Power BI Desktop and expertise with SQL and DAX. Projects may range from short-term individual client engagements to multiyear delivery engagements with large, blended teams.

Requirements:

A minimum of 8 years of full-time experience using Power BI Desktop, with extensive knowledge of Power Query, Power Pivot, and Power View
Able to quickly write SQL for database querying and DAX for creating custom calculations
Good knowledge of M and VertiPaq Analyzer
Understanding of data modeling concepts and the ability to create effective data models to support reporting needs
Ability to perform data ETL processes to ensure that data sets are clean, accurate, and ready for analysis
Ability to work closely with stakeholders to understand requirements, deliver solutions that meet those needs, and bridge the gap between the technical and non-technical sides
Unwavering ability to quickly propose solutions by recalling the latest best practices learned from MVP & Product Team articles, MSFT documentation, whitepapers, and community publications
Excellent communication, presentation, influencing, and reasoning skills
Familiarity with the Azure data platform, e.g., ADLS, SQL Server, ADF, Databricks, etc.

We would like to see a blend of the following technical skills:

Power BI Desktop, Power BI Dataflows, Tabular Editor, DAX Studio, and VertiPaq Analyzer
T-SQL, DAX, M, and PowerShell
Power BI Service architecture design and administration
Understanding the business requirements, designing the data model, and developing visualizations that provide actionable insights
VertiPaq and Mashup engine knowledge
Data modeling using the Kimball methodology

Qualifications

Good verbal and written communication.
Educational Qualification: BE/MCA/Any Graduation.

Additional Information

Beware of scams: Our recruiting team may communicate with candidates via our @hitachisolutions.com domain email address and/or via our SmartRecruiters (Applicant Tracking System) notification@smartrecruiters.com domain email address regarding your application and interview requests. All offers will originate from our @hitachisolutions.com domain email address. If you receive an offer or information from someone purporting to be an employee of Hitachi Solutions from any other domain, it may not be legitimate.

Posted 5 days ago

Apply

4.0 - 15.0 years

29 - 30 Lacs

Bengaluru

On-site


Post: Azure Engineer
Location: Bangalore (Hybrid)
Experience: 4 to 15 Years
No. of Positions: 8
Type: FTE
Budget: 30 LPA
Notice Period: Immediate to 30 Days Joiners

Job Description:

Strong Azure platform knowledge of the Data & AI service portfolio.
Knowledge and skills to understand how services work and what level of permissions they require; ability to design access patterns for services, users, and managed identities to integrate a service into a complex architecture and to optimize the access model without redundant permissions.
Knowledge of Azure Identity & Access Management and security.
Knowledge of Azure's resource management and quota system.
Azure Landing Zone design concepts.
Working knowledge of ARM, the Azure CLI, and SDKs using Bash/PowerShell/Python.
Knowledge of Azure CI/CD and DevOps.
Knowledge of MS Purview and Databricks Unity Catalog.

Job Types: Full-time, Permanent
Pay: ₹2,900,000.00 - ₹3,000,000.00 per year
Benefits: Provident Fund
Schedule: Day shift
Work Location: In person

Posted 5 days ago

Apply

5.0 years

0 Lacs

Bengaluru

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities: Databricks Engineers

Requirements:

Total Experience: 5-8 years, with 4+ years of relevant experience
Skills: Proficiency on the Databricks platform; strong hands-on experience with PySpark, SQL, and Python; any cloud - Azure, AWS, GCP
Certifications (any of the following): Databricks Certified Associate Developer for Spark 3.0 (preferred), Databricks Certified Data Engineer Associate, Databricks Certified Data Engineer Professional
Location: Bangalore

Mandatory Skill Sets: Databricks, PySpark, SQL, Python, any cloud - Azure, AWS, GCP
Preferred Skill Sets: Related certifications - Databricks Certified Associate Developer for Spark 3.0 (preferred), Databricks Certified Data Engineer Associate, Databricks Certified Data Engineer Professional
Years of Experience required: 5 to 8 years
Education Qualification: BE, B.Tech, ME, M.Tech, MBA, MCA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Desired Languages (if blank, desired languages not specified):
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:

Posted 5 days ago

Apply

2.0 years

10 - 10 Lacs

Bengaluru

On-site


Location(s): Quay Building 8th Floor, Bagmane Tech Park, Bengaluru, IN
Line of Business: Data Estate (DE)
Job Category: Engineering & Technology
Experience Level: Experienced Hire

At Moody's, we unite the brightest minds to turn today’s risks into tomorrow’s opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Skills and Competencies

Proficiency in Kubernetes and Amazon EKS (2+ years required): Essential for managing containerized applications and ensuring high availability and security in cloud-native environments.
Strong expertise in AWS serverless technologies (required): Including Lambda, API Gateway, EventBridge, and Step Functions, to build scalable and cost-efficient solutions.
Hands-on experience with Terraform (2+ years required): Critical for managing Infrastructure as Code (IaC) across multiple environments, ensuring consistency and repeatability.
CI/CD pipeline development using GitHub Actions (required): Necessary for automating deployments and supporting agile development practices.
Scripting skills in Python, Bash, or PowerShell (required): Enables automation of operational tasks and enhances infrastructure management capabilities.
Experience with Databricks and Apache Kafka (preferred): Valuable for teams working with data pipelines, MLOps workflows, and event-driven architectures.

Education

Bachelor’s degree in Computer Science or equivalent experience

Responsibilities

Design, automate, and manage scalable cloud infrastructure using Kubernetes, AWS, Terraform, and CI/CD pipelines.
Design and manage cloud-native infrastructure using container orchestration platforms, ensuring high availability, scalability, and security across environments.
Implement and maintain Infrastructure as Code (IaC) using tools like Terraform to provision and manage multi-environment cloud resources consistently and efficiently.
Develop and optimize continuous integration and delivery (CI/CD) pipelines to automate application and infrastructure deployments, supporting agile development cycles.
Monitor system performance and reliability by configuring observability tools for logging, alerting, and metrics collection, and proactively address operational issues.
Collaborate with cross-functional teams to align infrastructure solutions with application requirements, ensuring seamless deployment and performance optimization.
Document technical processes and architectural decisions through runbooks, diagrams, and knowledge-sharing resources to support operational continuity and team onboarding.

About the team

Our Data Estate DevOps team is responsible for enabling the scalable, secure, and automated infrastructure that powers Moody’s enterprise data platform. We ensure the seamless deployment, monitoring, and performance of data pipelines and services that deliver curated, high-quality data to internal and external consumers.

We contribute to Moody’s by:

Accelerating data delivery and operational efficiency through automation, observability, and infrastructure-as-code practices that support near real-time data processing and remediation.
Supporting data integrity and governance by enabling traceable, auditable, and resilient systems that align with regulatory compliance and GenAI readiness.
Empowering innovation and analytics by maintaining a modular, interoperable platform that integrates internal and third-party data sources for downstream research models, client workflows, and product applications.

By joining our team, you will be part of exciting work in cloud-native DevOps, data engineering, and platform automation, supporting global data operations across 29 countries and contributing to Moody’s mission of delivering integrated perspectives on risk and growth.

Moody’s is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody’s Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary.

Posted 5 days ago

Apply

3.0 years

7 - 9 Lacs

Bengaluru

On-site


By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description

The Future Begins Here

At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC We Unite in Diversity

Takeda is committed to creating an inclusive and collaborative workplace where individuals are recognized for their backgrounds and the abilities they bring to our company. We are continuously improving our collaborators' journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity:

As a Data Engineer, you will be building and maintaining data systems and constructing datasets that are easy to analyze and that support Business Intelligence requirements as well as downstream systems.

Responsibilities:

Develops and maintains scalable data pipelines and builds out new integrations using AWS native technologies to support continuing increases in data source, volume, and complexity.
Collaborates with analytics and business teams to improve data models that feed business intelligence tools and dashboards, increasing data accessibility and fostering data-driven decision making across the organization.
Implements processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes that depend on it.
Writes unit/integration/performance test scripts, contributes to the engineering wiki, and documents work.
Performs data analysis required to troubleshoot data-related issues and assists in their resolution.
Works closely with a team of frontend and backend engineers, product managers, and analysts.
Works with DevOps and the Cloud Center of Excellence to deploy data pipeline solutions in Takeda AWS environments, meeting security and performance requirements.

Skills and Qualifications:

Bachelor's degree from an accredited institution in Engineering, Computer Science, or a related field.
3+ years of experience in software, data, data warehouse, data lake, and analytics reporting development.
Ability to build and fine-tune GenAI-powered solutions using LLMs, and to develop retrieval-augmented generation (RAG) pipelines integrating vector stores.
Strong experience in data/Big Data, data integration, data modeling, modern databases (Graph, SQL, NoSQL, etc.), query languages, and AWS cloud technologies including DMS, Lambda, Databricks, SQS, Step Functions, data streaming, visualization, etc.
Solid experience in DBA, dimensional modeling, and SQL optimization - Aurora is preferred.
Experience designing, building, and maintaining data integrations using SOAP/REST web services/APIs, as well as schema design and dimensional data modeling.
Excellent written and verbal communication skills, including the ability to interact effectively with multifunctional teams.

WHAT TAKEDA ICC INDIA CAN OFFER YOU:

Takeda is certified as a Top Employer, not only in India, but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS:

It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:

Competitive Salary + Performance Annual Bonus
Flexible work environment, including hybrid working
Comprehensive Healthcare Insurance Plans for self, spouse, and children
Group Term Life Insurance and Group Accident Insurance programs
Employee Assistance Program
Broad variety of learning platforms
Diversity, Equity, and Inclusion Programs
Reimbursements - Home Internet & Mobile Phone
Employee Referral Program
Leaves - Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 days)

ABOUT ICC IN TAKEDA:

Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

#Li-Hybrid

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time

Posted 5 days ago

Apply

0.0 - 2.0 years

4 - 8 Lacs

Chennai

On-site


CDM Smith is seeking a Data Engineer to join our Digital Engineering Solutions team. This individual will be part of the Data Technology group within the Digital Engineering Solutions team, helping to drive strategic Architecture, Engineering and Construction (AEC) initiatives using cutting-edge data technologies and analytics to deliver actionable business insights and robust solutions for AEC professionals and client outcomes. The Data Technology group will lead the firm in AEC-focused Business Intelligence and data services by providing architectural guidance, technological vision, and solution development. The group will specifically utilize advanced analytics, data science, and AI/ML to give our business and our products a competitive advantage; this includes understanding and managing the data, how it interconnects, and architecting & engineering data for self-serve BI and BA opportunities. This position is for a person who has demonstrated excellence in data engineering capabilities, is experienced with data technology and processes, and enjoys framing a problem, shaping and creating solutions, and helping to lead and champion implementation. As a member of the Digital Engineering Solutions team, the Data Technology group will also engage in research and development and provide guidance and oversight to the AEC practices at CDM Smith, engaging in new product research, testing, and the incubation of data technology-related ideas that arise from around the company.

Key Responsibilities:

Assists in the design, development, and maintenance of scalable data pipelines and workflows to extract, transform, and load (ETL/ELT) data from various sources into target systems.
Contributes to automating workflows to ensure efficiency, scalability, and error reduction in data integration processes.
Tests data quality and integrity by implementing processes that validate completeness, accuracy, and consistency of data.
Monitors and troubleshoots data pipeline performance and reliability.
Documents data engineering processes and workflows.
Collaborates with Data Scientists, Analytics Engineers, and stakeholders to understand business requirements and deliver high-quality data solutions.
Stays abreast of the latest developments and advancements, including new and emerging technologies, best practices, tools, and software applications, and how they could impact CDM Smith.
Assists with the development of documentation, standards, best practices, and workflows for data technology hardware and software in use across the business.
Performs other duties as required.

Skills and Abilities:

Good foundation in the Software Development Life Cycle (SDLC) and Agile development methodologies.
Good foundation in cloud ETL/ELT tools and deployment, including Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
Good knowledge of data modeling and designing scalable ETL/ELT processes.
Familiarity with CI/CD pipelines and DevOps practices for data solutions.
Knowledge of monitoring tools and techniques for ensuring pipeline observability and reliability.
Excellent problem-solving and critical thinking skills to identify and address technical challenges effectively.
Strong critical thinking skills to generate innovative solutions and improve business processes.
Ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
Detail oriented, with the ability to assist with executing highly complex or specialized projects.

Minimum Qualifications:

Bachelor’s degree.
0-2 years of related experience.
Equivalent additional directly related experience will be considered in lieu of a degree.

Amount of Travel Required: 0%

Background Check and Drug Testing Information

CDM Smith Inc. and its divisions and subsidiaries (hereafter collectively referred to as “CDM Smith”) reserves the right to require background checks including criminal, employment, education, licensure, etc., as well as credit and motor vehicle checks when applicable for certain positions. In addition, CDM Smith may conduct drug testing for designated positions. Background checks are conducted after an offer of employment has been made in the United States. The timing of when background checks will be conducted on candidates for positions outside the United States will vary based on country statutory law, but in no case will the background check precede an interview. CDM Smith will conduct interviews of qualified individuals prior to requesting a criminal background check, and no job application submitted prior to such interview shall inquire into an applicant's criminal history. If this position is subject to a background check for any convictions related to its responsibilities and requirements, employment will be contingent upon successful completion of a background investigation including criminal history. Criminal history will not automatically disqualify a candidate. In addition, during employment individuals may be required by CDM Smith or a CDM Smith client to successfully complete additional background checks, including motor vehicle record checks, as well as drug testing.

Agency Disclaimer

All vendors must have a signed CDM Smith Placement Agreement from the CDM Smith Recruitment Center Manager to receive payment for your placement. Verbal or written commitments from any other member of the CDM Smith staff will not be considered binding terms. All unsolicited resumes sent to CDM Smith and any resume submitted to any employee outside of the CDM Smith Recruiting Center Team (RCT) will be considered property of CDM Smith. CDM Smith will not be held liable to pay a placement fee.

Business Unit: COR
Group: COR
Assignment Category: Fulltime-Regular
Employment Type: Regular

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


As a Data Engineer, you are required to:

Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments, while ensuring data integrity, consistency, and accuracy across the entire data pipeline.
Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation.
Design the structure of databases and data storage systems, including the design of schemas, tables, and relationships between datasets, to enable efficient querying.
Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure that the data is structured in a way that makes it accessible and usable.
Stay up-to-date with the latest trends and technologies in the data engineering space, such as new data storage solutions, processing frameworks, and cloud technologies. Evaluate and implement new tools to improve data engineering processes.

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable.

Experience level: At least 3-5 years of hands-on experience in Data Engineering and ETL.

Desired Knowledge & Experience:

Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming
Spark internals: Catalyst/Tungsten/Photon
Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
IDE: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
Test: pytest, Great Expectations
CI/CD: YAML Azure Pipelines, Continuous Delivery, Acceptance Testing
Big Data design: Lakehouse/Medallion architecture, Parquet/Delta, partitioning, distribution, data skew, compaction
Languages: Python/Functional Programming (FP)
SQL: T-SQL/Spark SQL/HiveQL
Storage: data lake and big data storage design

Additionally, it is helpful to know the basics of:

Data pipelines: ADF/Synapse Pipelines/Oozie/Airflow
Languages: Scala, Java
NoSQL: Cosmos, Mongo, Cassandra
Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
SQL Server: T-SQL, stored procedures
Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka
Data catalog: Azure Purview, Apache Atlas, Informatica

Required Soft Skills & Other Capabilities:

Great attention to detail and good analytical abilities.
Good planning and organizational skills.
Collaborative approach to sharing ideas and finding solutions.
Ability to work independently and also in a global team environment.

Posted 5 days ago

Apply

0 years

5 - 8 Lacs

Indore

On-site


AV-230749 | Indore, Madhya Pradesh, India | Full-time | Permanent | Global Business Services | DHL INFORMATION SERVICES (INDIA) LLP

Your IT Future, Delivered

Senior Software Engineer (Azure BI)

Open to all PAN India candidates.

With a global team of 5800 IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the biggest logistics company of the world. Our offices in Cyberjaya, Prague, and Chennai have earned #GreatPlaceToWork certification, reflecting our commitment to exceptional employee experiences.

Digitalization. Simply delivered.

At IT Services, we are passionate about Azure Databricks and PySpark. Our PnP BI Solutions team is continuously expanding. No matter your level of Software Engineer Azure BI proficiency, you can always grow within our diverse environment. #DHL #DHLITServices #GreatPlace #pyspark #azuredatabricks #snowflakedatabase

Grow together

Timely delivery of DHL packages around the globe, in a way that ensures customer data are secure, is at the core of what we do. You will provide project deliverables and day-to-day operation support and help investigate and resolve incidents. Sometimes, requirements or issues might get tricky, and this is where your expertise in development, or cooperation on troubleshooting with other IT support teams and specialists, will come into play. For any requirements regarding BI use cases in an Azure environment, you are our superhero. The same applies when it comes to production and incidents that need to be fixed.

Ready to embark on the journey? Here’s what we are looking for:

Practical experience in programming using SQL, PySpark (Python), Azure Databricks, and Azure Data Factory
Experience in administration and configuration of Databricks clusters
Experience with Snowflake Database
Knowledge of Data Vault data modeling (if not: high motivation to learn the modeling approach)
Experience with streaming APIs like Kafka, CI/CD, XML/JSON, ADLS2
A comprehensive understanding of public cloud platforms, with a preference for Microsoft Azure
Proven ability to work in a multi-cultural environment

An array of benefits for you:

Flexible Work Guidelines.
Flexible Compensation Structure.
Global work culture & opportunities across geographies.
Insurance Benefit - Health Insurance for family, parents & in-laws, Term Insurance (Life Cover), Accidental Insurance.

Posted 5 days ago

Apply

8.0 - 10.0 years

0 Lacs

Andhra Pradesh

On-site


Software Engineering Advisor - HIH - Evernorth

About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Position Summary: Data engineer on the Data Integration team.

Job Description & Responsibilities:

Work with business and technical leadership to understand requirements.
Design to the requirements and document the designs.
Write product-grade, performant code for data extraction, transformation, and loading using Spark and PySpark.
Do data modeling as needed for the requirements.
Write performant queries using Teradata SQL, Hive SQL, and Spark SQL against Teradata and Hive.
Implement DevOps pipelines to deploy code artifacts onto the designated platform/servers, such as AWS / Azure / GCP.
Troubleshoot issues, provide effective solutions, and monitor jobs in the production environment.
Participate in sprint planning sessions, refinement/story-grooming sessions, daily scrums, demos, and retrospectives.

Experience Required: Overall 8-10 years of experience.

Experience Desired:

Strong development experience in Spark, PySpark, shell scripting, and Teradata.
Strong experience in writing complex and effective SQL (using Teradata SQL, Hive SQL, and Spark SQL) and stored procedures.
Health care domain knowledge is a plus.

Education and Training Required:

Primary Skills:

Excellent work experience with Databricks for Data Lake implementations.
Experience in Agile and working knowledge of DevOps tools (Git, Jenkins, Artifactory).
Experience in AWS (S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch) / GCP / Azure.
Databricks (Delta Lake, notebooks, pipelines, cluster management, Azure / AWS integration).

Additional Skills:

Experience in Jira and Confluence.
Exercises considerable creativity, foresight, and judgment in conceiving, planning, and delivering initiatives.

Location & Hours of Work: Hybrid, Hyderabad (11:30 AM - 8:30 PM)

Equal Opportunity Statement

Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services

Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


What You'll Do:

We’re searching for an experienced Full Stack developer to work on our product teams who is passionate about developing products that are simple, intuitive, and beautiful. We build big-data systems utilizing cutting-edge technologies and solutions that allow our developers to learn and develop while shipping amazing products. As part of our team, you’ll be working on our Big Data Insights Platform products and elevating user experiences. Our stack is based on Elixir, React, and GraphQL APIs. We are a growing company with small, focused engineering teams that are delivering innovative features in a fast-growing market.

What You’ll Be Responsible For:

You will work on our Product Engineering Teams.
You will design and enhance core business-driving services and products.
You will develop features in our databases, backend apps, and front-end UI.
You will help architect and design service-driven UI via RESTful and GraphQL APIs.
You will work on ideas from different team members as well as your own.
Participate in our on-call rotation; fix bugs rapidly and investigate and resolve production problems.
Attend daily stand-up meetings and planning sessions, encourage others, and collaborate at a rapid pace.

What You’ll Need:

BS/MS in Computer Science or other related fields, or on-the-job experience.
5+ years of designing and programming in a work setting.
Proficient in backend (Elixir) and frontend (JavaScript/TypeScript) development, with real-world experience applying system and code design patterns.
Strong in React or similar.
Experience building RESTful or GraphQL APIs.
Good knowledge of SQL.
Experience with Amazon Web Services (EC2, S3, RDS, Lambdas, EKS, etc.).
Comfortable working with CI/CD and automation tools: Docker, Kubernetes, Terraform, or similar.
Good DevOps skills (automate everything, infrastructure as code).
Comfortable in an agile development environment.
Self-learner, hacker, and technology advocate who can work on anything.
Thrives in a fast-growing environment.
Proven track record of successful project delivery.
Excellent written and spoken English communication.

Nice-to-haves:

You've worked on Enterprise-grade SaaS applications.
Experience leading a project/team.
Familiar with ElasticSearch, Snowflake/Databricks, ClickHouse, or similar Big Data technology.
Loves startup culture, where everyone's contributions are felt and valued.

Posted 5 days ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may include:

  • Junior Developer
  • Senior Developer
  • Tech Lead
  • Architect

Related Skills

In addition to Databricks expertise, other skills that are often expected or helpful include:

  • Apache Spark
  • Python/Scala programming
  • Data modeling
  • SQL
  • Data visualization tools
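
To give a concrete flavor of how these skills combine in practice, here is a minimal PySpark sketch that reads a file, registers it for SQL access, and runs an aggregation. It assumes a working Spark session (such as a Databricks notebook); all paths, table names, and column names are illustrative placeholders:

```python
# Minimal sketch combining PySpark, Python, and SQL skills.
# Paths, table names, and columns are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("skills-demo").getOrCreate()

# Load raw CSV data and register it as a temporary view for SQL access.
sales = spark.read.option("header", True).csv("/data/sales.csv")
sales.createOrReplaceTempView("sales")

# Express the aggregation in Spark SQL; CSV columns load as strings,
# so the amount is cast explicitly before summing.
top_regions = spark.sql("""
    SELECT region, SUM(CAST(amount AS DOUBLE)) AS total_amount
    FROM sales
    GROUP BY region
    ORDER BY total_amount DESC
    LIMIT 5
""")
top_regions.show()
```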

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks. (medium)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? (medium)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium)
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
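
Several of the most frequently asked questions above (lazy evaluation, actions, Delta Lake, schema handling) can be rehearsed with a short hands-on exercise. The sketch below assumes a Databricks notebook, or a local PySpark session with the delta-spark package configured; all paths and column names are hypothetical:

```python
# Illustrates lazy evaluation, actions, and Delta Lake writes in PySpark.
# Assumes Databricks or a local session with delta-spark configured;
# all paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("interview-prep").getOrCreate()

# Transformations are lazy: building this chain executes nothing yet.
orders = spark.read.parquet("/data/orders")
high_value = (
    orders
    .filter(F.col("amount") > 1000)       # narrow transformation
    .groupBy("region")                    # wide transformation (shuffle)
    .agg(F.sum("amount").alias("total"))
)

# An action triggers the whole plan, letting Catalyst optimize it end to end.
high_value.show()

# Writing in Delta format adds ACID transactions, time travel, and
# schema enforcement on top of Parquet files.
(high_value.write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")    # opt in to schema changes on overwrite
    .save("/delta/high_value"))
```

Reading the table back with `.option("versionAsOf", 0)` is an easy way to demonstrate Delta time travel, a frequent follow-up question.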

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies