
6093 Scala Jobs - Page 28

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Position: Data Engineer
Experience: 3+ years
Location: Trivandrum, Hybrid
Salary: Up to 8 LPA

Job Summary
We are seeking a highly motivated and skilled Data Engineer with 3+ years of experience to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineering teams to ensure data availability, quality, and accessibility for various analytical and machine learning initiatives.

Key Responsibilities
Design and Development:
○ Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into data warehouses/lakes.
○ Implement data models and schemas that support analytical and reporting requirements.
○ Build and maintain robust data APIs for data consumption by various applications and services.
Data Infrastructure:
○ Contribute to the architecture and evolution of our data platform, leveraging cloud services (AWS, Azure, GCP) or on-premise solutions.
○ Ensure data security, privacy, and compliance with relevant regulations.
○ Monitor data pipelines for performance, reliability, and data quality, implementing alerting and anomaly detection.
Collaboration & Optimization:
○ Collaborate with data scientists, business analysts, and product managers to understand data requirements and translate them into technical solutions.
○ Optimize existing data processes for efficiency, cost-effectiveness, and performance.
○ Participate in code reviews, contribute to documentation, and uphold best practices in data engineering.
Troubleshooting & Support:
○ Diagnose and resolve data-related issues, ensuring minimal disruption to data consumers.
○ Provide support and expertise to teams consuming data from the data platform.

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
- 3+ years of hands-on experience as a Data Engineer or in a similar role.
- Strong proficiency in at least one programming language commonly used for data engineering (e.g., Python, Java, Scala).
- Extensive experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Proven experience with ETL/ELT tools and concepts.
- Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Blob Storage, BigQuery, Dataflow).
- Understanding of data modeling techniques (e.g., dimensional modeling, Kimball, Inmon).
- Experience with version control systems (e.g., Git).
- Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications
- Master's degree in a relevant field.
- Experience with Apache Spark (PySpark, Scala Spark) or other big data processing frameworks.
- Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with data streaming technologies (e.g., Kafka, Kinesis).
- Knowledge of containerization technologies (e.g., Docker, Kubernetes).
- Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions).
- Understanding of DevOps principles as applied to data pipelines.
- Prior experience in Telecom is a plus.
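The ETL/ELT pipeline work this posting describes can be sketched in miniature. The following is an illustrative example only, not code from any employer: the `sales` table, the normalization rules, and the in-memory SQLite target are all assumptions chosen to keep the sketch self-contained.

```python
import sqlite3

def extract(rows):
    """Extract step: yield raw records from a source (here, an in-memory list)."""
    yield from rows

def transform(record):
    """Transform step: normalize the name and parse the amount to a float."""
    return {
        "name": record["name"].strip().title(),
        "amount": float(record["amount"]),
    }

def load(records, conn):
    """Load step: write transformed records into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales (name, amount) VALUES (:name, :amount)", records
    )
    conn.commit()

# Hypothetical raw source data with messy formatting.
raw = [{"name": "  alice ", "amount": "10.50"}, {"name": "BOB", "amount": "3"}]
conn = sqlite3.connect(":memory:")
load((transform(r) for r in extract(raw)), conn)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

A production pipeline would swap the in-memory list for real connectors and the SQLite target for a warehouse, but the extract/transform/load separation stays the same.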

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 3 year(s)
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking opportunities for improvement and optimization in the development process.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must Have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and analytics.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data governance and security best practices.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 3 year(s)
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must Have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and analytics.
- Familiarity with programming languages such as Python or Scala.
- Ability to work with data visualization tools to present insights.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Posted 1 week ago

Apply

15.0 - 20.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI to improve performance and efficiency, including but not limited to deep learning, neural networks, chatbots, and natural language processing.
Must Have Skills: Google Cloud Machine Learning Services
Good to Have Skills: Google Pub/Sub, GCP Dataflow, Google Dataproc
Minimum Experience Required: 2 year(s)
Educational Qualification: 15 years full time education

Summary:
As an AI/ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence to enhance performance and efficiency. Your typical day will involve collaborating with cross-functional teams to design and implement innovative solutions, utilizing advanced technologies such as deep learning and natural language processing. You will also be responsible for analyzing data and refining algorithms to ensure optimal functionality and user experience, while continuously exploring new methodologies to drive improvements in AI applications.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and development of AI-driven applications to meet project requirements.
- Collaborate with team members to troubleshoot and resolve technical challenges.

Professional & Technical Skills:
- Must Have: Proficiency in Google Cloud Machine Learning Services.
- Good To Have: Experience with GCP Dataflow, Google Pub/Sub, Google Dataproc.
- Strong understanding of machine learning frameworks and libraries.
- Experience in deploying machine learning models in cloud environments.
- Familiarity with data preprocessing and feature engineering techniques.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Google Cloud Machine Learning Services.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.
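The "data preprocessing and feature engineering" skill mentioned above can be illustrated with one of its most common techniques, z-score standardization (rescaling a feature to mean 0 and standard deviation 1). This is a generic, dependency-free sketch; the sample feature values are invented for illustration.

```python
import math

def standardize(values):
    """Z-score standardization: rescale a feature to mean 0, std dev 1."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(variance)
    return [(v - mean) / std for v in values]

# Hypothetical raw feature column.
feature = [2.0, 4.0, 6.0, 8.0]
scaled = standardize(feature)  # mean 5.0, std sqrt(5)
```

In practice this would be done with a library such as scikit-learn's StandardScaler, but the arithmetic is exactly what the sketch shows.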

Posted 1 week ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 5 year(s)
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must Have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and architectures.
- Familiarity with programming languages such as Python or Scala.
- Ability to work with data visualization tools to present insights effectively.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 3 year(s)
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Collaborate with cross-functional teams to gather requirements and provide technical insights.

Professional & Technical Skills:
- Must Have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data visualization techniques and tools.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Mumbai office.
- A 15 years full-time education is required.

Posted 1 week ago

Apply

14.0 - 18.0 years

50 - 90 Lacs

Bengaluru

Work from Office

About Netskope
Today, there's more data and more users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About the team
DSPM is designed to provide comprehensive data visibility and contextual security for the modern AI-driven enterprise. Our platform automatically discovers, classifies, and secures sensitive data across diverse environments including AWS, Azure, Google Cloud, Oracle Cloud, and on-premise infrastructure. DSPM is critical in empowering CISOs and security teams to enforce secure AI practices, reduce compliance risk, and maintain continuous data protection at scale.

What's in it for you
We are a distributed team of passionate engineers dedicated to continuous learning and building impactful products. Our core product is built from the ground up in Scala, leveraging the Lightbend stack (Play/Akka). As a key member of the DSPM team, you will contribute to developing innovative and scalable systems designed to protect the exponentially increasing volume of enterprise data.

Our platform continuously maps sensitive data across all major cloud providers and on-prem environments, automatically detecting and classifying sensitive and regulated data types such as PII, PHI, financial, and healthcare information. It flags data at risk of exposure, exfiltration, or misuse and helps prioritize issues based on sensitivity, exposure, and business impact.

What you will be doing
- Drive the enhancement of Data Security Posture Management (DSPM) capabilities by enabling the detection of sensitive or risky data utilized in (but not limited to) training private LLMs or accessed by public LLMs.
- Improve the DSPM platform to extend support of the product to all major cloud infrastructures, on-prem deployments, and any new upcoming technologies.
- Provide technical leadership in all phases of a project, from discovery and planning through implementation and delivery.
- Contribute to product development: understand customer requirements and work with the product team to design, plan, and implement features.
- Support customers by investigating and fixing production issues.
- Help us improve our processes and make the right tradeoffs between agility, cost, and reliability.
- Collaborate with teams across geographies.

Required skills and experience
- 12+ years of software development experience with enterprise-grade software.
- Must have experience in building scalable, high-performance cloud services.
- Expert coding skills in Scala or Java.
- Development experience on cloud platforms including AWS.
- Deep knowledge of databases and data warehouses (OLTP, OLAP).
- Analytical and troubleshooting skills.
- Experience working with Docker and Kubernetes.
- Able to multitask and wear many hats in a fast-paced environment: this week you might lead the design of a new feature; next week you are fixing a critical bug or improving our CI infrastructure.
- Cybersecurity experience and adversarial thinking is a plus.
- Expertise in building REST APIs.
- Experience leveraging cloud-based AI services (e.g., AWS Bedrock, SageMaker), vector databases, and Retrieval Augmented Generation (RAG) architectures is a plus.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.
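The sensitive-data classification the DSPM description refers to can be illustrated with a toy scanner. This is a hedged sketch only, not Netskope's implementation: real DSPM classifiers use far richer, validated detection logic, while the two regex patterns and sample strings below are invented for illustration.

```python
import re

# Hypothetical detectors; production classifiers use locale-aware, validated rules
# plus context signals, not bare regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels detected in a text blob."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

found = classify("Contact jane.doe@example.com, SSN 123-45-6789.")
```

A real platform layers results like these with exposure and business-impact signals to produce the risk prioritization the posting describes.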

Posted 1 week ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Pune

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must Have Skills: Scala
Good to Have Skills: NA
Minimum Experience Required: 5 year(s)
Educational Qualification: 15 years full time education

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in driving innovation and efficiency within the application development lifecycle, fostering a collaborative environment that encourages creativity and problem-solving.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must Have: Proficiency in Scala.
- Strong understanding of software development methodologies.
- Experience with application architecture and design patterns.
- Familiarity with cloud platforms and deployment strategies.
- Ability to mentor and guide junior team members.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Scala.
- This position is based in Pune.
- A 15 years full-time education is required.

Posted 1 week ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Python (Programming Language)
Good to Have Skills: Scala
Minimum Experience Required: 5 year(s)
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior professionals to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must Have: Proficiency in Python (Programming Language).
- Good To Have: Experience with Scala.
- Strong understanding of application development methodologies.
- Experience with the software development life cycle and agile practices.
- Familiarity with database management and data integration techniques.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Python (Programming Language).
- This position is based in Pune.
- A 15 years full-time education is required.

Posted 1 week ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Chennai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must Have Skills: Agile Project Management
Good to Have Skills: Apache Spark
Minimum Experience Required: 7.5 year(s)
Educational Qualification: 15 years full time education

Summary:
As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in ensuring that data is accessible, reliable, and ready for analysis, contributing to informed decision-making across the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering practices.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must Have: Proficiency in Agile Project Management.
- Good To Have: Experience with Apache Spark, Google Cloud SQL, Python (Programming Language).
- Strong understanding of data pipeline architecture and design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data quality frameworks and best practices.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Agile Project Management.
- This position is based in Chennai (Mandatory).
- A 15 years full-time education is required.

Posted 1 week ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 5 year(s)
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must Have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and architecture.
- Familiarity with programming languages such as Python or Scala.
- Ability to work with data visualization tools to present insights effectively.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Mumbai office.
- A 15 years full-time education is required.

Posted 1 week ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 5 year(s)
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will also engage in problem-solving activities, providing support and guidance to your team members while continuously seeking opportunities for improvement and efficiency in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must Have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and architectures.
- Familiarity with programming languages such as Python or Scala.
- Ability to work with data visualization tools to present insights effectively.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 14 Lacs

Bengaluru

Work from Office

Experience: 5+ years
Location: Bengaluru
Candidates Preferred: Immediate joiners, serving notice period, or 30 days
Role: Big Data QA

We are looking for a skilled ETL Tester with proven experience in designing, developing, and validating data solutions.

PRINCIPAL ACCOUNTABILITIES
• Develop test scenarios and test cases from requirements and design specifications, and execute the test suites.
• Perform system integration testing.
• Work in agile teams; alongside testing, your contributions could include code review, requirements/user-story grooming, or coding where it is reasonable.
• Provide inputs and work on automation, integration, and regression testing.
• Interact with stakeholders, participate in scrum meetings, and provide status on a daily basis.

KNOWLEDGE AND EXPERIENCE
• 5 to 8 years of relevant work experience in software testing, primarily on database/ETL, with exposure to big data testing.
• Hands-on experience in testing big data applications on Azure and Cloudera.
• Understanding of one or more query languages: Pig, HiveQL, etc.
• Excellent skills in writing SQL queries and good knowledge of databases (Oracle/Netezza/SQL).
• Hands-on with at least one scripting language: Java/Scala/Python, etc.
• Good to have: experience in Unix shell scripting.
• Experience in agile development; knowledge of Jira.
• Good analytical and communication skills.
• Prior health care industry experience is a plus.
• Flexible to work with, and adapt quickly to, different technologies and tools.
• Experience working in cross-functional teams and collaborating effectively with various stakeholders.
• Excellent analytical, problem-solving, and communication skills to present and document technical concepts clearly.
• Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

Interested candidates, share your updated resume to the mail ID below.
Mail ID: arshitha.n@rlabsglobal.com
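A core ETL-testing task this posting describes is reconciling data between a source and its loaded target. The following is a minimal, dependency-free sketch of that idea; the `reconcile` helper, the `id`/`amt` fields, and the sample rows are invented for illustration (real suites would run SQL against the actual databases).

```python
def reconcile(source_rows, target_rows, key="id"):
    """Compare source and target datasets: missing, extra, and mismatched keys."""
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in target_rows}
    return {
        "missing": sorted(set(src) - set(tgt)),       # in source, lost in load
        "extra": sorted(set(tgt) - set(src)),         # in target, no source row
        "mismatched": sorted(                         # loaded, but values differ
            k for k in set(src) & set(tgt) if src[k] != tgt[k]
        ),
    }

# Hypothetical source extract vs. warehouse load.
source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}, {"id": 3, "amt": 30}]
target = [{"id": 1, "amt": 10}, {"id": 2, "amt": 25}]
report = reconcile(source, target)
```

An empty report on all three lists is the pass condition; anything else becomes a defect with the offending keys already identified.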

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: Basis
Minimum Experience Required: 3 year(s)
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Collaborate with cross-functional teams to gather requirements and provide technical insights.

Professional & Technical Skills:
- Must Have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data visualization techniques and tools.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Mumbai office.
- A 15 years full-time education is required.

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : NA. Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data is accessible, reliable, and actionable for stakeholders. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Assist in the design and implementation of data architecture and data models. - Monitor and optimize data pipelines for performance and reliability. Professional & Technical Skills: - Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Strong understanding of data integration techniques and ETL processes. - Experience with data quality frameworks and data governance practices. - Familiarity with cloud platforms and services related to data storage and processing. - Knowledge of programming languages such as Python or Scala for data manipulation.
Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Hyderabad office. - A 15 years full time education is required. Qualification 15 years full time education
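The extract-transform-load cycle these Databricks roles center on can be shown in miniature. This is a toy, pure-Python sketch over assumed inputs; in a Databricks pipeline the same normalize-and-deduplicate steps would typically be PySpark transformations:

```python
from datetime import date

# Extract: raw records as they might arrive from a source system.
raw = [
    {"id": "1", "city": " hyderabad ", "ts": "2024-01-15"},
    {"id": "2", "city": "MUMBAI", "ts": "2024-01-16"},
    {"id": "2", "city": "MUMBAI", "ts": "2024-01-16"},  # duplicate row
]

def transform(records):
    """Cast types, normalize strings, and deduplicate on id."""
    seen, out = set(), []
    for r in records:
        rid = int(r["id"])
        if rid in seen:
            continue
        seen.add(rid)
        out.append({"id": rid,
                    "city": r["city"].strip().title(),
                    "ts": date.fromisoformat(r["ts"])})
    return out

def load(records):
    """Load into a keyed store (a stand-in for a warehouse table)."""
    return {r["id"]: r for r in records}

table = load(transform(raw))
```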

Posted 1 week ago

Apply

6.0 years

0 Lacs

Delhi, India

On-site

Job Title: Lead Azure Data Engineer Experience Level: Mid - Senior Level Location: Delhi Duration: Full-time Experience Required: 6-8+ Years Description: We are seeking a highly skilled and experienced Lead Azure Data Engineer to join our team. The ideal candidate will have a strong background in data engineering, with a focus on working with Databricks, PySpark, Scala-Spark, and advanced SQL. This role requires hands-on experience in implementing or migrating projects to Unity Catalog, optimizing performance on Databricks Spark, and orchestrating workflows using various tools. Must Have Skills: MS Fabric, ADF (Azure Data Factory), Azure Synapse. Key Responsibilities: Minimum 6 years of data engineering and analytics project delivery experience. At least 2 past Databricks migration projects (e.g., Hadoop to Databricks, Teradata to Databricks, Oracle to Databricks, Talend to Databricks). Hands-on with advanced SQL and PySpark and/or Scala-Spark. At least 3 past Databricks projects that included performance optimization work. Design, develop, and optimize data pipelines and ETL processes using Databricks and Apache Spark. Implement and optimize performance on Databricks Spark, ensuring efficient data processing and management. Develop and validate data formulation and data delivery for Big Data projects. Collaborate with cross-functional teams to define, design, and implement data solutions that meet business requirements. Conduct performance tuning and optimization of complex queries and data models. Manage and orchestrate data workflows using tools such as Databricks Workflows, Azure Data Factory (ADF), Apache Airflow, and/or AWS Glue. Maintain and ensure data security, quality, and governance throughout the data lifecycle. Technical Skills: • Extensive experience with PySpark and Scala-Spark. • Advanced SQL skills for complex data manipulation and querying. • Proven experience in performance optimization on Databricks Spark across at least three projects.
• Hands-on experience with data formulation and data delivery validation in Big Data projects. • Experience in data orchestration using at least two of the following: Databricks Workflow, Azure Data Factory (ADF), Apache Airflow, AWS Glue. • Experience in Azure Synapse

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Senior Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. Responsibilities: Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas Monitor and control all phases of development process and analysis, design, construction, testing, and implementation as well as provide user and operational support on applications to business users Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business process, system process, and industry standards, and make evaluative judgement Recommend and develop security measures in post implementation analysis of business usage to ensure successful system design and functionality Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems Ensure essential procedures are followed and help define operating standards and processes Serve as advisor or coach to new or lower level analysts Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and /or other team members. 
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. The person in this role will have to talk to different tech and functional teams across the globe. We have teams in different LATAM countries, the US, Mexico, and India. He/she should be available mostly during the second shift, supporting Indian timings, and should be able to attend calls during the first half of the US time zone. Minimal interaction with external clients. Qualifications: Must Have: 8+ years of application/software development/maintenance experience. 5+ years of experience on big data technologies like Apache Spark, Hive, and Hadoop is a must. Knowledge of the Python, Java or Scala programming languages; experience in any two languages is mandatory. Experience with Java (Core Java, J2EE, Spring Boot RESTful services), web services (REST, SOAP), XML, JavaScript, microservices, SOA, etc. Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem. Experience with developing frameworks and utility services including logging/monitoring. Experience delivering high quality software following continuous delivery and using code quality tools (JIRA, GitHub, Jenkins, Sonar, etc.). Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark. Profound knowledge of implementing different data storage solutions such as RDBMS (Oracle), Hive, HBase, Impala and NoSQL databases like MongoDB, HBase, Cassandra, etc. Ability to work independently, multi-task, and take ownership of various analyses or reviews. Must be results-oriented, willing and able to take ownership of engagements. Banking domain experience is a must.
Strong analytical and communication skills. Good to Have: Work experience in Citi or regulatory reporting applications. Hands-on experience with cloud technologies, especially around data engineering. Hands-on experience with AI/ML integration and creation of data pipelines. Experience with vendor products like Tableau, Arcadia, Paxata, KNIME is a plus. Experience with API development and use of data formats is a plus. Education: Bachelor’s degree/University degree or equivalent experience. This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. Java Spark: Strong technical knowledge of Apache Spark, Hive and the Hadoop ecosystem. Knowledge of the Scala, Java & Python programming languages; hands-on experience in any two languages is mandatory. Minimum 2 product, platform or solution technical delivery experiences. Experience delivering high quality software following continuous delivery and using code quality tools (JIRA, GitHub, Jenkins, Sonar, etc.). Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark. Comfortable in Windows and Linux environments. Comfortable with different data storage solutions such as RDBMS (Oracle), Hive, HBase, Impala, etc. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

5.0 - 10.0 years

12 - 16 Lacs

Bengaluru

Remote

Job Summary: The Data Engineer for the Central Data Platform will be responsible for designing, developing, and maintaining the data infrastructure. This role involves working with large datasets, ensuring data quality, and implementing data integration solutions to facilitate efficient and accurate tax processing. The Data Engineer will collaborate with cross-functional teams to understand data requirements and provide insights that drive decision-making processes. Key Responsibilities: Data Infrastructure Development: Design and implement scalable data pipelines and ETL processes to collect, process, and store data. Develop and maintain data models and database schemas to support the system. Data Integration: Integrate data from various sources, including external systems and internal databases, ensuring data consistency and reliability. Implement data validation and cleansing processes to maintain data quality. Data Management: Monitor and optimize data storage solutions to ensure efficient data retrieval and processing. Implement data security measures to protect sensitive tax information. Collaboration and Support: Work closely with data analysts, data scientists, and other stakeholders to understand data requirements and provide technical support. Collaborate with IT and software development teams to integrate data solutions into the overall system architecture. Performance Optimization: Identify and resolve performance bottlenecks in data processing and storage. Continuously improve data processing workflows to enhance system performance and scalability. Documentation and Reporting: Document data engineering processes, data models, and system configurations. Generate reports and dashboards to provide insights into data trends and system performance. Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
At least 5 years of proven experience as a Data Engineer, with strong expertise in SAP HANA/BW. Strong proficiency in SQL and experience with relational databases (e.g., SAP HANA/BW, PostgreSQL, MySQL). Experience with big data technologies (e.g., Hadoop, Spark, Cloudera) and data processing frameworks. Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Proficiency in programming languages such as Python, Java, or Scala. Excellent problem-solving skills and strong analytical abilities. Experience with data visualization tools (e.g., Tableau, Power BI) is a plus. Hands-on experience using ETL tools.
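Record-level validation and cleansing, as this posting mentions, typically splits a batch into accepted rows and rejected rows with reasons attached. A minimal sketch with hypothetical rules (required fields plus a numeric-type check; the field names are illustrative, not from the posting):

```python
def validate(records, required, numeric):
    """Split records into valid and rejected sets, recording why
    each rejected record failed."""
    valid, rejected = [], []
    for r in records:
        missing = [f for f in required if r.get(f) in (None, "")]
        bad_nums = [f for f in numeric
                    if f in r and not isinstance(r[f], (int, float))]
        if missing or bad_nums:
            rejected.append({"record": r, "missing": missing, "bad": bad_nums})
        else:
            valid.append(r)
    return valid, rejected

rows = [
    {"id": 1, "amount": 99.5},      # clean
    {"id": 2, "amount": "n/a"},     # non-numeric amount
    {"amount": 3.0},                # missing id
]
valid, rejected = validate(rows, required=["id", "amount"], numeric=["amount"])
```

Rejected records would normally be routed to a quarantine table for review rather than silently dropped.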

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title ML Engineer – Predictive Maintenance Job Description ML Engineer at DSP – Predictive Maintenance Hay Level Hay 60 Job Location Veghel Vanderlande provides baggage handling systems for 600 airports globally, moving over 4 billion pieces of baggage annually. For the parcel market, our systems handle 52 million parcels daily. All these systems generate massive amounts of data. Do you see the challenge in building models and solutions that enable data-driven services, including predictive insights using machine learning? Would you like to contribute to Vanderlande's fast-growing Technology Department and its journey to become more data-driven? If so, join our Digital Service Platform team! Your Position You will work as a Data Engineer with Machine Learning expertise in the Predictive Maintenance team. This hybrid and multi-cultural team includes Data Scientists, Machine Learning Engineers, Data Engineers, a DevOps Engineer, a QA Engineer, an Architect, a UX Designer, a Scrum Master, and a Product Owner. The Digital Service Platform focuses on optimizing customer asset usage and maintenance, impacting performance, cost, and sustainability KPIs by extending component lifetimes. In your role, you will Participate in solution design discussions led by our Product Architect, where your input as a Data Engineer with ML expertise is highly valued. Collaborate with IT and business SMEs to ensure delivery of high-quality end-to-end data and machine learning pipelines. Your Responsibilities Data Engineering Develop, test, and document data (collection and processing) pipelines for Predictive Maintenance solutions, including data from (IoT) sensors and control components to our data platform. Build scalable pipelines to transform, aggregate, and make data available for machine learning models. Align implementation efforts with other back-end developers across multiple development teams. 
Machine Learning Integration Collaborate with Data Scientists to integrate machine learning models into production pipelines, ensuring smooth deployment and scalability. Develop and optimize end-to-end machine learning pipelines (MLOps) from data preparation to model deployment and monitoring. Work on model inference pipelines, ensuring efficient real-time predictions from deployed models. Implement automated retraining workflows and ensure version control for datasets and models. Continuous Improvement Contribute to the design and build of a CI/CD pipeline, including integration test automation for data and ML pipelines. Continuously improve and standardize data and ML services for customer sites to reduce project delivery time. Actively monitor model performance and ensure timely updates or retraining as needed. Your Profile Minimum 4 years' experience building complex data pipelines and integrating machine learning solutions. Bachelor's or Master's degree in Computer Science, IT, Data Science, or equivalent. Hands-on experience with data modeling and machine learning workflows. Strong programming skills in Java, Scala, and Python (preferred for ML tasks). Experience with stream processing frameworks (e.g., Spark) and streaming storage (e.g., Kafka). Proven experience with MLOps practices, including data preprocessing, model deployment, and monitoring. Familiarity with ML frameworks and tools (e.g., TensorFlow, PyTorch, MLflow). Proficient in cloud platforms (preferably Azure and Databricks). Experience with data quality management, monitoring, and ensuring robust pipelines. Knowledge of Predictive Maintenance model development is a strong plus. What You’ll Gain Opportunity to work at the forefront of data-driven innovation in a global organization. Collaborate with a talented and diverse team to design and implement cutting-edge solutions. Expand your expertise in data engineering and machine learning in a real-world industrial setting. 
If you are passionate about leveraging data and machine learning to drive innovation, we’d love to hear from you!
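Monitoring model performance and triggering retraining, as described in the responsibilities above, can be reduced to a threshold rule over recent evaluation scores. The function below is an illustrative sketch; the window size and tolerance are assumptions, not values from the posting:

```python
def should_retrain(recent_scores, baseline, tolerance=0.05, window=3):
    """Flag retraining when the rolling mean of the most recent
    evaluation scores drops more than `tolerance` below baseline."""
    if len(recent_scores) < window:
        return False  # not enough evidence yet
    rolling = sum(recent_scores[-window:]) / window
    return rolling < baseline - tolerance

# Stable model: scores hover near the 0.90 baseline.
ok = should_retrain([0.91, 0.89, 0.90], baseline=0.90)
# Degraded model: rolling mean has drifted well below baseline.
drifted = should_retrain([0.88, 0.83, 0.80], baseline=0.90)
```

In an MLOps setup this check would run on each scheduled evaluation, and a True result would kick off the automated retraining workflow.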

Posted 1 week ago

Apply

2.0 - 5.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Transport is at the core of modern society. Imagine using your expertise to shape sustainable transport and infrastructure solutions for the future. If you seek to make a difference on a global scale, working with next-gen technologies and the sharpest collaborative teams, then we could be a perfect match. Job Description Volvo Group is looking for Azure Data Engineers! Are you interested in being involved in the biggest, data-driven transformation of transport solutions in history? We are in the middle of our digitalization journey, and you are welcome to join and help us drive analytics and artificial intelligence initiatives which enable new business models. We are looking for Data Engineers with 5+ years of relevant experience and possible DevOps skills to develop and optimize data pipelines and work together with Data Scientists and other experts who focus on generating business value out of data. You will work with the latest technology, in agile product teams which discover data to create innovative services. Your typical day at work will be filled with the following activities: Build complex data pipelines in Microsoft Azure. Optimize existing data pipelines. Support in building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources to Azure. Work closely with data analysts and data scientists to provide required data structures and enable new insights. Provision resources on Azure. Keep the data secure across the end-to-end solutions. Evaluate and improve existing data analytics solutions. Develop your competences, learn new tools and ways of working. This Is You We are looking for an experienced data engineer with 4+ years of experience and a relevant education background. You have good hands-on skills in Azure Data Analytics, Spark, and Databricks (cost optimization techniques, cluster optimization, performance tuning). You have strong hands-on ETL skills and experience in building data pipelines using Databricks
(with good knowledge of Spark) and orchestrating data workloads on Azure Data Factory. Experience in handling real-time streaming/transactional data, preferably Spark Streaming, Kafka/Event Hubs/Event Grid/Stream Analytics/Service Bus, etc. You have significant experience in processing data in scripting languages (Python, PySpark, Scala). Working with Git and following Git workflow (branching, managing conflicts, etc.). Databases (like SQL Server or Netezza) don't have any secrets for you. You are comfortable with DevOps (or DevSecOps) and are able to provision resources in Azure. You are comfortable working in a diverse, complex, and fast-changing landscape of data sources. You communicate fluently in English. You are a proactive problem solver with innovative thinking and a strong team player. Extras Good To Have Experience in AGILE project methodology and practice with ARM Templates/Bicep on the DevOps front. Understanding of the DevOps architecture (agent pools, accounts, variable groups, templating, etc.). Azure networking model (public and private connectivity, VNET injection, etc.). Familiarity with Microsoft Power BI or other data visualization tools such as Qlik, SAP BusinessObjects. Knowledge of other services and tools used for ETL workflows, like Informatica IICS. Experience with business intelligence platforms, such as Microsoft SSIS/SSAS/SSRS, Teradata, IBM Netezza. Familiarity with database management systems and online analytical processing (OLAP). We value your data privacy and therefore do not accept applications via mail.
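The real-time streaming experience this posting asks for (Spark Streaming, Kafka/Event Hubs) revolves around windowed aggregation. The following is a plain-Python stand-in for a tumbling-window count, illustrative only; Spark Structured Streaming expresses the same grouping declaratively over an unbounded stream:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping
    windows and count events per key in each window."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Events as (epoch-second, sensor-id) pairs, e.g. off a Kafka topic.
events = [(0, "sensor-a"), (5, "sensor-a"), (12, "sensor-a"), (14, "sensor-b")]
result = tumbling_window_counts(events, window_seconds=10)
```

A real streaming job would additionally handle late-arriving events with watermarks, which this batch sketch deliberately omits.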

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Summary Position Summary AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements Azure Databricks Data Engineer (6 to 9) Job Title: Senior Consultant Job Summary: As an Azure Databricks Data Engineer, you will lead and implement advanced data analytics and engineering solutions using Databricks on Azure. This role requires a deep understanding of big data technologies, cloud services, and data architecture strategies. You will be instrumental in transforming data into actionable insights that drive business decisions. Key Responsibilities: Design and implement scalable, high-performance data solutions using Databricks on the Azure platform. Collaborate with cross-functional teams to integrate big data solutions with existing IT infrastructure. Develop and optimize data pipelines, architectures, and data sets. Perform data modeling, data validation, and ensure data accuracy and reliability.
Implement machine learning algorithms and data processing workflows. Provide expertise in data storage solutions and manage large-scale data ingestion and transformation. Implement CI/CD-based application development methodology using tools like Azure DevOps/Jenkins/TFS/PowerShell etc. Ensure compliance with data security and privacy policies. Mentor junior team members and lead project segments. Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field. 6-10 years of experience in data engineering with a proven track record of using Databricks on Azure. Strong knowledge of Python, SQL, PySpark and Scala (optional). Experience with cloud services such as cloud databases, storage accounts, ADLS Gen2, Azure Key Vault, Cosmos DB, Azure Data Factory; Azure Synapse is a plus. Experience in building metadata-driven ingestion and DQ frameworks using PySpark. Strong understanding of Lakehouse, Apache Spark, Delta Lake, and other big data technologies. Experience working with data toolsets, including data warehouse, data marts, data lake, 3NF, and dimensional models. Experience in building pipelines using Delta Live Tables, Auto Loader, and Databricks Workflows for orchestration. Experience with Apache Airflow is a plus. Experience with Databricks Unity Catalog is a plus. Experience implementing fine-grained access control using Databricks Unity Catalog features is a plus. Experience in performance optimization in Databricks/Apache Spark. Demonstrated ability to work collaboratively in a team environment. Excellent problem-solving and analytical skills. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are.
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 300095

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Summary Position Summary AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements Azure Databricks Data Engineer (6 to 9) Job Title: Senior Consultant Job Summary: As an Azure Databricks Data Engineer, you will lead and implement advanced data analytics and engineering solutions using Databricks on Azure. This role requires a deep understanding of big data technologies, cloud services, and data architecture strategies. You will be instrumental in transforming data into actionable insights that drive business decisions. Key Responsibilities: Design and implement scalable, high-performance data solutions using Databricks on the Azure platform. Collaborate with cross-functional teams to integrate big data solutions with existing IT infrastructure. Develop and optimize data pipelines, architectures, and data sets. Perform data modeling, data validation, and ensure data accuracy and reliability.
Implement machine learning algorithms and data processing workflows. Provide expertise in data storage solutions and manage large-scale data ingestion and transformation. Implement CI/CD-based application development methodology using tools like Azure DevOps/Jenkins/TFS/PowerShell etc. Ensure compliance with data security and privacy policies. Mentor junior team members and lead project segments. Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field. 6-10 years of experience in data engineering with a proven track record of using Databricks on Azure. Strong knowledge of Python, SQL, PySpark and Scala (optional). Experience with cloud services such as cloud databases, storage accounts, ADLS Gen2, Azure Key Vault, Cosmos DB, Azure Data Factory; Azure Synapse is a plus. Experience in building metadata-driven ingestion and DQ frameworks using PySpark. Strong understanding of Lakehouse, Apache Spark, Delta Lake, and other big data technologies. Experience working with data toolsets, including data warehouse, data marts, data lake, 3NF, and dimensional models. Experience in building pipelines using Delta Live Tables, Auto Loader, and Databricks Workflows for orchestration. Experience with Apache Airflow is a plus. Experience with Databricks Unity Catalog is a plus. Experience implementing fine-grained access control using Databricks Unity Catalog features is a plus. Experience in performance optimization in Databricks/Apache Spark. Demonstrated ability to work collaboratively in a team environment. Excellent problem-solving and analytical skills. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are.
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 300095

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Hands-on experience with Scala and Spark.
Quantexa certification and experience of Quantexa project delivery.
Quantexa ETL and Quantexa Scoring.
Strong design and development exposure.
Strong communication skills, both written and verbal.
Highly motivated and self-driven with a positive attitude.
Basic understanding of MS Azure.
CI/CD pipelines using Jenkins.
Kubernetes.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Dadra & Nagar Haveli, Daman and Diu, India

On-site

Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world. We are seeking a Senior Data Scientist to join our team and work on cutting-edge solutions that safeguard Microsoft services against cyberattacks. The Senior Data Scientist will play a critical role in enhancing our security posture by developing innovative models to detect and predict security threats. This role requires a deep understanding of data science, machine learning, and cybersecurity. The ideal candidate will have extensive experience in analysing large datasets, developing algorithms, and working closely with security experts to understand emerging threats and vulnerabilities. Our team values diversity in all its forms and believes in deep collaboration to harness the best of technology. We are a group of Security Engineers, Software Developers, and Data Scientists with expertise in large-scale software systems, security analysis, big data, and machine learning. We delight in diving deep into the billions of events and terabytes of data generated daily by Microsoft products and services (e.g., Azure, M365) to detect and respond to suspicious activities.
We ensure that critical security components are embedded throughout our infrastructure and are continuously monitored. Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities
Understand complex cybersecurity and business problems, translate them into well-defined data science problems, and build scalable solutions.
Develop and deploy scalable, production-grade AI/ML systems for real-time threat detection and response.
Drive the end-to-end ML lifecycle: from data ingestion and feature engineering to model development, evaluation, and deployment.
Analyse large and complex datasets generated by M365 to identify patterns and anomalies indicative of security risks.
Collaborate with security experts to understand threat landscapes and incorporate domain knowledge into models.
Continuously monitor and improve the performance of ML systems to adapt to evolving threats.
Lead the design and implementation of data-driven security solutions and tools.
Mentor and guide junior data scientists in best practices and advanced techniques.
Communicate findings and insights to stakeholders, including senior leadership and technical teams.

Qualifications
Experience in developing and deploying machine learning models for security applications.
Experience in Big Data, preferably in the cybersecurity domain.
Experience with data science workloads on the Azure tech stack (Synapse, Azure ML, etc.).
Knowledge of anomaly detection, fraud detection, and other related areas.
Familiarity with security fundamentals and attack vectors.
Publications or contributions to the field of data science or cybersecurity.
Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or a related field and 5+ years of data science experience (e.g., managing structured and unstructured data, applying statistical techniques, and reporting results), OR a Master's degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or a related field.
Experience in programming languages such as Python, R, or Scala, with hands-on experience in data analysis, experimental design principles, and visualization.
Experience in translating complex data into actionable insights and recommendations that drive business impact.
Excellent technical design skills and a proven ability to drive large-scale system designs for complex projects or products.
Expertise in machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn).
In-depth knowledge of cybersecurity principles, threats, and attack vectors.
Experience with big data technologies (e.g., Hadoop, Spark, Kafka) and data processing.
Strong analytical and problem-solving skills with the ability to think creatively.
Excellent communication skills with the ability to explain complex concepts to non-technical stakeholders.

Preferred Qualifications
Experience in developing and deploying machine learning models for security applications.
Experience in Big Data, preferably in the cybersecurity domain.
Experience with data science workloads on the Azure tech stack (Synapse, Azure ML, etc.).
Knowledge of anomaly detection, fraud detection, and other related areas.
Familiarity with security fundamentals and attack vectors.
Publications or contributions to the field of data science or cybersecurity.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
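Several of the qualifications above center on anomaly detection over security telemetry. As a hedged illustration of the basic idea (not Microsoft's method), a z-score detector flags values that deviate strongly from the baseline; the failed-login counts below are made up:

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return indices of values more than `threshold` population standard
    deviations from the mean -- the simplest statistical anomaly detector.
    (With small samples the population z-score is bounded, so a threshold
    of 2.5 rather than 3 is used here.)"""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:  # all values identical: nothing can be anomalous
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical hourly failed-login counts; the spike at index 5 stands out.
logins = [12, 15, 11, 14, 13, 250, 12, 16, 14, 13]
print(zscore_anomalies(logins))  # -> [5]
```

Production systems would layer model-based methods (isolation forests, sequence models) and domain signals on top, but the flag-what-deviates-from-baseline principle is the same.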

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Lakshadweep, India

On-site

Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world. We are seeking a Senior Data Scientist to join our team and work on cutting-edge solutions that safeguard Microsoft services against cyberattacks. The Senior Data Scientist will play a critical role in enhancing our security posture by developing innovative models to detect and predict security threats. This role requires a deep understanding of data science, machine learning, and cybersecurity. The ideal candidate will have extensive experience in analysing large datasets, developing algorithms, and working closely with security experts to understand emerging threats and vulnerabilities. Our team values diversity in all its forms and believes in deep collaboration to harness the best of technology. We are a group of Security Engineers, Software Developers, and Data Scientists with expertise in large-scale software systems, security analysis, big data, and machine learning. We delight in diving deep into the billions of events and terabytes of data generated daily by Microsoft products and services (e.g., Azure, M365) to detect and respond to suspicious activities.
We ensure that critical security components are embedded throughout our infrastructure and are continuously monitored. Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities
Understand complex cybersecurity and business problems, translate them into well-defined data science problems, and build scalable solutions.
Develop and deploy scalable, production-grade AI/ML systems for real-time threat detection and response.
Drive the end-to-end ML lifecycle: from data ingestion and feature engineering to model development, evaluation, and deployment.
Analyse large and complex datasets generated by M365 to identify patterns and anomalies indicative of security risks.
Collaborate with security experts to understand threat landscapes and incorporate domain knowledge into models.
Continuously monitor and improve the performance of ML systems to adapt to evolving threats.
Lead the design and implementation of data-driven security solutions and tools.
Mentor and guide junior data scientists in best practices and advanced techniques.
Communicate findings and insights to stakeholders, including senior leadership and technical teams.

Qualifications
Experience in developing and deploying machine learning models for security applications.
Experience in Big Data, preferably in the cybersecurity domain.
Experience with data science workloads on the Azure tech stack (Synapse, Azure ML, etc.).
Knowledge of anomaly detection, fraud detection, and other related areas.
Familiarity with security fundamentals and attack vectors.
Publications or contributions to the field of data science or cybersecurity.
Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or a related field and 5+ years of data science experience (e.g., managing structured and unstructured data, applying statistical techniques, and reporting results), OR a Master's degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or a related field.
Experience in programming languages such as Python, R, or Scala, with hands-on experience in data analysis, experimental design principles, and visualization.
Experience in translating complex data into actionable insights and recommendations that drive business impact.
Excellent technical design skills and a proven ability to drive large-scale system designs for complex projects or products.
Expertise in machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn).
In-depth knowledge of cybersecurity principles, threats, and attack vectors.
Experience with big data technologies (e.g., Hadoop, Spark, Kafka) and data processing.
Strong analytical and problem-solving skills with the ability to think creatively.
Excellent communication skills with the ability to explain complex concepts to non-technical stakeholders.

Preferred Qualifications
Experience in developing and deploying machine learning models for security applications.
Experience in Big Data, preferably in the cybersecurity domain.
Experience with data science workloads on the Azure tech stack (Synapse, Azure ML, etc.).
Knowledge of anomaly detection, fraud detection, and other related areas.
Familiarity with security fundamentals and attack vectors.
Publications or contributions to the field of data science or cybersecurity.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies