
6093 Scala Jobs - Page 25

JobPe aggregates results for easy access, but you apply directly on the job portal.

2.0 - 6.0 years

12 - 16 Lacs

Bengaluru

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Developed PySpark code for AWS Glue jobs and for EMR
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution
- Developed Python and PySpark programs for data analysis
- Good working experience using Python to develop a custom framework for generating rules (like a rules engine)
- Developed Hadoop streaming jobs using Python to integrate Python-API-supported applications
- Developed Python code to gather data from HBase and designed solutions implemented with PySpark
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations
- Rewrote some Hive queries in Spark SQL to reduce the overall batch time

Preferred technical and professional experience:
- Understanding of DevOps
- Experience building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala
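The "custom framework for generating rules (like a rules engine)" mentioned above is not specified further in the listing. As a rough, hypothetical illustration of the pattern such a framework typically follows (all names here are invented for the example, not taken from the posting), a minimal rules engine pairs each rule's condition with an action and applies every matching rule to a record:

```python
# Minimal, hypothetical rules-engine sketch: each rule pairs a predicate
# with an action; the engine applies every matching rule to a record.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # decides whether the rule fires
    action: Callable[[dict], dict]      # transforms the record when it fires

@dataclass
class RuleEngine:
    rules: list = field(default_factory=list)

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def apply(self, record: dict) -> dict:
        # Apply every matching rule in registration order.
        for rule in self.rules:
            if rule.condition(record):
                record = rule.action(record)
        return record

engine = RuleEngine()
engine.register(Rule(
    name="flag_high_value",
    condition=lambda r: r.get("amount", 0) > 1000,
    action=lambda r: {**r, "flagged": True},
))
result = engine.apply({"id": 1, "amount": 2500})  # record gets "flagged": True
```

In a Spark setting, such a framework would typically generate DataFrame filter/transform expressions rather than operate on single dicts, but the register-then-apply structure is the same.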

Posted 1 week ago

Apply

3.0 - 6.0 years

14 - 18 Lacs

Kochi

Work from Office

Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. 7+ years' total experience in Data Engineering projects and 4+ years of relevant experience with Azure technology services and Python.

Azure: Azure Data Factory, ADLS (Azure Data Lake Store), Azure Databricks
Mandatory programming languages: PySpark, PL/SQL, Spark SQL
Database: SQL DB

- Experience with Azure ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, serverless architecture, and ARM templates
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Experience with object-oriented/object-function scripting languages: Python, SQL, Scala, Spark SQL, etc.
- Data warehousing experience with strong domain knowledge

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Intuitive individual with an ability to manage change and proven time management
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed
- Up-to-date technical knowledge from attending educational workshops and reviewing publications

Preferred technical and professional experience:
- Experience with Azure ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, serverless architecture, and ARM templates
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Experience with object-oriented/object-function scripting languages: Python, SQL, Scala, Spark SQL, etc.

Posted 1 week ago

Apply

3.0 - 6.0 years

14 - 18 Lacs

Bengaluru

Work from Office

As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total experience 3-6 years (relevant 4-5 years)
- Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob
- Ability to use programming languages like Java, Python, Scala, etc. to build pipelines that extract and transform data from a repository to a data consumer
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
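The listing above asks for the ability to build pipelines that extract and transform data from a repository to a data consumer. As a schematic sketch only (not IBM's actual stack; the function names and sample data are invented for illustration), the three ETL stages can be shown in plain Python; in the stacks named above, the same stages would be Spark/Databricks jobs:

```python
# Schematic extract-transform-load pipeline in pure Python.
def extract(source: list) -> list:
    # "Extract": read raw records from a source (here, an in-memory list).
    return list(source)

def transform(records: list) -> list:
    # "Transform": cleanse and reshape; drop rows missing the key field,
    # normalize whitespace and casing on the rest.
    return [
        {**r, "name": r["name"].strip().title()}
        for r in records
        if r.get("name")
    ]

def load(records: list, sink: list) -> int:
    # "Load": write to the consumer (here, another list); return row count.
    sink.extend(records)
    return len(records)

source = [{"name": "  ada lovelace "}, {"name": None}, {"name": "alan turing"}]
warehouse = []
loaded = load(transform(extract(source)), warehouse)  # 2 rows survive cleansing
```

A federation tool would replace `extract` with queries against live sources; an orchestration layer would schedule the three calls as separate tasks.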

Posted 1 week ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Bengaluru

Work from Office

Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles for the data platform. Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation. Contribute to reusable component/asset/accelerator development to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems
- 10-15 years of experience in data engineering and architecting data platforms
- 5-8 years' experience architecting and implementing data platforms on the Azure cloud platform
- 5-8 years' experience architecting and implementing data platforms on-prem (Hadoop or DW appliance)
- Experience on Azure cloud is mandatory (ADLS Gen 1/Gen 2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks

Preferred technical and professional experience:
- Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
- Experience delivering both business decision support systems (reporting, analytics) and data science domains/use cases

Posted 1 week ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mysuru

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 5+ years' experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
- Developed Python and PySpark programs for data analysis
- Good working experience using Python to develop a custom framework for generating rules (like a rules engine)
- Developed Python code to gather data from HBase and designed solutions implemented with PySpark
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations

Preferred technical and professional experience:
- Understanding of DevOps
- Experience building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala

Posted 1 week ago

Apply

3.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Advanced programming skills in Python, Scala, and Go. Strong expertise in developing and maintaining microservices in Go (or other similar languages), with the ability to lead and mentor others in this area. Extensive exposure to developing big data applications, data engineering, ETL, and data analytics.
Cloud expertise: In-depth knowledge of IBM Cloud or similar cloud platforms, with a proven track record of deploying and managing cloud-native applications.
Leadership and collaboration: Ability to lead cross-functional teams, work closely with product owners, and drive platform enhancements while mentoring junior team members.
Security and compliance: Strong understanding of security best practices and compliance standards, with experience ensuring that platforms meet or exceed these requirements.
Analytical and problem-solving skills: Excellent problem-solving abilities with a proven track record of resolving complex issues in a multi-tenant environment.

Required education: Bachelor's Degree

Required technical and professional expertise:
- 4-7 years' experience primarily using Apache Spark, Kafka, and SQL, preferably in Data Engineering projects with a strong TDD approach
- Advanced programming skills in languages like Python, Java, and Scala, with proficiency in SQL
- Extensive exposure to developing big data applications, data engineering, ETL tools, and data analytics
- Exposure to data modelling, data quality, and data governance
- Extensive exposure to creating and maintaining data pipelines/workflows that move data from various sources into data warehouses or data lakes
- Cloud expertise: in-depth knowledge of IBM Cloud or similar cloud platforms, with a proven track record of developing, deploying, and managing cloud-native applications
- Good to have front-end development experience: React, Carbon, and Node for managing and improving user-facing portals
- Leadership and collaboration: ability to lead cross-functional teams, work closely with product owners, and drive platform enhancements while mentoring junior team members
- Security and compliance: strong understanding of security best practices and compliance standards, with experience ensuring that platforms meet or exceed these requirements
- Analytical and problem-solving skills: excellent problem-solving abilities with a proven track record of resolving complex issues in a multi-tenant environment

Preferred technical and professional experience:
- Hands-on experience with data analysis and querying using SQL, and considerable exposure to ETL processes
- Expertise in developing cloud applications with high-volume data processing
- Experience building scalable microservice components using various API development frameworks

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Data Engineer - Investment
Experience: 6-12 years
Location: Hyderabad
Primary skills: ETL, Informatica, SQL, Python, and the investment domain
Please share your resumes with jyothsna.g@technogenindia.com.

Job Description:
•7-9 years of experience with data analytics, data modeling, and database design.
•3+ years of coding and scripting (Python, Java, Scala) and design experience.
•3+ years of experience with the Spark framework.
•5+ years of experience with ELT methodologies and tools.
•5+ years' mastery in designing, developing, tuning, and troubleshooting SQL.
•Knowledge of Informatica PowerCenter and Informatica IDMC.
•Knowledge of distributed, column-oriented technology used to build high-performance databases such as Vertica and Snowflake.
•Strong data analysis skills for extracting insights from financial data.
•Proficiency in reporting tools (e.g., Power BI, Tableau).

The Ideal Qualifications
Technical skills:
•Domain knowledge of Investment Management operations, including security masters, securities trade and recon operations, reference data management, and pricing.
•Familiarity with regulatory requirements and compliance standards in the investment management industry.
•Experience with IBORs such as BlackRock Aladdin, CRD, Eagle STAR (ABOR), Eagle PACE, and Eagle DataMart.
•Familiarity with investment data platforms such as GoldenSource, FINBOURNE, NeoXam, RIMES, and JPM Fusion.
Soft skills:
•Strong analytical and problem-solving abilities.
•Exceptional communication and interpersonal skills.
•Ability to influence and motivate teams without direct authority.
•Excellent time management and organizational skills, with the ability to prioritize multiple initiatives.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

Purpose of the role
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math, and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.

Key tasks & accountabilities
- Identify valuable data sources and automate collection processes
- Undertake preprocessing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams

Qualifications, Experience, Skills
Level of educational attainment required: BSc/BA in Computer Science, Engineering, or a relevant field; a graduate degree in Data Science or another quantitative field is preferred.
Previous work experience required:
- Proven experience as a Data Scientist or Data Analyst
- Experience in data mining
- Understanding of machine learning and operations research
Technical skills required:
- Knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset
- Experience using business intelligence tools (e.g., Power BI) and data frameworks (e.g., Hadoop)
- Analytical mind and business acumen
- Strong math skills (e.g., statistics, algebra)
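One of the tasks listed above is combining models through ensemble modeling. As a minimal sketch of the simplest ensembling scheme, unweighted prediction averaging (the three "models" below are stand-in functions invented for the example, not trained estimators):

```python
# Minimal ensembling sketch: average the predictions of several models.
# In practice these would be trained regressors; here they are stand-ins
# that disagree slightly so the average is meaningful.
def model_a(x: float) -> float:
    return 2.0 * x

def model_b(x: float) -> float:
    return 2.0 * x + 1.0

def model_c(x: float) -> float:
    return 2.0 * x - 1.0

def ensemble_predict(models, x: float) -> float:
    # Simple (unweighted) average of member predictions; weighted
    # averaging, voting, and stacking are common refinements.
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

pred = ensemble_predict([model_a, model_b, model_c], 3.0)  # (6 + 7 + 5) / 3 = 6.0
```

Averaging reduces the variance of uncorrelated member errors, which is why ensembles often outperform any single member.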

Posted 1 week ago

Apply

4.0 - 6.0 years

20 - 30 Lacs

Gurugram

Work from Office

Key skills: Spark, Scala, Flink, Big Data, Structured Streaming, data architecture, data modeling, NoSQL, AWS, Azure, GCP, JVM tuning, performance optimization.

Roles & responsibilities:
- Design and build robust data architectures for large-scale data processing
- Develop and maintain data models and database designs
- Work on stream processing engines like Spark Structured Streaming and Flink
- Perform analytical processing on Big Data using Spark
- Administer, configure, monitor, and tune the performance of Spark workloads and distributed JVM-based systems
- Lead and support cloud deployments across AWS, Azure, or Google Cloud Platform
- Manage and deploy Big Data technologies such as business data lakes and NoSQL databases

Experience requirements:
- Extensive experience working with large data sets and Big Data technologies
- 4-6 years of hands-on experience in the Spark/Big Data tech stack
- At least 4 years of experience in Scala
- At least 2 years of experience in cloud deployment (AWS, Azure, or GCP)
- At least 2 successfully completed product deployments involving Big Data technologies

Education: B.Tech + M.Tech (dual degree), B.Tech.

Posted 1 week ago

Apply

8.0 - 13.0 years

0 - 1 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Primary skills – Hadoop with Scala, Apache Spark, and Hive
Secondary skills – SQL, PL/SQL, Python, and Unix shell scripting
Client interview – Yes
Total experience – 8-10 years
Notice period – Immediate joiner
Job location – Flexible; anywhere in India, with readiness to travel to Denmark (onsite)

Posted 1 week ago

Apply

8.0 years

0 Lacs

Greater Chennai Area

Remote

Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. At Workday, we value our candidates’ privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask for you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday. About The Team Workday Prism Analytics is a self-service analytics solution for Finance and Human Resources teams that allows companies to bring external data into Workday, combine it with existing people or financial data, and present it via Workday’s reporting framework. This gives the end user a comprehensive collection of insights that can be carried out in a flash. We design, build and maintain the data warehousing systems that underpin our Analytics products. 
We straddle both applications and systems; the ideal candidate for this role is someone who has a passion for solving hyper-scale engineering challenges to serve the largest companies on the planet.

About The Role
We are looking for a highly motivated Senior Software Development Engineering Manager to build and lead our team in the Chennai office. You’ll play a crucial role in growing the team and its skill set, and you will collaborate with stakeholders across the globe. This is a great opportunity to lead and contribute to a dynamic and critical platform in a fast-paced environment. You will be responsible for leading a team of engineers, ensuring the seamless operation of our Analytics products. A critical responsibility will be driving team onboarding and product-knowledge ramp-up, cooperating closely with our core teams in Dublin and Pleasanton. This role will require strong cross-team collaboration and communication to effectively bridge time-zone differences and ensure seamless workflow. You will lead by example, leveraging your deep knowledge of building world-class software. You will promote a diverse and inclusive environment where employees are happy, energized, engaged, and excited to come to work every day.

Responsibilities:
- Build and lead a multidisciplinary development team, drive them through technical challenges, and deliver high-quality solutions that power Analytics at scale
- Understand and promote industry-standard methodologies
- Coach and mentor team members to help them be at their best, assisting with career growth and personal development
- Foster an environment where communication, teamwork, and collaboration are rewarded
- Participate in our 12x7 on-call rotation supporting our applications in development and customer environments
- Energize your team and have fun engineering software!

About You
Basic qualifications: 8+ years of experience in a Software Engineering role (preferably using Java, Scala, or another similar language).
4+ years' proven experience leading and managing teams delivering software in an agile environment. Bachelor's degree in a computer-related field or equivalent work experience.

Other qualifications:
- Experience building highly available, scalable, reliable multi-tenanted big data applications on cloud (AWS, GCP) and/or data-center architectures
- Working knowledge of distributed-systems principles
- Experience managing big data frameworks like Spark and/or Hadoop
- Demonstrated track record of delivering performant, resilient solutions in a business-critical SaaS environment
- Solid understanding and practical experience with software engineering best practices (coding standards, code reviews, SCM, CI, build processes, testing, and operations)
- A strong focus on delivering high-quality software products and continuous innovation; you value test automation and performance engineering
- The interpersonal skills needed to positively influence important issues or decisions in a multi-functional environment
- The ability to communicate technical complexity in simple terms to both technical and non-technical audiences
- Experience supporting team members' career growth and development
- You put people first and ensure a psychologically safe environment for team members

Our Approach to Flexible Work
With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together.
Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!

Posted 1 week ago

Apply

7.0 - 9.0 years

7 - 17 Lacs

Coimbatore

Work from Office

Role Overview
As the Technical Lead, you will take ownership of our technology strategy, system architecture, and engineering execution. You'll wear multiple hats: from designing real-time trading systems and market data pipelines to mentoring engineers, managing infrastructure, and collaborating with founders on product direction. This role is ideal for someone who thrives in fast-paced, high-ownership environments, has full-stack and systems-level expertise, and can bridge product vision with scalable, maintainable tech execution in the financial technology domain.

Responsibilities:
Technical Leadership
- Own and evolve the full technology stack: backend, frontend, infrastructure, and DevOps
- Architect scalable, low-latency systems for real-time data ingestion, analytics, and strategy execution
- Conduct detailed code and architecture reviews to ensure quality, performance, and security
- Mentor and support engineers; lead by example with high engineering standards
Product & Business Alignment
- Collaborate with founders and business teams to translate product vision into technical milestones
- Align technical decisions with product, compliance, and trading goals
- Evaluate trade-offs between speed, cost, and long-term maintainability
Team & Culture
- Mentor and support engineers; lead by example with high engineering standards
- Promote a collaborative, ownership-driven engineering environment

Must-Have Skills:
- Languages: Expert in at least one of Scala, Ruby on Rails, Node.js, React.js, TypeScript
- Databases: Proficient with relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis, Cassandra, RocksDB) databases; experience with time-series databases (e.g., TimescaleDB) is a strong plus
- DevOps & Infra: Hands-on with Docker, Kubernetes, and CI/CD pipelines
- Cloud: Experience with AWS, DigitalOcean, or similar cloud platforms
- Architecture: Strong experience in designing microservices, event-driven systems, and database schemas
- Soft Skills: Excellent communication, PR/code reviews, and cross-functional leadership

Good-to-Have Skills:
- Start-up experience, or having played a founding/leadership engineering role
- Exposure to UI/UX thinking and frontend frameworks beyond React
- Knowledge of system design for scale, performance, and security
- Experience in mentoring and building diverse engineering teams

Why Join Us:
- Be part of a mission-driven fintech start-up at a transformative stage
- Take full ownership of the tech stack and influence product direction
- Collaborate with passionate founders and a lean, high-performance team
- Work on cutting-edge financial systems that impact real traders and investors

About Us:
Simply Algo Fintech was founded in 2019 by a team of visionary founders with a shared passion for building intelligent algorithmic trading platforms. What began as a bootstrapped venture has evolved into a cutting-edge fintech company empowering both retail and institutional traders through data-driven technology. We specialize in delivering seamless, user-friendly platforms that require no coding knowledge, making algorithmic trading accessible to everyone. Our solutions are designed to simplify strategy development, testing, and execution with speed and precision.
Website: https://www.simplyalgo.in/
LinkedIn: https://www.linkedin.com/company/simply-algo-fintech/?viewAsMember=true

Posted 1 week ago

Apply

65.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job descriptions may display in multiple languages based on your language selection. What We Offer At Magna, you can expect an engaging and dynamic environment where you can help to develop industry-leading automotive technologies. We invest in our employees, providing them with the support and resources they need to succeed. As a member of our global team, you can expect exciting, varied responsibilities as well as a wide range of development prospects. Because we believe that your career path should be as unique as you are. Group Summary Magna is more than one of the world’s largest suppliers in the automotive space. We are a mobility technology company built to innovate, with a global, entrepreneurial-minded team. With 65+ years of expertise, our ecosystem of interconnected products combined with our complete vehicle expertise uniquely positions us to advance mobility in an expanded transportation landscape. Job Responsibilities Magna New Mobility is seeking a Data Engineer to join our Software Platform team. As a Backend Developer with cloud experience, you will be responsible for designing, developing, and maintaining the server-side components of our applications. You will work closely with cross-functional teams to ensure our systems are scalable, reliable, and secure. Your expertise in cloud platforms will be crucial in optimizing our infrastructure and deploying solutions that leverage cloud-native features. Your Responsibilities Design & Development: Develop robust, scalable, and high-performance backend systems and APIs. Design and implement server-side logic and integrate with front-end components. Database Knowledge: Strong experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases, especially MongoDB. Proficient in SQL and handling medium to large-scale datasets using big data platforms like Databricks. 
Familiarity with Change Data Capture (CDC) concepts, and hands-on experience with modern data streaming and integration tools such as Debezium and Apache Kafka. Cloud Integration: Leverage cloud platforms (e.g., AWS, Azure, Google Cloud) to deploy, manage, and scale applications; implement cloud-based solutions for storage, computing, and networking. Security: Implement and maintain security best practices, including authentication, authorization, and data protection. Performance Optimization: Identify and resolve performance bottlenecks; monitor application performance and implement improvements as needed. Collaboration: Work with product managers, front-end developers, and other stakeholders to understand requirements and deliver solutions; participate in code reviews and contribute to team knowledge sharing. Troubleshooting: Diagnose and resolve issues related to backend systems and cloud infrastructure; provide support for production environments and ensure high availability. Who We Are Looking For: Bachelor's degree or equivalent experience in Computer Science or a relevant technical field. Knowledge of and experience with microservices architecture. 3+ years of experience in backend development with a strong focus on cloud technologies. Technical skills: proficiency in backend programming languages such as Go, Python, Node.js, C/C++, or Java; experience with any cloud platform (AWS, Azure, Google Cloud) and related services (e.g., EC2, Lambda, S3, CloudFormation); experience building scalable ETL pipelines on industry-standard ETL orchestration tools (Airflow, Dagster, Luigi, Google Cloud Composer, etc.) with deep expertise in SQL, PySpark, or Scala. Database knowledge: experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB); expertise in SQL and in using big data technologies (e.g., Hive, Presto, Spark, Iceberg, Flink, Databricks) on medium to large-scale data.
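The Change Data Capture concepts mentioned above can be illustrated with a deliberately simplified sketch: diffing two snapshots of a table (keyed by primary key) to emit insert/update/delete events, the same event categories a CDC pipeline publishes to Kafka. Note this is a conceptual illustration only; log-based tools such as Debezium read the database transaction log rather than comparing snapshots, and the function below is invented for the example:

```python
# Illustrative CDC idea: diff two snapshots of a table, keyed by primary
# key, and emit insert/update/delete change events.
def capture_changes(before: dict, after: dict) -> list:
    events = []
    for key, row in after.items():
        if key not in before:
            events.append({"op": "insert", "key": key, "row": row})
        elif before[key] != row:
            events.append({"op": "update", "key": key, "row": row})
    for key in before:
        if key not in after:
            events.append({"op": "delete", "key": key})
    return events

before = {1: {"qty": 5}, 2: {"qty": 1}}
after = {1: {"qty": 7}, 3: {"qty": 2}}
events = capture_changes(before, after)
# -> one update (key 1), one insert (key 3), one delete (key 2)
```

Log-based CDC avoids the full-table scan this diff implies, which is why it scales to the medium-to-large datasets the posting describes.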
DevOps: Familiarity with CI/CD pipelines, infrastructure as code (IaC), containerization (Docker), and orchestration tools (Kubernetes). Awareness, Unity, Empowerment At Magna, we believe that a diverse workforce is critical to our success. That’s why we are proud to be an equal opportunity employer. We hire on the basis of experience and qualifications, and in consideration of job requirements, regardless of, in particular, color, ancestry, religion, gender, origin, sexual orientation, age, citizenship, marital status, disability or gender identity. Magna takes the privacy of your personal information seriously. We discourage you from sending applications via email or traditional mail to comply with GDPR requirements and your local Data Privacy Law. Worker Type Regular / Permanent Group Magna Corporate

Posted 1 week ago

Apply

6.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

We are looking for a skilled Senior Oracle Data Engineer to join our team at Apps Associates (I) Pvt. Ltd, with 6-10 years of experience in the IT Services & Consulting industry. Roles and Responsibilities Design, develop, and implement data engineering solutions using Oracle technologies. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data pipelines and architectures. Ensure data quality, integrity, and security through data validation and testing procedures. Optimize data processing workflows for improved performance and efficiency. Troubleshoot and resolve complex technical issues related to data engineering projects. Job Requirements Strong knowledge of Oracle data engineering concepts and technologies. Experience with data modeling, design, and development. Proficiency in programming languages such as Java or Python. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills.

Posted 1 week ago

Apply

5.0 - 7.0 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Minimum 5 years of hands-on experience in Scala programming for data-intensive applications. Strong expertise in Apache Spark (RDD/DataFrame/Streaming) for large-scale data processing. Proficient in writing complex SQL queries.
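As context for the "complex SQL" requirement above, window functions are a typical example of what such roles test for. The sketch below runs one with Python's stdlib `sqlite3` on a made-up sales table; in the role itself this would more likely be Spark SQL, and the query requires SQLite 3.25+ (bundled with modern Python builds).

```python
# Illustrative only: a window-function query of the kind "complex SQL"
# requirements usually mean, run on a toy table via stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('north', 100), ('north', 300), ('south', 200), ('south', 50);
""")
# Rank each sale within its region by amount, highest first.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
print(rows)  # [('north', 300, 1), ('north', 100, 2), ('south', 200, 1), ('south', 50, 2)]
```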

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Nashik Area

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? Purpose of the role We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research. 
Key tasks & accountabilities Identify valuable data sources and automate collection processes Undertake preprocessing of structured and unstructured data Analyze large amounts of information to discover trends and patterns Build predictive models and machine-learning algorithms Combine models through ensemble modeling Present information using data visualization techniques Propose solutions and strategies to business challenges Collaborate with engineering and product development teams Qualifications, Experience, Skills Level of educational attainment required: BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred Previous work experience required: Proven experience as a Data Scientist or Data Analyst Experience in data mining Understanding of machine-learning and operations research Technical skills required: Knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset Experience using business intelligence tools (e.g. PowerBI) and data frameworks (e.g. Hadoop) Analytical mind and business acumen Strong math skills (e.g. statistics, algebra)
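The "combine models through ensemble modeling" task listed above can be illustrated with the simplest ensemble, prediction averaging. This is a sketch only: the toy lambda "models" below are hypothetical stand-ins for trained estimators, which in practice would come from a library such as scikit-learn.

```python
# A minimal sketch of ensemble modeling: averaging the predictions of
# several regression models. The "models" are plain functions standing
# in for trained estimators (hypothetical).

def ensemble_predict(models, x):
    """Average the predictions of several regression models."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Three toy models with different biases; averaging cancels them out.
models = [lambda x: 2 * x, lambda x: 2 * x + 1, lambda x: 2 * x - 1]
print(ensemble_predict(models, 5))  # 10.0
```

Averaging is only one combination strategy; weighted voting, stacking, and boosting are common alternatives the same interface could express.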

Posted 1 week ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent Job Description: We are seeking a talented and motivated Data Engineer with requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment. Key Responsibilities: Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. 
Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability. Qualifications: Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline. Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements. Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, Quicksight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.) 
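As a schematic of the extract/transform/load pattern and data-quality checks this posting describes, here is a minimal pure-Python sketch. Real pipelines would run on Glue, Airflow, or similar orchestration; the source records and the validation rule are invented for illustration.

```python
# A stripped-down ETL sketch with a data-quality check between stages.
# Records and validation rule are hypothetical.

def extract():
    """Stand-in for reading from a source system (S3, an API, a DB...)."""
    return [{"id": "1", "amount": "10.5"}, {"id": "2", "amount": "bad"},
            {"id": "3", "amount": "4.0"}]

def transform(records):
    """Cast fields to proper types, dropping rows that fail validation."""
    clean = []
    for r in records:
        try:
            clean.append({"id": int(r["id"]), "amount": float(r["amount"])})
        except ValueError:
            pass  # in practice: route to a dead-letter queue and log
    return clean

def load(records, target):
    """Stand-in for writing to a warehouse such as Redshift."""
    target.extend(records)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)  # [{'id': 1, 'amount': 10.5}, {'id': 3, 'amount': 4.0}]
```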
Job Type: Permanent Pay: ₹3,000,000.00 - ₹3,800,000.00 per year Benefits: Work from home Schedule: Day shift Monday to Friday Experience: Data Engineering: 5 years (Required) AWS Elastic MapReduce (EMR): 3 years (Required) AWS: 3 years (Required) Work Location: In person

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Responsibilities Design, develop, and implement robust Big Data solutions using technologies such as Hadoop, Spark, and NoSQL databases Build and maintain scalable data pipelines for effective data ingestion, transformation, and analysis Collaborate with data scientists, analysts, and cross-functional teams to understand business requirements and translate them into technical solutions Ensure data quality and integrity through effective validation, monitoring, and troubleshooting techniques Optimize data processing workflows for maximum performance and efficiency Stay up-to-date with evolving Big Data technologies and methodologies to enhance existing systems Implement best practices for data governance, security, and compliance Document technical designs, processes, and procedures to support knowledge sharing across teams Requirements Bachelor's or Master's degree in Computer Science, Engineering, or a related field 4+ years of experience as a Big Data Engineer or in a similar role Strong proficiency in Big Data technologies (Hadoop, Spark, Hive, Pig) and frameworks Extensive experience with programming languages such as Python, Scala, or Java Knowledge of data modeling and data warehousing concepts Familiarity with NoSQL databases like Cassandra or MongoDB Proficient in SQL for data querying and analysis Strong analytical and problem-solving skills Excellent communication and collaboration abilities Ability to work independently and effectively in a fast-paced environment Benefits Competitive salary and benefits package. Opportunity to work on cutting-edge technologies and solve complex challenges. Dynamic and collaborative work environment with opportunities for growth and career advancement. Regular training and professional development opportunities.

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Posted on 23.07.2025 What’s important to us: Techversant is seeking experienced Data Scientist Engineers who will be responsible for developing and driving new business opportunities internationally. The incumbent will be responsible for discovering sales opportunities and creating qualified leads. Job Description Key Responsibilities Data mining or extracting usable data from valuable data sources Using machine learning tools to select features, create and optimize classifiers Carrying out the preprocessing of structured and unstructured data Enhancing data collection procedures to include all relevant information for developing analytic systems Processing, cleansing, and validating the integrity of data to be used for analysis Analyzing large amounts of information to find patterns and solutions Developing prediction systems and machine learning algorithms Presenting results in a clear manner Propose solutions and strategies to tackle business challenges Collaborate with Business and IT teams Required Skills Programming Skills – knowledge of statistical programming languages like R, Python, and database query languages like SQL, Hive, Pig is desirable. Familiarity with Scala, Java, or C++ is an added advantage. Statistics – Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, etc. Proficiency in statistics is essential for data-driven companies. Machine Learning – good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, Decision Forests. Strong Math Skills (Multivariable Calculus and Linear Algebra) – understanding the fundamentals of Multivariable Calculus and Linear Algebra is important as they form the basis of a lot of predictive performance or algorithm optimization techniques. Data Wrangling – proficiency in handling imperfections in data is an important aspect of a data scientist job description. 
Experience with Data Visualization Tools like matplotlib, ggplot, d3.js, and Tableau that help to visually encode data. Excellent Communication Skills – it is incredibly important to describe findings to a technical and non-technical audience. Strong Software Engineering Background. Hands-on experience with data science tools. Problem-solving aptitude. Analytical mind and great business sense. Degree in Computer Science, Engineering or relevant field is preferred. Proven experience as a Data Analyst or Data Scientist. What Company Offers: Excellent career growth opportunities and exposure to multiple technologies. Fixed weekday day schedule, meaning, you’ll have your weekends off! Family Medical Insurance. Unique leave benefits and encashment options based on performance. Long term growth opportunities. Fun family environment surrounded by experienced developers. Various internal employee rewards programs based on performance. Opportunities for various other bonus programs – for training hours taken, certifications, special value to business through idea and innovation. Work-life balance – flexible work timings, early out Fridays, various social and cultural activities etc. Company Sponsored International Tours. Email to careers@techversantinfotech.com
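One of the machine-learning methods the posting above names, k-Nearest Neighbors, is simple enough to illustrate from scratch. This is a teaching sketch with made-up data points; production work would use a library such as scikit-learn.

```python
# An illustrative, from-scratch k-Nearest Neighbors classifier:
# majority vote among the k training points closest to the query.
# Training points are hypothetical 2-D examples.
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Return the majority label among the k nearest training points."""
    by_dist = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b"), ((1, 0), "a")]
print(knn_classify(train, (0.5, 0.5)))  # 'a'
```

The same skeleton extends to the other methods listed (Naive Bayes, SVM, decision forests) only in spirit; each needs its own training procedure.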

Posted 1 week ago

Apply

5.0 years

7 - 8 Lacs

Hyderābād

On-site

Full-time Employee Status: Regular Role Type: Hybrid Department: Product Development Schedule: Full Time Company Description Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com. Job Description Design, develop, and maintain high-quality software solutions. Collaborate with cross-functional teams to define, design, and ship new features. Strong programming knowledge, including design patterns and debugging, in Java or Scala. Design and implement data engineering frameworks on HDFS, Spark and EMR. Implement and manage Kafka Streaming and containerized microservices. Work with RDBMS (Aurora MySQL) and No-SQL (Cassandra) databases. Utilize AWS Cloud services such as S3, EFS, MSK, ECS, EMR, etc. Ensure the performance, quality, and responsiveness of applications. Troubleshoot and resolve software defects and issues. Write clean, maintainable, and efficient code. Participate in code reviews and contribute to team knowledge sharing.
You will be reporting to a Senior Manager. This hybrid role requires working from our Hyderabad office two days a week. Qualifications: Engineer with 5+ years of experience and strong hands-on coding skills, preferably in Scala and Java. Experience with Data Engineering – Big Data, EMR, Airflow, Spark, Athena. AWS Cloud experience – S3, EFS, MSK, ECS, EMR, etc. Experience with Kafka Streaming and containerized microservices. Knowledge and experience with RDBMS (Aurora MySQL) and No-SQL (Cassandra) databases. Additional Information Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters; DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people first approach is award winning; Great Place To Work™ in 24 countries, FORTUNE Best Companies to work and Glassdoor Best Places to Work (globally 4.4 Stars) to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together Benefits Experian care for employee's work life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off. #LI-Onsite Experian Careers - Creating a better tomorrow together

Posted 1 week ago

Apply

10.0 years

8 - 10 Lacs

Hyderābād

On-site

Full-time Employee Status: Regular Role Type: Hybrid Department: Product Development Schedule: Full Time Company Description Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com. Job Description Job description The ESS Analytics team is accelerating Experian's impact by bringing together data, tech, and data science and build game-changing products and services for our customers. We are looking for Senior Staff Engineer to help develop new initiatives and products to serve the needs of Experian's clients, including those in the financial services. Our team is solving key questions, like - How can we increase the rigor and scalability of our loan originations modeling approach? How can we tackle important issues in the lending space today like bias, fairness and explainable AI? As a growing team, we embrace a start-up mentality while operating in a large organization. We value speed, agility, and impact – and our results and ways of working are transforming the culture of the larger organizations around us. Role accountabilities and key activities Takes on a leading role within the development teams and tackling complex assignments. 
Provides substantial technical expertise in the end-to-end development cycle and solves complex problems. Provides coaching and guidance to junior team members. Executes on technical and business strategies and ensures functional goals are met. Supports products at a holistic level, understanding how all the various pieces fit together. You will: Support complex software development projects by playing a critical role in planning, systems design and mentoring junior developers. Innovate and architect solutions for intricate technical problems or system improvements. Define and execute the technical strategy for product development, including technology selection and innovation initiatives. Design and build full stack components of our analytics product platforms on AWS. Dive deeply into the platform, infrastructure, and applications to solve problems efficiently. Collaborate with geographically distributed cross-functional teams to expand the value of Analytics offerings. Enhance the product to reduce the overall cost footprint while maximizing scalability and stability. Knowledge, Skills, and Experience High-level knowledge of data engineering, analytics, and ML, with the ability to dive deep when necessary. Experience building or supporting internal data engineering, analytics and MLOps platforms. Experience with distributed data processing frameworks like Spark. Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS, including infrastructure as code (Terraform, Helm). Experience with Docker, Kubernetes, CI/CD pipelines, and observability tools. Strong hands-on experience with Scala, Java & Python. Qualifications 10+ years of experience with object-oriented programming and asynchronous programming. High-level knowledge of data engineering, analytics, and ML, with the ability to dive deep when necessary. Experience building or supporting internal data engineering, analytics and MLOps platforms.
Experience with distributed data processing frameworks like Spark Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS, including infrastructure as code (Terraform, Helm). Strong hands-on experience with Scala, Java & Python Additional Information Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters; DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people first approach is award-winning; World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024 to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together

Posted 1 week ago

Apply

4.0 - 8.0 years

12 - 16 Lacs

Pune

Work from Office

Job Description We are seeking a highly skilled and experienced Data Engineering professional for our data engineering team. The ideal candidate will have extensive hands-on experience with the Microsoft Azure technology stack, including Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics, and other related services. This role requires a strong focus on data management, data engineering, and governance, ensuring the delivery of high-quality data solutions to support business objectives. Key Responsibilities: Technical Oversight & Delivery: Provide technical guidance and support to team members, promoting best practices and innovative solutions. Oversee the planning, execution, and delivery of data engineering projects, ensuring alignment with business goals and timelines. Data Engineering: Design, develop, and maintain scalable and robust data pipelines using Azure Data Factory, Azure Databricks, and other Azure services. Implement ETL/ELT processes to ingest, transform, and load data from various sources into data lakes and data warehouses (specific sources include Excel, SAP HANA, APIs, and SQL Server). Optimize data workflows for performance, scalability, and reliability. Data Management: Ensure data quality, integrity, and consistency across all data platforms. Manage data storage, retrieval, and archiving solutions, leveraging Azure Blob Storage, Azure Data Lake, and Azure SQL Database. Develop and enforce data management policies and procedures. Data Governance: Establish and maintain data governance frameworks, including data cataloging, lineage, and metadata management. Implement data security and privacy measures, ensuring compliance with relevant regulations and industry standards. Monitor and audit data usage, access controls, and data protection practices.
Collaboration & Communication: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions. Communicate complex technical concepts to non-technical stakeholders, ensuring transparency and alignment. Provide regular updates and reports on data engineering activities, progress, and challenges. Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field. Strong hands-on experience with the Microsoft Azure technology stack, including but not limited to: Azure Data Factory Azure Databricks Azure SQL Database Azure Synapse Analytics Azure Data Lake Storage Proficiency in programming languages such as SQL, Python, and Scala. Experience with data modeling, ETL/ELT processes, Medallion Architecture, and data warehousing solutions. Solid understanding of data governance principles, data quality management, and data security best practices. Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment. Strong communication, leadership, and project management skills. Preferred Qualifications: Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert. Experience with other data platforms and tools such as Power BI, Azure Machine Learning, and Azure DevOps. Familiarity with big data technologies and frameworks like Hadoop and Spark.
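The Medallion Architecture this posting lists refers to layering data as bronze (raw), silver (validated and typed), and gold (business-level aggregates). A schematic sketch follows; in Azure this would typically run in Databricks over Data Lake Storage, and the records here are hypothetical.

```python
# Schematic Medallion Architecture: bronze (raw) -> silver (validated)
# -> gold (aggregated). Plain Python stands in for Databricks/Spark.

bronze = [{"dept": "ops", "cost": "100"}, {"dept": "ops", "cost": "x"},
          {"dept": "it", "cost": "250"}]

# Silver layer: enforce types, drop rows that fail validation.
silver = []
for r in bronze:
    try:
        silver.append({"dept": r["dept"], "cost": float(r["cost"])})
    except ValueError:
        pass  # real pipelines would quarantine and audit bad rows

# Gold layer: business-level aggregate (total cost per department).
gold = {}
for r in silver:
    gold[r["dept"]] = gold.get(r["dept"], 0.0) + r["cost"]

print(gold)  # {'ops': 100.0, 'it': 250.0}
```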

Posted 1 week ago

Apply

5.0 - 8.0 years

12 - 18 Lacs

Pune

Work from Office

Job Description We are seeking a highly skilled and experienced Data Engineering professional for our data engineering team. The ideal candidate will have extensive hands-on experience with the Microsoft Azure technology stack, including Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics, and other related services. This role requires a strong focus on data management, data engineering, and governance, ensuring the delivery of high-quality data solutions to support business objectives. Key Responsibilities: Technical Oversight & Delivery: Provide technical guidance and support to team members, promoting best practices and innovative solutions. Oversee the planning, execution, and delivery of data engineering projects, ensuring alignment with business goals and timelines. Data Engineering: Design, develop, and maintain scalable and robust data pipelines using Azure Data Factory, Azure Databricks, and other Azure services. Implement ETL/ELT processes to ingest, transform, and load data from various sources into data lakes and data warehouses (specific sources include Excel, SAP HANA, APIs, and SQL Server). Optimize data workflows for performance, scalability, and reliability. Data Management: Ensure data quality, integrity, and consistency across all data platforms. Manage data storage, retrieval, and archiving solutions, leveraging Azure Blob Storage, Azure Data Lake, and Azure SQL Database. Develop and enforce data management policies and procedures. Data Governance: Establish and maintain data governance frameworks, including data cataloging, lineage, and metadata management. Implement data security and privacy measures, ensuring compliance with relevant regulations and industry standards. Monitor and audit data usage, access controls, and data protection practices.
Collaboration & Communication: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions. Communicate complex technical concepts to non-technical stakeholders, ensuring transparency and alignment. Provide regular updates and reports on data engineering activities, progress, and challenges. Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field. More than 5 years of strong hands-on experience with the Microsoft Azure technology stack, including but not limited to: Azure Data Factory Azure Databricks Azure SQL Database Azure Synapse Analytics Azure Data Lake Storage Proficiency in programming languages such as SQL, Python, and Scala. Experience with data modeling, ETL/ELT processes, Medallion Architecture, and data warehousing solutions. Solid understanding of data governance principles, data quality management, and data security best practices. Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment. Strong communication, leadership, and project management skills. Preferred Qualifications: Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert. Experience with other data platforms and tools such as Power BI, Azure Machine Learning, and Azure DevOps. Familiarity with big data technologies and frameworks like Hadoop and Spark.

Posted 1 week ago

Apply

3.0 years

1 - 6 Lacs

Hyderābād

Remote

Software Engineer II (Data) Hyderabad, Telangana, India Date posted Jul 24, 2025 Job number 1850170 Work site Up to 50% work from home Travel 0-25 % Role type Individual Contributor Profession Software Engineering Discipline Software Engineering Employment type Full-Time Overview Microsoft is a company where passionate innovators come to collaborate, envision what can be and take their careers to levels they cannot achieve anywhere else. This is a world of more possibilities, more innovation, more openness in a cloud-enabled world. The Business & Industry Copilots group is a rapidly growing organization that is responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solution and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications and this organization is at the heart of how business applications are designed and delivered. This is an exciting time to join our BIC Customer Experience group and work on something highly strategic to Microsoft. The goal of Customer Zero Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to drive AI transformation across the Marketing, Sales, Services and Support organizations within Microsoft. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value from Azure & Power Platform, we ensure our solutions are robust and efficient. Our organization’s implementation acts as reference architecture for large companies and helps drive product capabilities. If the opportunity to collaborate with a diverse engineering team, on enabling end-to-end business scenarios using cutting-edge technologies and to solve challenging problems for large scale 24x7 business SaaS applications excites you, please come and talk to us!
We are looking for talented and motivated data engineers interested in helping our organization empower learners through producing valuable data that can be used to understand the organization's needs to make the right decisions. We want you for your passion for technology, your curiosity and willingness to learn, your ability to communicate well in a team environment, your desire to make our team better with your contributions, and your ability to deliver. We use industry-standard technology: C#, JavaScript/Typescript, HTML5, ETL/ELT, Data warehousing, and/or Business Intelligence Development. Qualifications Required: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. 3+ years of experience in business analytics, software development, data modeling or data engineering work. Software development using languages like C#, JavaScript or Java. Experience using a variety of data stores, including data warehouses, RDBMS, in-memory caches, and document databases. Proficiency with SQL and NoSQL and hands-on experience using distributed computing platforms. Experience developing on cloud platforms (e.g., Azure, AWS) in a continuous delivery environment. Strong problem solving, design, implementation, and communication skills. Strong intellectual curiosity and passion for learning new technologies. Preferred Qualifications: Experience with data engineering projects with a firm sense of accountability and ownership. Experience in ETL/ELT, Data warehousing, data pipelines and/or Business Intelligence Development. Experience using ML, anomaly detection, predictive analysis, exploratory data analysis. A strong understanding of the value of Data, data exploration and the benefits of a data-driven organizational culture. Business Intelligence experience or visualization with tools such as Power BI is also beneficial. Experience implementing data systems in C#/Python/Scala or similar.
- Working knowledge of any (or multiple) of the following tech stacks is a plus: SQL, Databricks, PySparkSQL, Azure Synapse, Azure Data Factory, Azure Fabric, or similar.
- Basic knowledge of the Microsoft Dynamics Platform is an added advantage.

#BICJobs

Responsibilities
- Implement scalable data models, data pipelines, data storage, management, and transformation solutions for real-time decisioning, reporting, data collection, and related functions.
- Leverage machine learning (ML) knowledge to implement appropriate solutions for business objectives.
- Ship high-quality, well-tested, secure, and maintainable code.
- Develop and maintain software designed to improve data governance and security.
- Troubleshoot and resolve issues related to data processing and storage.
- Collaborate effectively with teammates, other teams, and other disciplines, and drive improvements in engineering.
- Create and implement code for a product, service, or feature, reusing code as applicable.
- Contribute to efforts to break down larger work items into smaller work items, and provide estimation.
- Troubleshoot live-site issues as part of both product development and Designated Responsible Individual (DRI) duty during live-site rotations.
- Remain current in skills by investing time and effort into staying abreast of the latest technologies.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
- Industry-leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer.
All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
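The data-engineering responsibilities in the listing above center on pipelines that extract, transform, and load data. Purely as an illustration of that pattern (this is not Microsoft code; the sample records and function names are invented), a minimal ETL step might look like:

```python
# Toy extract-transform-load (ETL) step of the kind the role describes.
# All names and data here are hypothetical, for illustration only.
from collections import Counter

def extract():
    # Stand-in for reading from a data store (warehouse, RDBMS, document DB).
    return [
        {"user": "a", "event": "Click "},
        {"user": "b", "event": "view"},
        {"user": "a", "event": "click"},
        {"user": "c", "event": None},  # dirty record, dropped in transform
    ]

def transform(records):
    # Normalize the event field and filter out records with missing values.
    return [
        {"user": r["user"], "event": r["event"].strip().lower()}
        for r in records
        if r.get("event")
    ]

def load(records):
    # Stand-in for writing to a reporting table: aggregate events by type.
    return Counter(r["event"] for r in records)

if __name__ == "__main__":
    print(load(transform(extract())))  # Counter({'click': 2, 'view': 1})
```

In a production setting the same shape would typically run on a distributed engine (e.g., Spark via PySparkSQL, as the listing mentions) rather than in-memory Python.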

Posted 1 week ago


Experience: 0 years
Salary: 0 Lacs
Location: Mohali
Work mode: On-site

Job Title: Software Engineer (SDE) Intern
Company: DataTroops
Location: Mohali (Punjab) - Work From Office
Shift: Day shift (as per client decision)

About the Role:
We are looking for a highly motivated Software Development Engineer (SDE) Intern to join our dynamic team. As an intern, you will have the opportunity to work on challenging projects, solve real-world problems, and gain exposure to full-stack development. You will work closely with experienced engineers and learn best practices in software development while honing your problem-solving and technical skills.

Key Responsibilities:
- Collaborate with cross-functional teams to design, develop, test, and maintain end-to-end web applications.
- Develop scalable and maintainable backend services using Scala, JavaScript, or other languages as required.
- Build responsive, user-friendly frontend interfaces using modern frameworks and ensure a seamless user experience.
- Write clean, efficient, and well-documented code across the stack.
- Participate in code reviews, ensuring best practices in software development and architecture.
- Design and implement automated unit and integration tests to ensure code quality and reliability.
- Contribute to CI/CD pipelines and assist in the deployment, monitoring, and debugging of applications in development and production environments.
- Optimize systems for performance, security, and scalability.
- Work closely with QA to identify bugs and implement fixes proactively.
- Ensure high availability and system reliability through DevOps practices and monitoring tools.
- Stay updated with emerging trends in full-stack development, cloud platforms, and infrastructure automation.
- Demonstrate strong collaboration and communication skills to work efficiently across engineering, QA, and DevOps teams.

Required Skills:
- Strong problem-solving skills with a deep understanding of data structures and algorithms.
- Proficiency in one or more programming languages: Java, C, C++, Python.
- Exposure to, or willingness to learn, technologies like React, Node.js, Play Framework, MongoDB, PostgreSQL, etc.
- Familiarity with core computer science concepts: operating systems, databases, networking, etc.
- Basic understanding of, or interest in, cloud services, CI/CD, and containerization tools (e.g., Docker, GitHub Actions).
- A self-starter with a passion for learning new technologies and working in fast-paced, dynamic environments.
- Strong communication, collaboration, and a team-player attitude.

NOTE: Only BCA (2025 pass-out) or B.Tech (2026 pass-out) candidates can apply.

Compensation: The salary for this internship position will be determined based on the candidate's experience, skills, and performance during the interview process.

How to Apply: If you're ready to take on new challenges and grow with us, send your resume to hr@datatroops.io

Note: Only candidates based in the Tricity area or willing to relocate to Mohali will be considered for this role.

Job Types: Full-time, Fresher, Internship
Pay: ₹1.00 per hour
Schedule: Monday to Friday; weekend availability
Work Location: In person
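One responsibility the listing above names is designing automated unit tests. Purely as an illustration of that practice (the `slugify` helper and its test cases are invented, not DataTroops code), a minimal unit test might look like:

```python
# A small, automated unit test of the kind the internship describes.
# Both the function under test and the cases are hypothetical examples.
import unittest

def slugify(title: str) -> str:
    """Turn a page title into a URL-friendly, lowercase, hyphenated slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Software Engineer Intern"),
                         "software-engineer-intern")

    def test_collapses_extra_whitespace(self):
        # split() with no argument drops leading/trailing and repeated spaces.
        self.assertEqual(slugify("  Data   Troops "), "data-troops")

if __name__ == "__main__":
    unittest.main()
```

Tests like these typically run on every commit as part of the CI/CD pipeline the role also mentions.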

Posted 1 week ago
