3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description

Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines.

As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and be able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and comfortable working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language, such as Java, C++, Spark/Scala, or Python.

Major Responsibilities
- Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business
- Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines on distributed data processing platforms
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
- Keep up to date with big data technologies; evaluate and make decisions around the use of new or existing software products to design the data architecture
- Design, build, and own all the components of a high-volume data warehouse end to end
- Provide end-to-end data engineering support for project lifecycle execution (design, execution, and risk assessment)
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
- Own the functional and nonfunctional scaling of software systems in your ownership area
- Implement big data solutions for distributed computing

Key job responsibilities

As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, drive the database design, and spearhead the best practices that deliver high-quality products.

About The Team

Profit Intelligence systems measure and predict the true profit (or loss) of each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what, we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done.
This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI MAA 12 SEZ
Job ID: A3006789
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from TCS! TCS is hiring for Big Data (PySpark & Scala).

Location: Chennai
Desired Experience Range: 5+ Years

Must-Have
• PySpark
• Hive

Good-to-Have
• Spark
• HBase
• DQ tool
• Agile Scrum experience
• Exposure to data ingestion from disparate sources onto a Big Data platform

Thanks,
Anshika
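As a hedged illustration of the PySpark-plus-Hive work this role describes, the sketch below ingests a raw CSV extract into a partitioned Hive table. The file path, table name, and column names are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive-enabled session; assumes the cluster's Hive metastore is configured.
spark = (
    SparkSession.builder
    .appName("raw-orders-ingest")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical landing-zone path, read schema-on-read.
raw = spark.read.option("header", True).csv("/data/landing/orders.csv")

# Basic cleanup: normalize column names, drop rows missing the key.
clean = (
    raw.toDF(*[c.strip().lower() for c in raw.columns])
       .dropna(subset=["order_id"])
       .withColumn("ingest_date", F.current_date())
)

# Append into a date-partitioned Hive table (hypothetical name).
clean.write.mode("append").partitionBy("ingest_date") \
     .saveAsTable("analytics.orders_raw")
```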
Posted 1 week ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Overview

Working at Atlassian: Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, as part of being a distributed-first company.

Responsibilities

Your future team: To become a 100-year company, we need a world-class engineering organisation of empowered teams with the tools and infrastructure to do the best work of their careers. As part of a unified R&D team, Engineering is prioritising key initiatives which support our customers as they increase their adoption of Atlassian Cloud products and services while continuing to support their current needs at extreme enterprise scale. We're looking for people who want to write the future and who believe that we can accomplish so much more together. You will report to one of the Engineering Managers of the R&D teams.

What You'll Do
- Build and ship features and capabilities daily in a highly scalable, cross-geo distributed environment
- Be part of an amazing open and collaborative work environment with other experienced engineers, architects, product managers, and designers
- Review code with best practices of readability, testing patterns, documentation, reliability, security, and performance considerations in mind
- Mentor and level up the skills of your teammates by sharing your expertise in formal and informal knowledge-sharing sessions
- Ensure full visibility, error reporting, and monitoring of high-performing backend services
- Participate in Agile software development, including daily stand-ups, sprint planning, team retrospectives, and show-and-tell demo sessions

Your background
- 4+ years of experience building and developing backend applications
- Bachelor's or Master's degree, with a preference for a Computer Science degree
- Experience crafting and implementing highly scalable and performant RESTful micro-services
- Proficiency in any modern object-oriented programming language (e.g., Java, Kotlin, Go, Scala, Python, etc.)
- Fluency in any one database technology (e.g., RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra)
- Strong understanding of CI/CD reliability principles, including test strategy, security, and performance benchmarking
- Real passion for collaboration and strong interpersonal and communication skills
- Broad knowledge and understanding of the SaaS, PaaS, and IaaS industry, with hands-on experience of public cloud offerings (AWS, GAE, Azure)
- Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality

Benefits & Perks

Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit go.atlassian.com/perksandbenefits.

About Atlassian

At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success.
To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you with the best experience, we can support you with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh.
Posted 1 week ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description

Business Data Technologies (BDT) makes it easier for teams across Amazon to produce, store, catalog, secure, move, and analyze data at massive scale. Our managed solutions combine standard AWS tooling, open-source products, and custom services to free teams from worrying about the complexities of operating at Amazon scale. This lets BDT customers move beyond the engineering and operational burden associated with managing and scaling platforms, and instead focus on scaling the value they can glean from their data, both for their customers and their teams.

We own one of the largest data lakes at Amazon, where thousands of Amazon teams can search, share, and store exabytes (EB) of data in a secure and seamless way; using our solutions, teams around the world can schedule and process millions of workloads on a daily basis. We provide enterprise solutions that focus on compliance, security, integrity, and cost efficiency of operating and managing EBs of Amazon data.

Key job responsibilities

Core Responsibilities
- Be hands-on with ETL to build data pipelines to support automated reporting
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift
- Model data and metadata for ad-hoc and pre-built reporting
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
- Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Participate in strategic and tactical planning discussions

A day in the life

As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations, and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions. Some of the key activities include:
- Crafting the data flow: design and build data pipelines, the backbone of our data ecosystem. Ensure the integrity of the data journey by implementing robust data quality checks and monitoring processes.
- Architecting for insights: translate complex business requirements into efficient data models that optimize data analysis and reporting. Automate data processing tasks to streamline workflows and improve efficiency.
- Becoming a data detective: ensure data availability and performance.

Basic Qualifications
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with any ETL tool (e.g., Informatica, ODI, SSIS, BODI, Datastage)
- Knowledge of cloud services such as AWS or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3006419
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana
On-site
General Information

Locations: Hyderabad, Telangana, India
Role ID: 209546
Worker Type: Regular Employee
Studio/Department: CTO - EA Digital Platform
Work Model: Hybrid

Description & Requirements

Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.

Software Engineer II - AI/ML Engineer

The EA Digital Platform (EADP) group is the core powering the global EA ecosystem. We provide the foundation for all of EA’s incredible games and player experiences with high-level platforms like Cloud, Commerce, Data and AI, Gameplay Services, Identity, and Social. By providing reusable capabilities that game teams can easily integrate into their work, we let them focus on making some of the best games in the world and creating meaningful relationships with our players. We’re behind the curtain, making it all work together. Come power the future of play with us.

The Challenge Ahead: We are looking for developers who want to work on a large-scale distributed data system that empowers EA Games to personalize player experience and engagement.

Responsibilities
- You will help with designing, implementing, and optimizing the infrastructure for the AI model training and deployment platform
- You will help with integrating AI capabilities into existing software systems and applications
- You will develop tools and systems to monitor the performance of the platform in real time, analyzing key metrics and proactively identifying and addressing any issues or opportunities for improvement
- You will participate in code reviews to maintain code quality and ensure best practices
- You will help with feature and operational enhancements for the platform under senior guidance
- You will help with improving the stability and observability of the platform

Qualifications
- Bachelor's degree or foreign degree equivalent in Computer Science, Electrical Engineering, or a related field
- 3+ years of experience with software development and model development
- Experience with a programming language such as Go, Java, or Scala
- Experience with scripting languages such as bash, awk, and Python
- Experience with Scikit-Learn, Pandas, and Matplotlib
- 3+ years of experience with Deep Learning frameworks like PyTorch, TensorFlow, and CUDA
- Hands-on experience with any ML platform (SageMaker, Azure ML, GCP Vertex AI)
- Experience with cloud services and modern data technologies
- Experience with data streaming and processing systems

About Electronic Arts

We’re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth. We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. We nurture environments where our teams can always bring their best to what they do. Electronic Arts is an equal opportunity employer.
All employment decisions are made without regard to race, color, national origin, ancestry, sex, gender, gender identity or expression, sexual orientation, age, genetic information, religion, disability, medical condition, pregnancy, marital status, family status, veteran status, or any other characteristic protected by law. We will also consider for employment qualified applicants with criminal records in accordance with applicable law. EA also makes workplace accommodations for qualified individuals with disabilities as required by applicable law.
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

The Prime Data Engineering & Analytics (PDEA) team is seeking to hire passionate Data Engineers to build and manage the central petabyte-scale data infrastructure supporting worldwide Prime business operations.

At Amazon Prime, understanding customer data is paramount to our success in providing customers with relevant and enticing benefits such as fast free shipping, instant videos, streaming music, and free Kindle books in the US and international markets. At Amazon you will be working in one of the world's largest and most complex data environments. You will be part of a team that works with the marketing, retail, finance, analytics, machine learning, and technology teams to provide real-time data processing solutions that give Amazon leadership, marketers, and PMs timely, flexible, and structured access to customer insights. The team will be responsible for building this platform end to end using the latest AWS technologies and software development principles.

As a Data Engineer, you will be responsible for leading the architecture, design, and development of the data, metrics, and reporting platform for Prime. You will architect and implement new and automated Business Intelligence solutions, including big data and new analytical capabilities that support our Development Engineers, Analysts, and Retail business stakeholders with timely, actionable data, metrics, and reports, while satisfying scalability, reliability, accuracy, performance, and budget goals and driving automation and operational efficiencies. You will partner with business leaders to drive strategy and prioritize projects and feature sets. You will also write and review business cases and drive the development process from design to release. In addition, you will provide technical leadership and mentoring for a team of highly capable Data Engineers.

Responsibilities
1. Own design and execution of end-to-end projects
2. Own managing the WW Prime core services data infrastructure
3. Establish key relationships which span Amazon business units and Business Intelligence teams
4. Implement standardized, automated operational and quality-control processes to deliver accurate and timely data and reporting to meet or exceed SLAs

Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with any ETL tool (e.g., Informatica, ODI, SSIS, BODI, Datastage)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 week ago
1.0 - 2.0 years
0 Lacs
Karnataka, India
On-site
Who You’ll Work With

The SRE hired will work as an embedded Reliability Engineer with a strategically selected engineering team. The candidate will belong to a horizontal domain called TechOps: Resilience Engineering. This position allows the SRE to shift between multiple engineering platforms as demanded by the work, vision, and/or criticality of the projects. Roles and responsibilities will include interacting with Engineering leaders, engineers, product teams, Scrum/Agile leads, Production support, business, and delivery teams. We look for teammates with a “Just Do It” mindset who believe in our shared commitment to listening with empathy, prioritizing with purpose, operating with a growth mindset, and fostering community and trust. This person will report to the Manager, Site Reliability Engineering and will collaborate with teammates in various SRE functions across multiple geographies.

Who We Are Looking For

We are looking for talented and passionate full-stack developers with knowledge of datacenter infrastructure and cloud platforms. Join us if you are willing to learn new technologies, share knowledge, and learn from others. You feel responsible for the success of the entire team. You are not afraid to work on challenging tasks if necessary and look for opportunities to help others, who may not be part of your team.
- Ability to observe, diagnose, and develop fixes for production issues quickly and efficiently
- Ability to develop and drive real-time monitoring solutions that provide visibility into site health and key performance indicators; practical experience with observability tools like SignalFx, CatchPoint, Splunk, New Relic, or other industry-specific products will be a value add, and customization of observability tools will be part of the job
- Strong communication skills (written and verbal); must be able to articulate issues and their impact(s)
- Highly confident and capable of reporting and communicating high-value metrics to leadership
- Deep understanding of the business landscape and how site reliability influences our consumers
- Working understanding of IT service management (Incident, Problem, Change, and Knowledge management)
- Ability to work across teams (business and technical) to continuously analyze system performance in production, troubleshoot consumer-reported issues, and proactively identify areas in need of optimization
- Practical experience in managing and leading application reliability practices for consumer-facing web and mobile experiences
- Demonstrated negotiation and influencing skills
- Passion for coaching, teaching, mentoring, and learning

What You’ll Work On

The SRE hired will work as a Site Reliability Engineer I with all engineering teams, within the TechOps: Resilience Engineering domain described above. As a Site Reliability Engineer, you will be focused on maximizing availability, observability, reliability, security, and performance for Nike Digital experiences.
SREs perform deep problem analysis, detect infrastructure or code defects, define, report on, and create observability processes for Key Performance Indicators (KPIs), and work with product delivery teams to provide long-term solutions to production issues.
- Bachelor’s degree in Computer Science, Information Systems, Business, or another relevant subject area
- 1-2 years of professional experience in software development, operations, or support
- Strong design and development experience with Java
- Proficient with JavaScript on the frontend (React, Angular, etc.) and backend (Node.js) components
- Kubernetes working knowledge and experience
- Experience in other modern enterprise languages (functional or otherwise – Scala, Python, Golang, etc.) is preferred
- Basic understanding of DNS, Networking, Virtualization, and Linux
- Expertise in designing, building, and supporting scalable cloud-based microservices
- Experience with Docker and/or serverless patterns
- Experience with at least one NoSQL database like DynamoDB, Cassandra, etc.
- Good understanding of RESTful APIs
- Basic understanding of common tools for service management, agile, and observability: ServiceNow, Jira, Jenkins, Splunk, New Relic, SignalFx
- Background with ITIL or Lean is a plus
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with stakeholders to gather and analyze data requirements.
- Design and implement robust data pipelines to support data processing and analytics.

Professional & Technical Skills:
- Must-Have Skills: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.
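To make the "create data pipelines, ensure data quality" responsibility concrete, here is a minimal PySpark sketch of a pipeline step with a simple quality gate, in the style commonly used on Databricks. The table names, columns, and the 1% threshold are hypothetical assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quality-gated-load").getOrCreate()

# Hypothetical source table produced by an upstream ingestion job.
orders = spark.table("bronze.orders")

# Transform step: type the amount column and derive a date partition.
curated = (
    orders.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .withColumn("order_date", F.to_date("created_at"))
)

# Quality gate: fail the job if too many rows lack a customer key.
total = curated.count()
missing = curated.filter(F.col("customer_id").isNull()).count()
if total > 0 and missing / total > 0.01:  # 1% threshold is an assumption
    raise ValueError(f"Quality gate failed: {missing}/{total} rows missing customer_id")

# Publish the curated table for downstream analytics.
curated.write.mode("overwrite").saveAsTable("silver.orders")
```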
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must have skills: Scala
Good to have skills: NA
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education

Summary: As a Software Development Engineer, you will analyze, design, code, and test multiple components of application code across one or more clients. You will perform maintenance, enhancements, and/or development work, contributing to the overall success of the projects.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with team members to analyze, design, and develop software solutions.
- Write clean, maintainable, and efficient code following best practices.
- Participate in code reviews to ensure code quality and adherence to standards.
- Troubleshoot and debug software applications to resolve issues.
- Stay updated on emerging technologies and apply them to projects.

Professional & Technical Skills:
- Must-Have Skills: proficiency in Scala, with a minimum of 4 years of relevant hands-on Scala development experience.
- Strong understanding of object-oriented programming principles.
- Experience with distributed systems and microservices architecture.
- Hands-on experience with relational and NoSQL databases.
- Knowledge of the software development lifecycle and agile methodologies.

Additional Information:
- This position is based at our Pune, Bengaluru, and Chennai offices.
- 15 years of full-time education is required.
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description

About Sopra Steria: Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.

Job Description

The world is how we shape it.

Position: Snowflake - Senior Technical Lead
Experience: 8-11 years
Location: Noida/Bangalore
Education: B.E./B.Tech./MCA

Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security
Good to have Skills: Snowpark, Data Build Tool, Finance Domain

- Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing
- Experience in data warehousing, with at least 2 years focused on Snowflake
- Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration
- Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks
- Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning
- Familiarity with data security, compliance requirements, and governance best practices
- Experience in Python, Scala, or Java for Snowpark development is good to have
- Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM)

Key Responsibilities
- Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost
- Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe)
- Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion)
- Monitor query performance and resource utilization; tune warehouses, caching, and clustering
- Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads
- Define and enforce role-based access control (RBAC), masking policies, and object tagging
- Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured
- Establish best practices for dimensional modeling, data vault architecture, and data quality
- Create and maintain data dictionaries, lineage documentation, and governance standards
- Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets
- Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies
- Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives

Qualifications: BTech/MCA

Additional Information

At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
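As a rough, hedged sketch of the Streams & Tasks pattern this role centers on, the snippet below uses the snowflake-connector-python package to create a stream on a staging table and a scheduled task that loads new rows downstream. All object names, the warehouse, and the schedule are hypothetical; connection credentials are elided.

```python
import snowflake.connector

# Connection parameters are placeholders; use your account's values.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Stream captures change records (CDC) on the staging table.
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders")

# Task wakes every 5 minutes, but only runs when the stream has data.
cur.execute("""
    CREATE OR REPLACE TASK load_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      INSERT INTO curated.orders
      SELECT order_id, customer_id, amount
      FROM orders_stream
      WHERE METADATA$ACTION = 'INSERT'
""")

# Tasks are created suspended; resume to start the schedule.
cur.execute("ALTER TASK load_orders_task RESUME")
```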
Posted 1 week ago
4.0 - 8.0 years
12 - 18 Lacs
Hyderabad, Chennai, Coimbatore
Hybrid
We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience in designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and Cloud technologies.

Responsibilities:
- Design, develop, and maintain data pipelines for processing large-scale datasets.
- Build efficient ETL workflows to transform and integrate data from multiple sources.
- Develop and optimize Hadoop and PySpark applications for data processing.
- Ensure data quality, governance, and security standards are met across systems.
- Implement and manage cloud-based data solutions (AWS, Azure, or GCP).
- Collaborate with data scientists and analysts to support business intelligence initiatives.
- Troubleshoot performance issues and optimize query executions in big data environments.
- Stay updated with industry trends and advancements in big data and cloud technologies.

Required Skills:
- Strong programming skills in Python, Scala, or Java.
- Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.).
- Expertise in PySpark for distributed data processing.
- Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines).
- Experience with cloud platforms (AWS, Azure, GCP) and their data-related services.
- Knowledge of SQL and NoSQL databases.
- Familiarity with data warehousing concepts and data modeling techniques.
- Strong analytical and problem-solving skills.

Interested candidates can reach us at +91 7305206696 / saranyadevib@talentien.com
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Delhi, India
On-site
What is Findem:

Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time - making an individual’s entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem’s automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience: 5-9 years
Location: Delhi, India (Hybrid - 3 days onsite)

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying, and managing various data pipelines, a data lake, and Big Data processing solutions using Big Data and ETL technologies.

Responsibilities
- Build data pipelines, Big Data processing solutions, and data lake infrastructure using various Big Data and ETL technologies
- Assemble and process large, complex data sets that meet functional and non-functional business requirements
- ETL from a wide variety of sources like MongoDB, S3, Server-to-Server, Kafka, etc., and process using SQL and big data technologies
- Build analytical tools to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Build interactive and ad-hoc query self-serve tools for analytics use cases
- Build data models and data schemas with performance, scalability, and functional requirements in mind
- Build processes supporting data transformation, metadata, dependency, and workflow management
- Research, experiment with, and prototype new tools/technologies and make them successful

Skill Requirements
- Must have: strong Python/Scala skills
- Must have experience with Big Data technologies like Spark, Hadoop, Athena/Presto, Redshift, Kafka, etc.
- Experience with various file formats like Parquet, JSON, Avro, ORC, etc.
- Experience with workflow management tools like Airflow
- Experience with batch processing, streaming, and message queues
- Experience with visualization tools like Redash, Tableau, Kibana, etc.
- Experience working with structured and unstructured data sets
- Strong problem-solving skills

Good to have
- Exposure to NoSQL stores like MongoDB
- Exposure to cloud platforms like AWS, GCP, etc.
- Exposure to microservices architecture
- Exposure to machine learning techniques

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area with our India headquarters in Bengaluru.

Equal Opportunity: As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic.
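Since the posting calls out workflow management tools like Airflow, here is a minimal, hedged sketch of an Airflow DAG that chains an extract step and a Spark-submit step. The DAG id, schedule, script path, and task callables are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_from_mongo(**context):
    # Placeholder extract step; a real task would pull a batch from
    # MongoDB (e.g., via a hook) and land raw files on S3.
    print("extracting batch for", context["ds"])


with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_from_mongo",
        python_callable=extract_from_mongo,
    )

    # Hypothetical Spark job that transforms the landed files.
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/transform_events.py --date {{ ds }}",
    )

    extract >> transform
```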
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Transform data into a format that can be easily analyzed by developing, maintaining, and testing infrastructures for data generation. Work closely with data scientists; data engineers are largely in charge of architecting solutions that enable data scientists to do their jobs. The role involves creating data pipelines and integrating, transforming, and enabling data for wider enterprise use.

Job Description

Duties for this role include, but are not limited to: supporting the design, build, test, and maintenance of data pipelines at big data scale; assisting with updating data from multiple data sources; working on batch processing of collected data and matching its format to the stored data, making sure the data is ready to be processed and analyzed; assisting with keeping the ecosystem and the pipeline optimized and efficient; troubleshooting standard performance and data-related problems and providing L3 support; and implementing parsers, validators, transformers, and correlators to reformat, update, and enhance data. Provides recommendations for highly complex problems. Provides guidance to those in less senior positions.

Additional Job Description

Data Engineers play a pivotal role within Dataworks, focused on creating and driving engineering innovation and facilitating the delivery of key business initiatives. Acting as a “universal translator” between IT, business, software engineers, and data scientists, data engineers collaborate across multi-disciplinary teams to deliver value. Data Engineers will work on those aspects of the Dataworks platform that govern the ingestion, transformation, and pipelining of data assets, both to end users within FedEx and into data products and services that may be externally facing. Day-to-day, they will be deeply involved in code reviews and large-scale deployments.

Essential Job Duties & Responsibilities
- Understanding in depth both the business and technical problems Dataworks aims to solve
- Building tools, platforms, and pipelines to enable teams to clearly and cleanly analyze data, build models, and drive decisions
- Scaling up from “laptop-scale” to “cluster-scale” problems, in terms of both infrastructure and problem structure and technique
- Collaborating across teams to drive the generation of data-driven operational insights that translate to high-value optimized solutions
- Delivering tangible value very rapidly, collaborating with diverse teams of varying backgrounds and disciplines
- Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases
- Interacting with senior technologists from the broader enterprise and outside of FedEx (partner ecosystems and customers) to create synergies and ensure smooth deployments to downstream operational systems

Skill/Knowledge Considered a Plus
- Technical background in computer science, software engineering, database systems, or distributed systems
- Fluency with distributed and cloud environments and a deep understanding of optimizing computational considerations with theoretical properties
- Experience in building robust cloud-based data engineering and curation solutions to create data products useful for numerous applications
- Detailed knowledge of the Microsoft Azure tooling for large-scale data engineering efforts and deployments is highly preferred
- Experience with any combination of the following Azure tools: Azure Databricks, Azure Data Factory, Azure SQL DB, Azure Synapse Analytics
- Developing and operationalizing capabilities and solutions, including under near-real-time, high-volume streaming conditions
- Hands-on development skills, with the ability to work at the code level and help debug hard-to-resolve issues
- A compelling track record of designing and deploying large-scale technical solutions which deliver tangible, ongoing value
- Direct experience having built and deployed robust, complex production systems that implement modern data processing methods at scale
- Ability to context-switch, to provide support to dispersed teams which may need an “expert hacker” to unblock an especially challenging technical obstacle, and to work through problems as they are still being defined
- Demonstrated ability to deliver technical projects with a team, often working under tight time constraints to deliver value
- An “engineering” mindset, willing to make rapid, pragmatic decisions to improve performance, accelerate progress, or magnify impact
- Comfort working with distributed teams on code-based deliverables, using version control systems and code reviews
- Ability to conduct data analysis, investigation, and lineage studies to document and enhance data quality and access
- Use of agile and DevOps practices for project and software management, including continuous integration and continuous delivery
- Demonstrated expertise working with some of the following common languages and tools:
  - Spark (Scala and PySpark), Kafka, and other high-volume data tools
  - SQL and NoSQL storage tools, such as MySQL, Postgres, MongoDB/CosmosDB
  - Java and Python data tools
  - Azure DevOps to track work, develop using git-integrated version control patterns, and build and utilize CI/CD pipelines
- Working knowledge and experience implementing data architecture patterns to support varying business needs
- Experience with different data types (JSON, XML, Parquet, Avro, unstructured) for both batch and streaming ingestions
- Use of Azure Kubernetes Service, Event Hubs, or other related technologies to implement streaming ingestions
- Experience developing and implementing alerting and monitoring frameworks
- Working knowledge of Infrastructure as Code (IaC) through Terraform to create and deploy resources
- Implementation experience across different data stores, messaging systems, and data processing engines
- Data integration through APIs and/or REST services
- Power Platform (Power BI, Power Apps, Power Automate) development experience a plus

Minimum Qualifications

Data Engineer I: Bachelor’s Degree in Information Systems, Computer Science, or a quantitative discipline such as Mathematics or Engineering, and/or one (1) year of equivalent formal training or work experience. Basic knowledge of data engineering and machine learning frameworks, including design, development, and implementation of highly complex systems and data pipelines. Basic knowledge of Information Systems, including design, development, and implementation of large batch or online transaction-based systems. Experience as a junior member of multi-functional project teams. Strong oral and written communication skills. A related advanced degree may offset the related experience requirements.

Data Engineer II: Bachelor's Degree in Computer Science, Information Systems, a related quantitative field such as Engineering or Mathematics, or equivalent formal training or work experience.
Two (2) years of equivalent work experience in measurement and analysis, quantitative business problem solving, simulation development, and/or predictive analytics. Strong knowledge of data engineering and machine learning frameworks, including design, development, and implementation of highly complex systems and data pipelines. Strong knowledge of Information Systems, including design, development, and implementation of large batch or online transaction-based systems. Strong understanding of the transportation industry, competitors, and evolving technologies. Experience as a member of multi-functional project teams. Strong oral and written communication skills. A related advanced degree may offset the related experience requirements.

Data Engineer III: Bachelor’s Degree in Information Systems, Computer Science, or a quantitative discipline such as Mathematics or Engineering, and/or equivalent formal training or work experience. Three to four (3-4) years of equivalent work experience in measurement and analysis, quantitative business problem solving, simulation development, and/or predictive analytics. Extensive knowledge of data engineering and machine learning frameworks, including design, development, and implementation of highly complex systems and data pipelines. Extensive knowledge of Information Systems, including design, development, and implementation of large batch or online transaction-based systems. Strong understanding of the transportation industry, competitors, and evolving technologies. Experience providing leadership in a general planning or consulting setting. Experience as a senior member of multi-functional project teams. Strong oral and written communication skills. A related advanced degree may offset the related experience requirements.

Data Engineer Lead: Bachelor’s Degree in Information Systems, Computer Science, or a quantitative discipline such as Mathematics or Engineering, and/or equivalent formal training or work experience. Five to seven (5-7) years of equivalent work experience in measurement and analysis, quantitative business problem solving, simulation development, and/or predictive analytics. Extensive knowledge of data engineering and machine learning frameworks, including design, development, and implementation of highly complex systems and data pipelines. Extensive knowledge of Information Systems, including design, development, and implementation of large batch or online transaction-based systems. Strong understanding of the transportation industry, competitors, and evolving technologies. Experience providing leadership in a general planning or consulting setting. Experience as a leader or senior member of multi-function project teams. Strong oral and written communication skills. A related advanced degree may offset the related experience requirements.

Analytical Skills, Accuracy & Attention to Detail, Planning & Organizing Skills, Influencing & Persuasion Skills, Presentation Skills

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer, and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment and provide growth opportunities for everyone.
All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.

Our Company

FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.

Our Philosophy

The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being and value their contributions to the company.

Our Culture

Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
Posted 1 week ago
12.0 - 15.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Skill: Java, Spark, Kafka
Experience: 10 to 16 years
Location: Hyderabad

As a Data Engineer, you will:
- Support the design and rollout of the data architecture and infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Identify data sources, design and implement data schemas/models, and integrate data to meet the requirements of business stakeholders
- Play an active role in the end-to-end delivery of AI solutions, from ideation and feasibility assessment to data preparation and industrialization
- Work with business, IT, and data stakeholders to support data-related technical issues and data infrastructure needs, and to build the most flexible and scalable data platform
- With a strong focus on DataOps, design, develop, and deploy scalable batch and/or real-time data pipelines
- Design, document, test, and deploy ETL/ELT processes
- Find the right tradeoffs between the performance, reliability, scalability, and cost of the data pipelines you implement
- Monitor data processing efficiency and propose solutions for improvements
- Have the discipline to create and maintain comprehensive project documentation
- Build and share knowledge with colleagues and coach junior profiles
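To ground the "real-time data pipelines" responsibility, below is a minimal, hedged Spark Structured Streaming sketch that consumes a Kafka topic and lands the events as parquet. Broker addresses, the topic, the schema, and the paths are hypothetical; the posting names Java, but the same Spark API is shown in PySpark for consistency with the other sketches here.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-events-stream").getOrCreate()

# Hypothetical schema of the JSON messages on the topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
])

# Read the Kafka topic as an unbounded stream.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Land micro-batches as parquet; the checkpoint makes the job restartable.
query = (
    events.writeStream.format("parquet")
          .option("path", "/data/streams/events")
          .option("checkpointLocation", "/data/checkpoints/events")
          .start()
)
query.awaitTermination()
```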
Posted 1 week ago
10.0 years
0 Lacs
India
Remote
Job Description

EMPLOYMENT TYPE: Full-Time, Permanent
LOCATION: Remote (Pan India)
SHIFT TIMINGS: 2.00 pm - 11.00 pm IST
BUDGET: As per company standards
REPORTING: This position will report to our CEO or any other Lead as assigned by Management.

The Senior Data Engineer will be responsible for building and extending our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys working with big data and building systems from the ground up. You will collaborate with our software engineers, database architects, data analysts, and data scientists to ensure our data delivery architecture is consistent throughout the platform. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

What You’ll Be Doing:
● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
● Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Work with machine learning, data, and analytics experts to drive innovation, accuracy, and greater functionality in our data system.

Qualifications:
● Bachelor's degree in Engineering, Computer Science, or a relevant field.
● 10+ years of relevant and recent experience in a Data Engineer role.
● 5+ years of recent experience with Apache Spark, with a solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Strong coding skills with Scala, Python, Java, and/or other languages, and the ability to quickly switch between them with ease.
● Advanced working SQL knowledge and experience with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Experience working with data stored in many formats, including Delta tables, Parquet, CSV, and JSON.
● Comfortable working in a Linux shell environment and writing scripts as needed.
● Comfortable working in an Agile environment.
● Machine Learning knowledge is a plus.
● Must be capable of working independently and delivering stable, efficient, and reliable software.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment.
Posted 1 week ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and streaming pipelines, using services like S3, Glue, Redshift, Athena, EMR, Lake Formation, and Kinesis. Develop and orchestrate ETL/ELT pipelines.

Required Candidate Profile: Participate in pre-sales and consulting activities, such as engaging with clients to gather requirements and propose AWS-based data engineering solutions, and supporting RFPs/RFIs and technical proposals.
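As a small, hedged illustration of the AWS services this role lists, the snippet below uses boto3 to run an Athena query against a Glue-cataloged data lake table, with results written to S3. The region, database, table, and bucket names are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Hypothetical Glue database/table and S3 results bucket.
resp = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "datalake"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query finishes (production code would use backoff).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows)} rows (including the header row)")
```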
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.

Software Engineer (T25), Product Knowledge

Do you love Big Data? Deploying Machine Learning models? Challenging optimization problems? Knowledgeable, collaborative co-workers? Come work at eBay and help us redefine global, online commerce!

Who Are We?

The Product Knowledge team is at the epicenter of eBay’s tech-driven, customer-centric overhaul. Our team is entrusted with creating and using eBay’s Product Knowledge - a vast Big Data system which is built up of listings, transactions, products, knowledge graphs, and more. Our team has a mix of highly proficient people from multiple fields such as Machine Learning, Data Science, Software Engineering, Operations, and Big Data Analytics. We have a strong culture of collaboration, and plenty of opportunity to learn, make an impact, and grow!

What Will You Do

We are looking for exceptional Engineers who take pride in creating simple solutions to apparently complex problems. Our Engineering tasks typically involve at least one of the following:
- Building a pipeline that processes up to billions of items, frequently employing ML models on these datasets
- Creating services that provide Search or other Information Retrieval capabilities at low latency on datasets of hundreds of millions of items
- Crafting sound API design and driving integration between our data layers and customer-facing applications and components
- Designing and running A/B tests in production experiences in order to vet and measure the impact of any new or improved functionality

If you love a good challenge and are good at handling complexity - we’d love to hear from you!

eBay is an amazing company to work for. On the team, you can expect to benefit from:
- A competitive salary, including stock grants and a yearly bonus
- A healthy work culture that promotes business impact and at the same time highly values your personal well-being
- Being part of a force for good in this world - eBay truly cares about its employees, its customers, and the world’s population, and takes every opportunity to make this clearly apparent

Job Responsibilities
- Design, deliver, and maintain significant features in data pipelines, ML processing, and/or service infrastructure
- Optimize software performance to achieve the required throughput and/or latency
- Work with your manager, peers, and Product Managers to scope projects and features
- Come up with a sound technical strategy, taking into consideration the project goals, timelines, and expected impact
- Take point on some cross-team efforts, taking ownership of a business problem and ensuring the different teams are in sync and working towards a coherent technical solution
- Take an active part in knowledge sharing across the organization - both teaching and learning from others

Minimum Qualifications
- Passion and commitment for technical excellence
- B.Sc. or M.Sc. in Computer Science or equivalent professional experience
in Computer Science or an equivalent professional experience 4+ years of software design and development experience, tackling non-trivial problems in backend services and / or data pipelines Solid foundation in Data Structures, Algorithms, Object-Oriented Programming, and Software Design Experience in production-grade coding in Java, and Python/Scala Experience in designing and operating Big Data processing pipelines, such as: Hadoop and Spark Experience handling complexities in production software systems Excellent verbal and written communication and collaboration skills eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our Accessibility Statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities. Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities. The eBay Jobs website uses cookies to enhance your experience. By continuing to browse the site, you agree to our use of cookies. Visit our Privacy Center for more information. Show more Show less
Posted 1 week ago
4.0 - 8.0 years
25 - 27 Lacs
Bengaluru
Hybrid
Job Summary: We are looking for a highly skilled Azure Data Engineer with experience in building and managing scalable data pipelines using Azure Data Factory, Synapse, and Databricks. The ideal candidate should be proficient in big data tools and Azure services, with strong programming knowledge and a solid understanding of data architecture and cloud platforms.
Key Responsibilities:
Design and deliver robust data pipelines using Azure-native tools
Work with Azure services such as ADLS, Azure SQL DB, Cosmos DB, and Synapse
Develop ETL/ELT solutions and collaborate in cloud-native architecture discussions
Support real-time and batch data processing using tools like Kafka, Spark, and Stream Analytics (a streaming sketch follows this listing)
Partner with global teams to develop high-performing, secure, and scalable solutions
Required Skills:
4 to 7 years of experience in Data Engineering on the Azure platform
Expertise in Azure Data Factory, Synapse, Databricks, Stream Analytics, and Power BI
Hands-on with Python, Scala, SQL, C#, and Java, and big data tools like Spark, Hive, Kafka, and Event Hubs
Experience with distributed systems, data governance, and large-scale data environments
Apply now to join a cutting-edge data engineering team enabling innovation through Azure cloud solutions.
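To make the streaming bullet above concrete, here is a minimal sketch, in Spark's Scala API, of a Structured Streaming job that reads events from Kafka and writes windowed counts to ADLS. The broker address, topic, and paths are hypothetical placeholders; this is one plausible shape for such a pipeline, not anything the posting prescribes, and it assumes the spark-sql-kafka connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("order-events-stream") // hypothetical job name
      .getOrCreate()
    import spark.implicits._

    // Read a stream of raw events from Kafka; broker and topic are placeholders
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "order-events")
      .load()
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Count events per 5-minute window, tolerating 10 minutes of late data
    val counts = events
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window($"timestamp", "5 minutes"))
      .count()

    // Write windowed counts out; the ADLS path is a placeholder
    counts.writeStream
      .format("parquet")
      .option("path", "abfss://curated@account.dfs.core.windows.net/event_counts")
      .option("checkpointLocation", "/tmp/checkpoints/event_counts")
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}
```

The watermark bounds how much late data Spark keeps in state, which is what keeps a long-running aggregation from growing without bound.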
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities
Build data pipelines to ingest, process, and transform data from files, streams, and databases (a minimal ingestion sketch follows this listing)
Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS
Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies
Develop streaming pipelines
Work with Hadoop/AWS ecosystem components such as Apache Spark and Kafka to implement scalable solutions that meet ever-increasing data volumes
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala
Minimum 3 years of experience on cloud data platforms on AWS
Experience with AWS EMR, AWS Glue, Databricks, Amazon Redshift, and DynamoDB
Good to excellent SQL skills
Exposure to streaming solutions and message brokers such as Kafka
Preferred Technical And Professional Experience
Certification in AWS and Databricks, or Cloudera Certified Spark developers
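As an illustration of the ingest-process-transform cycle this role describes, here is a minimal batch pipeline sketch in Spark's Scala API; the bucket names, columns, and cleaning rules are all hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object IngestOrders {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingest-orders") // hypothetical job name
      .getOrCreate()

    // Ingest raw CSV files from S3; bucket and schema are placeholders
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://raw-bucket/orders/")

    // Transform: drop bad rows, derive a partition key, deduplicate
    val cleaned = raw
      .filter(col("order_id").isNotNull)
      .withColumn("order_date", to_date(col("order_ts")))
      .dropDuplicates("order_id")

    // Write partitioned Parquet back to the lake for downstream consumers
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://curated-bucket/orders/")
  }
}
```

Partitioning the output by date is a common choice because downstream jobs typically filter on it, letting Spark prune files it never needs to read.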
Posted 1 week ago
4.0 - 7.0 years
13 - 17 Lacs
Pune
Work from Office
Overview
The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications.
Responsibilities
As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across various sources, and release it in multiple formats (a quality-check sketch follows this listing). We leverage the latest technologies, sources, and tools to process the data. Some of the exciting technologies we work with include Snowflake, Databricks, and Apache Spark.
Qualifications
Core Java, Spring Boot, Apache Spark, Spring Batch, Python. Exposure to SQL databases such as Oracle, MySQL, and Microsoft SQL Server is a must. Any experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have. Exposure to NoSQL databases such as Neo4j or document databases is also good to have.
What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
Flexible working arrangements, advanced technology, and collaborative workspaces.
A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.
At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.
MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.
MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.
To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.
Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
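The MSCI responsibilities above center on quality-checking and matching data across sources. Here is a hedged sketch of what such a reconciliation step might look like in Spark's Scala API; the identifiers, paths, and the 1% tolerance are all hypothetical, not MSCI's actual rules.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object QualityCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("vendor-reconciliation").getOrCreate()

    // Two vendor feeds keyed by a shared security identifier; paths are placeholders
    val vendorA = spark.read.parquet("/data/vendor_a/prices")
    val vendorB = spark.read.parquet("/data/vendor_b/prices")

    // Match records across sources and flag price disagreements above a tolerance
    val matched = vendorA.alias("a")
      .join(vendorB.alias("b"), Seq("isin", "as_of_date"), "full_outer")
      .withColumn("status",
        when(col("a.price").isNull, lit("missing_in_a"))
          .when(col("b.price").isNull, lit("missing_in_b"))
          .when(abs(col("a.price") - col("b.price")) / col("b.price") > 0.01,
            lit("price_mismatch"))
          .otherwise(lit("ok")))

    // Release only clean rows downstream; route the rest to an exceptions table
    matched.filter(col("status") === "ok")
      .write.mode("overwrite").parquet("/data/released/prices")
    matched.filter(col("status") =!= "ok")
      .write.mode("overwrite").parquet("/data/exceptions/prices")
  }
}
```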
Posted 1 week ago
2.0 - 4.0 years
8 - 12 Lacs
Mumbai
Work from Office
The SAS to Databricks Migration Developer will be responsible for migrating existing SAS code, data processes, and workflows to the Databricks platform. This role requires expertise in both SAS and Databricks, with a focus on converting SAS logic into scalable PySpark and Python code. The developer will design, implement, and optimize data pipelines, ensuring seamless integration and functionality within the Databricks environment. Collaboration with various teams is essential to understand data requirements and deliver solutions that meet business needs. A small migration sketch follows this listing.
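The role names PySpark, but the shape of a SAS-to-Spark migration is the same across Spark's language APIs. Here is a hedged sketch, in Scala for consistency with the other examples on this page, of how a simple PROC MEANS-style aggregation might translate; the dataset, columns, and path are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SasMeansMigration {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sas-migration-demo").getOrCreate()

    // Original SAS (hypothetical):
    //   proc means data=work.sales mean sum;
    //     class region;
    //     var revenue;
    //   run;

    val sales = spark.read.parquet("/mnt/lake/sales") // placeholder path

    // Equivalent Spark aggregation: group by the CLASS variable,
    // compute the same statistics over the VAR column
    val stats = sales
      .groupBy("region")
      .agg(
        avg("revenue").as("mean_revenue"),
        sum("revenue").as("sum_revenue"),
        count("revenue").as("n"))

    stats.show()
  }
}
```

Mechanical translations like this cover much of a migration; the harder work is usually in SAS macros and data-step logic with row-by-row state, which have no one-line Spark equivalent.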
Posted 1 week ago
10.0 - 15.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Develop solutions following established technical design, application development standards, and quality processes in projects. Assess the impact on technical design of changes in functional requirements. Perform independent code reviews and execute unit tests on modules developed by yourself and other junior team members on the project. Write well-designed, efficient, and testable code. Interact with other stakeholders, including but not limited to end-user clients, the Project Manager/Scrum Master, Business Analysts, offshore development, testing, and other cross-functional teams.
Skills
Must have
Overall 10+ years of experience in Java development projects
7+ years of development experience with React
Microservices development using Spring Boot
Technical stack: Core Java, Java, J2EE, Spring, MongoDB, GKE, Terraform, GitHub, GCP Developer, Kubernetes, Scala, Kafka
Technical tools: Confluence/Jira/Bitbucket or Git, CI/CD (Maven, Git, Jenkins), Eclipse or IntelliJ IDEA
Experience in event-driven architectures (CQRS and SAGA patterns; a minimal CQRS sketch follows this listing)
Experience in design patterns
Build tools (Gulp, Webpack), Jenkins, Docker, automation, Bash, Redis, Elasticsearch, Kibana
Technical stack (UI): JavaScript, React JS, CSS/SCSS, HTML5, Git
Nice to have
Experience in Agile (Scrum) projects is an added plus
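Both Luxoft roles below ask for event-driven architectures with CQRS. As a language-agnostic illustration of the pattern (sketched in Scala rather than the Java/Spring Boot stack the role actually uses), the core idea is that the write side validates commands and emits events, while a separate read-side projection folds those events into a query view. All names here are made up.

```scala
// Minimal CQRS sketch: commands mutate state only by emitting events;
// a read-side projection is rebuilt from the same events.
sealed trait Command
final case class PlaceOrder(orderId: String, amount: BigDecimal) extends Command

sealed trait Event
final case class OrderPlaced(orderId: String, amount: BigDecimal) extends Event

object CommandHandler {
  // Validate the command and emit events; no read-model access here
  def handle(cmd: Command): Either[String, List[Event]] = cmd match {
    case PlaceOrder(id, amount) if amount > BigDecimal(0) =>
      Right(List(OrderPlaced(id, amount)))
    case PlaceOrder(id, _) =>
      Left(s"Order $id rejected: amount must be positive")
  }
}

// Read side: a projection folds events into a query-optimized view
final case class OrderSummary(count: Int, total: BigDecimal)

object OrderProjection {
  def project(events: List[Event]): OrderSummary =
    events.foldLeft(OrderSummary(0, BigDecimal(0))) {
      case (s, OrderPlaced(_, amount)) => OrderSummary(s.count + 1, s.total + amount)
    }
}

object Demo extends App {
  val events = CommandHandler.handle(PlaceOrder("o-1", BigDecimal(42))).getOrElse(Nil)
  println(OrderProjection.project(events)) // OrderSummary(1,42)
}
```

In a production system the events would go through a broker such as Kafka (also in the must-have list), which is what decouples the write and read sides.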
Posted 1 week ago
7.0 - 12.0 years
14 - 15 Lacs
Bengaluru
Work from Office
Develop solutions following established technical design, application development standards, and quality processes in projects. Assess the impact on technical design of changes in functional requirements. Perform independent code reviews and execute unit tests on modules developed by yourself and other junior team members on the project. Write well-designed, efficient, and testable code. Interact with other stakeholders, including but not limited to end-user clients, the Project Manager/Scrum Master, Business Analysts, offshore development, testing, and other cross-functional teams.
Skills
Must have
Overall 7+ years of experience in Java development projects
5+ years of development experience with React
Microservices development using Spring Boot
Technical stack: Core Java, Java, J2EE, Spring, MongoDB, GKE, Terraform, GitHub, GCP Developer, Kubernetes, Scala, Kafka
Technical tools: Confluence/Jira/Bitbucket or Git, CI/CD (Maven, Git, Jenkins), Eclipse or IntelliJ IDEA
Experience in event-driven architectures (CQRS and SAGA patterns)
Experience in design patterns
Build tools (Gulp, Webpack), Jenkins, Docker, automation, Bash, Redis, Elasticsearch, Kibana
Technical stack (UI): JavaScript, React JS, CSS/SCSS, HTML5, Git
Nice to have
Experience in Agile (Scrum) projects is an added plus
Other languages: English (C2 Proficient)
Seniority: Regular
Posted 1 week ago
7.0 - 12.0 years
37 - 45 Lacs
Bengaluru
Work from Office
Our team is responsible for delivering next-generation Search and Question Answering systems across Apple products, including Siri, Safari, Spotlight, and more. This is your chance to shape how people get information by leveraging your Search and applied machine learning expertise along with robust software engineering skills. You will collaborate with outstanding Search and AI engineers on large-scale machine learning to improve Query Understanding, Retrieval, and Ranking, developing fundamental building blocks needed for AI-powered experiences such as fine-tuning and reinforcement learning. This involves pushing the boundaries on document retrieval and ranking, developing sophisticated machine learning models, and using embeddings and deep learning to understand the quality of matches (a small scoring sketch follows this listing). It also includes online learning to react quickly to change, and natural language processing to understand queries. You will work with petabytes of data and combine information from multiple structured and unstructured sources to provide the best results and accurate answers that satisfy users' information-seeking needs.
Minimum Qualifications
7+ years of experience shipping Search and Q&A technologies and ML systems
Excellent programming skills in mainstream programming languages such as C++, Python, Scala, and Go
Experience delivering tooling and frameworks to evaluate individual components and end-to-end quality
Strong analytical skills to systematically identify opportunities to improve search relevance and answer accuracy
Excellent communication skills and ability to work in a collaborative research environment
Passion for building phenomenal products and curiosity to learn
Preferred Qualifications
Background in Search Relevance and Ranking, Question Answering systems, Query Understanding, Personalization or Recommendation systems, or data-informed decision-making
Hands-on experience in Retrieval Augmented Generation, including developing, evaluating, and enhancing both retrievers and generative LLMs
MS in AI, Machine Learning, Information Retrieval, Computer Science, Statistics, or a related field
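The description above mentions using embeddings to judge match quality. As a hedged, self-contained illustration (not Apple's actual ranking stack), here is cosine similarity between query and document embeddings used to order candidates; the vectors and document IDs are made up.

```scala
object EmbeddingScore {
  // Cosine similarity between a query embedding and a document embedding:
  // dot(q, d) / (||q|| * ||d||), in [-1, 1] for non-zero vectors
  def cosine(q: Array[Double], d: Array[Double]): Double = {
    require(q.length == d.length, "embeddings must share a dimension")
    val dot = q.zip(d).map { case (a, b) => a * b }.sum
    val nq = math.sqrt(q.map(x => x * x).sum)
    val nd = math.sqrt(d.map(x => x * x).sum)
    if (nq == 0 || nd == 0) 0.0 else dot / (nq * nd)
  }

  // Rank candidate documents for a query by similarity, highest first
  def rank(query: Array[Double], docs: Map[String, Array[Double]]): Seq[(String, Double)] =
    docs.toSeq.map { case (id, emb) => id -> cosine(query, emb) }
      .sortBy { case (_, score) => -score }

  def main(args: Array[String]): Unit = {
    val query = Array(0.9, 0.1, 0.0)
    val docs = Map(
      "doc-relevant"  -> Array(0.8, 0.2, 0.1),
      "doc-unrelated" -> Array(0.0, 0.1, 0.9))
    rank(query, docs).foreach { case (id, s) => println(f"$id%-15s $s%.3f") }
  }
}
```

At petabyte scale this scoring usually runs over an approximate-nearest-neighbor index rather than a brute-force scan, but the similarity function itself is the same.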
Posted 1 week ago
8.0 - 12.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Allime Tech Solutions is looking for a Hadoop with Scala Developer to join our dynamic team and embark on a rewarding career journey. A Developer is responsible for designing, developing, and maintaining software applications and systems. They collaborate with a team of software developers, designers, and stakeholders to create software solutions that meet the needs of the business.
Key responsibilities:
Design, code, test, and debug software applications and systems
Collaborate with cross-functional teams to identify and resolve software issues
Write clean, efficient, and well-documented code
Stay current with emerging technologies and industry trends
Participate in code reviews to ensure code quality and adherence to coding standards
Participate in the full software development life cycle, from requirement gathering to deployment
Provide technical support and troubleshooting for production issues
Requirements:
Strong programming skills in one or more programming languages, such as Python, Java, C++, or JavaScript
Experience with software development tools, such as version control systems (e.g., Git), integrated development environments (IDEs), and debugging tools
Familiarity with software design patterns and best practices
Good communication and collaboration skills
Posted 1 week ago
Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.
Cities such as Bengaluru, Pune, Chennai, Mumbai, and Kochi, where the roles above are based, are known for their thriving tech ecosystems and have a high demand for Scala professionals.
The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
In the Scala job market, a typical career path may look like:
Junior Developer
Scala Developer
Senior Developer
Tech Lead
As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.
In addition to Scala expertise, employers often look for candidates with the following skills:
Java
Spark
Akka
Play Framework
Functional programming concepts
Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job (a short functional-programming sketch follows).
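Since functional programming concepts come up in nearly every Scala interview, here is a small, self-contained sketch of the idioms interviewers typically probe: immutability, higher-order functions, and folds. The data is made up for illustration.

```scala
object FpBasics {
  // A small immutable dataset: (city, advertised salary in INR lakhs)
  val openings = List(("Bengaluru", 18), ("Pune", 12), ("Chennai", 9), ("Mumbai", 15))

  def main(args: Array[String]): Unit = {
    // Higher-order functions: filter and map instead of mutable loops
    val senior = openings.filter { case (_, lakhs) => lakhs >= 12 }
    val cities = senior.map { case (city, _) => city }

    // foldLeft: reduce a collection to a single value without mutation
    val total = openings.foldLeft(0) { case (acc, (_, lakhs)) => acc + lakhs }
    val average = total.toDouble / openings.size

    println(s"Senior-range cities: ${cities.mkString(", ")}")
    println(f"Average advertised salary: $average%.1f lakhs")
  }
}
```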
Here are 25 interview questions that you may encounter when applying for Scala roles:
As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!