
6071 Scala Jobs - Page 11

JobPe aggregates listings for easy access; applications are completed directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Consumer and Community Banking Data Technology, you serve as a seasoned member of an agile team, designing and delivering trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. You will execute software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Your role involves creating secure, high-quality production code and maintaining algorithms that run synchronously with appropriate systems. You will produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development. Additionally, you will gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identifying hidden problems and patterns in data, and using these insights to drive improvements to coding hygiene and system architecture, is also a crucial aspect of your responsibilities. Furthermore, you will contribute to software engineering communities of practice and events that explore new and emerging technologies, adding to the team culture of diversity, equity, inclusion, and respect.

The required qualifications, capabilities, and skills for this role include formal training or certification on software engineering concepts and 3+ years of applied experience. You should have full Software Development Life Cycle experience within an Agile framework and expert-level implementation skills with Java, AWS, database technologies, Python, Scala, Spark, and Ab Initio. Experience with the development and decomposition of complex SQL (RDBMS platforms) and data warehousing concepts, such as the Star Schema, is essential. Practical experience delivering projects in Data and Analytics, Big Data, Data Warehousing, and Business Intelligence, along with familiarity with relevant technological solutions and industry best practices, is also required. A good understanding of data engineering challenges and proven experience with data platform engineering (batch and streaming; ingestion, storage, processing, management, integration, consumption) is necessary. Familiarity with multiple Data & Analytics technology stacks and awareness of various Data & Analytics tools and techniques (e.g., Python, data mining, predictive analytics, machine learning, data modeling) are important aspects of this role, as is experience with one or more leading cloud providers (AWS/Azure/GCP).

Preferred qualifications, capabilities, and skills include the ability to ramp up quickly on new technologies and strategies, work collaboratively in teams to develop meaningful relationships and achieve common goals, an appreciation of controls and compliance processes for applications and data, an in-depth understanding of data technologies and solutions, the drive to identify and implement process improvements, and knowledge of industry-wide Big Data technology trends and best practices.
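
For illustration only, here is a minimal sketch of the kind of Spark-with-Scala, star-schema work this listing describes: joining a hypothetical fact table to a date dimension and rolling up daily totals. All table and column names are invented.

```scala
// Hypothetical star-schema rollup in Spark with Scala: join a fact table to a
// date dimension and aggregate daily totals. Table and column names are invented.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object StarSchemaRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("star-schema-rollup")
      .getOrCreate()

    // Fact and dimension tables assumed to be registered in the metastore
    val fact = spark.table("warehouse.fact_transactions")
    val dimDate = spark.table("warehouse.dim_date")

    val daily = fact
      .join(dimDate, fact("date_key") === dimDate("date_key"))
      .groupBy(dimDate("calendar_date"))
      .agg(sum("amount").as("total_amount"), count("*").as("txn_count"))
      .orderBy("calendar_date")

    daily.write.mode("overwrite").saveAsTable("warehouse.agg_daily_transactions")
    spark.stop()
  }
}
```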

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will need strong analytical, problem-solving, and programming skills for this role. Experience in Scala development and a good understanding of the Java platform are essential. Additionally, you should have experience building RESTful APIs in Scala and hands-on experience with AWS. Familiarity with relational and non-relational databases like MongoDB, Cassandra, and Elasticsearch is required. Being self-directed and able to deliver solutions effectively with minimal oversight is crucial. Experience with streaming data applications such as Kafka will also be beneficial.

In this position, you will be responsible for leveraging your analytical, problem-solving, and programming skills. Your Scala development experience and understanding of the Java platform will be utilized to build RESTful APIs. You will work hands-on with AWS and various databases like MongoDB, Cassandra, and Elasticsearch. As a self-directed individual, you will deliver solutions efficiently with minimal supervision. Your expertise in streaming data applications like Kafka will be put to use in this role.

As part of the team, you will have the opportunity to work on exciting projects in industries like high-tech, communication, media, healthcare, retail, and telecom. You will collaborate with a diverse team of talented individuals in an open and laid-back environment, both locally and potentially abroad. GlobalLogic values work-life balance and offers flexible work schedules, remote work options, and paid time off. Professional development is a priority at GlobalLogic, and you will have access to communication skills training, stress management programs, professional certifications, and technical and soft-skill trainings. Competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), extended maternity leave, performance bonuses, and referral bonuses are some of the excellent benefits provided. In addition to professional growth opportunities, you can enjoy various fun perks such as sports events, cultural activities, food subsidies, corporate parties, and vibrant office spaces with dedicated zones and rooftop decks. GlobalLogic also offers discounts at popular stores and restaurants through the GL Club.

GlobalLogic is a digital engineering leader that helps global brands design and build innovative products and digital experiences. With a focus on experience design, complex engineering, and data expertise, the company assists clients in envisioning future possibilities and accelerating their digital transformation. Headquartered in Silicon Valley, GlobalLogic operates globally across various industries, including automotive, communications, financial services, healthcare, manufacturing, media, and technology, as part of the Hitachi Group Company under Hitachi, Ltd.
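
As a rough illustration of "RESTful APIs in Scala" (the listing names no specific framework), here is a minimal health-check endpoint using Akka HTTP, one common Scala toolkit; the route, port, and service name are arbitrary.

```scala
// Minimal REST endpoint sketch in Scala using Akka HTTP. A real service would
// add JSON marshalling, error handling, and authentication.
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object HealthApi {
  def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem[Nothing] =
      ActorSystem(Behaviors.empty, "health-api")

    // GET /health returns a plain-text status
    val route =
      path("health") {
        get {
          complete("OK")
        }
      }

    Http().newServerAt("0.0.0.0", 8080).bind(route)
    println("Server online at http://localhost:8080/health")
  }
}
```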

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have 5+ years of experience with expertise in Data Engineering. Your hands-on experience should include the design and development of big data platforms. The ideal candidate will have a deep understanding of modern data processing technology stacks such as Spark, HBase, Hive, and other Hadoop ecosystem technologies, with a focus on development using Scala. Additionally, you should possess a deep understanding of streaming data architectures and technologies for real-time and low-latency data processing. Experience with agile development methods, including core values, guiding principles, and key agile practices, is required. An understanding of the theory and application of Continuous Integration/Delivery is a plus. Familiarity with NoSQL technologies, including column-family, graph, document, and key-value data storage technologies, is desirable. A passion for software craftsmanship is essential for this role. Experience in the financial industry would be beneficial.
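
For the real-time, low-latency side of the stack this listing describes, a minimal Spark Structured Streaming sketch in Scala might look like the following; the Kafka broker address, topic, and console sink are placeholders for illustration only.

```scala
// Structured Streaming sketch: read events from Kafka and count them per
// one-minute window, tolerating 30 seconds of late data.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-stream")
      .getOrCreate()

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "events")                    // placeholder topic
      .load()
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    val counts = events
      .withWatermark("timestamp", "30 seconds")
      .groupBy(window(col("timestamp"), "1 minute"))
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console") // a real job would write to a durable sink
      .start()
      .awaitTermination()
  }
}
```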

Posted 4 days ago

Apply

10.0 - 31.0 years

4 - 6 Lacs

Salt Lake City, Kolkata/Calcutta

On-site

Data scientist roles and responsibilities include:
- Data mining, or extracting usable data from valuable data sources
- Using machine learning tools to select features and create and optimize classifiers
- Carrying out the preprocessing of structured and unstructured data
- Enhancing data collection procedures to include all relevant information for developing analytic systems
- Processing, cleansing, and validating the integrity of data to be used for analysis
- Analyzing large amounts of information to find patterns and solutions
- Developing prediction systems and machine learning algorithms
- Presenting results in a clear manner
- Proposing solutions and strategies to tackle business challenges
- Collaborating with Business and IT teams

Data Scientist Skills: You need to master the skills required for data scientist jobs in various industries and organizations if you want to pursue a data scientist career. Let's look at the must-have data scientist qualifications.

Key skills needed to become a data scientist:
- Programming Skills – knowledge of statistical programming languages like R and Python, and database query languages like SQL, Hive, and Pig, is desirable. Familiarity with Scala, Java, or C++ is an added advantage.
- Statistics – good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, etc. Proficiency in statistics is essential for data-driven companies.
- Machine Learning – good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, and Decision Forests.
- Strong Math Skills (Multivariable Calculus and Linear Algebra) – understanding the fundamentals of multivariable calculus and linear algebra is important, as they form the basis of many predictive performance and algorithm optimization techniques.
- Data Wrangling – proficiency in handling imperfections in data is an important aspect of a data scientist job description.
- Data Visualization – experience with tools like matplotlib, ggplot, d3.js, and Tableau that help to visually encode data.
- Excellent Communication Skills – it is incredibly important to describe findings to both technical and non-technical audiences.
- Strong software engineering background
- Hands-on experience with data science tools
- Problem-solving aptitude
- Analytical mind and great business sense
- Degree in Computer Science, Engineering, or a relevant field is preferred
- Proven experience as a Data Analyst or Data Scientist

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are seeking a skilled and seasoned Senior Data Engineer to become a valued member of our innovative team. The ideal candidate should possess a solid foundation in data engineering and demonstrate proficiency in Azure, particularly Azure Data Factory (ADF), Azure Fabric, Databricks, and Snowflake. In this role, you will be responsible for the design, construction, and upkeep of data pipelines, ensuring data quality and accessibility, as well as collaborating with various teams to support our data-centric initiatives.

Your responsibilities will include crafting, enhancing, and sustaining robust data pipelines utilizing tools such as Azure Data Factory, Azure Fabric, Databricks, and Snowflake. Moreover, you will work closely with data scientists, analysts, and stakeholders to comprehend data requirements, guarantee data availability, and maintain data quality. Implementing and refining ETL processes to efficiently ingest, transform, and load data from diverse sources into data warehouses, data lakes, and Snowflake will also be part of your role. Furthermore, you will play a crucial role in ensuring data integrity and security by adhering to best practices and data governance policies. Monitoring and rectifying data pipelines for timely and accurate data delivery, as well as optimizing data storage and retrieval processes to enhance performance and scalability, will be among your key responsibilities. Staying abreast of industry trends and best practices in data engineering and cloud technologies is essential, along with mentoring and providing guidance to junior data engineers.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Additionally, you must have over 5 years of experience in data engineering, with a strong emphasis on Azure, ADF, Azure Fabric, Databricks, and Snowflake. Proficiency in SQL, experience in data modeling and database design, and solid programming skills in Python, Scala, or Java are prerequisites. Familiarity with big data technologies like Apache Spark, Hadoop, and Kafka, as well as a sound grasp of data warehousing concepts and solutions, including Azure Synapse Analytics and Snowflake, is highly desirable. Knowledge of data governance, data quality, and data security best practices, exceptional problem-solving skills, and effective communication and collaboration abilities within a team setting are essential.

Preferred qualifications include experience with other Azure services such as Azure Blob Storage, Azure SQL Database, and Azure Cosmos DB, familiarity with DevOps practices and tools for CI/CD in data engineering, and certifications in Azure Data Engineering, Snowflake, or related areas.
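
As a hedged sketch of the ETL flow this listing describes, the following Spark-with-Scala job reads raw files from a data lake, cleans them, and loads them into Snowflake via the Snowflake Spark connector. Paths, credentials, and table names are placeholders, and a real Databricks pipeline would manage secrets and orchestration differently.

```scala
// Extract-transform-load sketch: raw JSON from a lake path into Snowflake.
// All option values and names below are illustrative placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-etl").getOrCreate()

    // Extract: raw JSON landed in a data lake (path assumed)
    val raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/orders/")

    // Transform: basic cleansing and typing
    val cleaned = raw
      .filter(col("order_id").isNotNull)
      .withColumn("order_date", to_date(col("order_ts")))

    // Load: Snowflake Spark connector options (values are placeholders)
    val sfOptions = Map(
      "sfURL" -> "account.snowflakecomputing.com",
      "sfUser" -> "etl_user",
      "sfPassword" -> sys.env.getOrElse("SF_PASSWORD", ""),
      "sfDatabase" -> "ANALYTICS",
      "sfSchema" -> "PUBLIC",
      "sfWarehouse" -> "ETL_WH"
    )

    cleaned.write
      .format("net.snowflake.spark.snowflake")
      .options(sfOptions)
      .option("dbtable", "ORDERS")
      .mode("append")
      .save()
  }
}
```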

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Are you ready to power the world's connections? If you don't think you meet all of the criteria below but are still interested in the job, please apply. Nobody checks every box - we're looking for candidates who are particularly strong in a few areas and have some interest and capabilities in others.

You will design, develop, and maintain microservices that power Kong Konnect, the Service Connectivity Platform. Working closely with Product Management and teams across Engineering, you will develop software that has a direct impact on our customers' business and Kong's success. This opportunity is hybrid (Bangalore based), with 3 days in the office and 2 days working from home.

Responsibilities:
- Implement and maintain services that power high-bandwidth logging and tracing services for our cloud platform, such as indexing and searching logs and traces of API requests powered by Kong Gateway and Kuma Service Mesh.
- Implement efficient solutions at scale using distributed and multi-tenant cloud storage and streaming systems.
- Implement cloud systems that are resilient to regional and zonal outages.
- Participate in an on-call rotation to support services in production, ensuring high performance and reliability.
- Write and maintain automated tests to ensure code integrity and prevent regressions.
- Mentor other team members.
- Undertake additional tasks as assigned by the manager.

Requirements:
- 5+ years working in a team to develop, deliver, and maintain complex software solutions.
- Experience in log ingestion, indexing, and search at scale.
- Excellent verbal and written communication skills.
- Proficiency with OpenSearch/Elasticsearch and other full-text search engines.
- Experience with streaming platforms such as Kafka, AWS Kinesis, etc.
- Operational experience in running large-scale, high-performance internet services, including on-call responsibilities.
- Experience with the JVM and languages such as Java and Scala.
- Experience with AWS and cloud platforms for SaaS teams.
- Experience designing, prototyping, building, monitoring, and debugging microservices architectures and distributed systems.
- Understanding of cloud-native systems like Kubernetes, GitOps, and Terraform.
- Bachelor's or Master's degree in Computer Science.

Bonus points if you have experience with columnar stores like Druid/ClickHouse/Pinot, working on new products/startups, contributing to open source software projects, or working on or developing L4/L7 proxies such as Nginx, HAProxy, Envoy, etc.

Kong is THE cloud native API platform with the fastest, most adopted API gateway in the world (over 300m downloads!). Loved by developers and trusted with enterprises' most critical traffic volumes, Kong helps startups and Fortune 500 companies build with confidence, allowing them to bring solutions to market faster with API and service connectivity that scales easily and securely. 83% of web traffic today is API calls! APIs are the connective tissue of the cloud and the underlying technology that allows software to talk and interact with one another. Therefore, we believe that APIs act as the nervous system of the cloud. Our audacious mission is to build the nervous system that will safely and reliably connect all of humankind! For more information about Kong, please visit konghq.com or follow @thekonginc on Twitter.
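
To illustrate the log-ingestion work described above, here is a minimal Kafka consumer in Scala using the standard Java client; the broker address, topic, and group id are invented, and a production indexer would batch records into OpenSearch bulk requests rather than print them.

```scala
// Minimal Kafka consumer sketch polling a hypothetical API-request-log topic.
import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import scala.jdk.CollectionConverters._

object LogConsumer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092") // placeholder
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "log-indexer")          // placeholder
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringDeserializer")
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringDeserializer")

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(List("api-request-logs").asJava) // placeholder topic

    try {
      while (true) {
        val records = consumer.poll(Duration.ofMillis(500))
        for (record <- records.asScala) {
          // A real indexer would buffer these into bulk index requests.
          println(s"offset=${record.offset} value=${record.value}")
        }
      }
    } finally consumer.close()
  }
}
```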

Posted 4 days ago

Apply

5.0 - 10.0 years

0 Lacs

Haryana

On-site

The Senior AI Engineer - Agentic AI position at JMD Megapolis, Gurugram requires a minimum of 5 years of experience in machine learning engineering, data science, or similar roles focusing on applied data science and entity resolution. You will be expected to have a strong background in machine learning, data mining, and statistical analysis for model development, validation, implementation, and product integration. Proficiency in programming languages like Python or Scala, along with experience working with data manipulation and analysis libraries such as Pandas, NumPy, and scikit-learn, is essential. Additionally, experience with large-scale data processing frameworks like Spark, proficiency in SQL and database concepts, and a solid understanding of feature engineering, dimensionality reduction, and data preprocessing techniques are required.

As a Senior AI Engineer, you should possess excellent problem-solving skills and the ability to devise creative solutions to complex data challenges. Strong communication skills are crucial for effective collaboration with cross-functional teams and for explaining technical concepts to non-technical stakeholders. Attention to detail, the ability to work independently, and a passion for staying updated with the latest advancements in data science are desirable traits for this role.

The ideal candidate would hold a Master's or PhD in Computer Science, Data Science, Statistics, or a related quantitative field, with 5-10 years of industry experience in developing AI solutions, including machine learning and deep learning models. Strong programming skills in Python and familiarity with libraries such as TensorFlow, PyTorch, or scikit-learn are necessary. Furthermore, a solid understanding of machine learning algorithms, statistical analysis, and data preprocessing techniques, plus experience working with large datasets to implement scalable AI solutions, is required. Proficiency in data visualization and reporting tools, knowledge of cloud platforms like AWS, Azure, and Google Cloud for AI deployment, and familiarity with software development practices and version control systems are all valued skills. Problem-solving ability, creative thinking to overcome challenges, and strong communication and teamwork skills for collaborating effectively with cross-functional teams are essential for success in this role.

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

We are seeking a skilled and experienced Spark Scala Developer, with strong expertise in AWS cloud services and SQL, to join our data engineering team. Your primary responsibility will be to design, build, and optimize scalable data processing systems that support our data platform.

Your key responsibilities will include:
- Developing and maintaining large-scale distributed data processing pipelines using Apache Spark with Scala
- Working with AWS services (S3, EMR, Lambda, Glue, Redshift, etc.) to build and manage data solutions in the cloud
- Writing complex SQL queries for data extraction, transformation, and analysis
- Optimizing Spark jobs for performance and cost-efficiency
- Collaborating with data scientists, analysts, and other developers to understand data requirements
- Building and maintaining data lake and data warehouse solutions
- Implementing best practices in coding, testing, and deployment
- Ensuring data quality and consistency across systems

To be successful in this role, you should have strong hands-on experience with Apache Spark (preferably using Scala), proficiency in the Scala programming language, and solid experience with SQL (including complex joins, window functions, and performance tuning). Working knowledge of AWS services like S3, EMR, Glue, Lambda, Athena, and Redshift, experience in building and maintaining ETL/ELT pipelines, and familiarity with data modeling and data warehousing concepts are also required. Experience with version control (e.g., Git) and CI/CD pipelines is a plus, as are strong problem-solving and communication skills.
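
As a rough illustration of the window-function work this listing mentions, here is a minimal Spark-with-Scala sketch that keeps each customer's most recent order; the dataset and column names are invented. Ranking with row_number over a partition is the usual "latest record per key" idiom such SQL-tuning roles expect.

```scala
// Window-function sketch: rank each customer's orders by recency, keep the latest.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object LatestOrderPerCustomer {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("latest-order").getOrCreate()
    import spark.implicits._

    // Tiny in-memory dataset for illustration
    val orders = Seq(
      ("c1", "o1", "2024-01-01"),
      ("c1", "o2", "2024-02-01"),
      ("c2", "o3", "2024-01-15")
    ).toDF("customer_id", "order_id", "order_date")

    val byRecency = Window
      .partitionBy("customer_id")
      .orderBy(col("order_date").desc)

    val latest = orders
      .withColumn("rn", row_number().over(byRecency))
      .filter($"rn" === 1)
      .drop("rn")

    latest.show()
  }
}
```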

Posted 4 days ago

Apply

6.0 - 14.0 years

0 Lacs

Haryana

On-site

As a Data Engineer with Python + SQL, you will be responsible for leveraging your 6 to 14 years of experience to create data solutions using Python scripting and SQL. Your strong knowledge of object-oriented and functional programming concepts will be key in developing efficient and effective pipelines. While proficiency in Python is required, experience with Java, Ruby, Scala, or Clojure will also be considered.

In this role, you will be integrating services to build pipeline solutions on various cloud platforms such as AWS, Hadoop, EMR, Azure, and Google Cloud. AWS experience is a plus but not required. Additionally, having experience with relational and NoSQL databases will be beneficial. As a valuable member of the team, it would be advantageous to have DevOps or DataOps experience.

Your ability to work in a hybrid office environment for 3 days a week in Gurgaon will ensure seamless collaboration with the team. Join us and contribute to the development of innovative data solutions!

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

BizViz is a company that offers a comprehensive view of a business's data, catering to various industries and meeting the diverse needs of business executives. With a dedicated team of over 50 professionals working on the BizViz platform for several years, the company aims to develop technological solutions that provide our clients with a competitive advantage. At BizViz, we are committed to the success of our customers, striving to create applications that align with their unique visions and requirements. We steer clear of generic ERP templates, offering businesses a more tailored solution.

As a Big Data Engineer at BizViz, you will join a small, agile team of data engineers focused on building an innovative big data platform for enterprises dealing with critical data management and diverse application stakeholders at scale. The platform handles data ingestion, warehousing, and governance, allowing developers to create complex queries efficiently. With features like automatic scaling, elasticity, security, logging, and data provenance, our platform empowers developers to concentrate on algorithms rather than administrative tasks. We are seeking engineers who are eager for technical challenges, to enhance our current platform for existing clients and develop new capabilities for future customers.

Key Responsibilities:
- Work as a Senior Big Data Engineer within the Data Science Innovation team, collaborating closely with internal and external stakeholders throughout the development process.
- Understand the needs of key stakeholders to enhance or create new solutions related to data and analytics.
- Collaborate in a cross-functional, matrix organization, even in ambiguous situations.
- Contribute to scalable solutions using large datasets alongside other data scientists.
- Research innovative data solutions to address real market challenges.
- Analyze data to provide fact-based recommendations for innovation projects.
- Explore Big Data and other unstructured data sources to uncover new insights.
- Partner with cross-functional teams to develop and execute business strategies.
- Stay updated on advancements in data analytics, Big Data, predictive analytics, and technology.

Qualifications:
- BTech/MCA degree or higher.
- Minimum 5 years of experience.
- Proficiency in Java, Scala, Python.
- Familiarity with Apache Spark, Hadoop, Hive, Spark SQL, Spark Streaming, Apache Kafka.
- Knowledge of predictive algorithms, MLlib, Cassandra, RDBMS (MySQL, MS SQL, etc.), NoSQL, columnar databases, Bigtable.
- Deep understanding of search engine technology, including Elasticsearch/Solr.
- Experience in Agile development practices such as Scrum.
- Strong problem-solving skills for designing algorithms related to data cleaning, mining, clustering, and pattern recognition.
- Ability to work effectively in a matrix-driven organization under varying circumstances.
- Desirable personal qualities: creativity, tenacity, curiosity, and a passion for technical excellence.

Location: Bangalore

To apply for this position, interested candidates can send their applications to careers@bdb.ai.

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Golden Eagle IT Technologies Pvt. Ltd. is looking for a skilled Data Engineer with 2 to 4 years of experience to join the team in Indore. The ideal candidate should have a solid background in data engineering, big data technologies, and cloud platforms. As a Data Engineer, you will be responsible for designing, building, and maintaining efficient, scalable, and reliable data pipelines.

You will be expected to develop and maintain ETL pipelines using tools like Apache Airflow, Spark, and Hadoop. Additionally, you will design and implement data solutions on AWS, leveraging services such as DynamoDB, Athena, Glue Data Catalog, and SageMaker. Working with messaging systems like Kafka for managing data streaming and real-time data processing will also be part of your responsibilities. Proficiency in Python and Scala for data processing, transformation, and automation is essential. Ensuring data quality and integrity across multiple sources and formats will be a key aspect of your role. Collaboration with data scientists, analysts, and other stakeholders to understand data needs and deliver solutions is crucial. Optimizing and tuning data systems for performance and scalability, as well as implementing best practices for data security and compliance, are also expected.

Preferred skills include experience with infrastructure-as-code tools like Pulumi, familiarity with GraphQL for API development, and exposure to machine learning and data science workflows, particularly using SageMaker.

Qualifications for this position include a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 2-4 years of experience in data engineering or a similar role. Proficiency in AWS cloud services and big data technologies, strong programming skills in Python and Scala, knowledge of data warehousing concepts and tools, and excellent problem-solving and communication skills are required.

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

Chandigarh

On-site

As a Senior Azure Data Engineer at iO Associates in Mohali, you will be responsible for building and optimizing data pipelines, supporting data integration across systems, and enhancing the Azure-based Enterprise Data Platform (EDP). The company is a leader in the real estate sector, headquartered in Mohali with offices in the US and more than 17 other countries.

Your key responsibilities will include building and enhancing the Azure-based EDP using modern tools like Databricks, Synapse, ADF, and ADLS Gen2. You will develop and maintain ETL pipelines, collaborate with teams to deliver efficient data solutions, create data products for enterprise-wide use, mentor team members, promote code reusability, and contribute to documentation, reviews, and architecture planning.

To excel in this role, you should have at least 7 years of experience in data engineering, with expertise in Databricks, Python, Scala, Azure Synapse, and ADF. You should have a proven track record of building and managing ETL/data pipelines across various sources and formats, along with strong skills in data modeling, warehousing, and CI/CD practices.

This is an excellent opportunity to join a company that values your growth, emphasizes work-life balance, and recognizes your contributions. If you are interested in this position, please email [Email Address].

Posted 4 days ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

At Apple, we believe in turning phenomenal ideas into exceptional products, services, and customer experiences with remarkable speed. If you are driven by passion and dedication, the possibilities of what you can achieve are limitless. We are currently seeking a highly motivated Performance Engineer to join our Engineering team at the Apple Online Store.

As a member of the AOS Performance Engineering team, your primary focus will be on ensuring that code and functional quality remain paramount, serving as a key metric of success. You will collaborate closely with application and service engineering teams, as well as engage with product, design, content, QA, and various other groups to enhance the e-commerce experience and features of the highly successful Apple Online Store on all platforms - Web, MOW, and native iOS. In this dynamic and fast-paced environment, your role will involve crafting and delivering exceptional e-commerce experiences, from merchandising to checkout.

The ideal candidate will have a solid track record in developing high-quality enterprise software solutions. We are looking for a hands-on individual who thrives on delving into problem details, exploring various solutions, and providing guidance to the team through practical examples during implementation. You will be exposed to a diverse range of technologies and concepts such as Java, Scala, microservices, AWS, event-driven architectures, Oracle, and NoSQL databases. This challenging position demands a strong technological background and collaborative skills to ensure the software meets high functional standards and operational excellence across production and non-production environments. We are seeking a self-starting, positive team player with exceptional written and communication skills, someone who is hardworking and proactive in questioning assumptions. If this sounds like you, we would love to hear from you.

**Minimum Qualifications:**
- Bachelor's degree in Computer Science or equivalent experience
- Minimum of 6 years of experience in software performance engineering/load test engineering in a professional environment
- Strong knowledge of computer science, including a deep understanding of data structures, algorithms, and service-oriented architectures
- Proficiency in performance testing tools like JMeter, k6, Gatling
- Experience in programming with Java, Scala, or any other object-oriented language, with a thorough understanding of object-oriented concepts
- Familiarity with NoSQL databases (e.g., Cassandra, MongoDB, Couchbase, Oracle) and RDS DB
- Excellent understanding of web technologies such as HTTP, cookies, AJAX, etc.
- Sound knowledge of performance engineering concepts like performance modeling, application benchmarking, requirement analysis, and testing
- Understanding of multi-tier, scalable, high-volume, multi-threaded, and reliable web services

**Preferred Qualifications:**
- Excellent written and verbal communication skills
- Experience in scaling distributed systems to handle millions of concurrent requests is a plus
- Familiarity with EKS, containerization, serverless technologies, SNS/SQS, ElastiCache, S3, and Kubernetes is advantageous
- Previous experience working with large-scale consumer-facing websites would be beneficial

If you meet the above criteria and are ready to take on this exciting challenge, please submit your CV for consideration.
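
Of the load-testing tools named above, Gatling is itself Scala-based; a minimal, hypothetical simulation might look like the following. The target host, endpoints, payload, and load profile are all invented for illustration.

```scala
// Hypothetical Gatling load-test simulation: browse a product, add to cart,
// ramp to 100 users, and assert on success rate and max response time.
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class CheckoutSimulation extends Simulation {
  val httpProtocol = http
    .baseUrl("https://store.example.com") // placeholder host
    .acceptHeader("application/json")

  val scn = scenario("Browse and add to cart")
    .exec(http("product_page").get("/api/products/123"))
    .pause(1.second)
    .exec(
      http("add_to_cart")
        .post("/api/cart")
        .body(StringBody("""{"sku":"123"}"""))
        .asJson
    )

  setUp(
    scn.inject(rampUsers(100).during(2.minutes))
  ).protocols(httpProtocol)
    .assertions(
      global.successfulRequests.percent.gt(95),
      global.responseTime.max.lt(1000) // milliseconds
    )
}
```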

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Senior Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs, in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Key Responsibilities: The Senior Big Data Engineer will work very closely with, and manage the work of, a team of data engineers working on our Big Data Platform. The tech lead will need the following core skills:
- Work closely with the Olympus core & product processor teams and drive the build-out & implementation of the CitiDigital reporting using the Olympus framework
- Be accountable for all phases of the development process - analysis, design, construction, testing and implementation in agile development lifecycles
- Perform unit testing and system testing for all applications developed/enhanced and ensure that all critical and high-severity bugs are addressed
- Act as a Subject Matter Expert (SME) in at least one area of Applications Development
- Align to Engineering Excellence development principles and standards
- Promote and increase our development productivity scores for coding
- Fully adhere to and evangelize a full Continuous Integration and Continuous Deploy pipeline
- Apply strong SQL skills to extract, analyze and reconcile huge data sets
- Demonstrate ownership and initiative-taking

The project will run in iteration lifecycles with agile practices, so experience of agile development and scrums is highly beneficial.

Qualifications:
- Bachelor's degree/University degree or equivalent experience; Master's degree preferred
- 8-12 years' experience in application/software development

Skills:
- Prior work experience in Capital/Regulatory Markets or a related industry
- Experience with Big Data technologies (Spark, Hadoop, HDFS, Hive, Impala)
- Experience with Python/Scala and Unix shell scripting is a must
- Excellent analytical, problem-solving, negotiating, influencing, facilitation, prioritization, decision-making and conflict-resolution skills are required
- Solid understanding of the Big Data architecture and the ability to troubleshoot development/performance issues on Hadoop (Cloudera preferably)
- Strong data analysis skills and the ability to slice and dice the data as needed for business reporting
- Passionate and self-driven, with a can-do attitude
- Able to build practical solutions
- A good team player who can work with a global team model and is deadline-oriented
- Dynamic and flexible with a high energy level, as this is a demanding and rapidly changing environment
- Ability to work independently given general guidance

Education: Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 4 days ago

Apply

4.0 - 11.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Hello, greetings from Quess Corp! Hope you are doing well. We have a job opportunity with one of our clients.

Designation - Data Engineer
Location - Gurugram
Experience - 4 to 11 years
Qualification - Graduate / PG (IT)
Skill Set - Data Engineer, Python, AWS, SQL

Essential capabilities:
- Enthusiasm for technology, keeping up with the latest trends
- Ability to articulate complex technical issues and desired outcomes of system enhancements
- Proven analytical skills and evidence-based decision making
- Excellent problem solving, troubleshooting & documentation skills
- Strong written and verbal communication skills
- Excellent collaboration and interpersonal skills
- Strong delivery focus with an active approach to quality and auditability
- Ability to work under pressure and excel within a fast-paced environment
- Ability to self-manage tasks
- Agile software development practices

Desired experience:
- Hands-on in SQL and its Big Data variants (HiveQL, Snowflake ANSI, Redshift SQL)
- Python and Spark and one or more of its APIs (PySpark, Spark SQL, Scala), plus Bash/shell scripting
- Experience with source code control - GitHub, VSTS, etc.
- Knowledge of and exposure to Big Data technologies in the Hadoop stack, such as HDFS, Hive, Impala, Spark, etc., and cloud Big Data warehouses - Redshift, Snowflake, etc.
- Experience with UNIX command-line tools
- Exposure to AWS technologies including EMR, Glue, Athena, Data Pipeline, Lambda, etc.
- Understanding and ability to translate/physicalise data models (Star Schema, Data Vault 2.0, etc.)

Essential experience (it is expected that the role holder will most likely have the following qualifications and experience):
- 4-11 years technical experience (within the financial services industry preferred)
- Technical domain experience (subject matter expertise in technology or tools)
- Solid experience, knowledge and skills in data engineering and BI/software development, such as ELT/ETL, data extraction and manipulation in Data Lake/Data Warehouse/Lakehouse environments
- Hands-on programming experience writing Python, SQL, Unix shell scripts and PySpark scripts in a complex enterprise environment
- Experience in configuration management using Ansible/Jenkins/Git
- Hands-on cloud-based solution design, configuration and development experience with Azure and AWS
- Hands-on experience using AWS services - S3, EC2, EMR, SNS, SQS, Lambda functions, Redshift
- Hands-on experience building data pipelines to ingest and transform data on the Databricks Delta Lake platform from a range of data sources - databases, flat files, streaming, etc.
- Knowledge of data modelling techniques and practices used for a Data Warehouse/Data Mart application
- Quality engineering development experience (CI/CD - Jenkins, Docker)
- Experience in Terraform, Kubernetes and Docker
- Experience with source control tools - GitHub or Bitbucket
- Exposure to relational databases - Oracle, MS SQL or DB2 (SQL/PLSQL, database design, normalisation, execution plan analysis, index creation and maintenance, stored procedures), Postgres/MySQL
- Skilled in querying data from a range of data sources that store structured and unstructured data
- Knowledge or understanding of Power BI (recommended)

Key accountabilities:
- Design, develop, test, deploy, maintain and improve software
- Develop flowcharts, layouts and documentation to identify requirements & solutions
- Write well-designed & high-quality testable code
- Produce specifications and determine operational feasibility
- Integrate software components into a fully functional platform
- Proactively apply and perform hands-on design and implementation of best-practice CI/CD
- Coach and mentor other service team members
- Develop/contribute to software verification plans and quality assurance procedures
- Document and maintain software functionality
- Troubleshoot, debug and upgrade existing systems, including participating in DR tests
- Deploy programs and evaluate customer feedback
- Contribute to team estimation for delivery and expectation management for scope
- Comply with industry standards and regulatory requirements

Posted 4 days ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Chennai

Work from Office

Join us as a Data Engineer. We're looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure. Day-to-day, you'll develop innovative, data-driven solutions through data pipelines, modelling and ETL design, while aspiring to be commercially successful through insights. If you're ready for a new challenge, and want to bring a competitive edge to your career profile by delivering streaming data ingestions, this could be the role for you. We're offering this role at associate vice president level.

What you'll do: Your daily responsibilities will include developing a comprehensive knowledge of our data structures and metrics, advocating for change when needed for product development. You'll also provide transformation solutions and carry out complex data extractions. We'll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You'll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.

You'll also be responsible for:
- Driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions
- Participating in the data engineering community to deliver opportunities to support our strategic direction
- Carrying out complex data engineering tasks to build a scalable data architecture and the transformation of data to make it usable to analysts and data scientists
- Building advanced automation of data engineering pipelines through the removal of manual stages
- Leading on the planning and design of complex products and providing guidance to colleagues and the wider team when required

The skills you'll need: To be successful in this role, you'll have an understanding of data usage and dependencies with wider teams and the end customer. You'll also have experience of extracting value and features from large-scale data. We'll expect you to have experience of ETL technical design, data quality testing, cleansing and monitoring, data sourcing, exploration and analysis, and data warehousing and data modelling capabilities.

You'll also need:
- Experience of using a programming language such as Python for developing custom operators and sensors in Airflow, improving workflow capabilities and reliability
- Good knowledge of Kafka and Kinesis for effective real-time data processing, and of Scala and Spark to enhance data processing efficiency and scalability
- Great communication skills with the ability to proactively engage with a range of stakeholders

Posted 5 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while also troubleshooting any issues that arise in the data flow.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with Apache Spark and data warehousing solutions.
- Strong understanding of data modeling and database design principles.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have minimum 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based in Hyderabad.
- A 15 years full time education is required.

Posted 5 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Big Data Engineer (AWS-Scala Specialist)
Location: Greater Noida/Hyderabad
Experience: 5-10 years

About the Role: We are seeking a highly skilled Senior Big Data Engineer with deep expertise in Big Data technologies and AWS Cloud Services. The ideal candidate will bring strong hands-on experience in designing, architecting, and implementing scalable data engineering solutions while driving innovation within the team.

Key Responsibilities:
- Design, develop, and optimize Big Data architectures leveraging AWS services for large-scale, complex data processing.
- Build and maintain data pipelines using Spark (Scala) for both structured and unstructured datasets.
- Architect and operationalize data engineering and analytics platforms (AWS preferred; Hortonworks, Cloudera, or MapR experience a plus).
- Implement and manage AWS services including EMR, Glue, Kinesis, DynamoDB, Athena, CloudFormation, API Gateway, and S3.
- Work on real-time streaming solutions using Kafka and AWS Kinesis.
- Support ML model operationalization on AWS (deployment, scheduling, and monitoring).
- Analyze source system data and data flows to ensure high-quality, reliable data delivery for business needs.
- Write highly efficient SQL queries and support data warehouse initiatives using Apache NiFi, Airflow, and Kylo.
- Collaborate with cross-functional teams to provide technical leadership, mentor team members, and strengthen the data engineering capability.
- Troubleshoot and resolve complex technical issues, ensuring scalability, performance, and security of data solutions.

Mandatory Skills & Qualifications:
✅ 5+ years of solid hands-on experience in Big Data technologies (AWS, Scala, Hadoop and Spark mandatory)
✅ Proven expertise in Spark with Scala
✅ Hands-on experience with AWS services (EMR, Glue, Lambda, S3, CloudFormation, API Gateway, Athena, Lake Formation)

Share your resume at Aarushi.Shukla@coforge.com if you have experience with the mandatory skills and are an early joiner.
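
As a small illustration of the Spark-on-EMR batch work this listing calls for, the following Scala sketch reads Parquet from S3, filters it, and writes partitioned output back to S3; bucket names, paths, and columns are placeholders.

```scala
// Batch job sketch for Spark on EMR: S3 Parquet in, partitioned Parquet out.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object S3BatchJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("s3-batch").getOrCreate()

    // Placeholder input bucket
    val events = spark.read.parquet("s3://example-raw/events/")

    val recent = events
      .filter(col("event_date") >= "2024-01-01")
      .withColumn("year", year(col("event_date")))

    recent.write
      .mode("overwrite")
      .partitionBy("year")
      .parquet("s3://example-curated/events/") // placeholder output bucket

    spark.stop()
  }
}
```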

Posted 5 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting and optimizing existing data workflows to enhance performance and reliability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with Apache Spark and data warehousing solutions.
- Strong understanding of data modeling and database design principles.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Experience in programming languages such as Python or Scala for data processing.

Additional Information:
- The candidate should have minimum 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Posted 5 days ago

Apply

0.0 - 4.0 years

25 - 27 Lacs

Bengaluru

Work from Office

Your opportunity: Do you love the transformative impact data can have on a business? Are you motivated to push for results and overcome all obstacles? Then we have a role for you.

What you'll do:
- Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load and curate data from various internal and external systems
- Provide leadership to cross-functional initiatives and projects; influence architecture design and decisions
- Build cross-functional relationships with Data Scientists, Product Managers and Software Engineers to understand data needs and deliver on those needs
- Improve engineering processes and cross-team collaboration; ruthlessly prioritize work to align with company priorities
- Provide thought leadership to grow and evolve the DE function and the implementation of SDLC best practices in building internal-facing data products, by staying up-to-date with industry trends, emerging technologies, and best practices in data engineering

This role requires:
- Experience in BI and Data Warehousing
- Strong experience with dbt, Airflow and Snowflake
- Experience with Apache Iceberg tables
- Experience and knowledge of building data lakes in AWS (i.e. Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling
- Experience mentoring data professionals from junior to senior levels
- Demonstrated success leading cross-functional initiatives
- Passion for data quality, code quality, SLAs and continuous improvement
- Deep understanding of data system architecture
- Deep understanding of ETL/ELT patterns
- Development experience in at least one object-oriented language (Python, R, Scala, etc.)
- Comfort with SQL and related tooling

Bonus points if you have experience with Observability.

Please note that visa sponsorship is not available for this position.

Fostering a diverse, welcoming and inclusive environment is important to us. We work hard to make everyone feel comfortable bringing their best, most authentic selves to work every day. We celebrate our talented Relics' different backgrounds and abilities, and recognize the different paths they took to reach us, including nontraditional ones. Their experiences and perspectives inspire us to make our products and company the best they can be. We're looking for people who feel connected to our mission and values, not just candidates who check off all the boxes.

We believe in empowering all Relics to achieve professional and business success through a flexible workforce model. This model allows us to work in a variety of workplaces that best support our success, including fully office-based, fully remote, or hybrid.

Our hiring process: In compliance with applicable law, all persons hired will be required to verify identity and eligibility to work and to complete employment eligibility verification. Note: Our stewardship of the data of thousands of customers means that a criminal background check is required to join New Relic. We will consider qualified applicants with arrest and conviction records based on individual circumstances and in accordance with applicable law including, but not limited to, the San Francisco Fair Chance Ordinance.

Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. New Relic does not accept unsolicited headhunter and agency resumes, and will not pay fees to any third-party agency or company that does not have a signed agreement with New Relic.

Candidates are evaluated based on qualifications, regardless of race, religion, ethnicity, national origin, sex, sexual orientation, gender expression or identity, age, disability, neurodiversity, veteran or marital status, political viewpoint, or other legally protected characteristics. Review our Applicant Privacy Notice at https://newrelic.com/termsandconditions/applicant-privacy-policy

Posted 5 days ago

Apply

1.0 - 6.0 years

8 - 9 Lacs

Bengaluru

Work from Office

The IN Data Engineering & Analytics (IDEA) team is looking to hire a rock-star Data Engineer to build and manage the largest petabyte-scale data infrastructure in India for Amazon India businesses. IDEA is the central data engineering and analytics team for all A.in businesses. The team's charter includes: 1) providing Unified Data and Analytics Infrastructure (UDAI) for all A.in teams, which includes a central petabyte-scale Redshift data warehouse, analytics infrastructure and frameworks for visualizing and automating the generation of reports & insights, and self-service data applications for ingesting, storing, discovering, processing & querying of the data; 2) providing business-specific data solutions for various business streams like Payments, Finance, and Consumer & Delivery Experience.

The Data Engineer will play a key role as a strong owner of our Data Platform. He/she will own and build data pipelines, automations and solutions to ensure the availability, system efficiency, IMR efficiency, scaling, expansion, operations and compliance of the data platform that serves 200+ IN businesses. The role sits at the heart of the technology & business worlds and provides opportunity for growth, high business impact and working with seasoned business leaders. An ideal candidate will be someone with a sound technical background in managing large data infrastructures, working with petabyte-scale data, and building scalable data solutions/automations while driving operational excellence. An ideal candidate will be a self-starter who can start with a platform requirement and work backwards to conceive and devise the best possible solution, a good communicator while driving customer interactions, a passionate learner of new technology when the need arises, a strong owner of every deliverable in the team, obsessed with customer delight and business impact, and able to get work done in business time.

1. Design/implement automation and manage our massive data infrastructure to scale for the analytics needs of Amazon IN.
2. Build solutions to achieve BAA (Best At Amazon) standards for system efficiency, IMR efficiency, data availability, consistency & compliance.
3. Enable efficient data exploration and experimentation on large datasets on our data platform, and implement data access control mechanisms for stand-alone datasets.
4. Design and implement scalable and cost-effective data infrastructure to enable non-IN (Emerging Marketplaces and WW) use cases on our data platform.
5. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, Amazon and AWS big data technologies.
6. Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.
7. Drive operational excellence strongly within the team and build automation and mechanisms to reduce operations.
8. Enjoy working closely with your peers in a group of very smart and talented engineers.

A day in the life: The India Data Engineering and Analytics (IDEA) team is the central data engineering team for Amazon India. Our vision is to simplify and accelerate data-driven decision making for Amazon India by providing cost-effective, easy & timely access to high-quality data. We achieve this by providing UDAI (Unified Data & Analytics Infrastructure for Amazon India), which serves as a central data platform and provides data engineering infrastructure, ready-to-use datasets and self-service reporting capabilities.

Our core responsibilities towards the India marketplace include: a) providing systems (infrastructure) & workflows that allow ingestion, storage, processing and querying of data; b) building ready-to-use datasets for easy and faster access to the data; c) automating standard business analysis/reporting/dash-boarding; d) empowering business with self-service tools to manage data and generate insights.

Basic qualifications:
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience with an ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc.

Posted 5 days ago

Apply

5.0 - 7.0 years

30 - 35 Lacs

Bengaluru

Work from Office

We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of their gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation, or gender identity, and we encourage applications from all candidates.

Job Description:
3+ yrs with AWS services such as IAM, API Gateway, EC2, and S3
2+ yrs experience creating and deploying containers on Kubernetes
2+ yrs experience with CI/CD pipelines such as Jenkins and GitHub
2+ yrs experience with Snowflake data warehousing
5-7 yrs with the ETL/ELT paradigm
5-7 yrs with big data technologies such as Spark and Kafka
Strong skills in Python, Java, or Scala
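Since this listing pairs Spark with Kafka, a minimal Structured Streaming ingestion sketch in Scala might look like the following. The broker address, topic name, and lake paths are placeholders, and it assumes the spark-sql-kafka connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    // Assumes the spark-sql-kafka-0-10 connector is available; broker,
    // topic, and paths below are illustrative only.
    val spark = SparkSession.builder()
      .appName("clickstream-ingest")
      .getOrCreate()

    // Read the topic as an unbounded DataFrame of key/value records.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "clickstream")
      .load()
      .selectExpr(
        "CAST(key AS STRING) AS key",
        "CAST(value AS STRING) AS payload",
        "timestamp")

    // Append each micro-batch to the lake; the checkpoint makes the job
    // restartable with exactly-once file output.
    events.writeStream
      .format("parquet")
      .option("path", "s3://example-bucket/raw/clickstream/")
      .option("checkpointLocation", "s3://example-bucket/chk/clickstream/")
      .start()
      .awaitTermination()
  }
}
```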

Posted 5 days ago

Apply

3.0 - 8.0 years

30 - 35 Lacs

Bengaluru

Work from Office

At Anko you'll be joining a diverse team who come together to collaborate globally around tech. We are an innovation hub which powers and supports our retail brands. You'll feel the impact of the work you do for our millions of customers and team members every day. Our brands are focused on being customer-led, digitally enabled retailers, providing you with challenging and rewarding work that you will be proud of. Join our team, choose your own path, and work on projects that excite you.

Job Description:
3+ yrs with AWS services such as IAM, API Gateway, EC2, and S3
2+ yrs experience creating and deploying containers on Kubernetes
2+ yrs experience with CI/CD pipelines such as Jenkins and GitHub
2+ yrs experience with Snowflake data warehousing
5-7 yrs with the ETL/ELT paradigm
5-7 yrs with big data technologies such as Spark and Kafka
Strong skills in Python, Java, or Scala

A place you can belong:
We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of their gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation, or gender identity, and we encourage applications from all candidates.
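As a rough illustration of the Snowflake-plus-Spark combination this listing asks for, the sketch below pushes a curated dataset into a Snowflake table via the Snowflake Spark connector. The account URL, credentials, database, and table names are placeholders, and the spark-snowflake connector is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object SnowflakeLoad {
  def main(args: Array[String]): Unit = {
    // All connection values are placeholder assumptions, not real endpoints;
    // credentials are taken from environment variables rather than hardcoded.
    val spark = SparkSession.builder()
      .appName("snowflake-load")
      .getOrCreate()

    val sfOptions = Map(
      "sfURL"       -> "example_account.snowflakecomputing.com",
      "sfUser"      -> sys.env("SNOWFLAKE_USER"),
      "sfPassword"  -> sys.env("SNOWFLAKE_PASSWORD"),
      "sfDatabase"  -> "ANALYTICS",
      "sfSchema"    -> "PUBLIC",
      "sfWarehouse" -> "LOAD_WH"
    )

    // Read a curated Parquet dataset and push it into a Snowflake table.
    val curated = spark.read.parquet("s3://example-bucket/curated/daily_revenue/")
    curated.write
      .format("net.snowflake.spark.snowflake")
      .options(sfOptions)
      .option("dbtable", "DAILY_REVENUE")
      .mode("overwrite")
      .save()
  }
}
```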

Posted 5 days ago

Apply

3.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting and optimizing existing data workflows to enhance performance and reliability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have skills: Experience with Apache Spark and data warehousing solutions.
- Strong understanding of data modeling and database design principles.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Experience in programming languages such as Python or Scala for data processing.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
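To illustrate the kind of Databricks pipeline step this role describes, here is a minimal Scala sketch that lands cleaned data as a Delta table. The mount paths and the customer_id column are illustrative assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object CustomersToDelta {
  def main(args: Array[String]): Unit = {
    // On Databricks the Delta format is available out of the box; elsewhere
    // the delta-spark package would need to be on the classpath.
    val spark = SparkSession.builder()
      .appName("customers-to-delta")
      .getOrCreate()

    // Ingest raw CSV drops from an (illustrative) mounted location.
    val raw = spark.read
      .option("header", "true")
      .csv("/mnt/raw/customers/")

    // Simple data-quality gate: reject rows missing the primary key.
    val clean = raw.filter(col("customer_id").isNotNull)

    // Persist as a Delta table so downstream readers get ACID snapshots.
    clean.write
      .format("delta")
      .mode("overwrite")
      .save("/mnt/curated/customers/")
  }
}
```

Writing to Delta rather than plain Parquet is what gives the pipeline the data-quality and reliability properties the summary emphasizes: atomic overwrites, schema enforcement, and time travel for debugging bad loads.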

Posted 5 days ago

Apply

2.0 - 7.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Data Engineer
Date: 27 Jul 2025
Location: Bangalore, IN
Company: kmartaustr

Brighter Futures Start Here
At Anko you'll be joining a diverse team who come together to collaborate globally around tech. We are an innovation hub which powers and supports our retail brands. You'll feel the impact of the work you do for our millions of customers and team members every day. Our brands are focused on being customer-led, digitally enabled retailers, providing you with challenging and rewarding work that you will be proud of. Join our team, choose your own path, and work on projects that excite you.

Job Description:
3 yrs of experience as a Data Engineer
3+ yrs with AWS services such as IAM, API Gateway, EC2, and S3
2+ yrs experience creating and deploying containers on Kubernetes
2+ yrs experience with CI/CD pipelines such as Jenkins and GitHub
2+ yrs experience with Snowflake data warehousing
5-7 yrs with the ETL/ELT paradigm
5-7 yrs with big data technologies such as Spark and Kafka
Strong skills in Python, Java, or Scala

A place you can belong:
We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of their gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation, or gender identity, and we encourage applications from all candidates.

Posted 5 days ago

Apply