
146 Spark Streaming Jobs - Page 4

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Long Description: Experience and expertise in at least one of the following languages: Java, Scala, Python. Experience and expertise in Spark architecture. Experience in the range of 6-10 years plus. Good problem-solving and analytical skills. Ability to comprehend business requirements and translate them into technical requirements. Good communication and collaborative skills with the team and across vendors. Familiar with the development life cycle, including CI/CD pipelines. Proven experience and interest in supporting existing strategic applications. Familiarity working with agile methodology. Mandatory Skills: Scala programming. Experience: 5-8 Years.
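For context on what a Spark-with-Scala role like this typically involves, here is a minimal, hypothetical batch job in Scala; the input path, column names and output location are illustrative assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersDailySummary {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-daily-summary")
      .getOrCreate()

    // Hypothetical input: CSV of orders with order_id, amount, order_date columns
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/orders")

    // Aggregate revenue per day and write the result out as Parquet
    val daily = orders
      .groupBy(col("order_date"))
      .agg(sum("amount").as("total_revenue"), count("order_id").as("order_count"))

    daily.write.mode("overwrite").parquet("hdfs:///data/curated/orders_daily")

    spark.stop()
  }
}
```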

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 12 Lacs

Bengaluru

Work from Office

Responsibilities: * Design, develop, test & maintain Scala applications using Spark. * Collaborate with cross-functional teams on project delivery. * Optimize application performance through data analysis.

Posted 1 month ago

Apply

6.0 - 11.0 years

11 - 16 Lacs

Noida

Work from Office

Data Engineering - Technical Lead. Paytm is India's leading digital payments and financial services company, focused on driving consumers and merchants to its platform by offering them a variety of payment use cases. Paytm provides consumers with services like utility payments and money transfers, while empowering them to pay via Paytm Payment Instruments (PPI) like Paytm Wallet, Paytm UPI, Paytm Payments Bank Netbanking, Paytm FASTag and Paytm Postpaid - Buy Now, Pay Later. To merchants, Paytm offers acquiring devices like Soundbox, EDC, QR and Payment Gateway, where payment aggregation is done through PPI and other banks' financial instruments. To further enhance merchants' business, Paytm offers merchants commerce services through advertising and the Paytm Mini app store. Operating on this platform leverage, the company then offers credit services such as merchant loans, personal loans and BNPL, sourced by its financial partners. About the Role: This position requires someone to work on complex technical projects and work closely with peers in an innovative and fast-paced environment. For this role, we require someone with a strong product design sense and specialization in Hadoop and Spark technologies. Requirements: Minimum 6+ years of experience in Big Data technologies. The position: Grow our analytics capabilities with faster, more reliable tools, handling petabytes of data every day. Brainstorm and create new platforms that can help in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability. Make changes to our platform while diagnosing any problems across the entire technical stack. Design and develop a real-time events pipeline for data ingestion for real-time dashboarding. Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake. Design and implement new components and various emerging technologies in the Hadoop ecosystem, and successfully execute various projects. Be a brand ambassador for Paytm - Stay Hungry, Stay Humble, Stay Relevant! Skills that will help you succeed in this role: Strong hands-on experience with Hadoop, MapReduce, Hive, Spark, PySpark etc. Excellent programming/debugging skills in Python/Java/Scala. Experience with any scripting language such as Python, Bash etc. Good to have experience working with NoSQL databases like HBase, Cassandra. Hands-on programming experience with multithreaded applications. Good to have experience in databases, SQL, and messaging queues like Kafka. Good to have experience in developing streaming applications, e.g. Spark Streaming, Flink, Storm, etc. Good to have experience with AWS and cloud technologies such as S3. Experience with caching architectures like Redis etc. Why join us: Because you get an opportunity to make a difference, and have a great time doing that. You are challenged and encouraged here to do stuff that is meaningful for you and for those we serve. You should work with us if you think seriously about what technology can do for people. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be. Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants - and we are committed to it. India's largest digital lending story is brewing here.
It's your opportunity to be a part of the story!
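The role calls for a real-time events pipeline feeding dashboards. As a rough sketch of that pattern, the following Scala Structured Streaming job reads a hypothetical Kafka topic and lands parsed events for downstream dashboarding; the broker address, topic name, event schema and storage paths are all assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object PaymentEventsIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("payment-events-ingest").getOrCreate()
    import spark.implicits._

    // Hypothetical JSON payload schema for a payment event
    val schema = new StructType()
      .add("event_id", StringType)
      .add("merchant_id", StringType)
      .add("amount", DoubleType)
      .add("event_time", TimestampType)

    // Read the raw event stream from Kafka
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "payment-events")
      .load()

    // Parse the Kafka value payload into typed columns
    val events = raw
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")

    // Land parsed events in the data lake for downstream dashboards
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://datalake/events/payments")
      .option("checkpointLocation", "s3a://datalake/checkpoints/payments")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```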

Posted 1 month ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Hyderabad

Work from Office

Job Summary Synechron is seeking a skilled PySpark Data Engineer to design, develop, and optimize data processing solutions leveraging modern big data technologies. In this role, you will lead efforts to build scalable data pipelines, support data integration initiatives, and work closely with cross-functional teams to enable data-driven decision-making. Your expertise will contribute to enhancing business insights and operational efficiency, positioning Synechron as a pioneer in adopting emerging data technologies. Software Requirements Required Software Skills: PySpark (Apache Spark with Python) experience in developing data pipelines Apache Spark ecosystem knowledge Python programming (versions 3.7 or higher) SQL and relational database management systems (e.g., PostgreSQL, MySQL) Cloud platforms (preferably AWS or Azure) Version control: GIT Data workflow orchestration tools like Apache Airflow Data management tools: SQL Developer or equivalent Preferred Software Skills: Experience with Hadoop ecosystem components Knowledge of containerization (Docker, Kubernetes) Familiarity with data lake and data warehouse solutions (e.g., AWS S3, Redshift, Snowflake) Monitoring and logging tools (e.g., Prometheus, Grafana) Overall Responsibilities Lead the design and implementation of large-scale data processing solutions using PySpark and related technologies Collaborate with data scientists, analysts, and business teams to understand data requirements and deliver scalable pipelines Mentor junior team members on best practices in data engineering and emerging technologies Evaluate new tools and methodologies to optimize data workflows and improve data quality Ensure data solutions are robust, scalable, and aligned with organizational data governance policies Stay informed on industry trends and technological advancements in big data and analytics Support production environment stability and performance tuning of data pipelines Drive innovative approaches to extract value from large and complex datasets Technical Skills (By Category) Programming Languages: Required: Python (PySpark experience minimum 2 years) Preferred: Scala (for Spark), SQL, Bash scripting Databases/Data Management: Relational databases (PostgreSQL, MySQL) Distributed storage solutions (HDFS, cloud object storage like S3 or Azure Blob Storage) Data warehousing platforms (Snowflake, Redshift preferred) Cloud Technologies: Required: Experience deploying and managing data solutions on AWS or Azure Preferred: Knowledge of cloud-native services like EMR, Data Factory, or Azure Data Lake Frameworks and Libraries: Apache Spark (PySpark) Airflow or similar orchestration tools Data processing frameworks (Kafka, Spark Streaming preferred) Development Tools and Methodologies: Version control with GIT Agile management tools: Jira, Confluence Continuous integration/deployment pipelines (Jenkins, GitLab CI) Security Protocols: Understanding of data security, access controls, and GDPR compliance in cloud environments Experience Requirements Minimum of 5+ years in data engineering, with hands-on PySpark experience Proven track record of developing, deploying, and maintaining scalable data pipelines Experience working with data lakes, data warehouses, and cloud data services Demonstrated leadership in projects involving big data technologies Experience mentoring junior team members and collaborating across teams Prior experience in financial, healthcare, or retail sectors is beneficial but not mandatory Day-to-Day Activities Develop, optimize, 
and deploy big data pipelines using PySpark and related tools Collaborate with data analysts, data scientists, and business teams to define data requirements Conduct code reviews, troubleshoot pipeline issues, and optimize performance Mentor junior team members on best practices and emerging technologies Design solutions for data ingestion, transformation, and storage Evaluate new tools and frameworks for continuous improvement Maintain documentation, monitor system health, and ensure security compliance Participate in sprint planning, daily stand-ups, and project retrospectives to align priorities Qualifications Bachelors or Masters degree in Computer Science, Information Technology, or related discipline Relevant industry certifications (e.g., AWS Data Analytics, GCP Professional Data Engineer) preferred Proven experience working with PySpark and big data ecosystems Strong understanding of software development lifecycle and data governance standards Commitment to continuous learning and professional development in data engineering technologies Professional Competencies Analytical mindset and problem-solving acumen for complex data challenges Effective leadership and team management skills Excellent communication skills tailored to technical and non-technical audiences Adaptability in fast-evolving technological landscapes Strong organizational skills to prioritize tasks and manage multiple projects Innovation-driven with a passion for leveraging emerging data technologies

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 12 Lacs

Chennai

Work from Office

Minimum 3+ years as a Data Engineer (GenAI platform). ETL/ELT workflows using AWS, Azure Databricks, Airflow, Azure Data Factory. Experience in Azure Databricks, Snowflake, Airflow, Python, SQL, Spark, Spark Streaming, AWS EKS, CI/CD (Jenkins), Elasticsearch, SOLR, OpenSearch, Vespa.

Posted 1 month ago

Apply

6.0 - 11.0 years

19 - 27 Lacs

Haryana

Work from Office

About Company: Founded in 2011, ReNew is one of the largest renewable energy companies globally, with a leadership position in India. Listed on Nasdaq under the ticker RNW, ReNew develops, builds, owns, and operates utility-scale wind energy projects, utility-scale solar energy projects, utility-scale firm power projects, and distributed solar energy projects. In addition to being a major independent power producer in India, ReNew is evolving to become an end-to-end decarbonization partner providing solutions in a just and inclusive manner in the areas of clean energy, green hydrogen, value-added energy offerings through digitalisation, storage, and carbon markets that increasingly are integral to addressing climate change. With a total capacity of more than 13.4 GW (including projects in pipeline), ReNew's solar and wind energy projects are spread across 150+ sites, with a presence spanning 18 states in India, contributing to 1.9% of India's power capacity. Consequently, this has helped to avoid 0.5% of India's total carbon emissions and 1.1% of India's total power sector emissions. In the over 10 years of its operation, ReNew has generated almost 1.3 lakh jobs, directly and indirectly. ReNew has achieved market leadership in the Indian renewable energy industry against the backdrop of the Government of India's policies to promote growth of this sector. ReNew's current group of stockholders contains several marquee investors including CPP Investments, Abu Dhabi Investment Authority, Goldman Sachs, GEF SACEF and JERA. Its mission is to play a pivotal role in meeting India's growing energy needs in an efficient, sustainable, and socially responsible manner. ReNew stands committed to providing clean, safe, affordable, and sustainable energy for all and has been at the forefront of leading climate action in India. Job Description - Key responsibilities: 1. Understand, implement, and automate ETL pipelines with better industry standards 2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc. 3. Develop, integrate, test, and maintain existing and new applications 4. Design and create data pipelines (data lake / data warehouses) for real-world energy analytical solutions 5. Expert-level proficiency in Python (preferred) for automating everyday tasks 6. Strong understanding and experience in distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc. 7. Limited experience in using other leading cloud platforms, preferably Azure 8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc. 9. Ability to work in a team in an agile setting, familiarity with JIRA and a clear understanding of how Git works 10. Must have 5-7 years of experience
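Responsibility 6 above names Spark Streaming, Kafka and Azure Databricks. As a hedged sketch of what such a pipeline might look like for plant telemetry, the job below computes a windowed average per site and writes to Delta; the topic, JSON fields, window size and Databricks mount paths are illustrative assumptions, and the Delta sink assumes Delta Lake is available (as it is on Databricks).

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TurbineTelemetryAggregates {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("turbine-telemetry-aggregates").getOrCreate()
    import spark.implicits._

    // Hypothetical telemetry stream: JSON messages with site_id, power_kw, event_time
    val telemetry = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "turbine-telemetry")
      .load()
      .selectExpr("CAST(value AS STRING) AS json")
      .select(
        get_json_object($"json", "$.site_id").as("site_id"),
        get_json_object($"json", "$.power_kw").cast("double").as("power_kw"),
        get_json_object($"json", "$.event_time").cast("timestamp").as("event_time"))

    // 15-minute tumbling-window average per site, tolerating 10 minutes of late data
    val perSite = telemetry
      .withWatermark("event_time", "10 minutes")
      .groupBy(window($"event_time", "15 minutes"), $"site_id")
      .agg(avg("power_kw").as("avg_power_kw"))

    perSite.writeStream
      .format("delta")
      .option("checkpointLocation", "/mnt/checkpoints/turbine_agg")
      .outputMode("append")
      .start("/mnt/curated/turbine_agg")
      .awaitTermination()
  }
}
```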

Posted 2 months ago

Apply

13.0 - 20.0 years

30 - 45 Lacs

Pune

Hybrid

Hi, Wishes from GSN!!! Pleasure connecting with you!!! We have been in Corporate Search Services, identifying and bringing in stellar, talented professionals for our reputed IT / non-IT clients in India, and have been successfully meeting the various needs of our clients for the last 20 years. At present, GSN is hiring a DATA ENGINEERING - Solution Architect for one of our leading MNC clients. PFB the details for your better understanding: 1. WORK LOCATION: PUNE 2. Job Role: DATA ENGINEERING - Solution Architect 3. EXPERIENCE: 13+ yrs 4. CTC Range: Rs. 35 LPA to Rs. 50 LPA 5. Work Type: WFO Hybrid ****** Looking for SHORT JOINERS ****** Job Description: Who are we looking for: Architectural Vision & Strategy: Define and articulate the technical vision, strategy and roadmap for Big Data, data streaming, and NoSQL solutions, aligning with overall enterprise architecture and business goals. Required Skills: 13+ years of progressive experience in software development, data engineering and solution architecture roles, with a strong focus on large-scale distributed systems. Expertise in Big Data Technologies: Apache Spark: Deep expertise in Spark architecture, Spark SQL, Spark Streaming, performance tuning, and optimization techniques. Experience with data processing paradigms (batch and real-time). Hadoop Ecosystem: Strong understanding of HDFS, YARN, Hive and other related Hadoop components. Real-time Data Streaming: Apache Kafka: Expert-level knowledge of Kafka architecture, topics, partitions, producers, consumers, Kafka Streams, KSQL, and best practices for high-throughput, low-latency data pipelines. NoSQL Databases: In-depth experience with Couchbase (or MongoDB or Cassandra), including data modeling, indexing, querying (N1QL), replication, scaling, and operational best practices. API Design & Development: Extensive experience in designing and implementing robust, scalable and secure APIs (RESTful, GraphQL) for data access and integration. Programming & Code Review: Hands-on coding proficiency in at least one relevant language (Python, Scala, Java), with a preference for Python and/or Scala for data engineering tasks. Proven experience in leading and performing code reviews, ensuring code quality, performance, and adherence to architectural guidelines. Cloud Platforms: Extensive experience in designing and implementing solutions on at least one major cloud platform (AWS, Azure, GCP), leveraging their Big Data, streaming, and compute services. Database Fundamentals: Solid understanding of relational database concepts, SQL, and data warehousing principles. System Design & Architecture Patterns: Deep knowledge of various architectural patterns (e.g., Microservices, Event-Driven Architecture, Lambda/Kappa Architecture, Data Mesh) and their application in data solutions. DevOps & CI/CD: Familiarity with DevOps principles, CI/CD pipelines, infrastructure as code (IaC) and automated deployment strategies for data platforms. ****** Looking for SHORT JOINERS ****** If interested, don't hesitate to call NAK @ 9840035825 / 9244912300 for an IMMEDIATE response. Best, ANANTH | GSN | Google review: https://g.co/kgs/UAsF9W

Posted 2 months ago

Apply

10.0 - 12.0 years

9 - 13 Lacs

Chennai

Work from Office

Job Title: Data Architect. Experience: 10-12 Years. Location: Chennai. 10-12 years' experience as a Data Architect. Strong expertise in streaming data technologies like Apache Kafka, Flink, Spark Streaming, or Kinesis. Proficiency in programming languages such as Python, Java, Scala, or Go. Experience with big data tools like Hadoop, Hive, and data warehouses such as Snowflake, Redshift, Databricks, Microsoft Fabric. Proficiency in database technologies (SQL, NoSQL, PostgreSQL, MongoDB, DynamoDB, YugabyteDB). Should be flexible to work as an individual contributor.

Posted 2 months ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Job Title: EMR_Spark SME. Experience: 5-10 Years. Location: Bangalore. Technical Skills: 5+ years of experience in big data technologies with hands-on expertise in AWS EMR and Apache Spark. Proficiency in Spark Core, Spark SQL, and Spark Streaming for large-scale data processing. Strong experience with data formats (Parquet, Avro, JSON) and data storage solutions (Amazon S3, HDFS). Solid understanding of distributed systems architecture and cluster resource management (YARN). Familiarity with AWS services (S3, IAM, Lambda, Glue, Redshift, Athena). Experience in scripting and programming languages such as Python, Scala, and Java. Knowledge of containerization and orchestration (Docker, Kubernetes) is a plus. Architect and develop scalable data processing solutions using AWS EMR and Apache Spark. Optimize and tune Spark jobs for performance and cost efficiency on EMR clusters. Monitor, troubleshoot, and resolve issues related to EMR and Spark workloads. Implement best practices for cluster management, data partitioning, and job execution. Collaborate with data engineering and analytics teams to integrate Spark solutions with broader data ecosystems (S3, RDS, Redshift, Glue, etc.). Automate deployments and cluster management using infrastructure-as-code tools like CloudFormation, Terraform, and CI/CD pipelines. Ensure data security and governance in EMR and Spark environments in compliance with company policies. Provide technical leadership and mentorship to junior engineers and data analysts. Stay current with new AWS EMR features and Spark versions to recommend improvements and upgrades. Requirements and Skills: Performance tuning and optimization of Spark jobs. Problem-solving skills with the ability to diagnose and resolve complex technical issues. Strong experience with version control systems (Git) and CI/CD pipelines. Excellent communication skills to explain technical concepts to both technical and non-technical audiences. Qualification: Education qualification: B.Tech, BE, BCA, MCA, M.Tech or equivalent technical degree from a reputed college. Certifications: AWS Certified Solutions Architect (Associate/Professional), AWS Certified Data Analytics (Specialty).
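Since the role centres on tuning Spark jobs for performance and cost on EMR, here is a hedged sketch of job-level configuration and partition-aware output of the kind it refers to; the specific settings and S3 paths are illustrative assumptions that would need to be sized to the actual cluster and data volumes.

```scala
import org.apache.spark.sql.SparkSession

object EmrJobTuningSketch {
  def main(args: Array[String]): Unit = {
    // Illustrative settings only; real values depend on the EMR cluster and data size
    val spark = SparkSession.builder()
      .appName("emr-spark-tuning-sketch")
      .config("spark.sql.shuffle.partitions", "400")        // size shuffles to the cluster
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .config("spark.dynamicAllocation.enabled", "true")    // let YARN scale executors
      .config("spark.sql.adaptive.enabled", "true")         // adaptive query execution
      .getOrCreate()

    val events = spark.read.parquet("s3://my-bucket/raw/events/")

    // Partition output by date so downstream queries can prune files on S3
    events.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://my-bucket/curated/events/")

    spark.stop()
  }
}
```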

Posted 2 months ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Pune

Work from Office

Java + Spark. Primary skill - Apache Spark. Secondary skill - Java. Strong knowledge of the Apache Spark framework: Core Spark, Spark DataFrames, Spark Streaming. Hands-on experience in at least one of the programming languages (Java). Good understanding of distributed programming concepts. Experience in optimizing Spark DAGs and Hive queries on Tez. Experience using tools like Git, Autosys, Bitbucket, Jira. Ability to apply DWH principles within a Hadoop environment and NoSQL databases. Mandatory Skills: Apache Spark. Experience: 5-8 Years.
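One common way to "optimize a Spark DAG", as this posting asks, is to broadcast a small dimension table so the join avoids shuffling the large side. The posting is Java-focused, but the sketch below uses the equivalent Scala DataFrame API for brevity; the table names and paths are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object DagOptimizationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("dag-optimization-sketch").getOrCreate()

    val transactions = spark.read.parquet("hdfs:///data/transactions") // large fact table
    val branches     = spark.read.parquet("hdfs:///data/branches")     // small dimension table

    // Broadcasting the small dimension avoids a full shuffle of the large table,
    // which removes an expensive exchange stage from the job's DAG
    val enriched = transactions.join(broadcast(branches), Seq("branch_id"))

    // Cache only if the enriched data is reused by several downstream actions
    enriched.cache()
    println(enriched.count())

    spark.stop()
  }
}
```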

Posted 2 months ago

Apply

6.0 - 9.0 years

25 - 32 Lacs

Bangalore/Bengaluru

Work from Office

Full time with a top German MNC, location Bangalore - experience in Scala is a must. Job Overview: To work on development, monitoring and maintenance of data pipelines across clusters. Primary responsibilities: Develop, monitor and maintain data pipelines for various plants. Create and maintain optimal data pipeline architecture. Assemble large, complex data sets that meet functional / non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability. Work with stakeholders including the data officers and stewards to assist with data-related technical issues and support their data infrastructure needs. Work on incidents highlighted by the data officers: incident diagnosis, routing, evaluation & resolution. Analyze the root cause of incidents. Create incident closure reports. Qualifications: Bachelor's degree in Computer Science, Electronics & Communication Engineering, a related technical field, or equivalent practical experience. 6-8 years of experience in Spark and Scala software development. Experience in large-scale software development. Excellent software engineering skills (i.e., data structures, algorithms, software design). Excellent problem-solving, investigative, and troubleshooting skills. Experience in Kafka is mandatory. Additional Information - Skills: Self-starter and empowered professional with strong execution and project management capabilities. Ability to collaborate effectively, with well-developed interpersonal relationships with all levels in the organization and outside contacts. Outstanding written and verbal communication skills. High collaboration and perseverance to drive performance and change. Additional information - Key Competencies: Distributed computing systems. Experience with CI/CD tools such as Jenkins or GitHub Actions. Experience with Python programming. Working knowledge of Docker & Kubernetes. Experience in developing data pipelines using Spark & Scala. Experience in debugging pipeline issues. Experience in writing Python and shell scripts. In-depth knowledge of SQL and other database solutions. A strong understanding of Apache Hadoop-based analytics. Hands-on experience with IntelliJ, GitHub/Bitbucket, HUE.
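Because the role emphasises monitoring and maintaining pipelines, a small sketch of Spark's StreamingQueryListener is shown below as one way to surface per-batch health metrics; printing to stdout here is a stand-in for whatever metrics or alerting system the team actually uses.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

object PipelineMonitoringSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("pipeline-monitoring-sketch").getOrCreate()

    // Log basic health metrics for every micro-batch of every streaming query
    spark.streams.addListener(new StreamingQueryListener {
      override def onQueryStarted(event: QueryStartedEvent): Unit =
        println(s"Query started: ${event.name}")

      override def onQueryProgress(event: QueryProgressEvent): Unit =
        println(s"Batch ${event.progress.batchId}: " +
          s"${event.progress.numInputRows} rows, " +
          s"${event.progress.inputRowsPerSecond} rows/s")

      override def onQueryTerminated(event: QueryTerminatedEvent): Unit =
        println(s"Query terminated: ${event.id}, error = ${event.exception}")
    })

    // ... start streaming queries as usual; the listener observes all of them
  }
}
```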

Posted 2 months ago

Apply

4.0 - 8.0 years

0 Lacs

maharashtra

On-site

The opportunity available at EY is for a Bigdata Engineer based in Pune, requiring a minimum of 4 years of experience. As a key member of the technical team, you will collaborate with Engineers, Data Scientists, and Data Users in an Agile environment. Your responsibilities will include software design, Scala & Spark development, automated testing, promoting development standards, production support, troubleshooting, and liaising with BAs to ensure correct interpretation and implementation of requirements. You will be involved in implementing tools and processes, handling performance, scale, availability, accuracy, and monitoring. Additionally, you will participate in regular planning and status meetings, provide input in Sprint reviews and retrospectives, and contribute to system architecture and design. Peer code reviews will also be a part of your responsibilities. Key technical skills required for this role include Scala or Java development and design, experience with technologies such as Apache Hadoop, Apache Spark, Spark streaming, YARN, Kafka, Hive, Python, and ETL frameworks. Hands-on experience in building data pipelines using Hadoop components and familiarity with version control tools, automated deployment tools, and requirement management is essential. Knowledge of big data modelling techniques and debugging code issues are also necessary. Desired qualifications include experience with Elastic search, scheduling tools like Airflow and Control-M, understanding of Cloud design patterns, exposure to DevOps & Agile Project methodology, and Hive QL development. The ideal candidate will possess strong communication skills, the ability to collaborate effectively, mentor developers, and lead technical initiatives. A Bachelors or Masters degree in Computer Science, Engineering, or a related field is required. EY is looking for individuals who can work collaboratively across teams, solve complex problems, and deliver practical solutions while adhering to commercial and legal requirements. The organization values agility, curiosity, mindfulness, positive energy, adaptability, and creativity in its employees. EY offers a personalized Career Journey, ample learning opportunities, and resources to help individuals understand their roles and opportunities better. EY is committed to being an inclusive employer that focuses on achieving a balance between delivering excellent client service and supporting the career growth and wellbeing of its employees. As a global leader in assurance, tax, transaction, and advisory services, EY believes in providing training, opportunities, and creative freedom to its employees to help build a better working world. The organization encourages personal and professional growth, offering motivating and fulfilling experiences to help individuals reach their full potential.,

Posted 2 months ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You should have experience in understanding and translating data, analytic requirements, and functional needs into technical requirements while collaborating with global customers. Your responsibilities will include designing cloud-native data architectures to support scalable, real-time, and batch processing. You will be required to build and maintain data pipelines for large-scale data management in alignment with data strategy and processing standards. Additionally, you will define strategies for data modeling, data integration, and metadata management. Your role will also involve having strong experience in database, data warehouse, and data lake design and architecture. You should be proficient in leveraging cloud platforms such as AWS, Azure, or GCP for data storage, compute, and analytics services. Experience in database programming using various SQL flavors is essential. Moreover, you will need to implement data governance frameworks encompassing data quality, lineage, and cataloging. Collaboration with cross-functional teams, including business analysts, data engineers, and DevOps teams, will be a key aspect of this role. Familiarity with the Big Data ecosystem, whether on-premises (Hortonworks/MapR) or in the Cloud, is required. You should be able to evaluate emerging cloud technologies and suggest enhancements to the data architecture. Proficiency in any orchestration tool like Airflow or Oozie for scheduling pipelines is preferred. Hands-on experience in utilizing tools such as Spark Streaming, Kafka, Databricks, and Snowflake is necessary. You should be adept at working in an Agile/Scrum development process and optimizing data systems for cost efficiency, performance, and scalability.,

Posted 2 months ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Chennai

Work from Office

We are seeking a highly experienced Big Data Lead with strong expertise in Apache Spark, Spark SQL, and Spark Streaming. The ideal candidate should have extensive hands-on experience with the Hadoop ecosystem, a solid grasp of multiple programming languages including Java, Scala, and Python, and a proven ability to design and implement data processing pipelines in distributed environments. Roles & Responsibilities: Lead design and development of scalable data processing pipelines using Apache Spark, Spark SQL, and Spark Streaming. Work with Java, Scala, and Python to implement big data solutions. Design efficient data ingestion pipelines leveraging Sqoop, Kafka, HDFS, and MapReduce. Optimize and troubleshoot Spark jobs for performance and reliability. Interface with relational databases (Oracle, MySQL, SQL Server) and NoSQL databases. Work within Unix/Linux environments, employing tools like Git, Jenkins, and CI/CD pipelines. Collaborate with cross-functional teams to ensure delivery of robust big data solutions. Ensure code quality through unit testing, BDD/TDD practices, and automated testing frameworks. Competencies Required: 6+ years of hands-on experience in Apache Spark, Spark SQL, and Spark Streaming. Strong proficiency in Java, Scala, and Python as applied to Spark applications. In-depth experience with the Hadoop ecosystem: HDFS, MapReduce, Hive, HBase, Sqoop, and Kafka. Proficiency in working with both relational and NoSQL databases. Hands-on experience with build and automation tools like Maven, Gradle, Jenkins, and version control systems like Git. Experience working in Linux/Unix environments and developing RESTful services. Familiarity with modern testing methodologies including unit testing, BDD, and TDD.
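For the ingestion-pipeline responsibilities above (Kafka, HDFS, Hive), a hedged sketch of a Structured Streaming job that appends each micro-batch into a Hive table is shown below; the topic, table and checkpoint locations are hypothetical, and the target Hive table is assumed to already exist with a matching schema.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

object KafkaToHiveIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hive-ingest")
      .enableHiveSupport()
      .getOrCreate()

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "clickstream")
      .load()
      .selectExpr("CAST(key AS STRING) AS user_id", "CAST(value AS STRING) AS payload", "timestamp")

    // foreachBatch lets each micro-batch be appended to a Hive-managed table
    val query = raw.writeStream
      .foreachBatch { (batch: DataFrame, batchId: Long) =>
        // Assumes analytics.clickstream_raw exists with columns (user_id, payload, timestamp)
        batch.write.mode("append").insertInto("analytics.clickstream_raw")
      }
      .option("checkpointLocation", "hdfs:///checkpoints/clickstream_raw")
      .start()

    query.awaitTermination()
  }
}
```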

Posted 2 months ago

Apply

5.0 - 9.0 years

25 - 32 Lacs

Pune, Chennai, Coimbatore

Work from Office

Job Description : We are seeking an experienced Data Engineer with expertise in Big Data technologies and a strong background in distributed computing. The ideal candidate will have a proven track record of designing, implementing, and optimizing scalable data solutions using tools like Apache Spark, Python, and various cloud-based platforms. Key Responsibilities : Experience : 5-12 years of hands-on experience in Big Data and related technologies. Distributed Computing Expertise : Deep understanding of distributed computing principles and their application in real-world data systems. Apache Spark Mastery : Extensive experience in leveraging Apache Spark for building large-scale data processing systems. Python Programming : Strong hands-on programming skills in Python, with a focus on data engineering and automation. Big Data Ecosystem Knowledge : Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop for managing and processing large datasets. Stream Processing Systems : Proven experience in building and optimizing stream-processing systems using technologies like Apache Storm or Spark Streaming. Messaging Systems : Experience with messaging and event streaming technologies, such as Kafka or RabbitMQ , for handling real-time data. Big Data Querying : Solid understanding of Big Data querying tools such as Hive and Impala for querying distributed data sets. Data Integration : Experience in integrating data from diverse sources like RDBMS (e.g., SQL Server, Oracle), ERP systems , and flat files . SQL Expertise : Strong knowledge of SQL, including advanced queries, joins, stored procedures, and relational schemas. NoSQL Databases : Hands-on experience with NoSQL databases like HBase , Cassandra , and MongoDB for handling unstructured data. ETL Frameworks : Familiarity with various ETL techniques and frameworks for efficient data transformation and integration. Performance Optimization : Expertise in performance tuning and optimization of Spark jobs to handle large-scale datasets effectively. Cloud Data Services : Experience working with cloud-based data services such as AWS , Azure , Databricks , or GCP . Team Leadership : Proven ability to lead and mentor teams effectively, ensuring collaboration, growth, and project success. Big Data Solutions : Strong experience in designing and implementing comprehensive Big Data solutions that are scalable, efficient, and reliable. Agile Methodology : Practical experience working within Agile frameworks to deliver high-quality data solutions in a fast-paced environment. Please note: The role is having some F2F events so please do not apply from other locations, please be assured your resumes will be highly confidential, and will not be taken ahead without your consent.

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Noida, Hyderabad, Greater Noida

Work from Office

Streaming data - technical skills requirements: Experience: 5+ years. Solid hands-on and solution-architecting experience in Big Data technologies (AWS preferred). Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, EMR. Hands-on experience in a programming language like Scala with Spark. Good command and working experience of Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases. Hands-on working experience with any of the data engineering analytics platforms (Hortonworks / Cloudera / MapR / AWS), AWS preferred. Hands-on experience with data ingestion: Apache NiFi, Apache Airflow, Sqoop, and Oozie. Hands-on working experience of data processing at scale with event-driven systems and message queues (Kafka / Flink / Spark Streaming). Hands-on working experience with AWS services like EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, Lake Formation. Hands-on working experience with AWS Athena. Experience building data pipelines for structured/unstructured, real-time/batch, synchronous/asynchronous events using MQ, Kafka, and stream processing. Mandatory Skills: Spark, Scala, AWS, Hadoop.

Posted 2 months ago

Apply

5.0 - 10.0 years

6 - 15 Lacs

Bengaluru

Work from Office

Greetings!!! If you're interested please apply by clicking below link https://bloomenergy.wd1.myworkdayjobs.com/BloomEnergyCareers/job/Bangalore-Karnataka/Staff-Engineer---Streaming-Analytics_JR-19447 Role & responsibilities Our team at Bloom Energy embraces the unprecedented opportunity to change the way companies utilize energy. Our technology empowers businesses and communities to responsibly take charge of their energy. Our energy platform has three key value propositions: resiliency, sustainability, and predictability. We provide infrastructure that is flexible for the evolving net zero ecosystem. We have deployed more than 30,000 fuel cell modules since our first commercial shipments in 2009, sending energy platforms to data centers, hospitals, manufacturing facilities, biotechnology facilities, major retail stores, financial institutions, telecom facilities, utilities, and other critical infrastructure customers around the world. Our mission is to make clean, reliable energy affordable globally. We never stop striving to improve our technology, to expand and improve our company performance, and to develop and support the many talented employees that serve our mission! Role & responsibilities: Assist in developing distributed learning algorithms Responsible for building real-time analytics on cloud and edge devices Responsible for developing scalable data pipelines and analytics tools Solve challenging data and architectural problems using cutting edge technology Cross functional collaboration with data scientists / data engineering / firmware controls teams Skills and Experience: Strong Java/ Scala programming/debugging ability and clear design patterns understanding, Python is a bonus Understanding of Kafka/ Spark / Flink / Hadoop / HBase etc. internals (Hands on experience in one or more preferred) Implementing data wrangling, transformation and processing solutions, demonstrated experience of working with large datasets Knowhow of cloud computing platforms like AWS/GCP/Azure beneficial Exposure to data lakes and data warehousing concepts, SQL, NoSQL databases Working on REST APIs, gRPC are good to have skills Ability to adapt to new technology, concept, approaches, and environment faster Problem-solving and analytical skills Must have a learning attitude and improvement mindset

Posted 2 months ago

Apply

1.0 - 3.0 years

15 - 30 Lacs

Bengaluru

Work from Office

About the Role: Does digging deep for data and turning it into useful, impactful insights get you excited? Then you could be our next SDE II, Data - Real-Time Streaming. In this role, you'll oversee your entire team's work, ensuring that each individual is working towards achieving their personal goals and Meesho's organisational goals. Moreover, you'll keep an eye on all engineering projects and ensure the team is not straying from the right track. You'll also be tasked with directing programming activities, evaluating system performance, and designing new programs and features for smooth functioning. What you will do: Build a platform for ingesting and processing multiple terabytes of data daily. Curate, build and transform raw data into scalable information. Create prototypes and proofs-of-concept for iterative development. Reduce technical debt with quality coding. Keep a close eye on various projects and monitor the progress. Carry on smooth collaborations with the sales team and engineering teams. Provide management mentorship which sets the tone for holistic growth. Ensure everyone is on the same page and taking ownership of the project. What you will need: Bachelor's/Master's degree in Computer Science. At least 1 to 3 years of professional experience. Exceptional coding skills using Java, Scala, Python. Working knowledge of Redis, MySQL and messaging systems like Kafka. Knowledge of RxJava, Java Spring Boot, microservices architecture. Hands-on experience with distributed systems architecture dealing with high throughput. Experience in building streaming and real-time solutions using Apache Flink/Spark Streaming/Samza. Familiarity with software engineering best practices across all stages of software development. Expertise in data system internals. Strong problem-solving and analytical skills. Familiarity with Big Data systems (Spark/EMR, Hive/Impala, Delta Lake, Presto, Airflow, data lineage) is an advantage. Familiarity with data modeling, end-to-end data pipelining, OLAP data cubes and BI tools is a plus. Experience as a contributor/committer to the big data stack is a plus.
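Streaming ingestion at this scale usually has to cope with duplicate deliveries from at-least-once sources. As an illustrative sketch only (the topic, columns and paths are assumptions, not Meesho's actual design), the job below deduplicates a hypothetical order-event stream with a watermark before landing it.

```scala
import org.apache.spark.sql.SparkSession

object OrderEventsDedup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("order-events-dedup").getOrCreate()

    // Assume the Kafka key carries the order id and value carries the raw payload
    val orders = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "order-events")
      .load()
      .selectExpr(
        "CAST(key AS STRING) AS order_id",
        "CAST(value AS STRING) AS payload",
        "timestamp AS event_time")

    // At-least-once sources can deliver duplicates; drop repeats of the same
    // order_id seen within the watermark window before loading downstream stores
    val deduped = orders
      .withWatermark("event_time", "30 minutes")
      .dropDuplicates("order_id", "event_time")

    deduped.writeStream
      .format("parquet")
      .option("path", "s3a://warehouse/bronze/orders")
      .option("checkpointLocation", "s3a://warehouse/checkpoints/orders_dedup")
      .start()
      .awaitTermination()
  }
}
```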

Posted 2 months ago

Apply

5.0 - 6.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Responsibilities: * Design, develop, test, deploy big data solutions using PySpark, Java, Scala, AWS. * Implement CI/CD pipelines with Docker, Kubernetes, SQL, NoSQL databases.

Posted 2 months ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad, Bengaluru

Work from Office

Job Summary Synechron is seeking an experienced Big Data Developer with strong expertise in Spark, Scala, and Python to lead and contribute to large-scale data projects. The role involves designing, developing, and implementing robust data solutions that leverage emerging technologies to enhance business insights and operational efficiency. The successful candidate will play a key role in driving innovation, mentoring team members, and ensuring the delivery of high-quality data products aligned with organizational objectives. Software Requirements Required: Apache Spark (latest stable version) Scala (version 2.12 or higher) Python (version 3.6 or higher) Big Data tools and frameworks supporting Spark and Scala Preferred: Cloud platforms such as AWS, Azure, or GCP for data deployment Data processing or orchestration tools like Kafka, Hadoop, or Airflow Data visualization tools for data insights Overall Responsibilities Lead the development and implementation of data pipelines and solutions using Spark, Scala, and Python Collaborate with business and technology teams to understand data requirements and translate them into scalable solutions Mentor and guide junior team members on best practices in big data development Evaluate and recommend new technologies and tools to improve data processing and quality Stay informed about industry trends and emerging technologies relevant to big data and analytics Ensure timely delivery of data projects with high standards of quality, performance, and security Lead technical reviews, code reviews, and provide inputs to improve overall development standards and practices Contribute to architecture design discussions and assist in establishing data governance standards Technical Skills (By Category) Programming Languages: Essential: Spark (Scala), Python Preferred: Knowledge of Java or other JVM languages Data Management & Databases: Experience with distributed data storage solutions (HDFS, S3, etc.) 
Familiarity with NoSQL databases (e.g., Cassandra, HBase) and relational databases for data integration Cloud Technologies: Preferred: Cloud platforms (AWS, Azure, GCP) for data processing, storage, and deployment Frameworks & Libraries: Spark MLlib, Spark SQL, Spark Streaming Data processing libraries in Python (pandas, PySpark) Development Tools & Methodologies: Version control (Git, Bitbucket) Agile methodologies (Scrum, Kanban) Data pipeline orchestration tools (Apache Airflow, NiFi) Security & Compliance: Understanding of data security best practices and data privacy regulations Experience Requirements 5 to 10 years of hands-on experience in big data development and architecture Proven experience in designing and developing large-scale data pipelines using Spark, Scala, and Python Demonstrated ability to lead technical projects and mentor team members Experience working with cross-functional teams including data analysts, data scientists, and business stakeholders Track record of delivering scalable, efficient, and secure data solutions in complex environments Day-to-Day Activities Develop, test, and optimize scalable data pipelines using Spark, Scala, and Python Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate into technical solutions Lead code reviews, mentor junior team members, and enforce coding standards Participate in architecture design and recommend best practices in big data development Monitor data workflows performance and troubleshoot issues to ensure data quality and reliability Stay updated with industry trends and evaluate new tools and frameworks for potential implementation Document technical designs, data flows, and implementation procedures Contribute to continuous improvement initiatives to optimize data processing workflows Qualifications Bachelors or Masters degree in Computer Science, Information Technology, or a related field Relevant certifications in cloud platforms, big data, or programming languages are advantageous Continuous learning on innovative data technologies and frameworks Professional Competencies Strong analytical and problem-solving skills with a focus on scalable data solutions Leadership qualities with the ability to guide and mentor team members Excellent communication skills to articulate technical concepts to diverse audiences Ability to work collaboratively in cross-functional teams and fast-paced environments Adaptability to evolving technologies and industry trends Strong organizational skills for managing multiple projects and priorities

Posted 2 months ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Bengaluru

Work from Office

As a Senior Data Engineer at JLL Technologies, you will: Design, Architect, and Develop solutions leveraging cloud big data technology to ingest, process and analyze large, disparate data sets to exceed business requirements Develop systems that ingest, cleanse and normalize diverse datasets, develop data pipelines from various internal and external sources and build structure for previously unstructured data Interact with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development Develop good understanding of how data will flow & stored through an organization across multiple applications such as CRM, Broker & Sales tools, Finance, HR etc Unify, enrich, and analyze variety of data to derive insights and opportunities Design & develop data management and data persistence solutions for application use cases leveraging relational, non-relational databases and enhancing our data processing capabilities Develop POCs to influence platform architects, product managers and software engineers to validate solution proposals and migrate Develop data lake solution to store structured and unstructured data from internal and external sources and provide technical guidance to help migrate colleagues to modern technology platform Contribute and adhere to CI/CD processes, development best practices and strengthen the discipline in Data Engineering Org Mentor other members in the team and organization and contribute to organizations growth. What we are looking for: 6+ years work experience and bachelors degree in Information Science, Computer Science, Mathematics, Statistics or a quantitative discipline in science, business, or social science. Hands-on engineer who is curious about technology, should be able to quickly adopt to change and one who understands the technologies supporting areas such as Cloud Computing (AWS, Azure(preferred), etc.), Micro Services, Streaming Technologies, Network, Security, etc. 3 or more years of active development experience as a data developer using Python-spark, Spark Streaming, Azure SQL Server, Cosmos DB/Mongo DB, Azure Event Hubs, Azure Data Lake Storage, Azure Search etc. Design & develop data management and data persistence solutions for application use cases leveraging relational, non-relational databases and enhancing our data processing capabilities Build, test and enhance data curation pipelines integration data from wide variety of sources like DBMS, File systems, APIs and streaming systems for various KPIs and metrics development with high data quality and integrity Maintain the health and monitoring of assigned data engineering capabilities that span analytic functions by triaging maintenance issues; ensure high availability of the platform; monitor workload demands; work with Infrastructure Engineering teams to maintain the data platform; serve as an SME of one or more application Team player, Reliable, self-motivated, and self-disciplined individual capable of executing on multiple projects simultaneously within a fast-paced environment working with cross functional teams 3+ years of experience working with source code control systems and Continuous Integration/Continuous Deployment tools Independent and able to manage, prioritize & lead workload What you can expect from us: Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits and pay. 
Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you...

Posted 2 months ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

What this job involves: JLL, an international real estate management company, is seeking an Data Engineer to join our JLL Technologies Team. We are seeking candidates that are self-starters to work in a diverse and fast-paced environment that can join our Enterprise Data team. We are looking for a candidate that is responsible for designing and developing of data solutions that are strategic for the business using the latest technologies Azure Databricks, Python, PySpark, SparkSQL, Azure functions, Delta Lake, Azure DevOps CI/CD. Responsibilities Design, Architect, and Develop solutions leveraging cloud big data technology to ingest, process and analyze large, disparate data sets to exceed business requirements. Design & develop data management and data persistence solutions for application use cases leveraging relational, non-relational databases and enhancing our data processing capabilities. Develop POCs to influence platform architects, product managers and software engineers to validate solution proposals and migrate. Develop data lake solution to store structured and unstructured data from internal and external sources and provide technical guidance to help migrate colleagues to modern technology platform. Contribute and adhere to CI/CD processes, development best practices and strengthen the discipline in Data Engineering Org. Develop systems that ingest, cleanse and normalize diverse datasets, develop data pipelines from various internal and external sources and build structure for previously unstructured data. Using PySpark and Spark SQL, extract, manipulate, and transform data from various sources, such as databases, data lakes, APIs, and files, to prepare it for analysis and modeling. Build and optimize ETL workflows using Azure Databricks and PySpark. This includes developing efficient data processing pipelines, data validation, error handling, and performance tuning. Perform the unit testing, system integration testing, regression testing and assist with user acceptance testing. Articulates business requirements in a technical solution that can be designed and engineered. Consults with the business to develop documentation and communication materials to ensure accurate usage and interpretation of JLL data. Implement data security best practices, including data encryption, access controls, and compliance with data protection regulations. Ensure data privacy, confidentiality, and integrity throughout the data engineering processes. Performs data analysis required to troubleshoot data related issues and assist in the resolution of data issues. Experience & Education Minimum of 4 years of experience as a data developer using Python, PySpark, Spark Sql, ETL knowledge, SQL Server, ETL Concepts. Bachelors degree in Information Science, Computer Science, Mathematics, Statistics or a quantitative discipline in science, business, or social science. Experience in Azure Cloud Platform, Databricks, Azure storage. Effective written and verbal communication skills, including technical writing. Excellent technical, analytical and organizational skills. Technical Skills & Competencies Experience handling un-structured, semi-structured data, working in a data lake environment, leveraging data streaming and developing data pipelines driven by events/queues Hands on Experience and knowledge on real time/near real time processing and ready to code Hands on Experience in PySpark, Databricks, and Spark Sql. 
Knowledge of JSON, Parquet and other file formats, and the ability to work effectively with them. Knowledge of NoSQL databases like HBase, MongoDB, Cosmos DB, etc. Preferred: cloud experience on Azure or AWS; Python/Spark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc. Team player: a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously within a fast-paced environment, working with cross-functional teams. You'll join an entrepreneurial, inclusive culture. One where we succeed together, across the desk and around the globe. Where like-minded people work naturally together to achieve great things. Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits and pay. Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you.

Posted 2 months ago

Apply

4.0 - 8.0 years

0 - 1 Lacs

Hyderabad, Bengaluru

Hybrid

Role & responsibilities: As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. Utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions. You will independently drive design discussions to ensure the necessary health of the overall solution. Your Impact: Data ingestion, integration and transformation; data storage and computation frameworks, performance optimizations; analytics & visualizations; infrastructure & cloud computing; data management platforms. Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time. Build functionality for data analytics, search and aggregation. Preferred candidate profile: Minimum 2 years of experience in Big Data technologies. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines. Bachelor's degree and 4 to 6 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position. Working knowledge of real-time data pipelines is an added advantage. Strong experience in at least one of the programming languages Java, Scala, and Python; Java preferable. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc. Well-versed in and working knowledge of data platform-related services on Azure. Set Yourself Apart With: Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures. Performance tuning and optimization of data pipelines. Cloud data specialty and other related Big Data technology certifications. A Tip from the Hiring Manager: Join the team to sharpen your skills and expand your collaborative methods. Make an impact on our clients and their businesses directly through your work.

Posted 2 months ago

Apply

8.0 - 11.0 years

45 - 50 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate, We are hiring a Scala Developer to work on scalable data pipelines, distributed systems, and backend services. This role is perfect for candidates passionate about functional programming and big data. Key Responsibilities: Develop data-intensive applications using Scala . Work with frameworks like Akka, Play, or Spark . Design and maintain scalable microservices and ETL jobs. Collaborate with data engineers and platform teams. Write clean, testable, and well-documented code. Required Skills & Qualifications: Strong in Scala, Functional Programming, and JVM internals Experience with Apache Spark, Kafka, or Cassandra Familiar with SBT, Cats, or Scalaz Knowledge of CI/CD, Docker, and cloud deployment tools Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies
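As a small illustration of the functional-style Scala this listing asks for in a data pipeline context, the sketch below models records with case classes and keeps the transformation a pure, testable function over a typed Dataset; the file paths and field names are hypothetical.

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

object TypedPipelineSketch {
  // Case classes give the pipeline a typed, immutable record model
  final case class Order(orderId: String, customerId: String, amount: Double)
  final case class CustomerTotal(customerId: String, total: Double)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("typed-pipeline-sketch").getOrCreate()
    import spark.implicits._

    val orders: Dataset[Order] = spark.read
      .json("hdfs:///data/raw/orders.json")
      .as[Order]

    // Pure function over typed records: easy to unit-test without a cluster
    def totalsByCustomer(ds: Dataset[Order]): Dataset[CustomerTotal] =
      ds.groupByKey(_.customerId)
        .mapGroups((id, rows) => CustomerTotal(id, rows.map(_.amount).sum))

    totalsByCustomer(orders).write.mode("overwrite").parquet("hdfs:///data/curated/customer_totals")
    spark.stop()
  }
}
```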

Posted 2 months ago

Apply

2.0 - 6.0 years

6 - 10 Lacs

Hyderabad

Work from Office

About the Role: Grade Level (for internal use): 09. The Role: Software Engineer II. The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe. The Impact: The work you do will be used every single day; it's the essential code you'll write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets. What's in it for you: Build a career with a global company. Work on code that fuels the global financial markets. Grow and improve your skills by working on enterprise-level products and new technologies. Responsibilities: Identify, prioritize and execute tasks in an Agile software development environment. Develop tools and applications by producing clean, high-quality and efficient code. Develop solutions to support key business needs. Engineer components and common services based on standard development models, languages and tools. Produce system design documents and participate actively in technical walkthroughs. Collaborate effectively with technical and non-technical partners. As a team member, continuously improve the architecture. What We're Looking For - Basic Qualifications: Bachelor's / Master's Degree in Computer Science, Information Systems or equivalent. 5 to 8 years of experience in application development using .Net full-stack with React. Hands-on experience in functional, distributed application programming. Command of essential technologies: Spark, Hive, Hadoop and SQL. Knowledge of working with AWS. Experience with Elasticsearch. Experience with the Spring Boot framework. Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development. Nice to Have: Knowledge of Python, JavaScript and React. Knowledge of streaming systems such as Kafka and Spark Streaming is a plus. Must be a quick learner to evaluate and embrace new technologies in the Big Data space. Excellent written and verbal communication skills. Good collaboration skills. Ability to lead, train and mentor. What's In It For You - Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries. Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies