7.5 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Scala
Good-to-have skills: NoSQL
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in Scala.
- Good-to-have skills: Experience with NoSQL.
- Strong understanding of data pipeline architecture and design.
- Experience with ETL tools and frameworks.
- Familiarity with data warehousing concepts and technologies.
- Proficient in data quality assurance and data governance practices.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Scala.
- This position is based at our Bengaluru office.
- A 15-year full-time education is required.
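The extract-transform-load flow this posting describes can be sketched as a minimal, illustrative pipeline (plain Python with toy data; the function names and record shape are assumptions for illustration, not any specific framework's API):

```python
# Minimal illustrative ETL pipeline: extract raw records, validate and
# transform them, then "load" them into a target store (a dict keyed by id).

def extract():
    # Stand-in for reading from a source system (file, API, database).
    return [
        {"id": 1, "amount": "120.50", "region": "south"},
        {"id": 2, "amount": "80.00", "region": "north"},
        {"id": 3, "amount": "bad", "region": "south"},  # fails quality check
    ]

def transform(rows):
    # Basic data-quality gate: drop rows whose amount is not numeric.
    clean = []
    for row in rows:
        try:
            clean.append({"id": row["id"],
                          "amount": float(row["amount"]),
                          "region": row["region"].upper()})
        except ValueError:
            pass  # a real pipeline would route these to a rejects table
    return clean

def load(rows, target):
    for row in rows:
        target[row["id"]] = row
    return target

warehouse = load(transform(extract()), {})
print(len(warehouse))          # 2 rows survive the quality check
print(warehouse[1]["region"])  # SOUTH
```

The same extract/transform/load split scales up directly: in a Scala/Spark pipeline each stage becomes a DataFrame transformation rather than a Python function.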
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
NVIDIA is searching for a highly motivated software engineer for the NVIDIA NetQ team, which is building a next-generation network management and telemetry system in the cloud, using modern design principles at internet scale. NVIDIA NetQ is a highly scalable, modern network operations toolset that provides visibility, troubleshooting, and validation of your Cumulus fabrics in real time. NetQ utilizes telemetry and delivers actionable insights about the health of your data center network, integrating the fabric into your DevOps ecosystem.

What you'll be doing:
- Building and maintaining infrastructure components like NoSQL databases (Cassandra, MongoDB), TSDBs, Kafka, etc.
- Maintaining CI/CD pipelines to automate the build, test, and deployment process, and building improvements on the bottlenecks.
- Managing tools and enabling automation of redundant manual workflows via Jenkins, Ansible, Terraform, etc.
- Performing scans and handling security CVEs for infrastructure components.
- Triaging and handling production issues to improve system reliability and service for customers.

What we need to see:
- 5+ years of experience with complex microservices-based architectures, and a Bachelor's degree.
- Highly skilled in Kubernetes and Docker/containerd.
- Experienced with modern deployment architectures for non-disruptive cloud operations, including blue-green and canary rollouts.
- Automation expert with hands-on skills in frameworks like Ansible and Terraform.
- Strong knowledge of NoSQL databases (preferably Cassandra), Kafka/Kafka Streams, and Nginx.
- Expert in AWS, Azure or GCP.
- Good programming background in languages like Scala or Python.
- Knows the best practices and discipline of managing a highly available and secure production infrastructure.

Ways to stand out from the crowd:
- Experience with APM tools like Dynatrace, Datadog, AppDynamics, New Relic, etc.
- Skills in Linux/Unix administration.
- Experience with Prometheus/Grafana.
- Implemented highly scalable log aggregation systems in the past using the ELK stack or similar.
- Implemented robust metrics collection and alerting infrastructure.

NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative, passionate and self-motivated, we want to hear from you! NVIDIA is leading the way in ground-breaking developments in Artificial Intelligence, High-Performance Computing and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services.

JR1998880
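The canary rollouts mentioned above route a small, stable fraction of traffic to the new version. A minimal sketch of deterministic, hash-based traffic splitting (plain Python; the 10% weight and function names are illustrative assumptions, not NetQ internals):

```python
import hashlib

CANARY_WEIGHT = 10  # percent of traffic sent to the canary release

def route(user_id: str) -> str:
    # Hash the user id so each user consistently lands on the same
    # version for the whole rollout (no flip-flopping between versions).
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < CANARY_WEIGHT else "stable"

routes = [route(f"user-{i}") for i in range(1000)]
canary_share = routes.count("canary") / len(routes)
print(round(canary_share, 2))  # close to 0.10
assert route("user-42") == route("user-42")  # deterministic per user
```

Blue-green differs only in the weight: traffic flips from 0% to 100% in one step, with the old environment kept warm for instant rollback.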
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description (candidates with less than 5 years of experience, please do not apply for this role; wait for my next post for engineer roles.)

In this role, you will be working with one of the top engineering companies in the world and will be handling complex algorithms involving petabytes of data. You'll be responsible for building machine-learning-based systems and conducting data analysis that improves the quality of our large geospatial data. You'll be developing NLP models to extract information, using outlier detection to identify anomalies, and applying data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation and deployment of the models at scale, which requires a good combination of data science and software development.

Responsibilities:
- Development of machine learning models
- Building and maintaining software development solutions
- Providing insights by applying data science methods
- Taking ownership of delivering features and improvements on time

Must-have Qualifications:
- 6+ years of experience as a senior data scientist/ML engineer, preferably with knowledge of NLP
- Strong programming skills and extensive experience with Python
- Professional experience working with LLMs, transformers and open-source models from Hugging Face
- Professional experience working with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection and neural networks
- Knowledge of classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN, etc.)
- Experience using deep learning libraries and platforms, such as PyTorch
- Experience with frameworks such as scikit-learn, NumPy, Pandas, Polars
- Excellent analytical and problem-solving skills
- Excellent oral and written communication skills

Extra Merit Qualifications:
- Knowledge of at least one of the following: NLP, information retrieval, data mining
- Ability to do statistical modeling and build predictive models
- Programming skills and experience with Scala and/or Java
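The outlier-detection work this posting describes can be illustrated with a minimal z-score check, a common baseline before heavier methods (plain Python with toy data; the threshold and the geospatial record values are illustrative assumptions):

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    # Flag points that sit more than `threshold` standard deviations
    # from the mean -- a simple baseline before heavier approaches
    # such as Isolation Forests or learned anomaly detectors.
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Hypothetical road-segment lengths in metres; 9000 is a likely data error.
lengths = [120, 130, 118, 125, 122, 127, 9000]
print(zscore_outliers(lengths, threshold=2.0))  # [9000]
```

On skewed real-world distributions a median/MAD variant is usually more robust than the mean/stdev version shown here.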
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Responsible AI Tech Lead
Project Role Description: Ensure the ethical and responsible use of artificial intelligence (AI) technologies. Design and deploy Responsible AI solutions; align AI projects with ethical principles and regulatory requirements. Provide leadership, foster cross-functional collaboration, and advocate for ethical AI adoption.
Must-have skills: Ab Initio, Scala, Core Banking
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL)
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Responsible AI Tech Lead, you will ensure the ethical and responsible use of artificial intelligence technologies. Your typical day will involve designing and deploying Responsible AI solutions, aligning AI projects with ethical principles and regulatory requirements, and providing leadership to foster cross-functional collaboration. You will advocate for the adoption of ethical AI practices, ensuring that all AI initiatives are conducted with integrity and transparency, while also engaging with various stakeholders to promote a culture of responsibility in AI development and implementation.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training sessions to enhance team knowledge of Responsible AI practices.
- Monitor and evaluate the impact of AI solutions to ensure compliance with ethical standards.

Professional & Technical Skills:
- Must-have skills: Proficiency in Ab Initio, Core Banking, Scala.
- Good-to-have skills: Experience with Oracle Procedural Language Extensions to SQL (PL/SQL).
- Strong understanding of data integration and transformation processes.
- Experience in developing and implementing data governance frameworks.
- Familiarity with machine learning algorithms and their ethical implications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Ab Initio.
- This position is based at our Pune office.
- A 15-year full-time education is required.
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Responsible AI Tech Lead
Project Role Description: Ensure the ethical and responsible use of artificial intelligence (AI) technologies. Design and deploy Responsible AI solutions; align AI projects with ethical principles and regulatory requirements. Provide leadership, foster cross-functional collaboration, and advocate for ethical AI adoption.
Must-have skills: Ab Initio, Scala, Core Banking
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL)
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Responsible AI Tech Lead, you will ensure the ethical and responsible use of artificial intelligence technologies. Your typical day will involve designing and deploying Responsible AI solutions, aligning AI projects with ethical principles and regulatory requirements, and providing leadership to foster cross-functional collaboration. You will advocate for the adoption of ethical AI practices, ensuring that all AI initiatives are conducted with integrity and accountability, while also engaging with various stakeholders to promote a culture of responsible AI usage across the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training sessions to enhance team understanding of ethical AI practices.
- Monitor and evaluate the impact of AI solutions to ensure compliance with ethical standards.

Professional & Technical Skills:
- Must-have skills: Proficiency in Ab Initio, Core Banking, Scala.
- Good-to-have skills: Experience with Oracle Procedural Language Extensions to SQL (PL/SQL).
- Strong understanding of data integration and transformation processes.
- Experience in developing and implementing data governance frameworks.
- Familiarity with machine learning models and their ethical implications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Ab Initio.
- This position is based in Pune.
- A 15-year full-time education is required.
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview: TekWissen is a global workforce management provider operating throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place, one that benefits lives, communities and the planet.

Job Title: Data Architect
Location: Chennai
Work Type: Onsite

Position Description: The Materials Management Platform (MMP) is a multi-year transformation initiative aimed at transforming the client's Materials Requirement Planning & Inventory Management capabilities. This is part of a larger Industrial Systems IT transformation effort. This position is responsible for designing and deploying a data-centric architecture in GCP for the Materials Management Platform, which exchanges data with multiple modern and legacy applications in Product Development, Manufacturing, Finance, Purchasing, N-Tier Supply Chain, and Supplier Collaboration.

Skills Required: Data Architecture, GCP
Skills Preferred: Cloud Architecture
Experience Required: 8 to 12 years

Experience Preferred:
- Requires a bachelor's or foreign equivalent degree in computer science, information technology or a technology-related field.
- 8 years of professional experience in data engineering, data product development and software product launches.
- At least three of the following languages: Java, Python, Spark, Scala, SQL, with performance-tuning experience.
- 4 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:
  - Data warehouses like Google BigQuery.
  - Workflow orchestration tools like Airflow.
  - Relational database management systems like MySQL, PostgreSQL, and SQL Server.
  - Real-time data streaming platforms like Apache Kafka, GCP Pub/Sub.
  - Microservices architecture to deliver large-scale real-time data processing applications.
  - REST APIs for compute, storage, operations, and security.
  - DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, Docker.
  - Project management tools like Atlassian JIRA.
- Automotive experience is preferred.
- Support in an onshore/offshore model is preferred.
- Excellent at problem solving and prevention.
- Knowledge and practical experience of agile delivery.

Education Required: Bachelor's Degree
Education Preferred: Certification Program

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
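The streaming pipelines this role calls for typically aggregate events over fixed time windows. A toy tumbling-window count shows the core idea (plain Python; the event tuples and 60-second window are illustrative assumptions, not a Kafka or Pub/Sub API):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    # Toy tumbling-window aggregation, the core pattern behind streaming
    # pipelines on Kafka / GCP Pub/Sub: bucket each (timestamp, payload)
    # event into a fixed, non-overlapping time window and count per window.
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = ts - (ts % window_seconds)
        counts[window_start] += 1
    return dict(counts)

events = [(5, "a"), (59, "b"), (61, "c"), (130, "d")]
print(tumbling_window_counts(events))  # {0: 2, 60: 1, 120: 1}
```

Real engines (Spark Structured Streaming, Beam/Dataflow) add the hard parts this sketch omits: late data, watermarks, and state that survives restarts.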
Posted 1 day ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Job Summary: We are seeking a highly skilled Lead Data Engineer/Associate Architect to lead the design, implementation, and optimization of scalable data architectures. The ideal candidate will have a deep understanding of data modeling, ETL processes, cloud data solutions, and big data technologies. You will work closely with cross-functional teams to build robust, high-performance data pipelines and infrastructure to enable data-driven decision-making.

Experience: 7-12 years
Work Location: Hyderabad (Hybrid) / Remote
Mandatory skills: AWS, Python, SQL, Airflow, DBT
Must have done 1 or 2 projects in the clinical domain/clinical industry.

Responsibilities:
- Design and develop scalable and resilient data architectures that support business needs, analytics, and AI/ML workloads.
- Data Pipeline Development: Design and implement robust ETL/ELT processes to ensure efficient data ingestion, transformation, and storage.
- Big Data & Cloud Solutions: Architect data solutions using cloud platforms like AWS, Azure, or GCP, leveraging services such as Snowflake, Redshift, BigQuery, and Databricks.
- Database Optimization: Ensure performance tuning, indexing strategies, and query optimization for relational and NoSQL databases.
- Data Governance & Security: Implement best practices for data quality, metadata management, compliance (GDPR, CCPA), and security.
- Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to translate business requirements into scalable solutions.
- Technology Evaluation: Stay updated on emerging trends, assess new tools and frameworks, and drive innovation in data engineering.

Required Skills:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience: 7-12+ years of experience in data engineering.
- Cloud Platforms: Strong expertise in AWS data services.
- Databases: Hands-on experience with SQL, NoSQL, and columnar databases such as PostgreSQL, MongoDB, Cassandra, and Snowflake.
- Programming: Proficiency in Python, Scala, or Java for data processing and automation.
- ETL Tools: Experience with tools like Apache Airflow, Talend, DBT, or Informatica.
- Machine Learning & AI Integration (preferred): Understanding of how to architect data solutions for AI/ML applications.
Posted 1 day ago
8.0 - 11.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
About Demandbase: Demandbase is the Smarter GTM™ company for B2B brands. We help marketing and sales teams overcome the disruptive data and technology fragmentation that inhibits insight and forces them to spam their prospects. We do this by injecting Account Intelligence into every step of the buyer journey, wherever our clients interact with customers, and by helping them orchestrate every action across systems and channels, through advertising, account-based experience, and sales motions. The result? You spot opportunities earlier, engage with them more intelligently, and close deals faster.

As a company, we're as committed to growing careers as we are to building world-class technology. We invest heavily in people, our culture, and the community around us. We have offices in the San Francisco Bay Area, New York, Seattle, and teams in the UK and India, and allow employees to work remotely. We have also been continuously recognized as one of the best places to work in the San Francisco Bay Area. We're committed to attracting, developing, retaining, and promoting a diverse workforce. By ensuring that every Demandbase employee is able to bring a diversity of talents to work, we're increasingly capable of living out our mission to transform how B2B goes to market. We encourage people from historically underrepresented backgrounds and all walks of life to apply. Come grow with us at Demandbase!

What you'll be doing:
This job is for a responsible individual contributor whose primary duty is leading the development effort and building scalable distributed systems.
- Design and develop scalable data processing platforms.
- Work on developing a scalable data architecture system.
- Own a problem space and drive its product road map, with ample opportunities to learn and explore; a highly motivated and committed engineer can push the limits of technologies in the NLP area as well.
- Follow engineering best practices to solve data matching and data search related problems.
- Work closely with cross-functional teams in an agile environment.

What we're looking for:
- You have strong analytical and problem-solving skills.
- You are a self-motivated learner.
- You are eager to learn new technologies.
- You are receptive to constructive feedback.
- You are confident and articulate, with excellent written and verbal communication skills.
- You are open to working in a small development environment.

Skills Required:
- Bachelor's degree in computer science or an equivalent discipline from a top engineering institution.
- Adept in computer science fundamentals and passionate about algorithms, programming and problem solving.
- 8-11 years of software engineering experience; product company experience is a plus.
- Experience writing production-level code in Java or Scala.
- Good to have: experience writing production-level code in Python.
- Experience in multithreading, distributed systems, and performance optimization.
- Good knowledge of database concepts and proficiency in SQL.
- Experience with the big data tech stack (Spark, Kafka, Airflow) is a plus.
- Knowledge of or experience with one of the clouds: AWS/Azure/GCP.
- Experience writing unit tests and integration tests is a must.
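The data-matching problems mentioned above are often bootstrapped with a cheap token-overlap score before heavier entity resolution. A minimal sketch (plain Python; the company names and 0.5 threshold are illustrative assumptions):

```python
def jaccard(a: str, b: str) -> float:
    # Token-set Jaccard similarity: a common first pass for matching
    # noisy account/company names before heavier entity resolution.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

pairs = [
    ("Acme Corp", "ACME Corp"),        # 1.0  -> match
    ("Acme Corp", "Acme Corporation"), # 0.33 -> below threshold
    ("Acme Corp", "Globex Inc"),       # 0.0  -> no match
]
for left, right in pairs:
    print(left, "|", right, "->", jaccard(left, right) >= 0.5)
```

The middle pair shows the weakness of plain token matching ("Corp" vs "Corporation"); production matchers add normalization, character n-grams, or learned similarity on top of this baseline.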
Posted 1 day ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Experience: 5+ years
Notice Period: Immediate to 15 days
Rounds: 3 (virtual)
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
- Experience with big data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
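The data-validation work in this role can be sketched as a minimal rule-based check that feeds operational alerts (plain Python; the rule names and record shape are illustrative assumptions):

```python
# Minimal rule-based record validation: each rule returns True when the
# record passes; failed rule names are collected per record for alerting.

RULES = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: isinstance(r.get("amount"), (int, float))
                                     and r["amount"] >= 0,
    "known_region": lambda r: r.get("region") in {"north", "south", "east", "west"},
}

def validate(record):
    # Return the names of all rules this record fails (empty list = clean).
    return [name for name, rule in RULES.items() if not rule(record)]

records = [
    {"id": 1, "amount": 10.0, "region": "north"},
    {"id": None, "amount": -5, "region": "mars"},
]
print([validate(r) for r in records])
# [[], ['id_present', 'amount_non_negative', 'known_region']]
```

In a Spark/Databricks pipeline the same rules become column expressions, with failure counts per batch driving the operational alerts the posting mentions.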
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad
Budget: 3.5x
Notice: Immediate joiners

Requirements:
• BS degree in computer science, computer engineering or equivalent
• 5-9 years of experience delivering enterprise software solutions
• Familiarity with Spark, Scala, Python, and AWS cloud technologies
• 2+ years of experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, HBase, Hive, Flume, Sqoop, Kafka, Scala
• Flair for data, schemas, and data models, and for bringing efficiency to the big data life cycle
• Experience with Agile development methodologies
• Experience with data ingestion and transformation
• Understanding of secure application development methodologies
• Experience with Airflow and Python is preferred
• Understanding of automated QA needs related to big data technology
• Strong object-oriented design and analysis skills
• Excellent written and verbal communication skills

Responsibilities:
• Utilize your software engineering skills, including Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services
• Integrate new data sources and tools
• Implement scalable and reliable distributed data replication strategies
• Collaborate with other teams to design, develop and deploy data tools that support both operations and product use cases
• Perform analysis of large data sets using components from the Hadoop ecosystem
• Own product features from development and testing through to production deployment
• Evaluate big data technologies and prototype solutions to improve our data processing architecture
• Automate different pipelines
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Data Engineer
Key Skills: Python, ETL, Snowflake, Apache Airflow
Job Locations: Pan India
Experience: 6-7 years
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

Job Description:
- 6 to 10 years of experience in data engineering roles with a focus on building scalable data solutions.
- Proficiency in Python for ETL, data manipulation, and scripting.
- Hands-on experience with Snowflake or equivalent cloud-based data warehouses.
- Strong knowledge of orchestration tools such as Apache Airflow or similar.
- Expertise in implementing and managing messaging queues like Kafka, AWS SQS, or similar.
- Demonstrated ability to build and optimize data pipelines at scale, processing terabytes of data.
- Experience in data modeling, data warehousing, and database design.
- Proficiency in working with cloud platforms like AWS, Azure, or GCP.
- Strong understanding of CI/CD pipelines for data engineering workflows.
- Experience working in an Agile development environment, collaborating with cross-functional teams.

Preferred Skills:
- Familiarity with other programming languages like Scala or Java for data engineering tasks.
- Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
- Experience with stream processing frameworks like Apache Flink.
- Experience with Apache Iceberg for data lake optimization and management.
- Exposure to machine learning workflows and integration with data pipelines.

Soft Skills:
- Strong problem-solving skills with a passion for solving complex data challenges.
- Excellent communication and collaboration skills to work with cross-functional teams.
- Ability to thrive in a fast-paced, innovative environment.
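The messaging queues listed above (Kafka, SQS) decouple pipeline stages. A toy in-memory producer/consumer shows the pattern (plain Python stdlib; real brokers add durability, partitioning, and delivery guarantees this sketch omits):

```python
import queue
import threading

# Toy producer/consumer over an in-memory FIFO queue -- the same
# decoupling that Kafka or SQS provides between pipeline stages.

q = queue.Queue()
results = []

def producer():
    for i in range(5):
        q.put({"event_id": i})
    q.put(None)  # sentinel: no more messages

def consumer():
    while True:
        msg = q.get()
        if msg is None:
            break
        results.append(msg["event_id"] * 10)  # stand-in transformation

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 10, 20, 30, 40]
```

With one consumer and a FIFO queue the output order is deterministic; scaling out to many consumers is where partitioning and ordering keys (as in Kafka) become necessary.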
Posted 1 day ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Java Developer (Software Engineer)
Experience: 4-9 years
Location: Chennai (hybrid)
Interview: Face to face
Mandatory: Java, Spring Boot, microservices, React JS, AWS Cloud, DevOps; Node.js an added advantage

Job Description:
- Overall 4+ years of experience in Java development projects
- 3+ years of development experience with React
- 2+ years of experience in AWS Cloud and DevOps
- Microservices development using Spring Boot
- Technical stack: Core Java, Java, J2EE, Spring, MongoDB, GKE, Terraform, GitHub, GCP Developer, Kubernetes, Scala, Kafka
- Technical tools: Confluence/Jira/Bitbucket or Git, CI/CD (Maven, Git, Jenkins), Eclipse or IntelliJ IDEA
- Experience in event-driven architectures (CQRS and SAGA patterns)
- Experience in design patterns
- Build tools (Gulp, Webpack), Jenkins, Docker, automation, Bash, Redis, Elasticsearch, Kibana
- Technical stack (UI): JavaScript, React JS, CSS/SCSS, HTML5, Git
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
- 5 years of experience as a Data Engineer or in a similar role.
- Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field.
- Strong knowledge of data engineering tools and technologies (e.g. SQL, ETL, data warehousing).
- Experience with data pipeline frameworks and data processing platforms (e.g. Apache Kafka, Apache Spark).
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with cloud platforms (e.g. AWS, Google Cloud Platform, Azure).
- Knowledge of data modeling, database design, and data governance.
- MongoDB is a must.
Posted 1 day ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction
At IBM, work is more than a job; it's a calling: to build, to design, to code, to consult, to think along with clients and sell, to make markets, to invent, to collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

Your Role and Responsibilities
As a software developer you will be responsible for:
- Designing, developing, implementing, automating, deploying, and operating enterprise-quality cloud-native software using a microservices architecture through agile development practices.
- Performing design and implementation reviews for other developers on multiple projects.
- Regularly developing automation tests and maintaining microservices.
- Handling the operations of services deployed on the cloud.
- Preparing, writing, or reviewing technical documentation, such as content for product documentation and training materials.
- Preparing and presenting technology architecture/design and demos at playback meetings to show progress and solicit team direction.
- Assisting customers and other IBMers to effectively use our products, and learning from their experiences.
- Identifying and recommending improvements to continuously evolve and improve.

Preferred Education: Bachelor's Degree

Required Technical and Professional Expertise:
- 2+ years of strong development experience in cloud-native reactive applications in Java
- Expertise in OOP and design patterns
- Strong development experience with REST-API-based microservices
- Working knowledge of databases/SQL
- Working knowledge of container technologies: Kubernetes, Docker
- Excellent verbal and written communication skills, with the ability to present complex technical information
- Self-starter; organized; willing to learn and solve things on your own
- Ability to work effectively as part of a worldwide, agile development team
- Proven track record of owning development projects from design through implementation and delivery

Preferred Technical and Professional Experience:
- Development experience in Python, Scala, Spark/Spark MLlib, and UNIX shell scripting
- Agile application development and Scrum methodologies
- Experience with public cloud services (AWS, Azure, IBM Cloud)
- Good debugging and troubleshooting skills; able to participate in quality and automation of the product as needed, as quality is integral to the product
- Experience with DevOps practices
- Team mindset: willingness to collaborate and iterate
- Growth mindset: willingness to learn new technologies and processes
- Preferred: Bachelor's or Master's degree in Computer Science/Information Technology, BCA, or MCA
Posted 1 day ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client: Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services.

Job Details:
Location: Hyderabad
Mode of Work: Hybrid
Notice Period: Immediate joiners
Experience: 6-8 years
Type of Hire: Contract to hire

Job Description:
• Understanding of Spark core concepts like RDDs, DataFrames, Datasets, Spark SQL and Spark Streaming.
• Experience with Spark optimization techniques.
• Deep knowledge of Delta Lake features like time travel, schema evolution, and data partitioning.
• Ability to design and implement data pipelines using Spark, with Delta Lake as the data storage layer.
• Proficiency in Python/Scala/Java for Spark development and integration with ETL processes.
• Knowledge of data ingestion techniques from various sources (flat files, CSV, APIs, databases).
• Understanding of data quality best practices and data validation techniques.

Other Skills:
• Understanding of data warehouse concepts and data modelling techniques.
• Expertise in Git for code management.
• Familiarity with CI/CD pipelines and containerization technologies.
• Nice to have: experience using data integration tools like DataStage/Prophecy/Informatica/Ab Initio.
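The data partitioning called out above for Delta Lake groups records by a column's value so queries filtering on that column can skip whole partitions. A toy version of the idea (plain Python; in Spark this is `df.write.partitionBy("region")` writing one directory per value, so treat this purely as a sketch):

```python
from collections import defaultdict

def partition_by(records, column):
    # Mimics the effect of partitioning a table by a column: records with
    # the same column value land together, enabling partition pruning
    # when a query filters on that column.
    parts = defaultdict(list)
    for rec in records:
        parts[rec[column]].append(rec)
    return dict(parts)

events = [
    {"region": "EU", "amount": 10},
    {"region": "US", "amount": 20},
    {"region": "EU", "amount": 30},
]
parts = partition_by(events, "region")
print(sorted(parts))     # ['EU', 'US']
print(len(parts["EU"]))  # 2
```

Choosing the partition column is the real design decision: a low-cardinality column frequently used in filters (date, region) prunes well, while a high-cardinality one creates many tiny partitions.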
Posted 1 day ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
TEKsystems is seeking a Senior AWS + Data Engineer to join our dynamic team. The ideal candidate should have data engineering expertise with Hadoop and Scala/Python alongside AWS services. This role involves designing, developing, and maintaining scalable and reliable software solutions.

Job Title: Data Engineer – Spark/Scala (Batch Processing)
Location: Manyata (Hybrid)
Experience: 7+ years
Type: Full-Time

Mandatory Skills:
- 7-10 years' experience in design, architecture, or development in analytics and data warehousing.
- Experience building end-to-end solutions on a big data platform with Spark or Scala programming.
- 5 years of solid experience in ETL pipeline building with the Spark or Scala programming framework, with knowledge of UNIX shell scripting and Oracle SQL/PL-SQL development.
- Experience with a big data platform for ETL development on the AWS cloud platform.
- Proficiency in AWS cloud services, specifically EC2, S3, Lambda, Athena, Kinesis, Redshift, Glue, EMR, DynamoDB, IAM, Secrets Manager, Step Functions, SQS, SNS, and CloudWatch.
- Excellent skills in Python-based framework development are mandatory.
- Experience with Oracle SQL database programming, SQL performance tuning, and relational model analysis.
- Extensive experience with Teradata data warehouses and Cloudera Hadoop.
- Proficiency across enterprise Analytics/BI/DW/ETL technologies such as Teradata Control Framework, Tableau, OBIEE, SAS, Apache Spark, and Hive.
- Appreciation of analytics and BI architecture, with broad experience across all technology disciplines.
- Experience working within a Data Delivery Life Cycle framework and Agile methodology.
- Extensive experience in large enterprise environments handling large volumes of datasets with high SLAs.
- Good knowledge of developing UNIX scripts, Oracle SQL/PL-SQL, and Autosys JIL scripts.
- Well versed in AI-powered engineering tools like Cline and GitHub Copilot.

Please send resumes to nvaseemuddin@teksystems.com or kebhat@teksystems.com
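The batch ETL pipeline building mentioned above reduces to three stages: extract, transform, load. A minimal sketch in plain Python (the Spark and AWS plumbing is omitted, and the record layout is hypothetical):

```python
def extract(source):
    """Extract: read raw records from a source (here, an in-memory list)."""
    return list(source)

def transform(records):
    """Transform: normalize names and filter out zero-amount rows."""
    return [
        {"id": r["id"], "name": r["name"].strip().lower(), "amount": r["amount"]}
        for r in records
        if r["amount"] > 0
    ]

def load(records, target):
    """Load: append transformed records to the target store."""
    target.extend(records)
    return len(records)

warehouse = []
raw = [
    {"id": 1, "name": " Alice ", "amount": 10},
    {"id": 2, "name": "Bob", "amount": 0},   # dropped by the transform
]
loaded = load(transform(extract(raw)), warehouse)
```

In production the same shape appears as a Spark job reading from S3, transforming via DataFrames, and writing to Redshift or a Hive table, typically scheduled by Autosys or Step Functions.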
Posted 1 day ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad
Contract Duration: 6 Months
Experience Required: 8+ years (overall), 5+ years (relevant)

🔧 Primary Skills:
- Python
- Spark (PySpark)
- SQL
- Delta Lake

📌 Key Responsibilities & Skills:
- Strong understanding of Spark core: RDDs, DataFrames, Datasets, Spark SQL, Spark Streaming
- Proficiency in Delta Lake features: time travel, schema evolution, data partitioning
- Experience designing and building data pipelines using Spark and Delta Lake
- Solid experience in Python/Scala/Java for Spark development
- Knowledge of data ingestion from files, APIs, and databases
- Familiarity with data validation and quality best practices
- Working knowledge of data warehouse concepts and data modeling
- Hands-on with Git for code versioning
- Exposure to CI/CD pipelines and containerization tools
- Nice to have: experience with ETL tools like DataStage, Prophecy, Informatica, or Ab Initio
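The Delta Lake schema evolution mentioned above lets new columns appear in incoming batches without breaking previously written data; conceptually, the table schema becomes the union of the schemas seen so far. A toy illustration in plain Python (this is not the Delta Lake API, and the column names are invented):

```python
def evolve_schema(existing_rows, incoming_rows, fill=None):
    """Merge batches whose columns differ, as schema evolution would.

    The merged schema is the union of all columns seen so far; rows
    written under the old schema get `fill` for columns they lack.
    """
    columns = []
    for row in existing_rows + incoming_rows:
        for col in row:
            if col not in columns:
                columns.append(col)
    return [{c: row.get(c, fill) for c in columns}
            for row in existing_rows + incoming_rows]

old = [{"id": 1, "name": "alice"}]
new = [{"id": 2, "name": "bob", "country": "IN"}]  # this batch adds a column
merged = evolve_schema(old, new)
```

With Delta Lake itself, the equivalent behavior is opted into per write (commonly via a merge-schema option), and old rows read back with nulls in the new columns, just as `fill` stands in here.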
Posted 1 day ago