4 - 9 years
10 - 20 Lacs
Gurgaon
Hybrid
About GSPANN: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm. As we march ahead on a tremendous growth trajectory, we seek passionate and talented professionals to join our growing family.

Title: GCP Developer/Lead
Skills: GCP, BigQuery, Python, SQL
Experience: 5+ years
Work Location: Hyderabad/Gurgaon/Pune

Responsibilities:
- Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support.
- Solve complex business problems by applying a disciplined development methodology.
- Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.
- Analyze source and target system data, and map the transformations that meet the requirements.
- Interact with the client and onsite coordinators during different phases of a project.

Required Skills:
- Hands-on experience writing Python and SQL code.
- Strong ability to write ETL logic using any ETL tool, SQL scripts, or Python scripts.
- Data warehousing knowledge with data modeling experience.
- Exposure to a scheduling tool.
- Experience with any cloud platform is an advantage.
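The core requirement here, ETL logic written in Python and SQL, can be sketched minimally. This is an illustrative example, not from the posting itself: the table name, columns, and cleaning rules are all assumptions.

```python
import sqlite3

def run_etl(source_rows):
    """Minimal extract-transform-load sketch: clean source rows,
    then load them into a target table and read the result back."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE target (name TEXT, amount REAL)")
    # Transform: normalize names, cast amounts, drop rows with missing values.
    cleaned = [(name.strip().title(), float(amount))
               for name, amount in source_rows
               if amount is not None]
    conn.executemany("INSERT INTO target VALUES (?, ?)", cleaned)
    return conn.execute("SELECT name, amount FROM target ORDER BY name").fetchall()

print(run_etl([(" alice ", "10.5"), ("BOB", None), ("carol", "3")]))
# → [('Alice', 10.5), ('Carol', 3.0)]
```

In practice the same transform-and-load shape would target BigQuery rather than SQLite, with the load step handled by a scheduler.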
Posted 3 months ago
6 - 10 years
15 - 30 Lacs
Bengaluru
Work from Office
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients (company overview as above).

Role: Big Data Developer/Lead
Experience: 6+ years
Technical Skills: Big Data, Spark, Python/PySpark, SQL, and AWS (Airflow, S3, and EMR)
Work Location: Bangalore

Roles and Responsibilities:
- 6+ years of big data development experience, with a minimum of 5 years hands-on.
- Hands-on experience with API development (from an application/software engineering perspective).
- Advanced experience (5+ years) building real-time streaming and batch systems using Apache Spark and Kafka in Java.
- Experience with any NoSQL store (HBase, Cassandra, MongoDB, InfluxDB).
- Solid understanding of secure application development methodologies.
- Experience developing microservices using the Spring framework is a plus.
- Capable of working both as an individual contributor and within a team.
- Design, build, and maintain efficient, reusable, and reliable code.
- Experience with Hadoop-based technologies: Java, Hive, Pig, MapReduce, Spark, Python/Scala, Azure.
- Able to understand complex architectures and comfortable working with multiple teams.
- Excellent communication, client engagement, and client management skills are strongly preferred.
- Minimum Bachelor's degree in Computer Science, Engineering, Business Information Systems, or a related field.
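The streaming-aggregation work this role describes is done in Spark and Kafka with Java; purely to illustrate the underlying idea, the tumbling-window counting at the heart of such jobs can be sketched in a few lines of plain Python (event shape and window size are assumptions):

```python
from collections import defaultdict

def window_counts(events, window_ms):
    """Assign each (timestamp_ms, key) event to a fixed tumbling window
    and count occurrences per (window_start, key) pair — the core
    operation of a windowed streaming aggregation."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_ms)
        counts[(window_start, key)] += 1
    return dict(counts)

print(window_counts([(1000, "click"), (1500, "view"), (2500, "click")], 2000))
# → {(0, 'click'): 1, (0, 'view'): 1, (2000, 'click'): 1}
```

A real Spark Structured Streaming job adds watermarking, state management, and fault tolerance on top of this same grouping logic.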
Posted 3 months ago
5 - 9 years
20 - 32 Lacs
Bengaluru
Hybrid
Role: Senior Member of Technical Staff
Experience: 4 to 9 years
Location: Manyata Tech Park, Bangalore
Work Mode: Hybrid
Skills: Design, Data Analytics, Data Engineering, Data Warehouse, Java, SQL, Spark & Spark-SQL, Autonomous Data Warehouse, Object Store, Data Flow/DIS (or a similar tech stack), exposure to cloud

Short Description: Looking for a Software Developer (Data Engineer) willing to perform data engineering tasks, with skills in SQL, Spark, big data development, Java, and cloud.

Job Description:
- Design, develop, troubleshoot, and debug software programs for databases, applications, etc.
- Strong knowledge of SQL is required.
- Programming experience in Java, Python, Scala, or a similar language.
- A good understanding of the data, so that you can perform data engineering tasks: analyzing the data, applying the required transformations, and loading the data into the data warehouse.
- Reporting and visualization knowledge is key for this role; you should be able to develop visualizations using reporting tools when needed.
- Experience developing in the cloud: working with processing nodes, storing data in Object Store buckets, knowledge of databases, and performance tuning of data engineering pipelines (batch and real time).
- Deployment of code in a cloud environment is a required skill.
- Experience with Agile development, including continuous integration and continuous deployment (CI/CD) of the code we develop.

Responsibilities:
- As a member of the software engineering division, apply basic to intermediate knowledge of software architecture to develop, debug, or design software applications or operating systems according to provided design specifications.
- Build enhancements within an existing software architecture and occasionally suggest improvements to the architecture.
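The transform-and-load task described above boils down to grouped aggregation. As a minimal sketch (using SQLite in place of a real warehouse; table and column names are illustrative), the same GROUP BY shape would be written in Spark-SQL against warehouse tables:

```python
import sqlite3

def rollup(rows):
    """Load raw fact rows and produce a grouped summary — the same
    shape as a Spark-SQL GROUP BY feeding a warehouse table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    ).fetchall()

print(rollup([("east", 100), ("east", 50), ("west", 75)]))
# → [('east', 150.0), ('west', 75.0)]
```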
Posted 3 months ago
1 - 3 years
3 - 6 Lacs
Pune
Hybrid
Responsibilities:
- Test big data ingestion and aggregation flows using spark-shell and related queries.
- Develop an automation framework in a programming language such as Python, and automate big data workflows such as ingestion, aggregation, and ETL processing.
- Debug and troubleshoot issues within the big data ecosystem.
- Set up the big data platform and Hadoop ecosystem for testing.
- Define the test strategy and write test plans for data platform enhancements and new features/services built on it.
- Define operating procedures, service monitors, and alerts, and work with the NOC team to get them implemented.
- Take responsibility for system and performance testing of the data platform and applications.
- Solve problems, establish plans, and provide technical consultation in the design, development, and test effort of complex engineering projects.
- Review product specifications, write test cases, and develop test plans for assigned areas.
- Identify issues and technical interdependencies and suggest possible solutions.
- Recreate complex customer- and production-reported issues to determine root cause and verify fixes.

Requirements:
- 1-3 years of experience working as an SDET, doing meaningful automation.
- Good programming skills; Python preferred.
- Hands-on experience automating backend applications (e.g., database, server side, REST APIs).
- Knowledge of relational databases and SQL.
- Good debugging skills.
- Working experience in a Linux/Unix environment.
- Good understanding of testing methodologies.
- Good to have: hands-on experience with big data technologies like Hadoop and Spark.
- Quick learner and a good team member with a positive attitude.
- Good verbal and written communication skills.

Qualifications:
Primary (Mandatory) Skills:
- Good hands-on experience with Unix/Linux.
- Good hands-on experience writing Python code.
- Understanding of QA methodologies.

Secondary Skills (Good to have):
- Experience in big data platform and data analytics testing is an advantage.
- Knowledge of distributed systems and technologies like Hadoop and Spark.

Competencies:
- We Celebrate Teamwork
- We Use Data to Solve Problems
- We are Biased Towards Action
- We are Leaders and Innovators
- We put the Customer First

Return to Office: PubMatic employees throughout the globe have returned to our offices via a hybrid work schedule (3 days in office and 2 days working remotely) that is intended to maximize collaboration, innovation, and productivity among teams and across functions.
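A staple of the ingestion testing described above is reconciliation: verifying that what landed in the target matches the source. A minimal sketch of such a check, with table names and counts purely illustrative:

```python
def check_row_counts(source, target):
    """Reconciliation check typical of ingestion testing: return every
    table whose row count differs between source and target systems."""
    return {table: (count, target.get(table))
            for table, count in source.items()
            if target.get(table) != count}

print(check_row_counts({"orders": 1000, "users": 50},
                       {"orders": 1000, "users": 49}))
# → {'users': (50, 49)}
```

An automation framework would wrap checks like this in a test runner and wire the mismatch report into monitoring alerts.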
Posted 3 months ago
9 - 14 years
16 - 22 Lacs
Bengaluru
Hybrid
Job Title: Simulation Expert (Data Scientist / Operations Research)
Location: Remote

Job Summary: Seeking a Simulation Expert (Ph.D. preferred) to design advanced discrete event simulations and develop digital twins for supply chains. The role involves collaborating with cross-functional teams, running "what-if" analyses, and applying optimization techniques to improve decision-making.

Key Responsibilities:
- Build and maintain simulation models using Python (SimPy, AnyLogic, etc.).
- Develop digital twins integrating real-time data.
- Apply optimization methods for transportation, inventory, and warehouse operations.
- Analyze data, validate models, and present actionable insights.
- Collaborate with stakeholders and lead project delivery.
- Stay up to date with the latest tools and mentor junior scientists.

Qualifications:
- Ph.D./Master's in a relevant field.
- 1-3 years of simulation modeling experience.
- Strong Python, SQL, and data visualization skills.
- Familiarity with cloud platforms (Azure, AWS) and big data technologies.
- Excellent communication, problem-solving, and collaboration skills.
- Experience in the supply chain, logistics, retail, or e-commerce industry is preferred.
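In practice this modeling would be done with a library like SimPy, but the mechanics of a discrete event simulation can be shown with a hand-rolled single-server queue (arrival times and service time below are invented for illustration):

```python
def simulate_queue(arrival_times, service_time):
    """Tiny discrete event simulation of a single-server queue:
    each job starts service when both it has arrived and the
    server is free, then departs after a fixed service time."""
    server_free = 0       # time at which the server next becomes idle
    departures = []
    for arrival in arrival_times:
        start = max(arrival, server_free)   # wait if the server is busy
        server_free = start + service_time
        departures.append(server_free)
    return departures

print(simulate_queue([0, 1, 5], 2))
# → [2, 4, 7]
```

The job arriving at t=1 waits until t=2 for the server; the job at t=5 finds the server idle. "What-if" analysis means re-running such a model under varied arrival rates or service times and comparing the resulting waits.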
Posted 3 months ago
5 - 10 years
20 - 27 Lacs
Chennai
Work from Office
- Minimum 5-8 years of experience in Hadoop/big data technologies.
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr).
- Hands-on experience with Python/PySpark.
- Design, develop, and optimize ETL pipelines using Python and PySpark to process and transform large-scale datasets, ensuring performance and scalability on big data platforms.
- Implement big data solutions for retail banking use cases such as risk analysis, management reporting (time series, vintage curves, executive summaries), and regulatory reporting, while maintaining data accuracy and compliance standards.
- Collaborate with cross-functional teams to integrate data from various sources, troubleshoot production issues, and ensure efficient, reliable data processing operations.
Posted 3 months ago
5 - 8 years
7 - 10 Lacs
Chennai, Pune
Work from Office
Requirements:
- 5+ years of hands-on experience designing, building, and supporting data applications using Spark, Sqoop, and Hive.
- Bachelor's or Master's degree in Computer Science or a related field.
- Strong knowledge of working with large data sets and high-capacity big data processing platforms.
- Strong experience in Unix and shell scripting.
- Advanced knowledge of the Hadoop ecosystem and its components.
- In-depth knowledge of Hive, shell scripting, Python, and Spark.
- Ability to write MapReduce jobs.
- Experience using job schedulers like Autosys.
- Hands-on experience in HiveQL.
- Good knowledge of Hadoop architecture and HDFS.
- Experience with Jenkins for continuous integration.
- Experience using source code and version control systems like Bitbucket and Git.
- Good to have: experience with Agile development.

Responsibilities:
- Develop components, application interfaces, and solution enablers while ensuring principal architecture integrity is maintained.
- Ensure solutions are well designed, with maintainability, ease of integration, and testing built in from the outset.
- Participate in and guide the team in estimating the work necessary to realize a story/requirement through the software delivery lifecycle.
- Develop and deliver complex software requirements to accomplish business goals.
- Ensure that software is developed to meet functional, non-functional, and compliance requirements.
- Code solutions, perform unit testing, and ensure the solution can be integrated successfully into the overall application/system with clear, robust, and well-tested interfaces.

Required Skills: Hadoop, Hive, HDFS, Spark, Python, Unix
Posted 3 months ago