7 - 9 years
6 - 11 Lacs
Hyderabad
Work from Office
Flexible person with the ability to manage stressful situations and adapt to rapidly changing environments and requirements. Ability to thrive in a fast-paced, multi-cultural, customer-oriented environment. Ability to work days, evenings, and overnights, with weekends and after-hours on-call as required (24x7 support). Requires a minimum of 5-6 years of experience in a database administration and automation role. Proficient working knowledge and implementation skills across Linux, Windows, and AIX. Deep understanding of database platforms such as Big Data and Azure SQL. Working knowledge of custom and commercial application integration methods to databases using APIs and query best practices. Comprehensive knowledge of database architecture, management, upkeep, and administration of DBs. Mid-level certification or equivalent experience with database manufacturers or industry standards (Microsoft, Google, Oracle, MariaDB, or equivalent work experience of 4-7 years).
Posted 2 months ago
6 - 10 years
20 - 25 Lacs
Delhi NCR, Mumbai, Bengaluru
Work from Office
Develop and automate processes for gathering expected results from data sources and comparing them with results from testing. Assist with the development and maintenance of smoke, performance, functional, and regression tests to ensure code is functioning as designed. Work with the team to understand how changes in the software product affect maintenance of test scripts and the automated testing environments. Own the test automation framework and build appropriate test automation where needed. Write, monitor, execute, and evaluate application tests using industry-standard automated testing tools. Set up data, tools, and databases to facilitate the testing process. Develop and maintain constructive working relationships within the team and with other functional teams.
Skills & Experience: Lead a QA team of minimum 3 members. 6+ years of technical QA experience, with a minimum of 2 years in automation testing. Experience writing test code in Python, Pytest, or Robot Framework. Strong SQL skills and use of big data cloud platforms. Experience writing/maintaining automated tests for Big Data projects. Experience/knowledge of working on Apache Spark. Knowledge of a BDD framework like Cucumber or SpecFlow. Excellent knowledge and experience of data testing strategies: data validation, process validation, outcome validation, code coverage. Execute automated Big Data testing tasks such as pen testing, architecture testing, migration testing, performance testing, security testing, and visualization testing. Automate testing of relational, flat-file, XML, NoSQL, cloud, and Big Data sources. Hands-on experience with the ETL Validator testing tool for automating ETL/ELT validation. Experience in test setup, software installation, and pipelines in a CI/CD environment. Hands-on experience using monitoring tools like New Relic, Grafana, etc.
Behavioural Fit: Highly technical with a keen eye for detail. Driven, self-motivated, and results-oriented. Confident, with an ability to challenge if necessary. Structured and organised.
Ability to work in a cross-functional, multi-cultural team and in a collaborative environment with minimal supervision. Ability to multi-task and to plan, organize, and prioritize multiple projects.
Role Key Performance Indicators: Testing task completion within the time frame. Automation and regression testing percentage goal every quarter. Report and communicate issues to the scrum master, relevant team members, and stakeholders. Take ownership of the testing task. Quality and consistency of data across the whole data landscape. Quality of documentation.
Location: Remote
Posted 2 months ago
2 - 7 years
0 Lacs
Bengaluru, Gurgaon, Noida
Work from Office
Impetus Technologies: Impetus Technologies is a leading digital engineering company specializing in data, cloud, and AI-driven solutions. Headquartered in Los Gatos, California, with global offices in India, Australia, and Canada, Impetus partners with Fortune 100 clients across banking, airlines, pharmaceuticals, and other industries.
Job Description: We are seeking a highly motivated and organized recruiter to join our team. The recruiter will play a key role in sourcing, screening, and hiring top-tier talent for various positions across the company. This role will involve collaborating with hiring managers to understand hiring needs, building and maintaining talent pipelines, and ensuring a positive candidate experience throughout the recruitment process.
Key Responsibilities: Partner with hiring managers to understand hiring needs and create job descriptions. Source candidates using various recruiting methods including job boards, social media, and networking events. Screen resumes and conduct phone interviews to evaluate candidates' qualifications and fit. Coordinate and schedule interviews between candidates and hiring teams. Manage the candidate experience, providing clear communication and feedback. Maintain and update the applicant tracking system (ATS). Support the onboarding process for new hires. Build and maintain strong relationships with external recruitment agencies, if necessary. Stay up-to-date on industry trends and best practices.
Posted 3 months ago
6 - 10 years
8 - 18 Lacs
Bengaluru, Mumbai (All Areas)
Hybrid
We are hiring for a top MNC for the below-mentioned key skills; immediate joiners preferred.
Role & responsibilities: Java + Spark / Spark + Scala combination. Primary skills: PySpark, Scala, Spark, or Java Spark; any of the Spark modules with Big Data experience (exposure to large volumes of data, data warehousing projects, backends for analytics, etc.). We are also open to candidates with strong Java and Big Data / data warehousing experience.
Preferred candidate profile: Looking for immediate joiners.
Perks and benefits
Posted 3 months ago
4 - 9 years
0 - 0 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
Hi, this is Vinita from Silverlink Technologies. We have an excellent job opportunity with TCS for the post of "Data Engineer" at the Bangalore/Hyderabad/Chennai locations. If interested, kindly forward your Word-formatted, updated resume ASAP to vinita@silverlinktechnologies.com, and kindly fill in the below-mentioned details too.
JD: BigData, Hadoop, Hive, Python, PySpark
Full Name:
Contact No:
Email ID:
DOB:
Experience:
Relevant Exp:
Current Company:
Notice Period:
Current CTC:
Expected CTC:
Offer in hand:
If yes, then offered CTC:
Date of joining:
Company name:
Grades -- 10th: 12th: Graduation:
Full time/Part time?
University Name:
Current Location:
Preferred Location:
Gap in education:
Gap in employment:
**Mandatory** PAN Card Number:
Have you ever worked with TCS?
Do you have an active PF account?
Role: Data Engineer
Exp: 4-9 yrs
Mode: Permanent
Notice Period: up to 1-2 months only
Interview Mode: Virtual
For any queries, you can revert on the below-mentioned details.
Thanks & Regards,
Vinita Shetty
Silverlink Group
Tel: 022 42000665
Email: Vinita@silverlinktechnologies.com
Website: www.silverlinktechnologies.com
Posted 3 months ago
9 - 14 years
30 - 45 Lacs
Bengaluru
Work from Office
Experience in data architecture and data engineering initiatives, including building data warehouses, data lakes, and cloud-based data platforms. Big data processing frameworks, SQL, Scala, Java, ETL, Redshift, Snowflake, Databricks.
Posted 3 months ago
6 - 11 years
20 - 35 Lacs
Pune
Work from Office
Bachelor's or master's degree in computer science, Data Science, Machine Learning, or a related field. 7+ years of experience in machine learning, data science, and Python programming. Strong proficiency in Python and machine learning libraries such as TensorFlow, PyTorch, and scikit-learn. Experience with big data technologies such as Hadoop, Spark, and SQL. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
Posted 3 months ago
11 - 15 years
50 - 55 Lacs
Bengaluru
Work from Office
Job Description: Job Title: Service Operations - Production Engineer Support, AVP. Location: Bangalore, India.
Role Description: You will be operating within the Corporate Bank Production domain or in Corporate Banking subdivisions as an AVP - Production Support Engineer. In this role, you will be accountable for driving a culture of proactive continual improvement in the Production environment through application and user-request support, troubleshooting, and resolving errors in the production environment; automation of manual work, monitoring improvements, and platform hygiene; and training and mentoring new and existing team members, supporting the resolution of issues and conflicts, and preparing reports and meetings. The candidate should have experience in all relevant tools used in the Service Operations environment, have specialist expertise in one or more technical domains, and ensure that all associated Service Operations stakeholders are provided with an optimum level of service in line with Service Level Agreements (SLAs) / Operating Level Agreements (OLAs). Ensure all BAU support queries from the business are handled on priority and within the agreed SLA, and ensure all application stability issues are well taken care of. Support the resolution of incidents and problems within the team. Assist with the resolution of complex incidents. Ensure that the right problem-solving techniques and processes are applied. Embrace a Continuous Service Improvement approach to resolve IT failings, drive efficiencies, and remove repetition to streamline support activities, reduce risk, and improve system availability. Be responsible for your own engineering delivery and, using data and analytics, drive a reduction in technical debt across the production environment with development and infrastructure teams. Act as a Production Engineering role model to enhance the technical capability of the Production Support teams and create a future operating model embedded with an engineering culture.
Train and mentor team members to grow into the next role, and bring in a culture of innovation, engineering, and an automation mindset. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
Your key responsibilities: Lead by example to drive a culture of proactive continual improvement in the Production environment through automation of manual work, monitoring improvements, and platform hygiene. Carry out technical analysis of the Production platform to identify and remediate performance and resiliency issues. Engage in the Software Development Lifecycle (SDLC) to enhance Production standards and controls. Update the run book and KEDB as and when required. Participate in all BCP and component failure tests based on the run books. Understand the flow of data through the application infrastructure; it is critical to understand the data flow so as to best provide operational support. Event monitoring and management via a 24x7 workbench that is both monitoring and regularly probing the service environment and acting on the instructions of a run book.
Drive knowledge management across the supported applications and ensure full compliance. Work with team members to identify areas of focus where training may improve team performance and incident resolution.
Your skills and experience: Recent experience of applying technical solutions to improve the stability of production environments. Working experience of some of the following technology skills:
Technologies/Frameworks: Shell scripting and/or Python; Java 8 / OpenJDK 11 (at least), for debugging; familiarity with the Spring Boot framework; Unix troubleshooting skills; Hadoop framework stack; Oracle 12c/19c for PL/SQL, with familiarity with OEM tooling to review AWR reports and parameters; NoSQL; ITIL v3 Certified (must).
Configuration management tooling: Ansible.
Operating system/platform: RHEL 7.x (preferred), RHEL 6.x; OpenShift (as we move towards cloud computing, and Fabric is dependent on OpenShift).
CI/CD: Jenkins (preferred), TeamCity.
APM tooling: Splunk, Geneos, New Relic, Prometheus/Grafana.
Other platforms: Scheduling with Ctrl-M is a plus; Airflow, crontab, or Autosys, etc.
Methodology: Micro-services architecture, SDLC, Agile. Fundamental network topology: TCP, LAN, VPN, GSLB, GTM, etc. Distributed systems experience on cloud platforms such as Azure or GCP is a plus, along with familiarity with containerization/Kubernetes.
Tools: ServiceNow, Jira, Confluence, Bitbucket and/or Git, Oracle SQL*Plus, familiarity with simple Unix tooling (PuTTY, mPutty, Exceed), (PL/)SQL Developer.
Good understanding of the ITIL Service Management framework, including Incident, Problem, and Change processes. Ability to self-manage a book of work and ensure clear transparency on progress, with clear, timely communication of issues. Excellent troubleshooting and problem-solving skills. Excellent communication skills, both written and verbal, with attention to detail.
Ability to work in virtual teams and in matrix structures.
Experience | Exposure (recommended): 11+ years' experience in IT in large corporate environments, specifically in the area of controlled production environments, or in Financial Services Technology in a client-facing function. Service Operations and development experience within a global operations context. Global Transaction Banking experience is a plus. Experience of end-to-end Level 2/3/4 management and a good overview of Production/Operations Management overall. Experience of supporting complex application and infrastructure domains in an ITIL / best-practice service context; ITIL Foundation is a plus. Good analytical and problem-solving skills.
Knowledge of the following technologies is an added advantage: ETL flows and pipelines. Big Data, Spark, Hive, etc. Hands-on experience with Splunk/New Relic for creating dashboards along with alert/rule setups. Understanding of messaging systems like SWIFT and MQ messages. Understanding of trade life cycles, especially for back office.
Posted 3 months ago
6 - 11 years
10 - 12 Lacs
Pune
Work from Office
We are looking for highly skilled Data Engineers to join our team for a long-term offshore position. The ideal candidates will have 5+ years of experience in data engineering, with a strong focus on Python and programming. The role requires proficiency in leveraging AWS services to build efficient, cost-effective datasets that support business reporting and AI/ML exploration. Candidates must demonstrate the ability to functionally understand client requirements and deliver optimized datasets for multiple downstream applications. The selected individuals will work under the guidance of a lead from onsite and closely with client stakeholders to meet business objectives.
Key Responsibilities:
Cloud infrastructure: Design and implement scalable, cost-effective data pipelines on the AWS platform using services like S3, Athena, Glue, RDS, etc. Manage and optimize data storage strategies for efficient retrieval and integration with other applications. Support the ingestion and transformation of large datasets for reporting and analytics.
Tooling and automation: Develop and maintain automation scripts using Python to streamline data processing workflows. Integrate tools and frameworks like PySpark to optimize performance and resource utilization. Implement monitoring and error-handling mechanisms to ensure reliability and scalability.
Collaboration and communication: Work closely with the onsite lead and client teams to gather and understand functional requirements. Collaborate with business stakeholders and the data science team to provide datasets suitable for reporting and AI/ML exploration. Document processes, provide regular updates, and ensure transparency in deliverables.
Data analysis and reporting: Optimize AWS service utilization to maintain cost-efficiency while meeting performance requirements. Provide insights on data usage trends and support the development of reporting dashboards for cloud costs.
Security and compliance: Ensure secure handling of sensitive data with encryption (e.g., AES-256, TLS) and role-based access control using AWS IAM. Maintain compliance with organizational and industry regulations.
Required Skills: 5+ years of experience in data engineering with a strong emphasis on AWS platforms. Hands-on expertise with AWS services such as S3, Glue, Athena, RDS, etc. Proficiency in Python for building data pipelines for ingesting data and integrating it across applications. Demonstrated ability to design and develop scalable data pipelines and workflows. Strong problem-solving skills and the ability to troubleshoot complex data issues.
Preferred Skills: Experience with Big Data technologies, including Spark, Kafka, and Scala, for distributed data processing. Hands-on expertise working with AWS Big Data services such as EMR, DynamoDB, Athena, Glue, and MSK (Managed Streaming for Kafka). Familiarity with on-premises Big Data platforms and tools for data processing and streaming. Proficiency in scheduling data workflows using Apache Airflow or similar orchestration tools like One Automation, Control-M, etc. Strong understanding of DevOps practices, including CI/CD pipelines and automation tools. Prior experience in the telecommunications domain, with a focus on large-scale data systems and workflows. AWS certifications (e.g., Solutions Architect, Data Analytics Specialty) are a plus.
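The storage-optimization responsibility above (laying out datasets so Athena-style engines can scan them efficiently) usually comes down to Hive-style partitioning of object keys. The sketch below shows that layout step in plain Python; the bucket-relative key prefix `events/` and the `event_date` field are hypothetical, and a real pipeline would write the resulting objects to S3 (e.g., via boto3 or Glue) rather than return them.

```python
# Sketch of a partitioned-dataset layout step for an S3/Athena-style data
# lake: group raw records by event date so each group maps to a Hive-style
# object key such as "events/dt=2024-01-01/part-0.json". Engines that
# understand dt= partitions can then prune whole prefixes at query time.
import json
from collections import defaultdict

def partition_by_date(records):
    """Map each distinct event_date to one newline-delimited JSON object body."""
    parts = defaultdict(list)
    for rec in records:
        parts[rec["event_date"]].append(rec)
    return {
        f"events/dt={dt}/part-0.json": "\n".join(json.dumps(r) for r in recs)
        for dt, recs in parts.items()
    }

if __name__ == "__main__":
    rows = [
        {"event_date": "2024-01-01", "user": "a"},
        {"event_date": "2024-01-02", "user": "b"},
        {"event_date": "2024-01-01", "user": "c"},
    ]
    objects = partition_by_date(rows)
    print(sorted(objects))  # one key per distinct dt= partition
```

Partition pruning is the main cost lever here: a query filtered on `dt` only pays for the matching prefixes instead of the whole dataset.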
Posted 3 months ago
7 - 12 years
0 - 0 Lacs
Bengaluru
Hybrid
Big Data admin resource with strong experience of Terraform. SRE experience is an added advantage. Data platform management and optimization: designs, builds, and manages data storage and workflows/compute in cloud environments.
Posted 3 months ago
8 - 13 years
10 - 14 Lacs
Pune, Bengaluru, Hyderabad
Work from Office
MUST HAVE: Minimum of 8+ years of role experience. Big Data admin resource with strong experience of Terraform. Big Data administration with cloud expertise. SRE experience is an added advantage. Data platform management and optimization: designs, builds, and manages data storage and workflows/compute in cloud environments, ensuring that data is secure, accessible, and processed efficiently.
Posted 3 months ago
7 - 12 years
9 - 14 Lacs
Chennai
Work from Office
Proven experience as a Spark developer. Strong programming skills in Java. Familiarity with Big Data processing tools and techniques. Experience with the Hadoop ecosystem. Good understanding of distributed systems. Experience with streaming data. Understanding of standard SDLC and Agile processes. Good experience with coding standards and code quality. Ability to understand requirements and develop code individually. Strong writing, communication, and time-management skills and a positive attitude. Knowledge of working in Agile development. Experience in Java / J2EE / Java 8 or later development. Experience with Spring / Spring Boot / Spring MVC / Spring JPA / Spring Security. Mandatory experience in Cloud / Docker / Container / Kubernetes. Knowledge of Microservices / REST API / Web services.
Posted 3 months ago
10 - 15 years
7 - 10 Lacs
Chennai
Work from Office
Job description
Responsibilities: Translate requirements and implement product features using an open-source technology stack, viz. Python with PySpark. Research the technical feasibility of new functionalities and products. Work on proofs of concept for architectural and design aspects of new functionality. Work on continuous improvement of the products through innovation and learning, with a focus on benchmarking and optimization. Design and develop robust databases for real-time and collaboration platforms. Design and develop a framework of hierarchy and access control for various resources.
Required Skills: Senior Python developer to work with the existing team building strategic and tactical projects. Hands-on software engineer who will be writing code. 5+ years of experience in Python development required. Full-stack Python web development experience preferred. Understanding of both relational and non-relational databases. Experience with MongoDB and/or Oracle preferred. Banking domain experience preferred. Familiarity with Windows and Linux operating systems, and able to write shell and batch programs. Able to work with Continuous Integration and Continuous Deployment tools. Demonstrated Subject Matter Expert (SME) in area(s) of application development. Ability to adjust priorities quickly as circumstances dictate. Demonstrated problem-solving and decision-making skills.
Nice to Have Skills: Strong collaboration and communication skills within distributed project teams. Strong agile/scrum development experience.
Posted 3 months ago
5 - 10 years
14 - 24 Lacs
Pune, Bengaluru, Hyderabad
Hybrid
Job Title: MLOps Engineer. Locations: Hyderabad/Bengaluru/Chennai/Mumbai/Pune/Kolkata/Gurgaon.
Job Description: We are seeking a highly skilled Senior MLOps Engineer with 4-10 years of experience to join our team. The ideal candidate will have extensive expertise in model deployment, model monitoring, and productionizing machine learning models. The candidate will play a crucial role in designing and implementing efficient workflows for ML programming and team communication, ensuring seamless integration of ML solutions within our organization.
Key Responsibilities:
Model deployment: Manage and optimize model deployment processes, including the use of Kubernetes for containerized model deployment and orchestration.
Model registry management: Maintain and manage a model registry to track versions and ensure smooth transitions from development to production.
Data pipelines: Design, develop, and maintain robust ETL/ELT, curated, and feature-engineering processes using Python and SQL to extract, transform, and load data from various sources into our data platforms.
CI/CD implementation: Develop and implement Continuous Integration/Continuous Deployment (CI/CD) pipelines for model training, testing, and deployment, ensuring high code quality through rigorous model code reviews.
Model monitoring & optimization: Design and implement model inference pipelines and monitoring frameworks to support thousands of models across various pods, optimizing execution times and resource usage.
Team leadership & training: Manage, mentor, and train junior engineers, fostering their growth and learning while overseeing a large team.
Collaboration with data science teams: Train and collaborate with data science team members on best practices in tools such as Kubeflow, Jenkins, Docker, and Kubernetes to ensure smooth model productionization.
Reusable frameworks development: Draft designs and apply reusable frameworks for drift detection, live inference, and API integration.
Cost optimization initiatives: Propose and implement strategies to reduce operational costs, including optimizing models for resource efficiency, resulting in significant annual savings.
Documentation & standards development: Produce MLE standards documents to assist data science teams in deploying their models effectively and consistently.
Qualification:
Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Experience: 4-10 years of experience in data engineering or a related field, with a strong focus on Python, SQL, and Azure Cloud technologies.
Technical skills: Proficiency in advanced Python for model deployment, data manipulation, automation, and scripting. Proficient in Kubernetes, model monitoring, and CI/CD practices. Experience productionizing machine learning models, and with programming languages and ML frameworks (e.g., TensorFlow, PyTorch). Advanced SQL skills for complex query writing, optimization, and database management. Experience with big data technologies (e.g., Spark, Hadoop) and data lake architectures. Familiarity with CI/CD pipelines, version control (Git), and containerization (Docker); Airflow is a plus.
Soft skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work independently and as part of a team in a fast-paced environment.
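A reusable drift-detection framework of the kind mentioned above typically exposes small statistical checks comparing live inputs against training statistics. Below is a minimal sketch of one such check using a standardized mean-shift test; the 3-standard-error threshold and the sample values are purely illustrative assumptions, not any particular framework's API.

```python
# Sketch of a data-drift check such as a reusable drift-detection framework
# might expose: flag drift when a live feature sample's mean shifts more
# than z_threshold standard errors from the training mean.
# Threshold and sample values below are illustrative only.
import math

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    """Sample standard deviation (n-1 denominator)."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def drift_detected(train_sample, live_sample, z_threshold=3.0):
    """True when the live mean lies more than z_threshold standard errors
    from the training mean, using the training spread as the reference."""
    se = stdev(train_sample) / math.sqrt(len(live_sample))
    z = abs(mean(live_sample) - mean(train_sample)) / se
    return z > z_threshold

if __name__ == "__main__":
    train = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
    stable = [10.1, 9.9, 10.0, 10.2]     # same distribution: no drift
    shifted = [13.0, 12.8, 13.2, 13.1]   # clear mean shift: drift
    print(drift_detected(train, stable), drift_detected(train, shifted))
```

Production frameworks usually layer richer tests (PSI, KS-test) per feature on the same pattern, but the monitor-compare-threshold shape is the same.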
Posted 3 months ago
10 - 20 years
16 - 30 Lacs
Pune
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Job Position: BigData Architect. Experience: 10+ years. Technical skill set: BigData, GCP, Kafka, ETL, SQL (Tech Architect with 30% PM). Location: Hyderabad, Pune, Gurgaon.
Role and Responsibilities: Participates in the design of the architecture, supports projects, and reviews information elements including models, glossary, flows, and data usage. Provides guidance to the team in achieving project goals/milestones. Contributes as an expert to multiple delivery teams: defining best practices, building reusable components, capability building, aligning with industry trends, and actively engaging with wider data communities. Hands-on coding experience in Big Data technologies like Hadoop, Kafka, Hive, Spark, Flink, Storm, etc. Experience in any two of the cloud services (GCP / AWS / Azure). Hands-on in building real-time and batch ETL/ELT solutions using open-source technologies like Spark/Flink/Storm/Kafka Streaming. Hands-on in creating data models (ER and dimensional models) to help data consumers create a high-performance consumption layer. Hands-on experience in cloud data lake implementation, preferably GCP. Strong experience in Python and SQL. Investigate new technologies, data modelling methods, and information management systems to determine which ones should be incorporated into data architectures, and develop implementation timelines and milestones.
Required Skills: Experience in database concepts like OLTP, OLAP, Star and Snowflake schemas, normalization and denormalization, etc.
Any experience in SAP is an added advantage. Any experience in Talend and Apache NiFi is an added advantage. Familiarity with open-source workflow management software like Airflow/Oozie. Experience creating/supporting production software/systems and a proven track record of identifying and resolving performance bottlenecks in production systems. Experience running multiple projects in parallel following both waterfall and Agile. Experience managing an engineering team and owning architecture and best practices. Maintaining data quality by introducing a data governance/validation framework. Participates in the design of the information architecture: supports projects, reviews information elements including models, glossary, flows, and data usage.
Posted 3 months ago
10 - 20 years
16 - 30 Lacs
Gurgaon
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Job Position: BigData Architect. Experience: 10+ years. Technical skill set: BigData, GCP, Kafka, ETL, SQL (Tech Architect with 30% PM). Location: Hyderabad, Pune, Gurgaon.
Role and Responsibilities: Participates in the design of the architecture, supports projects, and reviews information elements including models, glossary, flows, and data usage. Provides guidance to the team in achieving project goals/milestones. Contributes as an expert to multiple delivery teams: defining best practices, building reusable components, capability building, aligning with industry trends, and actively engaging with wider data communities. Hands-on coding experience in Big Data technologies like Hadoop, Kafka, Hive, Spark, Flink, Storm, etc. Experience in any two of the cloud services (GCP / AWS / Azure). Hands-on in building real-time and batch ETL/ELT solutions using open-source technologies like Spark/Flink/Storm/Kafka Streaming. Hands-on in creating data models (ER and dimensional models) to help data consumers create a high-performance consumption layer. Hands-on experience in cloud data lake implementation, preferably GCP. Strong experience in Python and SQL. Investigate new technologies, data modelling methods, and information management systems to determine which ones should be incorporated into data architectures, and develop implementation timelines and milestones.
Required Skills: Experience in database concepts like OLTP, OLAP, Star and Snowflake schemas, normalization and denormalization, etc.
Any experience in SAP is an added advantage. Any experience in Talend and Apache NiFi is an added advantage. Familiarity with open-source workflow management software like Airflow/Oozie. Experience creating/supporting production software/systems and a proven track record of identifying and resolving performance bottlenecks in production systems. Experience running multiple projects in parallel following both waterfall and Agile. Experience managing an engineering team and owning architecture and best practices. Maintaining data quality by introducing a data governance/validation framework. Participates in the design of the information architecture: supports projects, reviews information elements including models, glossary, flows, and data usage.
Posted 3 months ago
5 - 10 years
20 - 30 Lacs
Pune
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Role: Big Data Developers/Lead. Work location: Bangalore (CV Raman Nagar), Pune, Hyderabad, Gurugram, Noida. Experience: 5+ years. Technical skills: BigData, AWS, Redshift, Snowflake, Spark, Python, Scala, and SQL.
Roles and Responsibilities: 5+ years of Big Data development experience, with a minimum of 2 years hands-on in Java. Hands-on experience of API development (from an application / software engineering perspective). Advanced-level experience (5+ years) building real-time streaming and batch systems using Apache Spark and Kafka in the Java programming language. Experience with any NoSQL store (HBase, Cassandra, MongoDB, InfluxDB). Solid understanding of secure application development methodologies. Experience in developing microservices using the Spring framework is a plus. Capable of working both as an individual contributor and within a team. Design, build, and maintain efficient, reusable, and reliable code. Experience in Hadoop-based technologies: Java, Hive, Pig, MapReduce, Spark, Python/Scala, Azure. Should be able to understand complex architectures and be comfortable working with multiple teams. Excellent communication, client engagement, and client management skills are strongly preferred. Minimum Bachelor's degree in Computer Science, Engineering, Business Information Systems, or a related field.
If the above profile suits you, please share your updated profile with the following HR details: Full Name, Email ID, Phone No., Total years of experience, Relevant experience in Big Data, Relevant experience in AWS, Relevant experience in Snowflake, Relevant experience in Redshift, Rating on SQL (out of 5), Any other technology, Notice period, CTC, ECTC, Current company, Current location, Preferred location, Any offers (if yes, please mention), and Interview availability (please mention the date and time). Revert with your confirmation.
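The real-time streaming versus batch distinction in the posting above can be illustrated with a toy micro-batch loop in plain Python. This is a conceptual sketch only, not actual Spark or Kafka code; the event source and batch size are invented for the example:

```python
from itertools import islice

def micro_batches(stream, batch_size):
    """Group an unbounded iterator of events into fixed-size batches,
    the way a micro-batch streaming engine chunks its input."""
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def process(batch):
    # Stand-in for per-batch work (aggregation, enrichment, writes).
    return sum(batch)

events = range(10)   # stand-in for messages arriving on a Kafka topic
totals = [process(b) for b in micro_batches(events, 4)]
print(totals)        # sums of batches [0..3], [4..7], [8..9]
```

Spark Structured Streaming applies essentially this pattern at cluster scale: the engine repeatedly collects newly arrived records into a bounded batch and runs the same transformation logic over each one.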
Posted 3 months ago
5 - 10 years
18 - 30 Lacs
Pune
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by optimizing their IT capabilities, practices, and operations, drawing on our experience in retail, high technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Please find the details below:
Job Position (Title): Big Data Engineers/Leads
Experience Required: 4 to 10 years
Location: Bangalore, Hyderabad, Pune
Technical Skill Requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow
Apply Link: Untitled form
Please also share the following details along with your profile: Full Name, Email ID, Contact No., Total years of experience, Relevant experience (Big Data, Cloud), Rating on SQL, Any other technology, Notice period, CTC, ECTC, Current company, Current location, Preferred location, Any offers (if yes, please mention), and Interview availability (face-to-face or virtual? Please mention).
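For candidates asked above to rate their SQL, the kind of aggregation query such roles typically probe can be tried in-memory with Python's built-in sqlite3 module. The table and column names here are invented for illustration:

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('north', 100), ('north', 250), ('south', 80), ('south', 40);
""")

# Typical interview-style aggregation: total per region, highest first.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
    ORDER BY total DESC
""").fetchall()
print(rows)   # [('north', 350.0), ('south', 120.0)]
conn.close()
```

The same GROUP BY/ORDER BY shape carries over directly to Hive, Redshift, and Snowflake dialects, which is why basic aggregation fluency is screened so often.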
Posted 3 months ago
4 - 9 years
18 - 30 Lacs
Pune
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by optimizing their IT capabilities, practices, and operations, drawing on our experience in retail, high technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Please find the details below:
Job Position (Title): Big Data Engineers/Leads
Experience Required: 4 to 10 years
Location: Bangalore (CV Raman Nagar, Baghmane Road)
Technical Skill Requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow
Please also share the following details along with your profile: Full Name, Email ID, Contact No., Total years of experience, Relevant experience (Big Data, Cloud), Rating on SQL, Any other technology, Notice period, CTC, ECTC, Current company, Current location, Preferred location, Any offers (if yes, please mention), and Interview availability (face-to-face or virtual? Please mention).
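Airflow, listed in the skills above, schedules tasks as a directed acyclic graph; the core ordering idea can be sketched in plain Python with the standard library's topological sorter. The task names are made up for illustration, and real Airflow DAGs are declared with operators rather than this helper:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy pipeline: extract feeds a transform and a quality check,
# both of which must finish before the load step runs.
deps = {
    "transform": {"extract"},
    "quality_check": {"extract"},
    "load": {"transform", "quality_check"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)   # extract first, load last
```

An orchestrator like Airflow computes exactly this kind of ordering, then additionally runs independent tasks (here, transform and quality_check) in parallel and retries failures.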
Posted 3 months ago