
109 Impala Jobs - Page 3

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

5 - 8 years

6 - 10 Lacs

Pune

Work from Office

Source: Naukri

About The Role
Role Purpose: Support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Responsibilities:

1. Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the team's scores
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, the problem-solving steps taken, and the total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product so team members can deliver better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent issues and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

2. Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and communicate oral messages appropriately to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

3. Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed; technical test performance

Mandatory Skills: Big Data
Experience: 5-8 Years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention - of yourself, your career, and your skills.
We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 month ago

Apply

5 - 10 years

20 - 35 Lacs

Bengaluru

Work from Office

Source: Naukri

Position: PySpark Developer
Experience: 5+ years
Location: Bangalore
Notice Period: Immediate to 30 days

Roles & Responsibilities:
- 5+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform
- PySpark: advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques
- Cloudera Data Platform: strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase
- Data Warehousing: knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala)
- Big Data Technologies: familiarity with Hadoop, Kafka, and other distributed computing tools
- Orchestration and Scheduling: experience with Apache Oozie, Airflow, or similar orchestration frameworks
- Scripting and Automation: strong scripting skills in Linux
- Data Pipeline Development: design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy
- Data Ingestion: implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP
- Data Transformation and Processing: use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements
- Performance Optimization: conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes
- Data Quality and Validation: implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline
- Automation and Orchestration: automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem
- Monitoring and Maintenance: monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes
- Collaboration: work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support data-driven initiatives
- Documentation: maintain thorough documentation of data engineering processes, code, and pipeline configurations
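
As a rough illustration of the read-transform-write pattern this role describes, here is a minimal PySpark sketch against Hive-backed tables on a CDP-style cluster; the database, table, and column names are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch of a PySpark ETL step on a Hive-backed platform such as CDP.
# Table and column names (raw_db.orders, curated_db.orders_clean) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")
    .enableHiveSupport()   # lets Spark read/write Hive tables (also queryable via Impala)
    .getOrCreate()
)

# Ingest: read the raw Hive table
raw = spark.table("raw_db.orders")

# Transform: cleanse and standardize
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write back as a partitioned Hive table
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("curated_db.orders_clean"))
```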

Posted 1 month ago

Apply

12 - 16 years

35 - 40 Lacs

Bengaluru

Work from Office

Source: Naukri

As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions
- Design scalable data models for analytics and reporting
- Implement data validation, quality, and governance practices
- Optimize Spark jobs for cost and performance efficiency
- Automate ETL workflows with AWS Step Functions and Lambda
- Collaborate with data scientists and analysts on data needs
- Maintain documentation for data architecture and pipelines
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi
- Experience provisioning AWS data analytics resources with Terraform is desirable

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
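
For illustration only, the sketch below shows the shape of a Glue PySpark ETL job of the kind this role describes: read a catalogued table, apply a basic transformation, and write partitioned Parquet to S3. The database, table, and bucket names are hypothetical.

```python
# Hedged sketch of an AWS Glue PySpark job: read from the Glue Data Catalog,
# transform with Spark, write Parquet to S3. Database, table, and bucket
# names are hypothetical placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_ctx = GlueContext(SparkContext.getOrCreate())
spark = glue_ctx.spark_session

# Read a catalogued source table as a Spark DataFrame
df = glue_ctx.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
).toDF()

# Basic validation and enrichment
df = df.filter(F.col("order_id").isNotNull()).withColumn(
    "ingest_date", F.current_date()
)

# Write partitioned Parquet to the data lake
df.write.mode("append").partitionBy("ingest_date").parquet(
    "s3://example-data-lake/curated/orders/"
)
```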

Posted 1 month ago

Apply

12 - 16 years

35 - 40 Lacs

Chennai

Work from Office

Source: Naukri

As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions
- Design scalable data models for analytics and reporting
- Implement data validation, quality, and governance practices
- Optimize Spark jobs for cost and performance efficiency
- Automate ETL workflows with AWS Step Functions and Lambda
- Collaborate with data scientists and analysts on data needs
- Maintain documentation for data architecture and pipelines
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi
- Experience provisioning AWS data analytics resources with Terraform is desirable

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg

Posted 1 month ago

Apply

12 - 16 years

35 - 40 Lacs

Mumbai

Work from Office

Source: Naukri

As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions
- Design scalable data models for analytics and reporting
- Implement data validation, quality, and governance practices
- Optimize Spark jobs for cost and performance efficiency
- Automate ETL workflows with AWS Step Functions and Lambda
- Collaborate with data scientists and analysts on data needs
- Maintain documentation for data architecture and pipelines
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi
- Experience provisioning AWS data analytics resources with Terraform is desirable

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg

Posted 1 month ago

Apply

12 - 16 years

35 - 40 Lacs

Kolkata

Work from Office

Source: Naukri

As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions
- Design scalable data models for analytics and reporting
- Implement data validation, quality, and governance practices
- Optimize Spark jobs for cost and performance efficiency
- Automate ETL workflows with AWS Step Functions and Lambda
- Collaborate with data scientists and analysts on data needs
- Maintain documentation for data architecture and pipelines
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi
- Experience provisioning AWS data analytics resources with Terraform is desirable

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg

Posted 1 month ago

Apply

7 - 11 years

50 - 60 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Source: Naukri

Role: Resident Solution Architect
Location: Remote

The Solution Architect at Koantek builds secure, highly scalable big data solutions to achieve tangible, data-driven outcomes, all the while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. This role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives.

Specific requirements for the role include:
- Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Expert-level hands-on coding experience in Python, SQL, and Spark/Scala or PySpark
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- IoT/event-driven/microservices in the cloud: experience with private and public cloud architectures, their pros/cons, and migration considerations
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- Extensive hands-on experience with the industry technology stack for data management, ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Experience using Azure DevOps and CI/CD as well as Agile tools and processes, including Git, Jenkins, Jira, and Confluence
- Experience in creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/Scala
- Able to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and defining conceptual, logical, and physical data models
- Proficient-level experience with architecture design, build, and optimization of big data collection, ingestion, storage, processing, and visualization

Responsibilities:
- Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable
- Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure
- Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling
- Aid developers in identifying, designing, and implementing process improvements with automation tools to optimize data delivery
- Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it
- Employ change-management best practices to ensure that data remains readily accessible to the business
- Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs; experience with MDM using data governance solutions

Qualifications:
- Overall experience of 12+ years in the IT field
- Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for near-real-time data warehousing, and machine learning solutions
- Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions
- Experience in a software development, data engineering, or data analytics field using Python, Scala, Spark, Java, or equivalent technologies
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience
- Good to have: advanced technical certifications such as Azure Solutions Architect Expert, AWS Certified Data Analytics, DASCA Big Data Engineering and Analytics, AWS Certified Cloud Practitioner, Solutions Architect, or Professional Google Cloud Certified

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
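
As a hedged illustration of the "tables, partitioning, loading and aggregating with Spark SQL" requirement above (not Koantek's implementation), the sketch below creates and loads a partitioned Delta Lake table; the table, source, and column names are hypothetical and a Databricks/Delta runtime is assumed.

```python
# Illustrative sketch: create a partitioned Delta table, load it, aggregate it.
# Assumes a Delta-enabled Spark runtime; names (events, raw_events) are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS events (
        event_id STRING,
        event_type STRING,
        event_date DATE
    )
    USING DELTA
    PARTITIONED BY (event_date)
""")

# Load from a hypothetical raw table
spark.sql("""
    INSERT INTO events
    SELECT event_id, event_type, to_date(event_ts) AS event_date
    FROM raw_events
""")

# Aggregate with Spark SQL
daily = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS n
    FROM events
    GROUP BY event_date, event_type
""")
daily.show()
```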

Posted 1 month ago

Apply

6 - 11 years

20 - 25 Lacs

Hyderabad

Hybrid

Source: Naukri

- 6+ years of total IT experience
- 3+ years of experience with Hadoop (Cloudera)/big data technologies
- Knowledge of the Hadoop ecosystem and Big Data technologies
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)
- Experience in designing and developing data pipelines for data ingestion or transformation using Java, Scala, or Python
- Experience with Spark programming (PySpark, Scala, or Java)
- Hands-on experience with Python/PySpark/Scala and basic machine-learning libraries is required
- Proficient in programming in Java or Python; prior Apache Beam/Spark experience is a plus
- Hands-on experience in CI/CD, scheduling, and scripting
- Ensure automation through CI/CD across platforms, both in the cloud and on-premises
- System-level understanding: data structures, algorithms, distributed storage and compute
- Can-do attitude toward solving complex business problems; good interpersonal and teamwork skills
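
As a hedged example of one pipeline style from this stack, the sketch below ingests a Kafka topic with Spark Structured Streaming and lands it on HDFS; the broker, topic, and paths are hypothetical placeholders, and the Spark-Kafka connector package is assumed to be available.

```python
# Hedged sketch of a Spark Structured Streaming ingestion job from Kafka.
# Broker, topic, and paths are hypothetical; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka delivers key/value as binary; cast the value to string for parsing
events = stream.select(F.col("value").cast("string").alias("json_payload"))

query = (
    events.writeStream.format("parquet")
    .option("path", "hdfs:///data/raw/clickstream/")
    .option("checkpointLocation", "hdfs:///checkpoints/clickstream/")
    .start()
)
query.awaitTermination()
```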

Posted 1 month ago

Apply

2 - 5 years

14 - 17 Lacs

Hyderabad

Work from Office

Source: Naukri

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows for source-to-target mapping and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
- Developed Python and PySpark programs for data analysis
- Good working experience with Python to develop a custom framework for generating rules (like a rules engine)
- Developed Python code to gather data from HBase and designed the solution to implement it using PySpark
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations

Preferred technical and professional experience:
- Understanding of DevOps
- Experience in building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala
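
To illustrate the Hive read/transform/write pattern mentioned above, a minimal PySpark sketch follows; in Spark 2+ a SparkSession with Hive support replaces the older HiveContext, and the table and column names here are hypothetical.

```python
# Hedged sketch of the Hive read/transform/write pattern.
# Table and column names (staging.transactions, curated.account_summary) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("business-transform")
    .enableHiveSupport()   # modern replacement for HiveContext
    .getOrCreate()
)

txns = spark.table("staging.transactions")

# Apply business transformations with DataFrame operations
summary = (
    txns.filter(F.col("status") == "SETTLED")
        .groupBy("account_id")
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("txn_count"))
)

summary.write.mode("overwrite").saveAsTable("curated.account_summary")
```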

Posted 1 month ago

Apply

5 - 10 years

0 - 1 Lacs

Pune

Work from Office

Source: Naukri

Position Overview: Cloud Architect with expertise in Hadoop and the Google Cloud Platform (GCP) data stack, along with experience in big data architecture and migration. The ideal candidate should have strong proficiency in GCP big data tools, including Hadoop, Hive, HDFS, Impala, Spark, MapReduce, MS SQL, Kafka, and Redis. Familiarity with Cloudera, HBase, MongoDB, MariaDB, and Event Hub is a plus.

Key Responsibilities:
- Design, implement, and optimize big data architecture on GCP and Hadoop ecosystems
- Lead data migration projects from on-premise to cloud platforms (GCP)
- Develop and maintain ETL pipelines using tools like Spark, Hive, and Kafka
- Manage Hadoop clusters, HDFS, and related components
- Work with data streaming technologies like Kafka and Event Hub for real-time data processing
- Optimize SQL and NoSQL databases (MS SQL, Redis, MongoDB, MariaDB, HBase) for high availability and scalability
- Collaborate with data scientists, analysts, and DevOps teams to integrate big data solutions
- Ensure data security, governance, and compliance in cloud and on-premise environments

Required Skills & Experience:
- 5-10 years of experience as a Cloud Architect
- Strong expertise in Hadoop (HDFS, Hive, Impala, Spark, MapReduce)
- Hands-on experience with GCP big data services
- Proficiency in MS SQL, Kafka, and Redis for data processing and analytics
- Experience with Cloudera, HBase, MongoDB, and MariaDB
- Knowledge of real-time data streaming and event-driven architectures
- Understanding of big data security and performance optimization
- Ability to design and execute data migration strategies

Location: Koregaon Park, Pune, Maharashtra (India)
Shift Timings: USA Time Zone (06:30 PM IST to 03:30 AM IST)
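
A hedged sketch of one migration step such a role implies - copying a Hive dataset to Google Cloud Storage as Parquet so GCP services can consume it - appears below; the table name and bucket are hypothetical, and the GCS connector is assumed to be on the Spark classpath.

```python
# Hedged sketch: stage an on-prem Hive/HDFS dataset into GCS as Parquet,
# from which GCP services (e.g., BigQuery external tables) can read it.
# Names are hypothetical; assumes the GCS connector is configured.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-to-gcs-migration")
    .enableHiveSupport()
    .getOrCreate()
)

legacy = spark.table("warehouse.customer_events")

(legacy.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("gs://example-migration-bucket/customer_events/"))
```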

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Chandigarh

Work from Office

Source: Naukri

Job Requirement - Required Experience, Skills & Competencies:
- Strong hands-on experience in implementing a Data Lake with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming and debugging skills in Python or Scala/Java
- Experience building REST services is good to have
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner
- Good understanding and experience of using CI/CD with Git and Jenkins / Azure DevOps
- Experience setting up cloud-computing infrastructure solutions
- Hands-on experience with/exposure to NoSQL databases and data modelling in Hive
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP)
- B.Tech/B.E. from a reputed institute preferred
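
As a hedged illustration of working against this ADF/ADLS/Databricks stack, the sketch below reads raw JSON from an ADLS Gen2 container with PySpark and writes curated Parquet back; the storage account, container, and paths are hypothetical, and cluster authentication (service principal or managed identity) is assumed to be configured.

```python
# Hedged sketch of a Databricks-style PySpark read/write against ADLS Gen2.
# Storage account, container, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-etl").getOrCreate()

base = "abfss://datalake@examplestorage.dfs.core.windows.net"

raw = spark.read.json(f"{base}/raw/telemetry/")

curated = (
    raw.filter(F.col("device_id").isNotNull())
       .withColumn("load_date", F.current_date())
)

(curated.write
        .mode("append")
        .partitionBy("load_date")
        .parquet(f"{base}/curated/telemetry/"))
```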

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Salem

Work from Office

Source: Naukri

Azure Data Factory ETL Consultant (Pharma experience a must)

Job Requirement - Required Experience, Skills & Competencies:
- Strong hands-on experience in implementing a Data Lake with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming and debugging skills in Python or Scala/Java
- Experience building REST services is good to have
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner
- Good understanding and experience of using CI/CD with Git and Jenkins / Azure DevOps
- Experience setting up cloud-computing infrastructure solutions
- Hands-on experience with/exposure to NoSQL databases and data modelling in Hive
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP)
- B.Tech/B.E. from a reputed institute preferred

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Kota

Work from Office

Source: Naukri

Required Experience, Skills & Competencies:
- Strong hands-on experience in implementing a Data Lake with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming and debugging skills in Python or Scala/Java
- Experience building REST services is good to have
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner
- Good understanding and experience of using CI/CD with Git and Jenkins / Azure DevOps
- Experience setting up cloud-computing infrastructure solutions
- Hands-on experience with/exposure to NoSQL databases and data modelling in Hive
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP)
- B.Tech/B.E. from a reputed institute preferred

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Nasik

Work from Office

Source: Naukri

Job Requirement - Required Experience, Skills & Competencies:
- Strong hands-on experience in implementing a Data Lake with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming and debugging skills in Python or Scala/Java
- Experience building REST services is good to have
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner
- Good understanding and experience of using CI/CD with Git and Jenkins / Azure DevOps
- Experience setting up cloud-computing infrastructure solutions
- Hands-on experience with/exposure to NoSQL databases and data modelling in Hive
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP)
- B.Tech/B.E. from a reputed institute preferred

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Agra

Work from Office

Source: Naukri

Job Requirement - Required Experience, Skills & Competencies:
- Strong hands-on experience in implementing a Data Lake with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming and debugging skills in Python or Scala/Java
- Experience building REST services is good to have
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner
- Good understanding and experience of using CI/CD with Git and Jenkins / Azure DevOps
- Experience setting up cloud-computing infrastructure solutions
- Hands-on experience with/exposure to NoSQL databases and data modelling in Hive
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP)
- B.Tech/B.E. from a reputed institute preferred

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Ludhiana

Work from Office

Source: Naukri

Required Experience, Skills & Competencies:
- Strong hands-on experience in implementing a Data Lake with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming and debugging skills in Python or Scala/Java
- Experience building REST services is good to have
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner
- Good understanding and experience of using CI/CD with Git and Jenkins / Azure DevOps
- Experience setting up cloud-computing infrastructure solutions
- Hands-on experience with/exposure to NoSQL databases and data modelling in Hive
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP)
- B.Tech/B.E. from a reputed institute preferred
- Pharma experience a must

Posted 2 months ago

Apply

1 - 5 years

3 - 7 Lacs

Allahabad, Noida

Work from Office

Source: Naukri

Feather Thread Corporation is looking for a Big Data Administrator to join our dynamic team and embark on a rewarding career journey.

Office Management: Oversee general office operations, including maintenance of office supplies, equipment, and facilities. Manage incoming and outgoing correspondence, including mail, email, and phone calls. Coordinate meetings, appointments, and travel arrangements for staff members as needed.

Administrative Support: Provide administrative support to management and staff, including scheduling meetings, preparing documents, and organizing files. Assist with the preparation of reports, presentations, and other materials for internal and external stakeholders. Maintain accurate records and databases, ensuring data integrity and confidentiality.

Communication and Coordination: Serve as a point of contact for internal and external stakeholders, including clients, vendors, and partners. Facilitate communication between departments and team members, ensuring timely and effective information flow. Coordinate logistics for company events, meetings, and conferences.

Documentation and Compliance: Assist with the development and implementation of company policies, procedures, and guidelines. Maintain compliance with regulatory requirements and industry standards. Ensure proper documentation and record-keeping practices are followed.

Project Support: Provide support to project teams by assisting with project coordination, documentation, and tracking of tasks and deadlines. Collaborate with team members to ensure project deliverables are met on time and within budget.

Posted 2 months ago

Apply

2 - 7 years

4 - 8 Lacs

Chennai

Work from Office

Source: Naukri

Do:
1. Be instrumental in understanding the requirements and design of the product/software
2. Develop software solutions by studying information needs, systems flow, data usage, and work processes
3. Investigate problem areas and follow the software development life cycle
4. Facilitate root-cause analysis of system issues and the problem statement
5. Identify ideas to improve system performance and impact availability
6. Analyze client requirements and convert requirements into feasible designs
7. Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
8. Confer with project managers to obtain information on software capabilities

Posted 2 months ago

Apply

3 - 7 years

13 - 18 Lacs

Pune

Work from Office

Source: Naukri

About The Role:
Job Title: Technical Specialist - Big Data (PySpark) Developer
Location: Pune, India

Role Description:
This role is for an Engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good-quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background and have good working experience in Python and Spark technology. The candidate should be hands-on, able to work independently with minimal technical/tool guidance, and able to technically guide and mentor junior resources in the team. As a developer, you will bring extensive design and development skills to reinforce the group of developers within the team. The candidate will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for ages 35 and above

Your key responsibilities:
- Design and discuss your own solutions for addressing user stories and tasks
- Develop, unit-test, integrate, deploy, maintain, and improve software
- Perform peer code reviews
- Actively participate in sprint activities and ceremonies, e.g., daily stand-up/scrum meetings, sprint planning, retrospectives, etc.
- Apply continuous-integration best practices in general (SCM, build automation, unit testing, dependency management)
- Collaborate with other team members to achieve the sprint objectives
- Report progress and update Agile team-management tools (JIRA/Confluence)
- Manage individual task priorities and deliverables
- Take responsibility for the quality of the solutions you provide
- Contribute to planning and continuous-improvement activities; support the PO, ITAO, developers, and Scrum Master

Your skills and experience:
- Engineer with good development experience on a Big Data platform for at least 5 years
- Hands-on experience in Spark (Hive, Impala)
- Hands-on experience in the Python programming language
- Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions
- Experience in the set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps
- Create and maintain fully automated CI build processes and write build and deployment scripts
- Experience with development platforms: OpenShift/Kubernetes/Docker configuration and deployment, with DevOps tools such as Git, TeamCity, Maven, and SONAR
- Good knowledge of core SDLC processes and tools such as HP ALM, Jira, and ServiceNow
- Strong analytical skills
- Proficient communication skills; fluent in English (written/verbal)
- Ability to work in virtual teams and in matrixed organizations
- Excellent team player; open-minded and willing to learn business and technology
- Keeps pace with technical innovation and understands the relevant business area
- Ability to share information and transfer knowledge to team members

How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
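
Since the role stresses unit testing and CI, here is a hedged sketch of a PySpark transformation under pytest; the function, columns, and threshold are hypothetical, not part of the posting.

```python
# Hedged sketch of unit-testing a PySpark transformation with pytest.
# Function and column names are hypothetical.
import pytest
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F


def add_risk_flag(df: DataFrame, threshold: float) -> DataFrame:
    """Flag rows whose exposure exceeds the threshold."""
    return df.withColumn("high_risk", F.col("exposure") > threshold)


@pytest.fixture(scope="session")
def spark():
    # Local SparkSession suitable for a CI pipeline
    return (SparkSession.builder
            .master("local[2]")
            .appName("unit-tests")
            .getOrCreate())


def test_add_risk_flag(spark):
    df = spark.createDataFrame(
        [("t1", 150.0), ("t2", 50.0)], ["trade_id", "exposure"]
    )
    result = {r["trade_id"]: r["high_risk"]
              for r in add_risk_flag(df, 100.0).collect()}
    assert result == {"t1": True, "t2": False}
```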

Posted 2 months ago

Apply

5 - 9 years

7 - 11 Lacs

Chennai

Work from Office

Source: Naukri

- Technically sound to lead discussions in the space of data analytics, business intelligence, and DevOps support for the data science team
- Hands-on experience with Dataiku DSS for data preparation, including workflows, automation, and model deployment
- Experience leading design discovery meetings with end users is preferred
- Strong experience with data modelling, including designing schemas and writing complex SQL queries
- Excellent troubleshooting, testing, and validation skills for data pipelines in Dataiku
- Solid experience with Dataiku connectivity to HDFS, Hadoop, Spark, Impala, Oracle, Teradata, Snowflake, and other cloud systems
- Ability to work with huge datasets
- Experience working in Python
- Identify trends, correlations, and key business drivers using Dataiku, SQL, and Python
- Optimize Dataiku pipelines for improved efficiency and performance
- Deploy and monitor ML models within Dataiku
- Good understanding of data management systems and processes within the ecosystem
- Able to coach, guide, and mentor junior members of the team
- Provide support and training to users on Dataiku best practices and functionalities
- Experience working in an Agile or fast-paced and dynamic environment is a plus
- Must have worked in a continuous integration and continuous delivery (CI/CD) environment
- Ability to work with minimum supervision and collaborate with geographically dispersed teams
- Outstanding written and verbal communication skills
- Knowledge of financial products is a plus
- Dedication to fostering an inclusive culture and valuing diverse perspectives
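
As a hedged illustration of a Dataiku Python recipe like those this role involves, the sketch below reads an input dataset, derives a trend feature with pandas, and writes an output dataset; the dataset names are hypothetical and assume inputs/outputs declared in the Flow.

```python
# Hedged sketch of a Dataiku Python recipe: read input dataset, derive a
# monthly trend feature, write output dataset. Dataset names are hypothetical.
import dataiku
import pandas as pd

orders = dataiku.Dataset("orders_prepared")
df = orders.get_dataframe()

# Simple trend feature for downstream analysis
df["order_month"] = pd.to_datetime(df["order_date"]).dt.to_period("M").astype(str)
monthly = df.groupby("order_month", as_index=False)["amount"].sum()

output = dataiku.Dataset("orders_monthly")
output.write_with_schema(monthly)
```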

Posted 2 months ago

Apply

0 - 2 years

5 - 8 Lacs

Pune

Work from Office

Source: Naukri

About The Role:
Job Title: Big Data AVP (Resume your Résumé)
Location: Pune

Resume your Résumé - Program description:
Whether you've raised a family, set up your own business, or travelled the world - not everyone follows the same life and career trajectory. Our Resume your Résumé India internships are designed to provide opportunities to professionals who are looking to return to work after an extended career break. Throughout the 3-month traineeship you will have the chance to refresh your skills, make new connections, and potentially secure a full-time opportunity upon completing the programme. If you have a background in Finance, Operations, Risk, or Technology, please apply here.

Role Description:
A technology-oriented developer is needed to join the Portfolio Analyser team within the Enterprise Risk Technology space, to assist with the implementation of a strategic regulatory risk reporting platform catering to programs like Historical Simulation, Counterparty Credit Risk, and Stress Testing scenarios. Portfolio Analyser is the in-memory data analytics platform used to generate various risk metrics for the bank. A strong technologist and self-starter, comfortable with independent engagement across functions such as Business Analysis and Project Management, is expected.

Enterprise Risk Technology (ERT) is the technology partner to the Risk divisions of Credit Risk, Market Risk, and Non-Financial Risk. This includes definition of the IT strategy and provision of solutions to allow Risk to manage all aspects of risk, from the analysis of counterparty credit risk to the protection of the Bank's infrastructure and information. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

Description:
This role is for a developer with strong core application or system programming skills in Scala and Java, and good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and greenfield projects. The role calls for full-stack Hadoop development experience with Scala, plus full-stack Java development experience covering Core Java (including JDK 1.8) and a good understanding of design patterns.

Requirements:
- Strong hands-on development in Java technologies
- Strong hands-on development in Hadoop technologies like Spark and Scala, and experience with Avro
- Participation in product feature design and documentation
- Requirement break-up, ownership, and implementation
- Product BAU deliveries and Level 3 production defect fixes

Qualifications & Experience:
- Degree holder in a numerate subject
- Hands-on experience with Hadoop, Spark, Scala, Impala, Avro, and messaging like Kafka
- Experience across a core compiled language: Java
- Proficiency in Java-related frameworks like Spring, Hibernate, and JPA
- Hands-on experience in JDK 1.8 and a strong skillset covering Collections and multithreading, with experience working on distributed applications
- Strong hands-on development track record with end-to-end development cycle involvement
- Good exposure to computational concepts
- Good communication and interpersonal skills
- Working knowledge of risk and derivatives pricing (optional)
- Proficiency in SQL (PL/SQL) and data modelling
- Understanding of Hadoop architecture and the Scala programming language is good to have

Advantageous:
- Understanding of middleware like Solace
- Understanding of NoSQL
- Experience with data analytics platforms
- Banking experience, particularly in risk

What we'll offer you:
Please be aware there are regional differences to DB benefits, and you will need to check the correct package per advert. As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Flexible working arrangements
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for ages 35 and above

How we'll support you:
- Training and development to help you excel in your career
- Flexible working to assist you in balancing your personal priorities
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

Our values define the working environment we strive to create: diverse, supportive, and welcoming of different views. We embrace a culture reflecting a variety of perspectives, insights, and backgrounds to drive innovation. We build talented and diverse teams to drive business results and encourage our people to develop to their full potential. Talk to us about flexible work arrangements and other initiatives we offer. We promote good working relationships and encourage high standards of conduct and work performance. We welcome applications from talented people from all cultures, countries, races, genders, sexual orientations, disabilities, beliefs, and generations, and are committed to providing a working environment free from harassment, discrimination, and retaliation. Click to find out more about our diversity and inclusion policy and initiatives.
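
As a small, hedged illustration of the Avro-on-Spark piece of this stack (shown in PySpark for brevity, though the role centres on Scala/Java), the sketch below reads and writes Avro files; it assumes the external spark-avro module is available, and the paths and column name are hypothetical.

```python
# Hedged sketch of reading/writing Avro with Spark. Requires the external
# spark-avro module (the "avro" format); paths and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("avro-demo").getOrCreate()

trades = spark.read.format("avro").load("hdfs:///data/trades/")

# Filter and write back out as Avro
eur = trades.filter(trades.currency == "EUR")
eur.write.format("avro").mode("overwrite").save("hdfs:///data/trades_eur/")
```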

Posted 2 months ago

Apply

3 - 7 years

3 - 7 Lacs

Karnataka

Work from Office

Source: Naukri

Description - Detailed JD: RTIM, Pega CDH 8.8 Multi-App, Infinity 24.1, Java, RESTful API, OAuth
1. Understanding the NBA requirements and the complete CDH architecture
2. Review of the conceptual design, detailed design, and estimations
3. Reviewing and contributing to deployment activities and practices
4. Contributing to the overall technical solution and putting it into practice
5. Contributing to requirement discussions with subject-matter expertise in relation to Pega CDH
6. Experience in Pega CDH v8.8 multi-app or 24.1 and the retail banking domain is preferred
7. Conducting peer code reviews
8. Excellent communication skills

Additional Details: Global Grade: B; Named Job Posting: No (if yes, needs to be approved by SCSC); Remote work possibility: No; Global Role Family: 60236 (P) Software Engineering; Local Role Name: 6362 Software Developer; Local Skills: 5700 Pega; Languages Required: English

Posted 2 months ago

Apply

3 - 7 years

1 - 5 Lacs

Telangana

Work from Office

Source: Naukri

Location: Chennai and Hyderabad preferred; the customer is willing to take resources from Hyderabad
Experience: 5 to 8 yrs (U3); 5-10 yrs overall

- Proven experience as a development data engineer or similar role, with an ETL background
- Experience with data integration / ETL best practices and data quality principles
- Play a crucial role in ensuring the quality and reliability of the data by designing, implementing, and executing comprehensive testing
- Build the comprehensive code base and business rules for testing and validation of the data by going over the user stories
- Knowledge of continuous integration and continuous deployment (CI/CD) pipelines
- Familiarity with Agile/Scrum development methodologies
- Excellent analytical and problem-solving skills
- Strong communication and collaboration skills
- Experience with big data technologies (Hadoop, Spark, Hive)
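
As a hedged sketch of the testing-and-validation work described above, the snippet below runs simple data-quality assertions over an ETL output table; the table name and rules are hypothetical stand-ins for rules derived from user stories.

```python
# Hedged sketch of data-quality checks on an ETL output.
# Table name and rules are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("curated.customer_orders")

checks = {
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
    "no_duplicates": df.count() == df.dropDuplicates(["order_id"]).count(),
    "amounts_positive": df.filter(F.col("amount") <= 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise AssertionError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```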

Posted 2 months ago

Apply

2 - 6 years

5 - 9 Lacs

Uttar Pradesh

Work from Office

Source: Naukri

- Proven experience as a development data engineer or similar role, with an ETL background
- Experience with data integration / ETL best practices and data quality principles
- Play a crucial role in ensuring the quality and reliability of the data by designing, implementing, and executing comprehensive testing
- Build the comprehensive code base and business rules for testing and validation of the data by going over the user stories
- Knowledge of continuous integration and continuous deployment (CI/CD) pipelines
- Familiarity with Agile/Scrum development methodologies
- Excellent analytical and problem-solving skills
- Strong communication and collaboration skills
- Experience with big data technologies (Hadoop, Spark, Hive)

Posted 2 months ago

Apply

5 - 8 years

5 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Title: Big Data Analyst

Responsibilities:
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements:
Primary skills: Technology -> Big Data - NoSQL -> MongoDB
Preferred skills: Technology -> Big Data - NoSQL -> MongoDB

Additional Responsibilities:
- Knowledge of more than one technology
- Basics of architecture and design fundamentals
- Knowledge of testing tools
- Knowledge of agile methodologies
- Understanding of project life-cycle activities on development and maintenance projects
- Understanding of one or more estimation methodologies; knowledge of quality processes
- Basics of the business domain to understand the business requirements
- Analytical abilities, strong technical skills, and good communication skills
- Good understanding of the technology and domain
- Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
- Awareness of the latest technologies and trends
- Excellent problem-solving, analytical, and debugging skills

Educational Requirements: Bachelor of Engineering
Service Line: Cloud & Infrastructure Services
* Location of posting is subject to business requirements

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies