6.0 - 10.0 years
0 Lacs
noida, uttar pradesh
On-site
As a skilled professional with over 7 years of experience, you will review and understand business requirements to ensure timely completion of development tasks, with rigorous testing to minimize defects. Collaborating with a software development team is crucial to implement best practices and enhance the performance of data applications, meeting client needs effectively.

In this role, you will work with various teams within the company and engage with customers to understand, translate, define, and design innovative solutions for their business challenges. You will also research new Big Data technologies to evaluate their maturity and alignment with business and technology strategies. Operating within a rapid and agile development process, you will focus on accelerating speed to market while upholding the necessary controls.

Your qualifications should include a BE/B.Tech/MCA degree with a minimum of 6 years of IT experience, including 4 years of hands-on design and development experience with the Hadoop technology stack and various programming languages. You are expected to have proficiency in areas such as:
- Hadoop, HDFS, MapReduce, Spark Streaming, Spark SQL, Spark ML
- Kafka/Flume, Apache NiFi, Hortonworks Data Platform
- Hive, Pig, Sqoop
- NoSQL databases (HBase, Cassandra, Neo4j, MongoDB)
- Visualization and reporting frameworks (D3.js, Zeppelin, Grafana, Kibana, Tableau, Pentaho)
- Scrapy for web crawling, Elasticsearch, Google Analytics data streaming
- Data security protocols (Kerberos, OpenLDAP, Knox, Ranger)

Strong knowledge of the current technology landscape and industry trends is essential, as is experience in Big Data integration with Metadata Management, Data Quality, and Master Data Management solutions, covering both structured and unstructured data. Active participation in the community through articles, blogs, or speaking engagements at conferences will be highly valued in this role.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
You have experience in ETL testing and are familiar with Agile methodology. With a minimum of 4-6 years of testing experience in test planning and execution, you possess working knowledge of database testing. Prior experience in the auditing domain would be advantageous. Your strong application analysis, troubleshooting, and behavioral skills, along with extensive experience in manual testing, will be valuable. Experience in automation scripting is not mandatory but would be beneficial.

You are adept at leading discussions with business, development, and vendor teams for testing activities such as defect coordination and test scenario reviews. Your excellent verbal and written communication skills enable you to communicate effectively with various stakeholders, and you are capable of working both independently and collaboratively with onshore and offshore teams. The role requires an experienced ETL developer with proficiency in Big Data technologies such as Hadoop.

Key Skills Required:
- Hadoop (Hortonworks), HDFS
- Hive, Pig, Knox, Ambari, Ranger, Oozie
- Talend, SSIS
- MySQL, MS SQL Server, Oracle
- Windows, Linux

Being open to working 2nd shifts (1pm - 10pm) is essential for this role, and excellent English communication skills will be crucial for effective collaboration. If you are interested, please share your profile on mytestingcareer.com. When responding, kindly include your current CTC, expected CTC, notice period, current location, and contact number.
Posted 1 month ago
4.0 - 5.0 years
4 - 7 Lacs
Pune
Work from Office
Role & Responsibilities

About the Role
We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Preferred Candidate Profile
• Hands-on experience with Spark, Hive, Cloudera Hadoop, Kafka, and Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, Data Mesh, and security best practices is a plus
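For candidates new to SLO-driven operations, the uptime responsibility above boils down to simple error-budget arithmetic. The following is a minimal sketch; the 99.9% target and 30-day window are illustrative assumptions, not figures from this posting:

```python
# Error-budget arithmetic for an availability SLO.
# The SLO target (99.9%) and window (30 days) below are assumed examples.

def error_budget_minutes(slo_target: float, window_minutes: int) -> float:
    """Allowed downtime, in minutes, for the given SLO over the window."""
    return (1.0 - slo_target) * window_minutes

def budget_remaining(slo_target: float, window_minutes: int,
                     downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    budget = error_budget_minutes(slo_target, window_minutes)
    return (budget - downtime_minutes) / budget

# A 30-day month has 43,200 minutes; 99.9% availability permits
# roughly 43.2 minutes of downtime in that window.
MONTH = 30 * 24 * 60
print(round(error_budget_minutes(0.999, MONTH), 2))
print(round(budget_remaining(0.999, MONTH, 21.6), 2))
```

Tracking the remaining fraction rather than raw downtime is what makes burn-rate alerts and release go/no-go decisions comparable across services with different targets.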
Posted 1 month ago
7.0 - 10.0 years
6 - 7 Lacs
Navi Mumbai, SBI Belapur
Work from Office
ISA Non-captive, RTH-Y

Note:
1. This position requires the candidate to work from the client office starting from day one.
2. Ensure that you perform basic validation and gauge the interest level of the candidate before uploading their profile to our system.
3. The candidate's band will be assigned as per their relevant experience. We will not entertain lower-experience profiles for a higher band.
4. The candidate's full BGV is required before onboarding.
5. If required, the candidate will be regularized after 6 months; hence a 6-month NOC from the DOJ is required.

Mode of Interview: Face to Face (Mandatory)

JOB DESCRIPTION
Total Years of Experience: 7-10 years
Relevant Years of Experience: 7-10 years
Mandatory Skills: Cloudera DBA

Key Responsibilities:
- Provision and manage Cloudera clusters (CDP Private Cloud Base)
- Monitor cluster health, performance, and resource utilization
- Implement security (Kerberos, Ranger, TLS), HA, and backup strategies
- Handle patching, upgrades, and incident response
- Collaborate with engineering and data teams to support workloads

Skills Required:
- Strong hands-on experience with Cloudera Manager, Ambari, HDFS, Hive, Impala, Spark
- Linux administration and scripting skills (Shell, Python)
- Experience with Kerberos, Ranger, and audit/compliance setups
- Exposure to Cloudera Support and ticketing processes
Posted 2 months ago
10.0 - 20.0 years
15 - 30 Lacs
pune, mumbai (all areas)
Hybrid
Job Title: Network Architect (Network Traffic Intelligence & Flow Data Systems)
Location: Pune, India (with travel to onsite)
Experience Required: 8+ years in network traffic monitoring and flow data systems, with 2+ years of hands-on experience configuring and deploying nProbe Cento in high-throughput environments.

Overview:
We are seeking a specialist with deep expertise in network traffic probes, specifically nProbe Cento, to support the deployment, configuration, and integration of flow record generation systems. The consultant will work closely with Kafka developers, solution architects, and network teams to ensure accurate, high-performance flow data capture and export. This role is critical to ensuring the scalability, observability, and compliance of the network traffic record infrastructure.

Key Responsibilities:
- Design and document the end-to-end architecture for network traffic record systems, including flow ingestion, processing, storage, and retrieval.
- Deploy and configure nProbe Cento on telecom-grade network interfaces.
- Tune probe performance using PF_RING ZC drivers for high-speed traffic capture.
- Configure IPFIX/NetFlow export and integrate with Apache Kafka for real-time data streaming.
- Set up DPI rules to identify application-level traffic (e.g., popular messaging and social media applications).
- Align the flow record schema with the Detail Record specification.
- Lead the integration of nProbe Cento, Kafka, Apache Spark, and Cloudera CDP components into a unified data pipeline.
- Collaborate with Kafka and API teams to ensure compatibility of data formats and ingestion pipelines.
- Define interface specifications, deployment topologies, and data schemas for flow records and detail records.
- Monitor probe health, performance, and packet loss; implement logging and alerting mechanisms.
- Collaborate with security teams to implement data encryption, access control, and compliance with regulatory standards.
- Guide development and operations teams through SIT/UAT, performance tuning, and production rollout.
- Provide documentation, training, and handover materials for long-term operational support.

Required Skills & Qualifications:
- Proven hands-on experience with nProbe Cento in production environments.
- Strong understanding of IPFIX, NetFlow, sFlow, and flow-based monitoring principles.
- Experience with Cloudera SDX, Ranger, Atlas, and KMS for data governance and security.
- Familiarity with HashiCorp Vault for secrets management.
- Strong understanding of network packet brokers (e.g., Gigamon, Ixia) and traffic aggregation strategies.
- Proven ability to design high-throughput, fault-tolerant, and cloud-native architectures.
- Experience with Kafka integration, including topic configuration and message formatting.
- Familiarity with DPI technologies and application traffic classification.
- Proficiency in Linux system administration, shell scripting, and network interface tuning.
- Knowledge of telecom network interfaces and traffic tapping strategies.
- Experience with PF_RING, ntopng, and related ntop tools (preferred).
- Ability to work independently and collaboratively with cross-functional technical teams.
- Excellent documentation and communication skills.
- Certifications in Cloudera, Kafka, or cloud platforms (e.g., AWS Architect, GCP Data Engineer) will be advantageous.
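The schema-alignment responsibility above (mapping exported flow fields onto a detail-record layout) can be sketched roughly as below. This is a hedged illustration only: nProbe-style exporters commonly emit flows as JSON using NetFlow/IPFIX element names like those shown, but the target "detail record" layout here is hypothetical, since the actual specification is not given in this posting:

```python
# Sketch: normalize one exported flow (JSON, NetFlow/IPFIX-style field
# names) into a flat detail-record dict. The output field names are
# illustrative assumptions, not the real Detail Record specification.
import json
from datetime import datetime, timezone

def to_detail_record(raw: str) -> dict:
    """Map a single exported flow (JSON string) to a detail-record dict."""
    flow = json.loads(raw)
    return {
        "src_ip": flow["IPV4_SRC_ADDR"],
        "dst_ip": flow["IPV4_DST_ADDR"],
        "src_port": int(flow["L4_SRC_PORT"]),
        "dst_port": int(flow["L4_DST_PORT"]),
        "bytes": int(flow["IN_BYTES"]),
        "packets": int(flow["IN_PKTS"]),
        # Epoch seconds -> ISO-8601 UTC timestamp for downstream storage.
        "start_time": datetime.fromtimestamp(
            int(flow["FIRST_SWITCHED"]), tz=timezone.utc
        ).isoformat(),
    }

sample = json.dumps({
    "IPV4_SRC_ADDR": "10.0.0.1", "IPV4_DST_ADDR": "10.0.0.2",
    "L4_SRC_PORT": 443, "L4_DST_PORT": 51234,
    "IN_BYTES": 1500, "IN_PKTS": 3, "FIRST_SWITCHED": 1700000000,
})
record = to_detail_record(sample)
print(record["src_ip"], record["bytes"])
```

In a real pipeline this mapping would sit in the Kafka consumer (or a Spark job) between the probe's export topic and the detail-record store, which is why the posting stresses agreeing on interface specifications and data schemas up front.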
Posted Date not available
8.0 - 12.0 years
25 - 35 Lacs
pune
Work from Office
Experience Required: 8+ years in network traffic monitoring and flow data systems, with 2+ years of hands-on experience configuring and deploying nProbe Cento in high-throughput environments.

Overview:
We are seeking a specialist with deep expertise in network traffic probes, specifically nProbe Cento, to support the deployment, configuration, and integration of flow record generation systems. The consultant will work closely with Kafka developers, solution architects, and network teams to ensure accurate, high-performance flow data capture and export. This role is critical to ensuring the scalability, observability, and compliance of the network traffic record infrastructure.

Role & Responsibilities:
- Design and document the end-to-end architecture for network traffic record systems, including flow ingestion, processing, storage, and retrieval.
- Deploy and configure nProbe Cento on telecom-grade network interfaces.
- Tune probe performance using PF_RING ZC drivers for high-speed traffic capture.
- Configure IPFIX/NetFlow export and integrate with Apache Kafka for real-time data streaming.
- Set up DPI rules to identify application-level traffic (e.g., popular messaging and social media applications).
- Align the flow record schema with the Detail Record specification.
- Lead the integration of nProbe Cento, Kafka, Apache Spark, and Cloudera CDP components into a unified data pipeline.
- Collaborate with Kafka and API teams to ensure compatibility of data formats and ingestion pipelines.
- Define interface specifications, deployment topologies, and data schemas for flow records and detail records.
- Monitor probe health, performance, and packet loss; implement logging and alerting mechanisms.
- Collaborate with security teams to implement data encryption, access control, and compliance with regulatory standards.
- Guide development and operations teams through SIT/UAT, performance tuning, and production rollout.
- Provide documentation, training, and handover materials for long-term operational support.

Preferred Candidate Profile:
- Proven hands-on experience with nProbe Cento in production environments.
- Strong understanding of IPFIX, NetFlow, sFlow, and flow-based monitoring principles.
- Experience with Cloudera SDX, Ranger, Atlas, and KMS for data governance and security.
- Familiarity with HashiCorp Vault for secrets management.
- Strong understanding of network packet brokers (e.g., Gigamon, Ixia) and traffic aggregation strategies.
- Proven ability to design high-throughput, fault-tolerant, and cloud-native architectures.
- Experience with Kafka integration, including topic configuration and message formatting.
- Familiarity with DPI technologies and application traffic classification.
- Proficiency in Linux system administration, shell scripting, and network interface tuning.
- Knowledge of telecom network interfaces and traffic tapping strategies.
- Experience with PF_RING, ntopng, and related ntop tools (preferred).
- Ability to work independently and collaboratively with cross-functional technical teams.
- Excellent documentation and communication skills.
Posted Date not available