7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
The Senior Product Manager is responsible for designing, developing, and overseeing activities related to a specific product or group of products, from defining the product and planning its development through production and go-to-market strategy. The Product Manager also crafts the product roadmap needed to achieve the bookings, client NPS, and gross margin targets associated with their component. To drive organic growth, the Product Manager collaborates with internal stakeholders, clients, and prospects to identify new product capability requirements, works closely with development teams to build and launch those capabilities, tests and implements new features with clients, and promotes future growth to the broader audience of Clearwater clients and prospects.

Responsibilities:
- Team span: responsible for a team of 20-50 developers.
- Prioritizes decisions across products and establishes alignment on the product roadmap among multiple development teams.
- Influences the company's roadmap and efficiently leads the development of cross-product capabilities.
- Contributes to the department's development and training plan.
- Advocates for a culture of communication throughout the organization.
- Is recognized as an industry expert and frequently represents Clearwater on industry forum panels.
- Evaluates opportunities in uncharted territory; independently identifies, assesses, and potentially manages partnership relationships with external parties.
- Delivers leadership and expertise to our continually expanding workforce.

Required Skills:
- Domain knowledge: strong understanding of the alternative investments ecosystem, including (but not limited to) limited partnerships, mortgage loans, direct loans, private equity, and other non-traditional asset classes.
- AI/GenAI exposure (preferred): experience in AI or GenAI-based projects, particularly building platforms or solutions with Generative AI technologies, is a strong advantage.
- Proven track record as a Product Manager (ideal but not vital) owning all aspects of a successful product throughout its lifecycle in a B2B environment.
- Knowledge of investments and investment accounting (very important).
- Exemplary interpersonal, communication, and project management skills.
- Excellent team- and relationship-building abilities with both internal and external parties (engineers, business stakeholders, partners, etc.).
- Ability to work well under pressure, multitask, and maintain keen attention to detail.
- Strong leadership skills, including the ability to influence with diplomacy and tact.
- Experience working with cloud platforms (AWS/Azure/GCP) and with relational and NoSQL databases.
- Strong computer skills, including proficiency in Microsoft Office, and strong documentation skills.
- Outstanding verbal and written communication, organizational, and interpersonal skills, with excellent attention to detail and exceptional problem-solving abilities.

Education and Experience:
- Bachelor's/Master's degree in engineering or a related field.
- 7+ years of relevant experience.
- Professional experience building distributed software systems, specializing in big data and NoSQL database technologies (Hadoop, Spark, DynamoDB, HBase, Hive, Cassandra, Vertica).
- Experience working with indexing systems such as Elasticsearch and Solr/Lucene.
- Experience working with messaging systems such as Kafka/SQS/SNS (see the sketch below).
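Purely as an illustrative aside on the messaging-systems requirement above, here is a minimal Python sketch of publishing an event to an Amazon SQS queue with boto3; the queue URL and event fields are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch: publish one JSON-encoded event to an Amazon SQS queue.
# The queue URL and event payload are hypothetical placeholders.
import json
import boto3

def publish_event(queue_url: str, event: dict) -> str:
    """Send one event to an SQS queue and return the message id."""
    sqs = boto3.client("sqs")
    response = sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(event))
    return response["MessageId"]

if __name__ == "__main__":
    msg_id = publish_event(
        "https://sqs.us-east-1.amazonaws.com/123456789012/example-events",  # hypothetical
        {"event_type": "position_update", "portfolio_id": "P-001"},
    )
    print(f"published message {msg_id}")
```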
Posted 2 weeks ago
4.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Summary:
We are looking for a skilled Database Engineer to design, build, and maintain reliable database systems that support our applications and data infrastructure. The ideal candidate will have strong technical expertise in database architecture, data modeling, and performance tuning, along with hands-on experience in both SQL and NoSQL systems.

Location: Indore
Job Type: Full-Time
Experience: 4+ Years

Key Responsibilities:
- Design and implement scalable, high-performing database architectures
- Build and optimize complex queries, stored procedures, and indexing strategies (see the sketch below)
- Collaborate with backend engineers and data teams to model and structure databases that meet application requirements
- Perform data migrations, transformations, and integrations across environments
- Ensure data consistency, integrity, and availability across distributed systems
- Develop and maintain ETL pipelines and real-time data flows
- Monitor database performance and implement tuning improvements
- Automate repetitive database tasks and deploy schema changes
- Assist with database security practices and access control policies
- Support production databases and troubleshoot incidents or outages

Required Skills and Qualifications:
- Strong experience with relational databases such as PostgreSQL, MySQL, MS SQL Server, or Oracle
- Proficiency in writing optimized SQL queries and performance tuning
- Experience with NoSQL databases such as MongoDB, Cassandra, DynamoDB, or Redis
- Solid understanding of database design principles, normalization, and data warehousing
- Strong expertise in Oracle and GoldenGate
- Experience with database platforms such as Vertica, Couchbase Capella, or CockroachDB
- Hands-on experience with ETL pipelines, data transformation, and scripting (e.g., Python, Bash)
- Familiarity with version control systems (e.g., Git) and DevOps tools (e.g., Docker, Kubernetes, Jenkins)
- Knowledge of cloud database services (e.g., AWS RDS, Google Cloud SQL, Azure SQL Database)
- Experience with data backup, disaster recovery, and high-availability setups
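As an editorial aside on the indexing responsibility above, here is a minimal sketch, assuming a PostgreSQL database and a hypothetical orders table, of adding a composite index and checking the resulting plan with EXPLAIN ANALYZE through psycopg2.

```python
# Minimal sketch: create an index and compare the query plan with EXPLAIN ANALYZE.
# The DSN and the orders(customer_id, created_at) table are hypothetical.
import psycopg2

DSN = "dbname=appdb user=app password=secret host=localhost"  # hypothetical

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        # Composite index matching the query's filter and sort order.
        cur.execute(
            "CREATE INDEX IF NOT EXISTS idx_orders_customer_created "
            "ON orders (customer_id, created_at DESC)"
        )
        cur.execute(
            "EXPLAIN ANALYZE "
            "SELECT * FROM orders WHERE customer_id = %s "
            "ORDER BY created_at DESC LIMIT 10",
            (42,),
        )
        for (line,) in cur.fetchall():
            print(line)  # expect an Index Scan rather than a Seq Scan
```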
Posted 2 weeks ago
10.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Essential Services: Role & Location Fungibility
At ICICI Bank, we believe in serving our customers beyond our role definition, product boundaries, and domain limitations through our philosophy of customer 360-degree. In essence, this captures our belief in serving the entire banking needs of our customers as One Bank, One Team. To achieve this, employees at ICICI Bank are expected to be role- and location-fungible, with the understanding that banking is an essential service. The role description gives you an overview of the responsibilities; it is only directional and guiding in nature.

About the Role:
As a Data Warehouse Architect, you will be responsible for managing and enhancing a data warehouse that handles large volumes of customer-lifecycle data flowing in from various applications, within the guardrails of risk and compliance. You will manage the day-to-day operations of the data warehouse (Vertica). In this role, you will lead a team of data warehouse engineers to develop data models, design ETL data pipelines, manage issues and upgrades, fine-tune performance, and migrate, govern, and maintain the security framework of the data warehouse. This role enables the Bank to maintain huge data sets in a structured manner amenable to data intelligence. The data warehouse supports numerous information systems used by various business groups to derive insights. As a natural progression, the data warehouse will gradually be migrated to a data lake for better analytical advantage; the role holder will also be responsible for guiding the team through this migration.

Key Responsibilities:
- Data pipeline design: design and develop ETL data pipelines that organize large volumes of data, using data warehousing technologies to keep the warehouse efficient, scalable, and secure (see the load sketch below).
- Issue management: ensure the data warehouse runs smoothly; monitor system performance, diagnose and troubleshoot issues, and make the changes needed to optimize performance.
- Collaboration: collaborate with cross-functional teams to implement upgrades, migrations, and continuous improvements.
- Data integration and processing: process, clean, and integrate large data sets from various sources to ensure the data is accurate, complete, and consistent.
- Data modelling: design and implement data modelling solutions so the organization's data is properly structured and organized for analysis.

Key Qualifications & Skills:
- Education: B.E./B.Tech. in Computer Science, Information Technology, or an equivalent domain, with 10 to 12 years of experience and at least 5 years of relevant work experience in data warehousing/mining/BI/MIS.
- Experience in data warehousing: knowledge of ETL and data technologies such as OLTP and OLAP (Oracle/MSSQL).
- Data modelling, data analysis, and visualization experience (analytical tools such as Power BI, SAS, QlikView, Tableau, etc.).
- Good to have: exposure to Azure cloud data platform services such as Cosmos DB, Azure Data Lake, Azure Synapse, and Azure Data Factory.
- Certification: Azure DP-900, PL-300, DP-203, or other data platform/data analyst certifications.

About the Business Group:
The Technology Group at ICICI Bank is at the forefront of our operations and offerings, focused on leveraging state-of-the-art technology to provide customer-centric solutions. This group plays a pivotal role in our vision of the transition from Bank to BankTech. Further, the group offers round-the-clock support to our entire banking ecosystem. In our persistent efforts to offer products and solutions that truly benefit customers, we believe unlocking the potential of technology in every single engagement will go a long way in creating customer delight. In this endeavour, we also tirelessly ensure all our processes, systems, and infrastructure remain well within the guardrails of data security, privacy, and relevant regulations.
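As an aside on the ETL-pipeline responsibility above, here is a minimal sketch of a bulk load into Vertica using the vertica-python client and Vertica's COPY statement; the connection details, table name, and file paths are hypothetical placeholders.

```python
# Minimal sketch: bulk-load a CSV into a Vertica table with COPY ... FROM LOCAL,
# which streams the client-side file to the server. All identifiers are
# hypothetical placeholders.
import vertica_python

conn_info = {
    "host": "dw.example.internal",  # hypothetical
    "port": 5433,
    "user": "etl_user",
    "password": "secret",
    "database": "edw",
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # SKIP 1 ignores the header row; rejected rows are written to a side
    # file for inspection instead of silently disappearing.
    cur.execute(
        "COPY staging.customer_events "
        "FROM LOCAL '/data/feeds/customer_events.csv' "
        "DELIMITER ',' SKIP 1 "
        "REJECTED DATA '/data/feeds/customer_events.rej'"
    )
    conn.commit()
```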
Posted 2 weeks ago
5.0 - 10.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Senior Software Engineer - Data
We're seeking a Senior Software Engineer or a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include:
- Developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, location, and transactional signals to power people-based marketing.
- Ingesting vast amounts of identity and event data from our customers and partners.
- Facilitating data transfers across systems.
- Ensuring the integrity and health of our datasets.
- And much more.
As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities:
As a Senior Software Engineer or a Lead Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc. (see the Airflow sketch below)
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in the on-call rotation in their respective time zones (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 5-10 years of software engineering experience.
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle, from inception to production and monitoring.
- Fluency in Python, or solid experience in Scala or Java.
- Proficiency with relational databases and advanced SQL.
- Expert use of services such as Spark and Hive.
- Experience with web frameworks such as Flask and Django.
- Experience with any scheduler, such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience with Kafka or other stream message processing solutions.
- Experience using cloud services (AWS) at scale.
- Experience with agile software development processes.
- Excellent interpersonal and communication skills.

Nice to have:
- Experience with large-scale/multi-tenant distributed systems.
- Experience with columnar/NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase.
- Experience with real-time streaming frameworks: Flink, Storm.
- Experience with open table formats such as Iceberg, Hudi, or Delta Lake.
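For illustration of the scheduler work mentioned above, here is a minimal Airflow DAG sketch (assuming Airflow 2.4+ for the `schedule` argument); the DAG id, task bodies, and dataset names are hypothetical.

```python
# Minimal sketch of a daily batch pipeline as an Airflow DAG. The task
# bodies and names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_events(**context):
    print("pull partner event files into the staging area")

def validate_events(**context):
    print("run row-count and null-rate checks before publishing")

with DAG(
    dag_id="identity_events_daily",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ spelling
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_events", python_callable=ingest_events)
    validate = PythonOperator(task_id="validate_events", python_callable=validate_events)
    ingest >> validate  # validate runs only after ingest succeeds
```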
Posted 2 weeks ago
7.0 - 12.0 years
12 - 16 Lacs
Bengaluru
Work from Office
We are looking for lead or principal software engineers to join our Data Cloud team. Our Data Cloud team is responsible for the Zeta Identity Graph platform, which captures billions of behavioural, demographic, environmental, and transactional signals for people-based marketing. As part of this team, the data engineer will be designing and growing our existing data infrastructure to democratize data access, enable complex data analyses, and automate optimization workflows for business and marketing operations.

Job Description:
Essential Responsibilities:
As a Lead or Principal Data Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as HDFS, Spark, Snowflake, Hive, HBase, Scylla, Django, FastAPI, etc. (see the Spark sketch below)
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in the 24/7 on-call rotation (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 7 years of software engineering experience.
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle, from inception to production and monitoring.
- Fluency in Python, or solid experience in Scala or Java.
- Proficiency with relational databases and advanced SQL.
- Expert use of services such as Spark, HDFS, Hive, HBase.
- Experience with any scheduler, such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale.
- Experience with agile software development processes.
- Excellent interpersonal and communication skills.

Nice to have:
- Experience with large-scale/multi-tenant distributed systems.
- Experience with columnar/NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase.
- Experience with real-time streaming frameworks: Flink, Storm.
- Experience with web frameworks such as Flask, Django.
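As a brief illustration of the Spark work listed above, here is a minimal PySpark batch aggregation; the input path, column names, and output location are hypothetical.

```python
# Minimal sketch of a PySpark batch aggregation. The paths and columns are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_signal_rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/dt=2024-01-01/")  # hypothetical

# Count events and distinct identities per signal type for the day.
rollup = (
    events
    .groupBy("signal_type")
    .agg(
        F.countDistinct("identity_id").alias("unique_identities"),
        F.count(F.lit(1)).alias("event_count"),
    )
)

rollup.write.mode("overwrite").parquet("s3://example-bucket/rollups/dt=2024-01-01/")
spark.stop()
```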
Posted 2 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications:
- Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience.
- 6 years of experience as a technical sales engineer in a cloud computing environment or in a customer-facing role.
- Experience with Apache Spark and analytic warehouse solutions (e.g., Teradata, Netezza, Vertica, SQL Server, and Big Data technologies).
- Experience implementing analytics systems architecture.

Preferred qualifications:
- Master's degree in Computer Science or a related technical field.
- Experience with technical sales or professional consulting in cloud computing, data, information life-cycle management, and Big Data.
- Experience in data warehousing, data lakes, batch/real-time processing, and Extract, Transform, and Load (ETL) workflows, including architecture design, implementation, tuning, and schema design.
- Experience with coding languages like Python, JavaScript, C++, Scala, R, or Go.
- Knowledge of Linux, Web 2.0 development platforms, solutions, and related technologies like HTTP, Basic/NTLM, sessions, XML/XSLT/XHTML/HTML.
- Understanding of DNS, TCP, firewalls, proxy servers, DMZ, load balancing, VPN, VPC.

About the job:
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability, and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions, and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees, and partners.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities:
- Support local sales teams in pursuing business opportunities by engaging customers to address data life-cycle aspects.
- Collaborate with business teams to identify business and technical requirements, conduct full technical discovery, and architect client solutions.
- Lead technical projects, including technology advocacy, bid response support, product briefings, proof-of-concept work, and coordinating technical resources.
- Leverage Google Cloud Platform products to demonstrate and prototype integrations in customer/partner environments. Travel for meetings, technical reviews, and on-site delivery activities as needed.
- Deliver compelling product messaging to highlight the Google Cloud Platform value proposition through whiteboard and slide presentations, product demonstrations, white papers, and Request For Information (RFI) response documents.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 2 weeks ago
8.0 years
0 Lacs
Tamil Nadu, India
On-site
Job Title: Data Engineer

About VXI:
VXI Global Solutions is a BPO leader in customer service, customer experience, and digital solutions. Founded in 1998, the company has 40,000 employees in more than 40 locations in North America, Asia, Europe, and the Caribbean. We deliver omnichannel and multilingual support, software development, quality assurance, CX advisory, and automation & process excellence to the world's most respected brands. VXI is one of the fastest growing, privately held business services organizations in the United States and the Philippines, and one of the few US-based customer care organizations in China. VXI is also backed by private equity investor Bain Capital. Our initial partnership ran from 2012 to 2016 and was the beginning of prosperous times for the company. During this period, not only did VXI expand our footprint in the US and Philippines, but we also gained ground in the Chinese and Central American markets. We also acquired Symbio, expanding our global technology services offering and enhancing our competitive position. In 2022, Bain Capital re-invested in the organization after completing a buy-out from Carlyle. This is a rare occurrence in the private equity space and shows the level of performance VXI delivers for our clients, employees, and shareholders. With this recent investment, VXI has started on a transformation to radically improve the CX experience through an industry-leading generative AI product portfolio that spans hiring, training, customer contact, and feedback.

Job Description:
We are seeking talented and motivated Data Engineers to join our dynamic team and contribute to our mission of harnessing the power of data to drive growth and success. As a Data Engineer at VXI Global Solutions, you will play a critical role in designing, implementing, and maintaining our data infrastructure to support our customer experience and management initiatives. You will collaborate with cross-functional teams to understand business requirements, architect scalable data solutions, and ensure data quality and integrity. This is an exciting opportunity to work with cutting-edge technologies and shape the future of data-driven decision-making at VXI Global Solutions.

Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and store data from various sources.
- Collaborate with business stakeholders to understand data requirements and translate them into technical solutions.
- Implement data models and schemas to support analytics, reporting, and machine learning initiatives.
- Optimize data processing and storage solutions for performance, scalability, and cost-effectiveness.
- Ensure data quality and integrity by implementing data validation, monitoring, and error handling mechanisms (see the validation sketch below).
- Collaborate with data analysts and data scientists to provide them with clean, reliable, and accessible data for analysis and modeling.
- Stay current with emerging technologies and best practices in data engineering, and recommend innovative solutions to enhance our data capabilities.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven 8+ years' experience as a data engineer or in a similar role.
- Proficiency in SQL, Python, and/or other programming languages for data processing and manipulation.
- Experience with relational and NoSQL databases (e.g., SQL Server, MySQL, Postgres, Cassandra, DynamoDB, MongoDB, Oracle), data warehousing (e.g., Vertica, Teradata, Oracle Exadata, SAP HANA), and data modeling concepts.
- Strong understanding of distributed computing frameworks (e.g., Apache Spark, Apache Flink, Apache Storm) and cloud-based data platforms (e.g., AWS Redshift, Azure, Google BigQuery, Snowflake).
- Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker, Apache Superset) and data pipeline tools (e.g., Airflow, Kafka, Data Flow, Cloud Data Fusion, Airbyte, Informatica, Talend) is a plus.
- Understanding of data and query optimization, query profiling, and query performance monitoring tools and techniques.
- Solid understanding of ETL/ELT processes, data validation, and data security best practices.
- Experience with version control systems (Git) and CI/CD pipelines.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Join VXI Global Solutions and be part of a dynamic team dedicated to driving innovation and delivering exceptional customer experiences. Apply now to embark on a rewarding career in data engineering with us!
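To illustrate the data-validation responsibility above, here is a minimal pandas sketch of row-level checks on an incoming batch before it is loaded downstream; the column names and rules are hypothetical.

```python
# Minimal sketch: reject an incoming batch that fails basic data-quality rules.
# The columns and rules are hypothetical placeholders.
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures (empty = clean)."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["customer_id"].isna().any():
        failures.append("null customer_id values present")
    if df["amount"].lt(0).any():
        failures.append("negative amounts present")
    if df.duplicated(subset=["order_id"]).any():
        failures.append("duplicate order_id values present")
    return failures

batch = pd.DataFrame({
    "order_id": [1, 2, 2],
    "customer_id": ["C1", None, "C3"],
    "amount": [10.0, -5.0, 7.5],
})
for problem in validate_batch(batch):
    print("REJECT:", problem)
```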
Posted 2 weeks ago
7.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Data Analyst
Experience: 7-15 Years
Skills: Data warehouse concepts; ETL tool (Informatica is a must); Python (moderate); advanced SQL; data visualization tools (Power BI/MSTR); Finance domain

JD:
- 7-9 years of experience as a Data Analyst, with at least 5 years supporting Finance within the insurance industry.
- Hands-on experience with Vertica/Teradata for querying, performance optimization, and large-scale data analysis.
- Advanced SQL skills; proficiency in Python is a strong plus.
- Proven ability to write detailed source-to-target mapping documents and collaborate with technical teams on data integration (see the reconciliation sketch below).
- Experience working in hybrid onshore-offshore team environments.
- Deep understanding of data modeling concepts and experience working with relational and dimensional models.
- Strong communication skills with the ability to clearly explain technical concepts to non-technical audiences.
- A strong understanding of statistical concepts, probability, accounting standards, financial statements (balance sheet, income statement, cash flow statement), and financial ratios.
- Strong understanding of life insurance products and business processes across the policy lifecycle.
- Quantitative finance: understanding of financial modeling, risk management, and derivatives.
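As an aside on validating a source-to-target mapping, here is a minimal reconciliation sketch comparing aggregate checks between a staging table and its target. The client library, DSN, table names, and premium column are hypothetical stand-ins; the same pattern applies to Vertica or Teradata via their own Python drivers.

```python
# Minimal sketch: reconcile a source table against its mapped target by
# comparing simple aggregates. All identifiers are hypothetical placeholders.
import psycopg2  # stand-in client; swap for vertica_python/teradatasql as needed

CHECKS = [
    ("row count", "SELECT COUNT(*) FROM {t}"),
    ("total premium", "SELECT COALESCE(SUM(premium_amt), 0) FROM {t}"),
]

def reconcile(conn, source_table: str, target_table: str) -> None:
    cur = conn.cursor()
    for name, template in CHECKS:
        cur.execute(template.format(t=source_table))
        src = cur.fetchone()[0]
        cur.execute(template.format(t=target_table))
        tgt = cur.fetchone()[0]
        status = "OK" if src == tgt else "MISMATCH"
        print(f"{name}: source={src} target={tgt} -> {status}")

with psycopg2.connect("dbname=edw user=analyst") as conn:  # hypothetical DSN
    reconcile(conn, "staging.policy_premiums", "mart.policy_premiums")
```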
Posted 2 weeks ago
5.0 - 8.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Role Purpose:
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:
Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target, and inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Mandatory Skills: HP Vertica (see the monitoring sketch below).
Experience: 5-8 Years.
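As an illustrative aside on routine Vertica support work, here is a minimal sketch that pulls the slowest recent queries from Vertica's v_monitor.query_requests system table via vertica-python; the connection details are hypothetical placeholders.

```python
# Minimal sketch of a routine Vertica triage task: list the slowest queries
# from the last hour. Connection details are hypothetical placeholders.
import vertica_python

conn_info = {
    "host": "vertica.example.internal",  # hypothetical
    "port": 5433,
    "user": "dbadmin",
    "password": "secret",
    "database": "prod",
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(
        "SELECT request_duration_ms, user_name, LEFT(request, 80) "
        "FROM v_monitor.query_requests "
        "WHERE start_timestamp > NOW() - INTERVAL '1 hour' "
        "ORDER BY request_duration_ms DESC "
        "LIMIT 10"
    )
    for duration_ms, user, query in cur.fetchall():
        print(f"{duration_ms:>10} ms  {user:<12} {query}")
```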
Posted 3 weeks ago
9.0 - 12.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Your Impact:
The ESM R&D team is seeking an experienced Python Developer to join our global R&D team and deliver innovative enterprise software solutions by working in a fast-paced, challenging, and enriching environment. This is a high-growth business, and our solutions are used by demanding, enterprise-class customers across the globe. We use a microservices-based architecture composed of multiple services running on Kubernetes using Docker containers. As a lead software systems engineer, you will design and develop new product capabilities by working with the System Architect and a team of engineers and other architects. You will contribute as a team member, take responsibility for your own work commitments, and take part in project functional problem-solving. You will make decisions based on established practices and work under general guidance, with progress reviewed on a regular basis. You will also be involved in handling customer incidents (CPE): understanding customer use cases, designing and implementing solutions, and troubleshooting and debugging software programs.

What the role offers:
- Produce high-quality code according to design specifications; software design/coding for functional requirements, ensuring quality and adherence to company standards.
- Utilize analytical skills to troubleshoot and fix complex code defects.
- Work across teams and functional roles to ensure interoperability among other products, including training and consultation.
- Provide status updates to stakeholders and escalate issues when necessary.
- Participate in the software development process from design to release in an Agile development framework.
- Design enhancements, updates, and programming changes for portions and subsystems of the software.
- Analyze designs and determine the coding, programming, and integration activities required, based on general objectives and knowledge of the overall architecture of the product or solution.
- Current Product Engineering (CPE) based on customer-submitted incidents; experience troubleshooting and providing solutions for customer issues in a complex environment.
- Be an excellent team player with a focus on collaboration activities, with the ability to take up other duties as assigned.
- Provide guidance and mentoring to less-experienced team members.

What you need to succeed:
- Bachelor's or Master's engineering degree in Computer Science, Information Systems, or equivalent from a premier institute.
- 9-12 years of overall software development experience, with at least 2+ recent years developing Python applications in a large-scale environment.
- Fundamentally sound programming and debugging skills.
- Working knowledge of Python and core Java programming.
- Working knowledge of Docker/container technologies, Kubernetes, Helm.
- Knowledge of XML and JSON, and of processing them programmatically (see the sketch below).
- User or administration knowledge of the Linux operating system.
- Database user-level knowledge, preferably PostgreSQL, Vertica, and Oracle DB; capable of writing and debugging SQL queries.
- Exposure to cloud technology usage and deployments would be good (AWS, GCP, Azure, etc.).
- Working experience in an Agile environment or Scaled Agile (SAFe).
- Strong knowledge of object-oriented design and data structures.
- Ability to work independently in a cross-functional, distributed team culture with a focus on teamwork.
- Experience technically mentoring and guiding junior engineers.
- Strong communication, analytical, and problem-solving skills.
- Understanding of CI/CD and build tools like Git, Maven, Gradle, Jenkins.
- Knowledge of and experience in the IT Operations Management domain.
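A minimal sketch of the JSON and XML processing mentioned above, using only the Python standard library; the payloads are hypothetical.

```python
# Minimal sketch: parse a JSON status record and an XML service list.
# Both payloads are hypothetical placeholders.
import json
import xml.etree.ElementTree as ET

json_payload = '{"service": "esm-agent", "status": "up", "latency_ms": 42}'
record = json.loads(json_payload)
print(f'{record["service"]} is {record["status"]} ({record["latency_ms"]} ms)')

xml_payload = """
<services>
  <service name="esm-agent" status="up"/>
  <service name="esm-collector" status="down"/>
</services>
"""
root = ET.fromstring(xml_payload)
for svc in root.findall("service"):
    print(svc.get("name"), "->", svc.get("status"))
```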
Posted 3 weeks ago
7.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Data Analyst / Data Engineer
Experience: 7-11 Yrs
Location: Hyderabad
Primary Skills: ETL, Informatica, Python, SQL, BI tools, and Investment Domain
Please share your resumes with jyothsna.g@technogenindia.com.

Job Description:
The Minimum Qualifications
- Education: Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Computer Science, Actuarial Science, or a related field.
- Experience: 7-9 years of experience as a Data Analyst, with at least 5 years supporting Finance within the insurance industry.
- Hands-on experience with Vertica/Teradata for querying, performance optimization, and large-scale data analysis.
- Advanced SQL skills; proficiency in Python is a strong plus.
- Proven ability to write detailed source-to-target mapping documents and collaborate with technical teams on data integration.
- Experience working in hybrid onshore-offshore team environments.
- Deep understanding of data modelling concepts and experience working with relational and dimensional models.
- Strong communication skills with the ability to clearly explain technical concepts to non-technical audiences.
- A strong understanding of statistical concepts, probability, accounting standards, financial statements (balance sheet, income statement, cash flow statement), and financial ratios.
- Strong understanding of life insurance products and business processes across the policy lifecycle.
- Investment principles: knowledge of different asset classes, investment strategies, and financial markets.
- Quantitative finance: understanding of financial modelling, risk management, and derivatives.
- Regulatory framework: awareness of relevant financial regulations and compliance requirements.

The Ideal Qualifications
Technical skills:
- Proven track record of analytical and problem-solving skills.
- A solid understanding of financial accounting systems and knowledge of accounting principles, reporting, and budgeting.
- Strong data analysis skills for extracting insights from financial data.
- Proficiency in data visualization tools and reporting software is also important.
- Experience integrating financial systems with actuarial, policy administration, and claims platforms.
- Familiarity with actuarial processes, reinsurance, or regulatory reporting requirements.
- Experience with General Ledger systems such as SAP and forecasting tools like Anaplan.
Soft skills:
- Exceptional communication and interpersonal skills.
- Ability to influence and motivate teams without direct authority.
- Excellent time management and organizational skills, with the ability to prioritize multiple initiatives.

What to Expect as Part of MassMutual and the Team
- Regular meetings with the Corporate Technology leadership team
- Focused one-on-one meetings with your manager
- Access to mentorship opportunities
- Access to learning content on Degreed and other informational platforms
- Your ethics and integrity will be valued by a company with a strong and stable ethical business, with industry-leading pay and benefits
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
OpenText - The Information Company
OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

AI-First. Future-Driven. Human-Centered.
At OpenText, AI is at the heart of everything we do — powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

OpenText enables the digital world by simplifying, transforming, and accelerating enterprise information needs, on premises or in the cloud. We embrace all things digital and are committed to being the Best Place to Work for our employees in over 140 locations around the world. We obsess over our customers to ensure they are wildly successful in embracing the digital world. Our customers entrust us with their most important information; we need to be their most trusted partner. What we do, we do well. What we create, we do purposefully to impact the world. If you believe in this and are passionate about enabling the digital world, then let OpenText turn your career vision into reality.

The Opportunity:
OpenText Vertica provides a state-of-the-art Big Data analytics platform that handles petabytes of data. It is a commercially successful, high-performance, distributed database. Every industry is finding ways to benefit from data analytics, and we continue to engineer our product to be flexible so that it supports all of them. Vertica is a recognized leader in analytics, powering some of the world's most data-driven organizations like Uber, Wayfair, Intuit, Cerner, and more. Our columnar, MPP, distributed database delivers unprecedented speed at petabyte scale, with analytics and machine learning functions built into the core.

You Are Great At:
- Developing and maintaining test strategies and test cases while performing all testing activities, including functional, integration, and regression testing (see the API-test sketch below)
- Designing, developing, and maintaining the automation framework, and setting up regular execution test-beds for continuous delivery
- Identifying, isolating, regressing, and communicating bugs effectively and efficiently
- Evaluating and communicating test coverage, red flags, and anomalies to the Scrum team to aid the decision to certify releases
- Working with a diverse set of enterprise applications
- Performing business/requirements analysis and identifying requirements traceability
- Applying expertise in Scrum and helping team members create automation strategies for new features
- Conducting performance and scalability assessments of products, integrations of products, and solutions in varying deployment architectures
- Taking responsibility for designing and directing the creation of realistic, high-end test environments, including the use of appropriate data sets and optimal configuration of all products and supporting infrastructure components
- Ensuring complete understanding and effective documentation of performance requirements, test plans, and analysis methodologies to design product usage scenarios that, when run over a volume of requests, determine the responsiveness and scalability of the given operations on the given deployment architecture
- Choosing the most effective load-test clients and being well versed in resource monitoring tools and techniques in a wide variety of environments
- Demonstrating exceptional abilities to diagnose and troubleshoot product and environmental issues as they arise

What It Takes:
- Bachelor's degree in software engineering, computer science, or equivalent, with 8+ years of experience with test-driven, behaviour-driven, or acceptance-test-driven development
- Experience testing complex enterprise-level applications
- Strong working knowledge of public clouds; AWS is an advantage
- Experience in quality automation and continuous integration/continuous deployment in at least one major automation framework/language (e.g., Java, TestNG, Selenium, Selenide, REST Assured, Python, Terratest)
- Black-box/grey-box testing experience in testing product APIs
- Knowledge of bug tracking tools such as Bugzilla and/or Jira
- Solid testing experience with SQL and database technologies (e.g., SQL Server, MySQL, Oracle, etc.)
- Experience with ML and data analytics is a plus
- Experience with Jenkins or similar CI systems
- Experience testing software on Linux systems
- Good understanding of the performance and security aspects of software development
- Experience with SOA, Web Services, and SOAP is desirable

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
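As an aside on the black-box API testing above, here is a minimal pytest-plus-requests sketch; the endpoint, base URL, and response shape are hypothetical placeholders, not a real OpenText or Vertica API.

```python
# Minimal sketch of a black-box API test with pytest and requests.
# The base URL and payload fields are hypothetical placeholders.
import pytest
import requests

BASE_URL = "https://qa.example.internal/api/v1"  # hypothetical test environment

def test_health_endpoint_reports_up():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "up"

@pytest.mark.parametrize("node_count", [1, 3])
def test_cluster_listing_validates_schema(node_count):
    resp = requests.get(f"{BASE_URL}/clusters", params={"nodes": node_count}, timeout=5)
    assert resp.status_code == 200
    for cluster in resp.json()["clusters"]:
        # every cluster record must carry the fields downstream tools rely on
        assert {"name", "state", "node_count"} <= cluster.keys()
```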
Posted 3 weeks ago
6.0 - 11.0 years
10 - 18 Lacs
Bengaluru
Remote
We are looking for experienced DBAs who have worked on multiple database technologies and cloud migration projects.
- 6+ years of experience working on SQL/NoSQL/data warehouse platforms, on-premise and in the cloud (AWS, Azure & GCP)
- Provide expert-level guidance on cloud adoption, data migration strategies, and digital transformation projects
- Strong understanding of RDBMS, NoSQL, data warehouse, in-memory, and data lake architectures, features, and functionalities
- Proficiency in SQL and data manipulation techniques; experience with data loading and unloading tools and techniques
- Expertise in data access management and database reliability and scalability; administer, configure, and optimize database resources and services across the organization
- Ensure high availability, replication, and failover strategies (see the replication-lag sketch below)
- Implement serverless database architectures for cost-effective, scalable storage

Key Responsibilities:
- Strong proficiency in database administration of one or more databases (Snowflake, BigQuery, Amazon Redshift, Teradata, SAP HANA, Oracle, PostgreSQL, MySQL, SQL Server, Cassandra, MongoDB, Neo4j, Cloudera, Micro Focus, IBM DB2, Elasticsearch, DynamoDB, Azure Synapse)
- Plan and execute on-prem database/Analysis Services/Reporting Services/Integration Services migrations to AWS/Azure/GCP
- Develop automation scripts using Python, shell scripting, or Terraform for streamlined database operations
- Provide technical guidance and mentoring to junior DBAs and data engineers
- Hands-on experience with data modelling, ETL/ELT processes, and data integration tools
- Monitor and optimize the performance of virtual warehouses, queries, and overall system performance
- Optimize database performance through query tuning, indexing, and configuration
- Manage replication, backups, and disaster recovery for high availability
- Troubleshoot and resolve database issues, including performance bottlenecks, errors, and downtime
- Collaborate with the infrastructure team to configure, manage, and monitor PostgreSQL in cloud environments (AWS, GCP, or Azure)
- Provide on-call support for critical database operations and incidents
- Provide Level 3 and 4 technical support, troubleshooting complex issues
- Participate in cross-functional teams for database design and optimization
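To illustrate the kind of automation script the posting mentions, here is a minimal sketch of a PostgreSQL replication-lag health check; the DSN and alert threshold are hypothetical, and the query uses PostgreSQL's built-in recovery functions.

```python
# Minimal sketch: measure replication lag on a PostgreSQL standby and alert
# past a threshold. The DSN and threshold are hypothetical placeholders.
import psycopg2

STANDBY_DSN = "host=standby.example.internal dbname=appdb user=monitor"  # hypothetical
LAG_THRESHOLD_SECONDS = 30

with psycopg2.connect(STANDBY_DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))"
        )
        lag = cur.fetchone()[0]

if lag is None:
    # pg_last_xact_replay_timestamp() is NULL outside recovery (e.g. a primary)
    print("no replayed transactions: not a standby, or nothing replayed yet")
elif lag > LAG_THRESHOLD_SECONDS:
    print(f"ALERT: replication lag {lag:.0f}s exceeds {LAG_THRESHOLD_SECONDS}s")
else:
    print(f"replication lag {lag:.0f}s within threshold")
```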
Posted 3 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Data Analyst / Data Engineer
Experience: 7-14 Yrs
Location: Hyderabad
Primary Skills: ETL, Informatica, Python, SQL, BI tools, and Investment Domain
Please share your resumes with rajamahender.n@technogenindia.com.

Job Description:
- 7-9 years of experience with data analytics, data modeling, and database design.
- 3+ years of coding and scripting (Python, Java, Scala) and design experience.
- 3+ years of experience with the Spark framework.
- 5+ years of experience with ELT methodologies and tools.
- 5+ years' mastery in designing, developing, tuning, and troubleshooting SQL.
- Knowledge of Informatica PowerCenter and Informatica IDMC.
- Knowledge of distributed, column-oriented technologies used to build high-performance databases, such as Vertica and Snowflake.
- Strong data analysis skills for extracting insights from financial data.
- Proficiency in reporting tools (e.g., Power BI, Tableau).
Posted 4 weeks ago
4.0 - 8.0 years
8 - 12 Lacs
Pune
Work from Office
Job Title: Decision Science Practitioner Analyst, S&C GN
Management Level: Senior Analyst
Location: Bangalore/Kolkata
Must-have skills: Collibra Data Quality (data profiling, anomaly detection, reconciliation, data validation), Python, SQL
Good-to-have skills: PySpark, Kubernetes, Docker, Git

Job Summary:
We are seeking a highly skilled and motivated Data Science cum Data Engineer Senior Analyst to lead innovative projects and drive impactful solutions in domains such as Consumer Tech, Enterprise Tech, and Semiconductors. This role combines hands-on technical expertise and client delivery management to execute cutting-edge projects in data science and data engineering.

Key Responsibilities:
Data Science and Engineering:
- Implement and manage end-to-end data quality frameworks using Collibra Data Quality (CDQ), including requirement gathering from the client, code development in SQL, unit testing, client demos, user acceptance testing, documentation, etc.
- Work extensively with business users, data analysts, and other stakeholders to understand data quality requirements and business use cases.
- Develop data validation, profiling, anomaly detection, and reconciliation processes (see the profiling sketch below).
- Write SQL queries for simple to complex data quality checks, and Python and PySpark scripts to support data transformation and data ingestion.
- Deploy and manage solutions on Kubernetes workloads for scalable execution.
- Maintain comprehensive technical documentation of data quality processes and implemented solutions.
- Work in an Agile environment, leveraging Jira for sprint planning and task management.
- Troubleshoot data quality issues and collaborate with engineering teams on resolution.
- Provide insights for continuous improvement in data governance and quality processes.
- Build and manage robust data pipelines using PySpark and Python to read from and write to databases such as Vertica and PostgreSQL, and optimize and maintain existing pipelines for performance and reliability.
- Build custom solutions using Python, including FastAPI applications and plugins for Collibra Data Quality.
- Oversee the infrastructure of the Collibra application in a Kubernetes environment, perform upgrades when required, and troubleshoot and resolve any Kubernetes issues that may affect the application's operation.
- Deploy and manage solutions and optimize resources for deployments in Kubernetes, including writing YAML files and managing configurations.
- Build and deploy Docker images for various use cases, ensuring efficient and reusable solutions.

Collaboration and Training:
- Communicate effectively with stakeholders to align technical implementations with business objectives.
- Provide training and guidance to stakeholders on Collibra Data Quality usage and help them build and implement data quality rules.

Version Control and Documentation:
- Use Git for version control to manage code and collaborate effectively.
- Document all implementations, including data quality workflows, data pipelines, and deployment processes, ensuring easy reference and knowledge sharing.

Database and Data Model Optimization:
- Design and optimize data models for efficient storage and retrieval.

Required Qualifications:
- Experience: 4+ years in data science
- Education: B.Tech/M.Tech in Computer Science, Statistics, Applied Mathematics, or a related field
- Industry knowledge: experience in Consumer Tech, Enterprise Tech, or Semiconductors is preferred but not mandatory

Technical Skills:
- Programming: proficiency in Python and SQL for data analysis and transformation.
- Tools: hands-on experience with Collibra Data Quality (CDQ) or similar data quality tools (e.g., Informatica DQ, Talend, Great Expectations, Ataccama, etc.).
- Experience working with Kubernetes workloads.
- Experience with Agile methodologies and task tracking using Jira.

Preferred Skills:
- Strong analytical and problem-solving skills with a results-oriented mindset.
- Good communication, stakeholder management, and requirement-gathering capabilities.

Additional Information:
- The ideal candidate will possess a strong educational background in a quantitative discipline and experience working with Hi-Tech clients.
- This position is based at our Bengaluru (preferred) and Kolkata offices.
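As an aside on the profiling and anomaly-detection work above, here is a minimal PySpark sketch computing per-column null rates with a simple threshold rule; the inline rows and the 10% threshold are hypothetical, and this is a generic illustration rather than Collibra CDQ itself.

```python
# Minimal sketch: per-column null-rate profiling with a threshold rule.
# The inline rows and the 10% threshold are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("null_rate_profile").getOrCreate()

df = spark.createDataFrame(
    [("C1", 100.0), ("C2", None), (None, 50.0)],
    ["customer_id", "balance"],
)

total = df.count()
null_rates = df.select([
    (F.sum(F.col(c).isNull().cast("int")) / total).alias(c) for c in df.columns
])
null_rates.show()  # one row: fraction of nulls per column

# A simple data-quality rule: flag any column whose null rate exceeds 10%.
for col, rate in null_rates.first().asDict().items():
    if rate > 0.10:
        print(f"ANOMALY: {col} null rate {rate:.0%} exceeds 10% threshold")
spark.stop()
```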
Posted 4 weeks ago
5.0 - 9.0 years
2 - 9 Lacs
Hyderābād
On-site
Job Description: Experience: Typically requires 5-9 years experience. Role & Responsibilities Works directly with the client user community and business analysts to define and document data requirements for data integration and business intelligence applications. Determines and documents data mapping rules for movement of medium to high complexity data between applications. Adheres to and promotes the use of data administration standards. Supports data selection, extraction, and cleansing for corporate applications, including data warehouse and data marts. Creates and sustains processes, tools, and on-going support structures and processes. Investigates and resolves data issues across platforms and applications, including discrepancies of definition, format and function. Creates and populates meta-data into repositories. May create data models, including robust data definitions, which may be entity-relationship-attribute models, star, or dimensional models. May also create data flow diagrams and process models and integrates models across functional areas and platforms. Works closely with DBAs to transition logical models to physical implementation. May be responsible for employing data mining techniques to achieve data synchronization, redundancy elimination, source identification, data reconciliation, and problem root cause analysis. May also be responsible for quality control and auditing of databases, resolving data problems, and analyzing system changes for quality assurance. Required Skills: Full life-cycle experience on enterprise software development projects. Experience with Snowflake, Databricks, Hadoop and fluent with SQL, Postgres SQL, Vertica, Evenhub, Goldengate, Mongo DB and data analysis techniques. Experience with AI/ML & Python would be an added advantage. Experience in any of the databases SQL (MySQL, Postgres SQL) and NoSQL (Mongo DB, Cassandra, Azure Cosmos DB), Distributed Databases or Big Data (Apache Spark, Cloudera, Vertica), Databricks Snowflake Certification would be an added advantage. Extensive experience in ETL, shell or python scripting, data modelling, analysis, and preparation Experience in Unix/Linux system, files systems, shell scripting. Good to have knowledge on any cloud platforms like AWS, Azure, Snowflake, etc. Good to have experience in BI Reporting tools – Power BI or Tableau Good problem-solving and analytical skills used to resolve technical problems. Ability to work independently but must be a team player. Should be able to drive business decisions and take ownership of their work. Experience in presentation design, development, delivery, and good communication skills to present analytical results and recommendations for action-oriented data driven decisions and associated operational and financial impacts. Sharp technical troubleshooting skills. Experience in presentation design, development, delivery, and communication skills to present analytical results and recommendations for action-oriented data-driven decisions and associated operational and financial impacts. Keep up-to-date with developments in technology and industry norms can help you to produce higher-quality results. 
Flexible to work from the office three days a week, from 1 pm to 10 pm. #SoftwareEngineering
Weekly Hours: 40
Time Type: Regular
Location: IND:KA:Bengaluru / Innovator Building, ITPB, Whitefield Rd - Adm: Intl Tech Park, Innovator Bldg
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Posted 1 month ago
5.0 - 9.0 years
3 - 11 Lacs
Bengaluru
On-site
Job Description: Experience: Typically requires 5-9 years of experience.
Role & Responsibilities:
Works directly with the client user community and business analysts to define and document data requirements for data integration and business intelligence applications.
Determines and documents data mapping rules for movement of medium- to high-complexity data between applications.
Adheres to and promotes the use of data administration standards.
Supports data selection, extraction, and cleansing for corporate applications, including the data warehouse and data marts.
Creates and sustains processes, tools, and ongoing support structures.
Investigates and resolves data issues across platforms and applications, including discrepancies of definition, format, and function.
Creates and populates metadata in repositories.
May create data models, including robust data definitions, which may be entity-relationship-attribute, star, or dimensional models. May also create data flow diagrams and process models and integrate models across functional areas and platforms.
Works closely with DBAs to transition logical models to physical implementation.
May be responsible for employing data mining techniques to achieve data synchronization, redundancy elimination, source identification, data reconciliation, and problem root-cause analysis. May also be responsible for quality control and auditing of databases, resolving data problems, and analyzing system changes for quality assurance.
Required Skills:
Full life-cycle experience on enterprise software development projects.
Experience with Snowflake, Databricks, and Hadoop; fluency with SQL, PostgreSQL, Vertica, Event Hub, GoldenGate, MongoDB, and data analysis techniques.
Experience with AI/ML and Python would be an added advantage.
Experience with any of the SQL databases (MySQL, PostgreSQL), NoSQL databases (MongoDB, Cassandra, Azure Cosmos DB), or distributed/big data platforms (Apache Spark, Cloudera, Vertica); a Databricks or Snowflake certification would be an added advantage.
Extensive experience in ETL, shell or Python scripting, data modelling, analysis, and preparation.
Experience with Unix/Linux systems, file systems, and shell scripting.
Good to have: knowledge of cloud platforms such as AWS, Azure, or Snowflake.
Good to have: experience with BI reporting tools such as Power BI or Tableau.
Good problem-solving and analytical skills used to resolve technical problems.
Ability to work independently while remaining a team player; should be able to drive business decisions and take ownership of the work.
Experience in presentation design, development, and delivery, with good communication skills to present analytical results and recommendations for action-oriented, data-driven decisions and the associated operational and financial impacts.
Sharp technical troubleshooting skills.
Keeping up to date with developments in technology and industry norms helps produce higher-quality results.
Flexible to work from the office three days a week, from 1 pm to 10 pm. #SoftwareEngineering
Weekly Hours: 40
Time Type: Regular
Location: IND:KA:Bengaluru / Innovator Building, ITPB, Whitefield Rd - Adm: Intl Tech Park, Innovator Bldg
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Job ID: R-73689
Date posted: 07/04/2025
Benefits: Paid Time Off, Tuition Assistance, Insurance Options, Discounts, Training & Development
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: Experience: Typically requires 5-9 years of experience.
Role & Responsibilities:
Works directly with the client user community and business analysts to define and document data requirements for data integration and business intelligence applications.
Determines and documents data mapping rules for movement of medium- to high-complexity data between applications.
Adheres to and promotes the use of data administration standards.
Supports data selection, extraction, and cleansing for corporate applications, including the data warehouse and data marts.
Creates and sustains processes, tools, and ongoing support structures.
Investigates and resolves data issues across platforms and applications, including discrepancies of definition, format, and function.
Creates and populates metadata in repositories.
May create data models, including robust data definitions, which may be entity-relationship-attribute, star, or dimensional models. May also create data flow diagrams and process models and integrate models across functional areas and platforms.
Works closely with DBAs to transition logical models to physical implementation.
May be responsible for employing data mining techniques to achieve data synchronization, redundancy elimination, source identification, data reconciliation, and problem root-cause analysis. May also be responsible for quality control and auditing of databases, resolving data problems, and analyzing system changes for quality assurance.
Required Skills:
Full life-cycle experience on enterprise software development projects.
Experience with Snowflake, Databricks, and Hadoop; fluency with SQL, PostgreSQL, Vertica, Event Hub, GoldenGate, MongoDB, and data analysis techniques.
Experience with AI/ML and Python would be an added advantage.
Experience with any of the SQL databases (MySQL, PostgreSQL), NoSQL databases (MongoDB, Cassandra, Azure Cosmos DB), or distributed/big data platforms (Apache Spark, Cloudera, Vertica); a Databricks or Snowflake certification would be an added advantage.
Extensive experience in ETL, shell or Python scripting, data modelling, analysis, and preparation.
Experience with Unix/Linux systems, file systems, and shell scripting.
Good to have: knowledge of cloud platforms such as AWS, Azure, or Snowflake.
Good to have: experience with BI reporting tools such as Power BI or Tableau.
Good problem-solving and analytical skills used to resolve technical problems.
Ability to work independently while remaining a team player; should be able to drive business decisions and take ownership of the work.
Experience in presentation design, development, and delivery, with good communication skills to present analytical results and recommendations for action-oriented, data-driven decisions and the associated operational and financial impacts.
Sharp technical troubleshooting skills.
Keeping up to date with developments in technology and industry norms helps produce higher-quality results.
Flexible to work from the office three days a week, from 1 pm to 10 pm. #SoftwareEngineering
Weekly Hours: 40
Time Type: Regular
Location: IND:KA:Bengaluru / Innovator Building, ITPB, Whitefield Rd - Adm: Intl Tech Park, Innovator Bldg
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Job Category: Big Data
Posted 1 month ago
5.0 - 10.0 years
4 - 6 Lacs
Noida
On-site
Key Responsibilities:
1. Develop and maintain ETL jobs using the Talend Data Integration suite for batch and real-time data processing.
2. Write and optimize complex SQL scripts, queries, and analytical functions for data transformation and validation.
3. Recreate, enhance, and troubleshoot stored procedures, functions, and packages in Oracle and Vertica environments.
4. Perform impact analysis and data lineage tracking for changes in source systems and downstream dependencies.
5. Migrate legacy ETL workflows and SQL logic to Talend-based frameworks.
6. Implement data quality and data profiling checks to ensure the reliability and accuracy of ingested data (see the sketch at the end of this listing).
7. Support data reconciliation and validation efforts between source and target systems.
8. Collaborate with DBAs and infrastructure teams to optimize data storage, indexing, and partitioning strategies.
9. Participate in code reviews, version control, and deployment automation for ETL scripts and database code.
10. Troubleshoot and resolve production data issues and support root cause analysis.
Education and Certifications: Mandatory: Bachelor's degree (B.Tech/B.E.)
Technical Skills, Knowledge & Abilities:
Proficient in using the Talend ETL suite for data integration and transformation.
Deep working knowledge of Oracle, Microsoft SQL Server, and PostgreSQL databases.
Exposure to Vertica as a data warehouse solution.
Expertise in SQL query development for data extraction, manipulation, and analysis.
Solid understanding of designing and maintaining stored procedures across multiple RDBMS.
Knowledge of data security, integration, and interoperability best practices in data engineering.
Familiarity with data warehousing concepts, including OLAP and data cubes.
Experience with metadata management and implementing data quality frameworks.
Programming skills in Python, SQL, or Java for building data workflows and automation scripts.
Work Experience: 5-10 years of relevant experience
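As a hedged illustration of the data quality and profiling checks in point 6, here is a minimal sketch in Python. It runs against an in-memory SQLite table so it is self-contained; the customers table and the 50% null-rate threshold are hypothetical, and in practice the same query pattern would run against Oracle or Vertica from the ETL framework.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (customer_id INTEGER, email TEXT);
    INSERT INTO customers VALUES (1, 'a@x.com'), (2, NULL), (3, 'c@x.com'), (3, 'c@x.com');
""")

# Profile the null rate of a column in one pass.
null_rate, = cur.execute(
    "SELECT AVG(CASE WHEN email IS NULL THEN 1.0 ELSE 0.0 END) FROM customers"
).fetchone()

# Profile duplicate business keys.
dupes = cur.execute(
    "SELECT customer_id, COUNT(*) FROM customers "
    "GROUP BY customer_id HAVING COUNT(*) > 1"
).fetchall()

# Hypothetical threshold; a real framework would log and alert instead of asserting.
assert null_rate <= 0.5, f"email null rate {null_rate:.0%} exceeds threshold"
print(f"email null rate: {null_rate:.0%}; duplicate keys: {dupes}")
```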
Posted 1 month ago
0 years
10 - 14 Lacs
Pune, Maharashtra, India
On-site
Company: IGS
Business Type: Enterprise
Company Type: Service
Business Model: B2B
Funding Stage: Series C
Industry: IT Services
Salary Range: ₹ 10-14 Lacs PA
Job Description
About the Role: We're seeking a detail-oriented and driven ETL & BI Testing Specialist to join our growing team. In this role, you'll be responsible for validating complex data processes, ensuring data accuracy across ETL pipelines, and driving excellence in Business Intelligence testing. You'll work closely with cross-functional teams in a fast-paced, agile environment.
Key Responsibilities:
Perform backend testing on complex ETL workflows and data warehouse systems.
Validate BI reports and dashboards, ensuring data consistency and business logic accuracy (see the sketch at the end of this listing).
Develop and execute advanced SQL queries to support testing needs.
Identify, troubleshoot, and document data issues within large datasets.
Utilize tools like JIRA for test management and defect tracking.
Collaborate within Agile teams to ensure high-quality deliverables.
Key Skills & Qualifications:
Strong expertise in SQL, with the ability to write and analyze complex queries.
Hands-on experience in ETL/data warehouse testing and BI reporting validation.
Experience testing reports built with Tableau or similar tools.
Familiarity with database systems such as Vertica, Oracle, or Teradata.
Proficiency in using test and defect management tools, especially JIRA.
Solid understanding of the SDLC and Agile methodologies.
Strong analytical, problem-solving, and communication skills.
What We Offer:
Work on real-world, large-scale data testing projects.
Exposure to modern BI and ETL ecosystems.
A collaborative culture that values innovation and precision.
Opportunities for learning and career advancement in data quality and BI testing.
If you're passionate about data accuracy and love solving complex data puzzles, we'd love to hear from you.
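As an illustration of the BI report validation described above, here is a minimal sketch of a backend test: recompute a report's totals from the base fact table and flag any discrepancy. SQLite keeps it self-contained; the fact_sales and report_sales tables are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE fact_sales   (region TEXT, amount REAL);
    CREATE TABLE report_sales (region TEXT, total  REAL);
    INSERT INTO fact_sales VALUES ('EU', 100), ('EU', 50), ('US', 200);
    INSERT INTO report_sales VALUES ('EU', 150), ('US', 180);  -- US figure is wrong
""")

# Recompute the rollup from the facts and flag any region that disagrees
# with what the report claims.
mismatches = cur.execute("""
    SELECT f.region, f.expected, r.total
    FROM (SELECT region, SUM(amount) AS expected
          FROM fact_sales GROUP BY region) AS f
    JOIN report_sales AS r ON r.region = f.region
    WHERE f.expected <> r.total
""").fetchall()

print("mismatched regions:", mismatches)  # -> [('US', 200.0, 180.0)]
```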
Posted 1 month ago
6.0 - 11.0 years
6 - 9 Lacs
Gurugram
Work from Office
The Business Intelligence (BI) Specialist is responsible for the design, development, implementation, management, and support of mission-critical enterprise BI reporting and Extract, Transform, Load (ETL) processes and environments.
Job Description:
Exposure to one or more implementations using OBIEE development and administration.
Must have 6+ years of development experience in PL/SQL.
Experience in developing the OBIEE repository at all three layers (Physical, Business Model, and Presentation), interactive dashboards, and drill-down capabilities using global and local filters and security setups.
Must have 3+ years of experience in data modeling, ETL development (preferably OWB), ETL and BI tools installation and configuration, and Oracle APEX.
Experience in developing OBIEE Analytics interactive dashboards with drill-down capabilities using global and local filters; OBIEE security setup (users/groups, access/query privileges); and configuring OBIEE Analytics metadata objects (Subject Area, Table, Column) and Presentation Services/Web Catalog objects (Dashboards, Pages, Folders, Reports).
Hands-on development experience with OBIEE (version 11g or higher) and data modeling.
Experience in installing and configuring Oracle OBIEE in multiple life-cycle environments.
Experience creating system architecture design documentation.
Experience presenting system architectures to management and technical stakeholders.
Technical and functional understanding of Oracle OBIEE technologies.
Good knowledge of OBIEE administration, best practices, and DWBI implementation challenges.
Understanding and knowledge of data warehousing.
Must have OBIEE certification on version 11g or higher.
Experience with ETL tools.
Experience on HP Vertica.
Domain knowledge in Supply Chain, Retail, or Manufacturing.
Developing architectural solutions utilizing OBIEE.
Work with project management to provide effort estimates and timelines.
Interact with business and IT team members to move the project forward on a daily basis.
Lead the development of OBIEE dashboards and reports.
Work with internal stakeholders and development teams during the project lifecycle.
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Decision Science Practitioner Analyst, S&C GN
Management Level: Senior Analyst
Location: Bangalore/Kolkata
Must-have skills: Collibra Data Quality (data profiling, anomaly detection, reconciliation, data validation), Python, SQL
Good-to-have skills: PySpark, Kubernetes, Docker, Git
Job Summary: We are seeking a highly skilled and motivated Data Science cum Data Engineer Senior Analyst to lead innovative projects and drive impactful solutions in domains such as Consumer Tech, Enterprise Tech, and Semiconductors. This role combines hands-on technical expertise with client delivery management to execute cutting-edge projects in data science and data engineering.
Key Responsibilities:
Data Science and Engineering:
Implement and manage end-to-end data quality frameworks using Collibra Data Quality (CDQ). This includes requirement gathering from the client, code development in SQL, unit testing, client demos, user acceptance testing, and documentation.
Work extensively with business users, data analysts, and other stakeholders to understand data quality requirements and business use cases.
Develop data validation, profiling, anomaly detection, and reconciliation processes.
Write SQL queries for simple to complex data quality checks, and Python and PySpark scripts to support data transformation and data ingestion.
Deploy and manage solutions on Kubernetes workloads for scalable execution.
Maintain comprehensive technical documentation of data quality processes and implemented solutions.
Work in an Agile environment, leveraging Jira for sprint planning and task management.
Troubleshoot data quality issues and collaborate with engineering teams for resolution.
Provide insights for continuous improvement in data governance and quality processes.
Build and manage robust data pipelines using PySpark and Python to read from and write to databases such as Vertica and PostgreSQL (see the sketch at the end of this listing).
Optimize and maintain existing pipelines for performance and reliability.
Build custom solutions using Python, including FastAPI applications and plugins for Collibra Data Quality.
Oversee the infrastructure of the Collibra application in a Kubernetes environment, perform upgrades when required, and troubleshoot and resolve any Kubernetes issues that may affect the application's operation.
Deploy and manage solutions and optimize resources for deployments in Kubernetes, including writing YAML files and managing configurations.
Build and deploy Docker images for various use cases, ensuring efficient and reusable solutions.
Collaboration and Training:
Communicate effectively with stakeholders to align technical implementations with business objectives.
Provide training and guidance to stakeholders on Collibra Data Quality usage and help them build and implement data quality rules.
Version Control and Documentation:
Use Git for version control to manage code and collaborate effectively.
Document all implementations, including data quality workflows, data pipelines, and deployment processes, ensuring easy reference and knowledge sharing.
Database and Data Model Optimization:
Design and optimize data models for efficient storage and retrieval.
Required Qualifications:
Experience: 4+ years in data science
Education: B.Tech or M.Tech in Computer Science, Statistics, Applied Mathematics, or a related field
Industry Knowledge: Experience in Consumer Tech, Enterprise Tech, or Semiconductors is preferred but not mandatory
Technical Skills:
Programming: Proficiency in Python and SQL for data analysis and transformation.
Tools: Hands-on experience with Collibra Data Quality (CDQ) or similar data quality tools (e.g., Informatica DQ, Talend, Great Expectations, Ataccama). Experience working with Kubernetes workloads. Experience with Agile methodologies and task tracking using Jira.
Preferred Skills: Strong analytical and problem-solving skills with a results-oriented mindset. Good communication, stakeholder management, and requirement-gathering capabilities.
Additional Information: The ideal candidate will possess a strong educational background in a quantitative discipline and experience working with Hi-Tech clients. This position is based at our Bengaluru (preferred) and Kolkata offices.
About Our Company | Accenture
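The pipeline work described in this listing (PySpark reading from and writing to relational stores) might look roughly like the sketch below. It is a sketch only: the JDBC URLs, credentials, and table names are placeholders, and the appropriate PostgreSQL and Vertica JDBC driver JARs would need to be on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-pipeline").getOrCreate()

# Read the source table over JDBC (placeholder URL and credentials).
src = (spark.read.format("jdbc")
       .option("url", "jdbc:postgresql://pg-host:5432/sales")  # placeholder
       .option("dbtable", "public.orders")                     # placeholder
       .option("user", "etl_user")
       .option("password", "***")
       .load())

# Example transformation: flag rows that fail a simple data quality rule.
checked = src.withColumn("dq_amount_ok", F.col("amount") >= 0)

# Write the checked rows to the warehouse over JDBC (placeholder target).
(checked.write.format("jdbc")
 .option("url", "jdbc:vertica://vertica-host:5433/dwh")        # placeholder
 .option("dbtable", "staging.orders_checked")
 .option("user", "etl_user")
 .option("password", "***")
 .mode("append")
 .save())
```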
Posted 1 month ago
8.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Requirements:
Bachelor's degree in computers or a related field.
8+ years of IT experience with a strong understanding of the software development lifecycle (SDLC), experience with various testing methodologies, proficiency in test management tools, and familiarity with automation frameworks.
6+ years of experience in software quality assurance, with a proven track record of leading testing efforts on complex data warehousing/ETL/business intelligence projects.
5+ years of experience in SQL and database technologies (Vertica and/or BigQuery is a plus).
Experience with Python, Unix shell scripting, and an ETL tool (such as Informatica or Pentaho) is a plus.
Experience with visualization tools like Tableau is a plus.
Excellent written and verbal communication skills with the ability to interact with both technical and non-technical stakeholders.
Experience with YouTrack and Agile development methodologies.
Strong understanding of technical concepts and the ability to learn new technologies quickly.
Proficiency in using documentation tools and software such as Visio, Microsoft Word, Confluence, or similar.
Experience with version control systems such as SVN and Git.
Ability to work independently and as part of a team.
Strong attention to detail and accuracy.
Ability to manage multiple projects and meet deadlines.
Strong organizational and time management skills.
Ability to adapt to changing requirements and priorities.
Responsibilities:
Lead requirements-gathering sessions with key stakeholders to understand business needs and translate them into testable scenarios.
Collaborate with and across Agile teams to analyze business requirements, create detailed test plans, and develop comprehensive test cases to cover all functionalities and edge cases.
Perform various types of testing, including functional, regression, integration, performance, usability, and accessibility testing, documenting results and reporting defects.
Ensure data quality, accuracy, and integrity across multiple sources and systems (see the sketch at the end of this listing).
Identify, log, prioritize, and track defects through the bug-tracking system, ensuring timely resolution and verification.
Triage defects, troubleshoot issues, identify root causes, and propose solutions.
Utilize advanced SQL for testing, data analysis, and reporting support.
Mentor junior QA analysts, review test cases, and contribute to overall quality assurance process improvement initiatives.
Clearly communicate testing progress, findings, and risks to project teams, including developers, product managers, and business stakeholders.
Work with the infrastructure/systems team and developers to ensure all modules are up-to-date and compatible with the code.
Identify opportunities for process optimization, automation, and system improvements.
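One way the data-accuracy responsibility above gets codified is as an automated test. Below is a minimal sketch using pytest and an in-memory SQLite database; the source_txn and mart_txn tables are hypothetical stand-ins for the real source system and data mart.

```python
import sqlite3
import pytest

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE source_txn (id INTEGER PRIMARY KEY, amount REAL);
        CREATE TABLE mart_txn   (id INTEGER PRIMARY KEY, amount REAL);
        INSERT INTO source_txn VALUES (1, 9.99), (2, 20.00);
        INSERT INTO mart_txn   VALUES (1, 9.99), (2, 20.00);
    """)
    yield conn
    conn.close()

def test_mart_matches_source(db):
    # Every source row must appear in the mart with an identical amount.
    diffs = db.execute("""
        SELECT id, amount FROM source_txn
        EXCEPT
        SELECT id, amount FROM mart_txn
    """).fetchall()
    assert diffs == [], f"rows missing or altered in the mart: {diffs}"
```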
Posted 1 month ago