Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
4 - 6 years
6 - 8 Lacs
Gurgaon
Work from Office
BI Specialist / OBIEE Developer — Beamstacks, MG Road, Gurgaon.

The Business Intelligence (BI) Specialist is responsible for the design, development, implementation, management, and support of mission-critical enterprise BI reporting and Extract, Transform, Load (ETL) processes and environments.

Requirements:
- Exposure to one or more implementations using OBIEE development and administration.
- Must have 6 years of development experience in PL/SQL.
- Experience developing the OBIEE repository at all three layers (Physical, Business Model, and Presentation), interactive dashboards, and drill-down capabilities using global and local filters and security setups.
- Must have 3 years of experience in data modeling, ETL development (preferably OWB), ETL and BI tools installation and configuration, and Oracle APEX.
- Experience developing OBIEE Analytics interactive dashboards with drill-down capabilities using global and local filters; OBIEE security setup (users/groups, access/query privileges); and configuring OBIEE Analytics metadata objects (Subject Area, Table, Column) and Presentation Services/Web Catalog objects (Dashboards, Pages, Folders, Reports).
- Hands-on development experience with OBIEE (version 11g or higher) and data modeling.
- Experience installing and configuring Oracle OBIEE in multiple life-cycle environments.
- Experience creating system architecture design documentation and presenting system architectures to management and technical stakeholders.
- Technical and functional understanding of Oracle OBIEE technologies.
- Good knowledge of OBIEE administration, best practices, and DWBI implementation challenges.
- Understanding and knowledge of data warehouses.
- Must have OBIEE certification on version 11g or higher.
- Experience with ETL tools; experience with HP Vertica.
- Domain knowledge in Supply Chain, Retail, or Manufacturing.

Responsibilities:
- Develop architectural solutions utilizing OBIEE.
- Work with project management to provide effort estimates and timelines.
- Interact with business and IT team members to move the project forward on a daily basis.
- Lead the development of OBIEE dashboards and reports.
- Work with internal stakeholders and development teams during the project lifecycle.
Posted 3 months ago
5 - 9 years
8 - 13 Lacs
Bengaluru
Work from Office
Your Impact:
Sr. Technical Support Specialists are responsible for providing exceptional technical support on OpenText products. As a Senior Technical Support Specialist, you will reproduce, troubleshoot, and resolve customer issues. You'll identify defects, escalate them to OpenText Product Engineering, and test software patches for customers. You will be recognized by your peers as an expert in your chosen product area. This position offers you an opportunity to learn exciting technologies and exercise critical and creative thinking. Our strong team-based environment ensures that our team members support each other to deliver an excellent customer experience.

What the role offers:
- 5+ years of experience in a technical support environment.
- Flexibility to provide on-call / outside-business-hours support as and when needed.
- A science/technology/engineering or bachelor's degree preferred.
- Strong analytical and critical thinking skills.
- Strong verbal and written communication skills.
- Proven experience working in a fluid environment that is ever growing and changing.
- Ability to multi-task and prioritize work effectively.
- Strong attention to detail and the ability to grasp concepts quickly, with a thirst for knowledge.

What you need to succeed:
- Hands-on experience troubleshooting Windows/Linux operating systems.
- Strong troubleshooting skills: diagnostic analysis using traces, dumps, and other tools, plus hypothesis formulation and testing.
- Database knowledge: PostgreSQL, Oracle, MS SQL, Vertica.
- Network and security protocols such as TCP/IP, HTTP, TLS/SSL, REST API, SOAP, and SAML.
- Virtualization skills: VMware, Hyper-V.
- Experience with cloud technologies: AWS, Azure, or Google Cloud.
- Good scripting knowledge: Perl, Python, Shell.
- Must be familiar with HA and DR setups.
- Familiarity with containerization tools such as Docker or Kubernetes is a plus.
- Experience with web services/JavaScript.
- Identity management, access management, data security, application security, and SIEM applications.
Posted 3 months ago
3 - 5 years
7 - 12 Lacs
Bengaluru
Work from Office
OPENTEXT
OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

Your Impact:
You will design and develop new product capabilities, working with the System Architect and a team of engineers and other architects. You will contribute as a team member, take responsibility for your own work commitments, and take part in project/functional problem-solving. You will make decisions based on established practices and work under general guidance, with progress reviewed on a regular basis. You will also be involved in handling customer incidents (Current Product Engineering, CPE): understanding customer use cases, designing and implementing, and troubleshooting and debugging software programs.

What the role offers:
- Produce high-quality code according to design specifications.
- Software design/coding for functional requirements, ensuring quality and adherence to company standards.
- Utilize analytical skills to troubleshoot and fix complex code defects.
- Work across teams and functional roles to ensure interoperability with other products, including training and consultation.
- Provide status updates to stakeholders and escalate issues when necessary.
- Participate in the software development process from design to release in an Agile development framework.
- Design enhancements, updates, and programming changes for portions and subsystems of the software.
- Analyze designs and determine the coding, programming, and integration activities required, based on general objectives and knowledge of the overall architecture of the product or solution.
- Current Product Engineering (CPE) work based on customer-submitted incidents.
- Experience troubleshooting and providing solutions for customer issues in a complex environment.
- Excellent team player with a focus on collaboration; ability to take up other duties as assigned.
- Provide guidance and mentoring to less-experienced team members.

What you need to succeed:
- Bachelor's or Master's engineering degree in Computer Science, Information Systems, or equivalent from a premier institute.
- 3-5 years of overall software development experience, with at least 2 recent years developing Python applications in a large-scale environment.
- Fundamentally good programming and debugging skills.
- Working knowledge of Python and Core Java programming.
- Working knowledge of Docker/container technologies, Kubernetes, and Helm.
- Working knowledge of virtualization technologies.
- Knowledge of XML and JSON and processing them programmatically.
- User or administration knowledge of the Linux operating system.
- User-level database knowledge, preferably PostgreSQL, Vertica, and Oracle DB; capable of writing and debugging SQL queries.
- Exposure to cloud technologies, usage, and deployments would be good (AWS, GCP, Azure, etc.).
- Working experience in an Agile environment or Scaled Agile (SAFe).
- Strong knowledge of object-oriented design and data structures.
- Ability to work independently in a cross-functional, distributed team culture with a focus on teamwork.
- Experience technically mentoring and guiding junior engineers.
- Strong communication, analytical, and problem-solving skills.
- Understanding of CI/CD and build tools such as Git, Maven, Gradle, and Jenkins.
- Knowledge and experience in the IT Operations Management domain.
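As a flavor of the JSON-processing and SQL skills this role asks for, here is a minimal, hypothetical Python sketch (the data, table, and column names are invented for illustration, and SQLite stands in for the databases named above):

```python
import json
import sqlite3

# Hypothetical example: parse a JSON payload and load it into a SQL table,
# then run a simple aggregate query over it.
payload = '[{"host": "app-01", "severity": "high"}, {"host": "app-02", "severity": "low"}]'
incidents = json.loads(payload)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incidents (host TEXT, severity TEXT)")
conn.executemany(
    "INSERT INTO incidents (host, severity) VALUES (:host, :severity)",
    incidents,  # sqlite3 binds named parameters from each dict
)

# Count incidents by severity.
rows = conn.execute(
    "SELECT severity, COUNT(*) FROM incidents GROUP BY severity ORDER BY severity"
).fetchall()
print(rows)  # [('high', 1), ('low', 1)]
```

In practice the payloads and queries would of course come from the products being supported; the point is only the shape of the task: deserialize, load, query, verify.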
Posted 3 months ago
5 - 10 years
8 - 13 Lacs
Bengaluru
Work from Office
Your Impact:
Sr. Technical Support Specialists are responsible for providing exceptional technical support on OpenText products. As a Senior Technical Support Specialist, you will reproduce, troubleshoot, and resolve customer issues. You'll identify defects, escalate them to OpenText Product Engineering, and test software patches for customers. You will be recognized by your peers as an expert in your chosen product area. This position offers you an opportunity to learn exciting technologies and exercise critical and creative thinking. Our strong team-based environment ensures that our team members support each other to deliver an excellent customer experience.

What the role offers:
- 5+ years of experience in a technical support environment.
- Flexibility to provide on-call / outside-business-hours support as and when needed.
- A science/technology/engineering or bachelor's degree preferred.
- Strong analytical and critical thinking skills.
- Strong verbal and written communication skills.
- Proven experience working in a fluid environment that is ever growing and changing.
- Ability to multi-task and prioritize work effectively.
- Strong attention to detail and the ability to grasp concepts quickly, with a thirst for knowledge.

What you need to succeed:
- Hands-on experience troubleshooting Windows/Linux operating systems.
- Strong troubleshooting skills: diagnostic analysis using traces, dumps, and other tools, plus hypothesis formulation and testing.
- Database knowledge: PostgreSQL, Oracle, MS SQL, Vertica.
- Network and security protocols such as TCP/IP, HTTP, TLS/SSL, REST API, SOAP, and SAML.
- Virtualization skills: VMware, Hyper-V.
- Experience with cloud technologies: AWS, Azure, or Google Cloud.
- Good scripting knowledge: Perl, Python, Shell.
- Must be familiar with HA and DR setups.
- Experience with Docker/Kubernetes.
- Experience with web services/JavaScript.
Posted 3 months ago
8 - 13 years
18 - 27 Lacs
Bengaluru
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues: we reported $1,231M annual revenue (16% Y-o-Y), and we've onboarded over 4,900 new employees in the past year, bringing our total employee count to more than 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com.

About The Position
We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling, and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management, and data warehouse domains. You will work with experts in a variety of fields, including computer science and software development, as well as department heads and senior executives, to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities.
What You'll Do
- Define data retention policies.
- Monitor performance and advise on any necessary infrastructure changes.
- Mentor junior engineers and work with other architects to deliver best-in-class solutions.
- Implement ETL/ELT processes and orchestration of data flows.
- Recommend and drive adoption of newer tools and techniques from the big data ecosystem.

Expertise You'll Bring
- 10+ years in industry, building and managing big data systems.
- Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS (a must).
- Building stream-processing systems using solutions such as Storm or Spark Streaming.
- Working and integrating with data storage systems: SQL and NoSQL databases, file systems, and object storage such as S3.
- Reporting solutions such as Pentaho, Power BI, and Looker, including customizations.
- Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients.
- Experience with SaaS-based data management products (an added advantage).
- Proficiency and expertise in Cloudera/Hortonworks, Spark, HDF, and NiFi.
- RDBMS and analytical databases such as Vertica and Redshift; data modeling with physical design and SQL performance optimization.
- Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka.
- Big data technology such as Hadoop, Spark, and NoSQL-based data warehousing solutions.
- Data warehousing and reporting, including customization; Hadoop, Spark, Kafka, Core Java, Spring/IoC, design patterns.
- Big data querying tools such as Pig, Hive, and Impala.
- Open-source technologies and databases (SQL and NoSQL).
- Proficient understanding of distributed computing principles; ability to resolve ongoing issues with operating the cluster.
- Scaling data pipelines using open-source components and AWS services.
- Cloud (AWS): provisioning, capacity planning, and performance analysis at various levels.
- Web-based SOA architecture implementation with design pattern experience (an added advantage).

Benefits
- Competitive salary and benefits package.
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications.
- Opportunity to work with cutting-edge technologies.
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards.
- Annual health check-ups.
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
8 - 13 years
18 - 30 Lacs
Pune
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues: we reported $1,231M annual revenue (16% Y-o-Y), and we've onboarded over 4,900 new employees in the past year, bringing our total employee count to more than 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com.

About The Position
We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling, and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management, and data warehouse domains. You will work with experts in a variety of fields, including computer science and software development, as well as department heads and senior executives, to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities.
What You'll Do
- Define data retention policies.
- Monitor performance and advise on any necessary infrastructure changes.
- Mentor junior engineers and work with other architects to deliver best-in-class solutions.
- Implement ETL/ELT processes and orchestration of data flows.
- Recommend and drive adoption of newer tools and techniques from the big data ecosystem.

Expertise You'll Bring
- 10+ years in industry, building and managing big data systems.
- Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS (a must).
- Building stream-processing systems using solutions such as Storm or Spark Streaming.
- Working and integrating with data storage systems: SQL and NoSQL databases, file systems, and object storage such as S3.
- Reporting solutions such as Pentaho, Power BI, and Looker, including customizations.
- Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients.
- Experience with SaaS-based data management products (an added advantage).
- Proficiency and expertise in Cloudera/Hortonworks, Spark, HDF, and NiFi.
- RDBMS and analytical databases such as Vertica and Redshift; data modeling with physical design and SQL performance optimization.
- Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka.
- Big data technology such as Hadoop, Spark, and NoSQL-based data warehousing solutions.
- Data warehousing and reporting, including customization; Hadoop, Spark, Kafka, Core Java, Spring/IoC, design patterns.
- Big data querying tools such as Pig, Hive, and Impala.
- Open-source technologies and databases (SQL and NoSQL).
- Proficient understanding of distributed computing principles; ability to resolve ongoing issues with operating the cluster.
- Scaling data pipelines using open-source components and AWS services.
- Cloud (AWS): provisioning, capacity planning, and performance analysis at various levels.
- Web-based SOA architecture implementation with design pattern experience (an added advantage).

Benefits
- Competitive salary and benefits package.
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications.
- Opportunity to work with cutting-edge technologies.
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards.
- Annual health check-ups.
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
8 - 13 years
18 - 25 Lacs
Hyderabad
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues: we reported $1,231M annual revenue (16% Y-o-Y), and we've onboarded over 4,900 new employees in the past year, bringing our total employee count to more than 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com.

About The Position
We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling, and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management, and data warehouse domains. You will work with experts in a variety of fields, including computer science and software development, as well as department heads and senior executives, to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities.
What You'll Do
- Define data retention policies.
- Monitor performance and advise on any necessary infrastructure changes.
- Mentor junior engineers and work with other architects to deliver best-in-class solutions.
- Implement ETL/ELT processes and orchestration of data flows.
- Recommend and drive adoption of newer tools and techniques from the big data ecosystem.

Expertise You'll Bring
- 10+ years in industry, building and managing big data systems.
- Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS (a must).
- Building stream-processing systems using solutions such as Storm or Spark Streaming.
- Working and integrating with data storage systems: SQL and NoSQL databases, file systems, and object storage such as S3.
- Reporting solutions such as Pentaho, Power BI, and Looker, including customizations.
- Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients.
- Experience with SaaS-based data management products (an added advantage).
- Proficiency and expertise in Cloudera/Hortonworks, Spark, HDF, and NiFi.
- RDBMS and analytical databases such as Vertica and Redshift; data modeling with physical design and SQL performance optimization.
- Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka.
- Big data technology such as Hadoop, Spark, and NoSQL-based data warehousing solutions.
- Data warehousing and reporting, including customization; Hadoop, Spark, Kafka, Core Java, Spring/IoC, design patterns.
- Big data querying tools such as Pig, Hive, and Impala.
- Open-source technologies and databases (SQL and NoSQL).
- Proficient understanding of distributed computing principles; ability to resolve ongoing issues with operating the cluster.
- Scaling data pipelines using open-source components and AWS services.
- Cloud (AWS): provisioning, capacity planning, and performance analysis at various levels.
- Web-based SOA architecture implementation with design pattern experience (an added advantage).

Benefits
- Competitive salary and benefits package.
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications.
- Opportunity to work with cutting-edge technologies.
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards.
- Annual health check-ups.
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
2 - 6 years
4 - 8 Lacs
Hyderabad
Work from Office
About The Role
Job Title: Decision Science Practitioner Analyst, S&C GN
Management Level: Analyst
Location: Bangalore/Kolkata
Must-have skills: Collibra Data Quality (data profiling, anomaly detection, reconciliation, data validation), Python, SQL
Good-to-have skills: PySpark, Kubernetes, Docker, Git

Job Summary
We are seeking a highly skilled and motivated Data Science cum Data Engineer Analyst to lead innovative projects and drive impactful solutions in domains such as Consumer Tech, Enterprise Tech, and Semiconductors. This role combines hands-on technical expertise with client delivery management to execute cutting-edge projects in data science and data engineering.

Key Responsibilities

Data Science and Engineering
- Implement and manage end-to-end Data Quality frameworks using Collibra Data Quality (CDQ), including requirement gathering from the client, SQL development, unit testing, client demos, user acceptance testing, and documentation.
- Work extensively with business users, data analysts, and other stakeholders to understand data quality requirements and business use cases.
- Develop data validation, profiling, anomaly detection, and reconciliation processes.
- Write SQL queries for simple to complex data quality checks, and Python and PySpark scripts to support data transformation and data ingestion.
- Deploy and manage solutions on Kubernetes workloads for scalable execution.
- Maintain comprehensive technical documentation of Data Quality processes and implemented solutions.
- Work in an Agile environment, leveraging Jira for sprint planning and task management.
- Troubleshoot data quality issues and collaborate with engineering teams for resolution.
- Provide insights for continuous improvement in data governance and quality processes.
- Build and manage robust data pipelines using PySpark and Python to read from and write to databases such as Vertica and PostgreSQL.
- Optimize and maintain existing pipelines for performance and reliability.
- Build custom solutions using Python, including FastAPI applications and plugins for Collibra Data Quality.
- Oversee the infrastructure of the Collibra application in a Kubernetes environment, perform upgrades when required, and troubleshoot and resolve any Kubernetes issues that may affect the application's operation.
- Deploy and manage solutions and optimize resources for deployments in Kubernetes, including writing YAML files and managing configurations.
- Build and deploy Docker images for various use cases, ensuring efficient and reusable solutions.

Collaboration and Training
- Communicate effectively with stakeholders to align technical implementations with business objectives.
- Provide training and guidance to stakeholders on Collibra Data Quality usage and help them build and implement data quality rules.

Version Control and Documentation
- Use Git for version control to manage code and collaborate effectively.
- Document all implementations, including data quality workflows, data pipelines, and deployment processes, ensuring easy reference and knowledge sharing.

Database and Data Model Optimization
- Design and optimize data models for efficient storage and retrieval.

Required Qualifications
- Experience: 2+ years in data science
- Education: B.Tech or M.Tech in Computer Science, Statistics, Applied Mathematics, or a related field
- Industry knowledge: experience in Consumer Tech, Enterprise Tech, or Semiconductors preferred, but not mandatory

Technical Skills
- Programming: proficiency in Python and SQL for data analysis and transformation.
- Tools: hands-on experience with Collibra Data Quality (CDQ) or similar data quality tools (e.g., Informatica DQ, Talend, Great Expectations, Ataccama).
- Experience working with Kubernetes workloads.
- Experience with Agile methodologies and task tracking using Jira.

Preferred Skills
- Strong analytical and problem-solving skills with a results-oriented mindset.
- Good communication and requirement-gathering capabilities.

Additional Information
- The ideal candidate will possess a strong educational background in a quantitative discipline and experience working with Hi-Tech clients.
- This position is based at our Bengaluru (preferred) and Kolkata offices.
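The data-quality checks described above (profiling and validation over SQL databases) can be sketched in plain Python and SQL. This is a minimal, hypothetical illustration: SQLite stands in for Vertica/PostgreSQL, the table and rules are invented, and Collibra Data Quality itself is not shown.

```python
import sqlite3

# Hypothetical data set with some quality problems (a NULL amount, a NULL
# country, and a negative amount) to check against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, country TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 120.5, "IN"), (2, None, "IN"), (3, 80.0, None), (4, -5.0, "US")],
)

def null_rate(conn, table, column):
    """Fraction of rows where `column` is NULL -- a basic profiling metric."""
    total, nulls = conn.execute(
        f"SELECT COUNT(*), SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END) "
        f"FROM {table}"
    ).fetchone()
    return nulls / total

# Profiling: how complete is each column?
print(null_rate(conn, "orders", "amount"))   # 0.25
print(null_rate(conn, "orders", "country"))  # 0.25

# Validation rule: amounts must be non-negative when present.
bad = conn.execute(
    "SELECT order_id FROM orders WHERE amount IS NOT NULL AND amount < 0"
).fetchall()
print(bad)  # [(4,)]
```

In a CDQ deployment, rules of this shape would be registered in the tool and scheduled against the warehouse rather than run ad hoc; the SQL itself is the transferable part.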
Posted 3 months ago
4 - 8 years
8 - 10 Lacs
Bengaluru
Work from Office
About The Role
Job Title: Decision Science Practitioner Analyst, S&C GN
Management Level: Senior Analyst
Location: Bangalore / Kolkata
Must-have skills: Collibra Data Quality (data profiling, anomaly detection, reconciliation, data validation), Python, SQL
Good-to-have skills: PySpark, Kubernetes, Docker, Git

Job Summary: We are seeking a highly skilled and motivated Data Science cum Data Engineer Senior Analyst to lead innovative projects and drive impactful solutions in domains such as Consumer Tech, Enterprise Tech, and Semiconductors. This role combines hands-on technical expertise with client delivery management to execute cutting-edge projects in data science and data engineering.

Key Responsibilities

Data Science and Engineering
- Implement and manage end-to-end Data Quality frameworks using Collibra Data Quality (CDQ), covering requirement gathering from the client, SQL code development, unit testing, client demos, user acceptance testing, and documentation.
- Work extensively with business users, data analysts, and other stakeholders to understand data quality requirements and business use cases.
- Develop data validation, profiling, anomaly detection, and reconciliation processes.
- Write SQL queries for simple to complex data quality checks, and Python and PySpark scripts to support data transformation and data ingestion.
- Deploy and manage solutions on Kubernetes workloads for scalable execution.
- Maintain comprehensive technical documentation of Data Quality processes and implemented solutions.
- Work in an Agile environment, leveraging Jira for sprint planning and task management.
- Troubleshoot data quality issues and collaborate with engineering teams on resolution.
- Provide insights for continuous improvement in data governance and quality processes.
- Build and manage robust data pipelines using PySpark and Python to read from and write to databases such as Vertica and PostgreSQL.
- Optimize and maintain existing pipelines for performance and reliability.
- Build custom solutions using Python, including FastAPI applications and plugins for Collibra Data Quality.
- Oversee the infrastructure of the Collibra application in a Kubernetes environment, perform upgrades when required, and troubleshoot and resolve any Kubernetes issues that may affect the application's operation.
- Deploy and manage solutions and optimize resources for deployments in Kubernetes, including writing YAML files and managing configurations.
- Build and deploy Docker images for various use cases, ensuring efficient and reusable solutions.

Collaboration and Training
- Communicate effectively with stakeholders to align technical implementations with business objectives.
- Provide training and guidance to stakeholders on Collibra Data Quality usage and help them build and implement data quality rules.

Version Control and Documentation
- Use Git for version control to manage code and collaborate effectively.
- Document all implementations, including data quality workflows, data pipelines, and deployment processes, ensuring easy reference and knowledge sharing.

Database and Data Model Optimization
- Design and optimize data models for efficient storage and retrieval.

Required Qualifications
- Experience: 4+ years in data science
- Education: B.Tech or M.Tech in Computer Science, Statistics, Applied Mathematics, or a related field
- Industry Knowledge: Experience in Consumer Tech, Enterprise Tech, or Semiconductors preferred, but not mandatory

Technical Skills
- Programming: Proficiency in Python and SQL for data analysis and transformation.
- Tools: Hands-on experience with Collibra Data Quality (CDQ) or similar Data Quality tools (e.g., Informatica DQ, Talend, Great Expectations, Ataccama).
- Experience working with Kubernetes workloads.
- Experience with Agile methodologies and task tracking using Jira.

Preferred Skills
- Strong analytical and problem-solving skills with a results-oriented mindset.
- Good communication, stakeholder management, and requirement-gathering capabilities.

Additional Information
- The ideal candidate will possess a strong educational background in a quantitative discipline and experience working with Hi-Tech clients.
- This position is based at our Bengaluru (preferred) and Kolkata offices.
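The core task above, SQL-based data quality checks for completeness and uniqueness, can be sketched as follows. This is a minimal illustration only: the table, column names, and sample rows are hypothetical, and a tool like Collibra DQ would express such rules in its own interface rather than raw scripts.

```python
import sqlite3

# Hypothetical sample table standing in for a client dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 10, 99.5), (2, 11, None), (2, 11, None), (3, None, 42.0)],
)

# Completeness rule: count rows with NULLs in required columns.
null_rows = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE customer_id IS NULL OR amount IS NULL"
).fetchone()[0]

# Uniqueness rule: count primary-key values that appear more than once.
dup_keys = conn.execute(
    "SELECT COUNT(*) FROM (SELECT order_id FROM orders "
    "GROUP BY order_id HAVING COUNT(*) > 1)"
).fetchone()[0]

print(null_rows, dup_keys)
```

In practice each such rule would be parameterized per dataset and its counts compared against a threshold to pass or fail the check.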
3 - 6 years
7 - 12 Lacs
Chennai
Work from Office
Primary Responsibilities:
- Design and build data pipelines to process terabytes of data
- Orchestrate data tasks in Airflow to run on Kubernetes/Hadoop for the ingestion, processing, and cleaning of data
- Create Docker images for various applications and deploy them on Kubernetes
- Design and build best-in-class processes to clean and standardize data
- Troubleshoot production issues in our Elastic environment
- Tune and optimize data processes
- Advance the team's DataOps culture (CI/CD, orchestration, testing, monitoring) and build out standard development patterns
- Drive innovation by testing new technology and approaches to continually advance the capability of the data engineering function
- Drive efficiencies in current engineering processes via standardization and migration of existing on-premises processes to the cloud
- Ensure data quality by building best-in-class data quality monitoring so that all data products exceed customer expectations
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's degree in Computer Science or similar
- Hands-on experience with the following technologies: developing processes in Spark, writing complex SQL queries, building ETL/data pipelines, and related/complementary open-source platforms and languages (e.g. Scala, Python, Java, Linux)
- Experience building cloud-native data pipelines on AWS, Azure, or GCP, following best practices in cloud deployments
- Solid DataOps experience (CI/CD, orchestration, testing, monitoring)
- Good experience handling real-time, near-real-time, and batch data ingestion
- Good understanding of data modelling techniques, i.e. Data Vault and Kimball star schema
- Proven excellent understanding of column-store RDBMS (Databricks, Snowflake, Redshift, Vertica, ClickHouse)
- Proven track record of designing effective data strategies and leveraging modern data architectures that resulted in business value
- Demonstrated effective interpersonal, influence, collaboration, and listening skills
- Demonstrated solid stakeholder management skills
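The "clean and standardize data" responsibility above can be sketched in miniature. The record fields and formats below are illustrative assumptions; in this role the same logic would typically run as a Spark job orchestrated by Airflow rather than plain Python.

```python
from datetime import datetime

def standardize(record: dict) -> dict:
    """Clean one raw record: trim whitespace, normalize case, parse dates."""
    return {
        "name": record["name"].strip().title(),
        "country": record["country"].strip().upper(),
        # Normalize a DD/MM/YYYY source format to ISO 8601.
        "signup_date": datetime.strptime(
            record["signup_date"], "%d/%m/%Y"
        ).date().isoformat(),
    }

raw = [
    {"name": "  alice smith ", "country": "in", "signup_date": "05/03/2024"},
    {"name": "BOB jones", "country": " us ", "signup_date": "17/11/2023"},
]
clean = [standardize(r) for r in raw]
```

Keeping each cleaning rule in one pure function like this makes the step easy to unit-test and to port into a Spark UDF or DataFrame transformation later.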
3 - 8 years
5 - 10 Lacs
Gurgaon
Work from Office
Project Role: Tech Delivery Subject Matter Expert
Project Role Description: Drive innovative practices into delivery and bring depth of expertise to a delivery engagement. Sought out as experts, enhance Accenture's marketplace reputation. Bring emerging ideas to life by shaping Accenture and client strategy. Use deep technical expertise, business acumen, and fluid communication skills to work directly with a client in a trusted advisor relationship, gathering requirements to analyze, design, and/or implement technology best-practice business changes.
Must-have skills: Signavio Process Intelligence
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education; B.E/B.Tech

Project Role: Sr. Data Engineer
Project Role Description: Understand business processes and Business Process Mining in Celonis, Signavio, and other major tools, and perform data engineering to meet business process and application requirements.
Must-have skills: Data Engineering; SQL/Vertica SQL and data modelling; Data Analytics; Data Mining; Process Mining tools such as Celonis and Signavio; Machine Learning; Python; PQL and SQL; strong presentation and communication skills.
Good-to-have skills: Data visualization experience with tools such as Power BI or Tableau; Data Warehousing; experience with ETL concepts from tools such as Informatica or Talend is an advantage; a basic understanding of any of the process flows (P2P, O2C, R2R) is a plus.

Key Responsibilities:
- Design and build the data models for Process Mining using tools like Celonis, Signavio, etc.
- Connect data sources to Process Mining tools, perform the required configurations, and extract the relevant data
- Build custom transformations for events in the business process, and implement custom KPIs for different use cases to meet client requirements
- Build process data visualizations in dashboards within Celonis, Signavio, or other relevant process mining tools
- Perform monitoring and enhancement of the business process, including process automation

Technical Experience:
- Minimum 4+ years of experience in Data Analytics, Data Mining, and Process Mining, with good knowledge of the various Process Mining tools available in the market
- Working knowledge of SQL and experience in architecting data models is required
- Good understanding of, and experience in creating, the various KPIs used in business processes
- Hands-on experience with Process Mining tools (Celonis and/or Signavio) is desirable
- Extract data and create business objects and event collectors
- Extract and transform client data; build and customize the data model based on the client's business process
- Capable of creating data models in Signavio and/or Celonis, including custom data models based on the client's business process, and building KPIs to implement use cases specific to the processes and client requirements
- Excellent communication and interpersonal skills, and core consulting skills (PowerPoint, Excel), are a must
- Hands-on experience with BPML tools (ARIS, Signavio, Mavim, etc.) is an advantage
- Hands-on experience building analysis dashboards
- Should be able to create custom connectors and make API calls
- Should be able to establish end-to-end live connectivity, on-premise as well as cloud, and custom connectors
- Should be able to write complex SQL and Signavio Analytical Language (SiGNAL) queries

Professional Attributes:
- Understand and implement customer requirements
- Strong presentation and communication skills
- Autonomous, a quick learner, and explorative
- Ability to work as part of a team
- Commitment and responsibility towards the work

Additional Information:
Notice Period: Immediate
Work from Office
Location: Pan India
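As an illustration of the "custom KPIs" this role builds over process event logs, here is a minimal sketch of one common process-mining KPI, case cycle time, computed from a flat event log. The P2P activity names and timestamps are hypothetical; in Celonis the equivalent KPI would be written as a PQL expression over the activity table.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical P2P event log: (case_id, activity, timestamp).
events = [
    ("C1", "Create PO",     "2024-01-01 09:00"),
    ("C1", "Approve PO",    "2024-01-02 09:00"),
    ("C1", "Receive Goods", "2024-01-05 09:00"),
    ("C2", "Create PO",     "2024-01-03 10:00"),
    ("C2", "Receive Goods", "2024-01-04 10:00"),
]

# Group event timestamps by case.
cases = defaultdict(list)
for case_id, _activity, ts in events:
    cases[case_id].append(datetime.strptime(ts, "%Y-%m-%d %H:%M"))

# Cycle time per case = last event minus first event.
cycle_days = {cid: (max(ts) - min(ts)).days for cid, ts in cases.items()}
avg_cycle_days = sum(cycle_days.values()) / len(cycle_days)
```

The same grouping logic underlies most throughput-time KPIs; dashboards then slice it by activity variant, region, or vendor.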